Moving applications to the cloud is not easy. Existing applications cannot be simply dropped onto cloud servers and leverage the benefits of cloud computing. And this should not surprise people.
The cloud is a different environment than a web server. (Or a Windows desktop.) Moving to the cloud is a change in platform.
The history of IT has several examples of such changes. Each transition from one platform to another required changes to the code, and often changes to how we *think* about programs.
The operating system
The first changes occurred in the mainframe age. The very first was probably the shift from a raw hardware platform to hardware with an operating system. With raw hardware, the programmer has access to the entire computing system, including memory and devices. With an operating system, the program must request such access through the operating system. It was no longer possible to write directly to the printer; one had to request the use of each device. This change also saw the separation of tasks between programmers and system operators, the latter handling the scheduling and execution of programs. One could not use the older programs; they had to be rewritten to call the operating system rather than communicate with devices.
Timesharing and interactive systems
Timesharing was another change in the mainframe era. In contrast to batch processing (running one program at a time, each program reading and writing data as needed but with no direct interaction with the programmer), timeshare systems interacted with users. Timeshare systems saw the use of on-line terminals, something not available for batch systems. The BASIC language was developed to take advantage of these terminals. Programs had to wait for user input and verify that the input was correct and meaningful. While batch systems could merely write erroneous input to a 'reject' file, timeshare systems could prompt the user for a correction (provided they were written to detect errors). One could not use a batch program in an interactive environment; programs had to be rewritten.
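The difference between the two models can be sketched in a few lines of modern Python. The function names and the digit-check are invented for illustration; they stand in for whatever validation a real batch or timeshare program performed:

```python
def process_batch(records):
    """Batch model: erroneous input goes to a 'reject' file; no user interaction."""
    accepted, rejected = [], []
    for rec in records:
        if rec.isdigit():          # a stand-in for real validation
            accepted.append(int(rec))
        else:
            rejected.append(rec)   # destined for the reject file, reviewed later
    return accepted, rejected

def process_interactive(read_line=input):
    """Timeshare model: detect the error and prompt the user to correct it now."""
    while True:
        text = read_line()
        if text.isdigit():
            return int(text)
        print("Invalid number, please re-enter:")  # correction happens immediately
```

The batch version never stops to ask a question; the interactive version cannot proceed without one. That structural difference is why the code had to be rewritten rather than moved.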
Minicomputers
The transition from mainframes to minicomputers was, interestingly, one of the simpler conversions in IT history. In many respects, minicomputers were smaller versions of mainframes. IBM minicomputers used the batch processing model that matched its mainframes. Minicomputers from manufacturers like DEC and Data General used interactive systems, following the lead of timeshare systems. In this case, it *was* possible to move programs from mainframes to minicomputers.
Microcomputers
If minicomputers allowed for an easy transition, microcomputers were the opposite. They were small and followed the path of interactive systems. Most ran BASIC in ROM, with no support for other languages. The operating systems available (CP/M, MS-DOS, and a host of others) were limited and weak compared to today's, providing no protection for hardware and no multitasking. Every program for microcomputers had to be written from scratch.
Graphical operating systems
Windows (and OS/2 and other systems, for those who remember them) introduced a number of changes to programming. The obvious difference between Windows programs and the older DOS programs was, of course, the graphical user interface. From the programmer's perspective, Windows required event-driven programming, something not available in DOS. A Windows program had to respond to mouse clicks and keyboard entries anywhere on the program's window, which was very different from the DOS text-based input methods. Old DOS programs could not be simply dropped into Windows and run; they had to be rewritten. (Yes, technically one could run the older programs in the "DOS box", but that was not really "moving to Windows".)
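The inversion that event-driven programming demands can be sketched abstractly (in Python here, rather than the C message loop of a real Windows program; every name below is invented for illustration):

```python
# A DOS program read input sequentially, in the order the code dictated.
# A Windows program instead registers handlers and waits for events to arrive
# in whatever order the user produces them.

def handle_click(x, y):
    return f"clicked at ({x}, {y})"

def handle_key(char):
    return f"key {char!r} pressed"

# Map event types to handlers -- the program no longer controls the order.
handlers = {"click": lambda e: handle_click(*e["pos"]),
            "key":   lambda e: handle_key(e["char"])}

def event_loop(events):
    """Dispatch each incoming event to its handler. This inversion of control
    is what made sequential DOS-style input code unusable under Windows."""
    return [handlers[e["type"]](e) for e in events]
```

In a real Windows program the loop was `GetMessage`/`DispatchMessage` and the handlers were window procedures, but the shape of the change was the same: the user, not the program, drives the order of execution.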
Web applications
Web applications, with browsers and servers, HTML and "submit" requests, with CGI scripts and JavaScript and CSS and AJAX, were completely different from Windows "desktop" applications. The intense interaction of a window with fine-grained controls and events was replaced with the large-scale request, eventually getting smaller AJAX and AJAX-like web services. The separation of user interface (HTML, CSS, JavaScript, and browser) from "back end" (the server) required a complete rewrite of applications.
Mobile apps
Small screen. Touch-based. Storage on servers, not so much on the device. Device processor for handling input; main processing on servers.
One could not drop a web application (or an old Windows desktop application) onto a mobile device. (Yes, you can run Windows applications on Microsoft's Surface tablets. But the Surface tablets are really PCs in the shape of tablets, and they do not use the model used by iOS or Android.)
You had to write new apps for mobile devices. You had to build a collection of web services to be run on the back end. (Not too different from the web application back end, but not exactly the same.)
Which brings us to cloud applications
Cloud applications use multiple instances of servers (web servers, database servers, and others) each hosting services (called "microservices" because each service is less than a full application) communicating through message queues.
One cannot simply move a web application into the cloud. You have to rewrite it to split computation from coordination, the latter handled by queues. Computation must be split into small, discrete services. You must write controller services that make requests to multiple microservices. You must design your front-end apps (which run on mobile devices and web browsers) and establish an efficient API to bridge the front-end apps with the back-end services.
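As a toy sketch of that split, using Python's in-process queues as a stand-in for a real message broker (the service names and the numbers are invented; a real deployment would use something like RabbitMQ or SQS):

```python
import queue

# Queues handle coordination; the services below handle only computation.
requests, responses = queue.Queue(), queue.Queue()

def price_service(item):
    """One small, discrete service: look up a price."""
    return {"item": item, "price": 10.0}

def tax_service(msg):
    """Another discrete service: apply a (hypothetical) 6% tax."""
    msg["total"] = round(msg["price"] * 1.06, 2)
    return msg

def controller():
    """Controller service: pull a request from the queue, invoke the
    microservices in order, and place the result on the response queue."""
    item = requests.get()
    responses.put(tax_service(price_service(item)))

requests.put("widget")
controller()
```

The point of the sketch is the shape, not the arithmetic: no service knows about the others, and the controller plus the queues carry all of the coordination that a monolithic web application would have kept in one process.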
In other words, you have to rewrite your applications. (Again.)
A different platform requires a different design. This should not be a surprise.
Wednesday, December 28, 2016
Friday, January 31, 2014
Faster update cycles mean PC apps become expensive
Ah, for the good old days of slow hardware upgrades. It used to be that one could buy a computer system and use it for years, possibly even a decade. The software would be upgraded, but the hardware would last. One could run a business knowing the future of its IT (hardware and software) was predictable.
Today we have faster cycles for hardware upgrades. Cell phones, tablets, and some PCs (Apple) are updated in a matter of months, not decades. The causes are multiple: competition (especially among phone vendors), changes in technology, and a form of planned obsolescence (Apple) that sees existing customers buying new versions.
I expect that these faster cycles will move to the PC realm.
The change in the life span of PC hardware will affect consumers and businesses, with the greater impact on businesses. I expect individual consumers to move away from PCs and switch to phones, tablets, game consoles, and internet TV appliances.
Businesses have a challenge ahead. Corporate users typically don't want PCs; they want computing power. Specifically, they want computing power with a user interface that is consistent over time. (When a new version of Windows is introduced to a corporate environment, one of the first actions is to configure the user interface to look like the old version. The inability of Windows 8 to emulate Windows 7 exactly is probably the cause for corporate discomfort with it.)
But the challenge to business goes beyond the user interface. Corporations want stable computing platforms to hold their applications. They want to build a system (or buy one) and use it for a long time. Switching from one vendor's system to another's is an expensive proposition, and corporations amortize the conversion cost over a long life. A new system, or even a new version of a system, can impose changes to the user interface, interfaces to other systems, and interactions with the operating system and drivers. All of these changes are part of the cost of implementation.
In the corporation's mind, the fewer conversions, the better.
That philosophy is colliding with the faster pace of hardware. Apple is not alone in its rapid release of hardware and operating systems; Microsoft is releasing new versions of Windows at a rate much faster than the ten-year gap between Windows XP and Windows 7. (I'm ignoring Windows Vista.)
To adapt to the faster change, I expect corporations to shift from the PC platform to technologies that allow them to retain longer lifespans: virtual PCs and cloud computing. Virtual PCs are the easier change, allowing applications to be shifted directly onto the new platform. With remote access, a (fast-changing) real PC can access the (slow-changing) "get the work done" virtual PC. In this case, virtualization and remote access act as a shock absorber for the change in technology.
Cloud computing offers a more efficient platform, but only after re-designing the application. The large monolithic PC applications must split into multiple services coordinated by (relatively) simple applications running on tablets and phones. In this case, the use of small, simple components on multiple platforms (server and tablet/phone) acts as the buffer to changes in technology.
The PC platform will see faster update cycles and shorter life spans. Applications on this platform will be subject to more changes. A company's customer base will use more platforms, driving up the cost of development, testing, and support.
Moving to virtual PCs or to the cloud is a way of avoiding that increase in costs.
Wednesday, November 20, 2013
We need a new UML
The Object Management Group has released a new version of UML. The web site for Dr. Dobb's asks the question: Do You Even Care? It's a proper question.
It's proper because UML, despite a spike of interest in the late 1990s, has failed to move into the mainstream of software development. While the Dr. Dobb's article claims ubiquity ("dozens of UML books published, thousands of articles and blogs posted, and thousands of training classes delivered"), UML is anything but ubiquitous. If anything, UML has been ignored in the latest trends of software: agile development techniques and functional programming. It is designed for large projects and large teams designing the system up front and implementing it according to detailed documents. It is designed for systems built with mutable objects, and functional programming avoids both objects and mutable state.
UML was built to help us design and build large complex systems. It was meant to abstract away details and let us focus on the structure, using a standard notation that could be recognized and understood by all practitioners. We still need those things -- but UML doesn't work for a lot of projects. We need a new UML, one that can work with smaller projects, agile projects, and functional programming languages.
Sunday, November 17, 2013
The end of complex PC apps
Businesses are facing a problem with technology: PCs (and tablets, and smart phones) are changing. Specifically, they are changing faster than businesses would like.
Corporations have many programs that they use internally. Some corporations build their own software, others buy software "off the shelf". Many companies use a combination of both.
All of the companies with whom I have worked wanted stable platforms on which to build their systems and processes. Whether it was a complex program built in C++, a comprehensive model built in a spreadsheet, or an office suite (word processor, spreadsheet, and e-mail), companies wanted to invest their effort in their custom solutions. They did not want to spend money or time on upgrades and changes to the operating system or commercially available applications.
While they dislike change, corporations are willing to upgrade systems; what they want are long upgrade cycles and gentle upgrade paths, with easy transitions from one version to the next. They were happy with the old Microsoft world: Windows NT, Windows 2000, and Windows XP were excellent examples of the long, gentle upgrades desired by corporations.
That is no longer the world of PCs. The new world sees fast update cycles for operating systems, with major updates that require changes to applications. (Consider Windows Vista and Windows 8.) Companies with custom-made applications have to invest time and effort in updating those applications to match the new operating systems; companies with off-the-shelf applications have to purchase new versions that run on the new operating systems.
What is a corporation to do?
My guess is that corporations will recognize the cost of frequent change in the PC and mobile platforms, seek out other platforms with lower cost, and move their apps to those platforms.
If they do, then PCs will lose their title to the development world. The PC platform will not be the primary target for applications.
What are the new platforms? I suspect the two "winning" platforms will be web apps (browsers and servers), and mobile/cloud (tablets and phones with virtualized servers). While the front ends for these systems undergo frequent changes, the back ends are relatively stable. The browsers for web apps are mostly stable and they buffer the app from changes to the operating system. Tablets and smart phones undergo frequent updates; this cost can be minimized with simple apps that can be updated easily.
The big trend is away from complex PC applications. These are too expensive to maintain in the new world of frequent updates to operating systems.
Sunday, July 14, 2013
The politics of mobile apps
The politics of mobile apps are different from the politics of desktop and web apps.
Desktop and web apps can expand "forever". There are no practical limits to the number of dialogs one can add to a desktop application. There is no limit to the number of web pages one can include in a web application.
Mobile apps, unlike desktop and web apps, have limits on their functionality. These limits are set by the device and by our expectations of mobile apps: an app on a smart phone can be only so complicated. Part of the limit comes from the screen size; the rest comes from our ideas of what a mobile app should be. We expect easy-to-use apps that "just do it".
What this means for development teams is that a mobile app may expand until it reaches that (nebulous) limit of complexity. At some point the app becomes Too Complicated. At that point, no features may be added -- unless you remove or simplify other features.
What this means for the managers of the development team is that the process to select features for a mobile app is very different from the process to select features for a desktop or web app.
The limits on development of desktop and web apps are not complexity, but team resources (and indirectly, money). A team can add features to desktop and web apps, and the constraint is the team size and the ability to test and deploy the new features. Development teams can be expanded, with additional funding. Test teams can be expanded. Deployment and support teams can be expanded. Funding can even overcome the limit of the building that houses the development team -- with enough money one can lease (or construct) another building.
The politics behind desktop and web applications are well known within organizations. The typical organization will have a committee (or perhaps a single person) who gathers requests for new features, assigns priorities, and feeds them to the development team. This committee acts as a regulator on requirements, and prevents the need for expansion of the teams (and hence limits the expense of the teams).
Discussions about feature selection and priorities may become heated as individuals disagree. Yet underneath the debate and the arguments is the understanding that all features can be accommodated, given enough time. If your request is not granted for one release, it is possible that it will be included in a later release. This understanding provides a level of civility in the discussions.
The politics for mobile apps are different. With hard limits on a feature set, there is no guarantee that a requested expansion will be included -- even eventually. Once a mobile app reaches a level of complexity that discourages customers, the app is essentially "full". Adding features to the app will drive away customers (and business).
The selection of features for a mobile app becomes a zero-sum game: features can be added only by removing others. This means that some requests will win and others will lose. And lose not in the "maybe for the next release" sense, but in the harder "not now and not ever" sense.
Once people realize this, they will fight much harder for their features. The debates and arguments about features will be more heated and possibly unpleasant, depending on the culture of the organization.
Savvy organizations will prepare for this change in politics. I can see three changes:
- Designate individuals or committees to define the user experience. These individuals will oversee the app design and review all features.
- Change the processes for feature requests to specify business benefits, preferably in terms of revenue or expense reduction. Also, define processes to hold requesters to their earlier estimates.
- Provide training for negotiation and conflict resolution.
The organizations that prepare for this change will be in a better position to move to mobile apps.
Tuesday, July 2, 2013
Limits to growth in mobile apps
Application development, at least in the PC era, has followed a standard pattern: start with a basic application, then issue new versions with each new version containing more features. Once released, an application has only one direction in terms of complexity: up.
This increase in complexity was possible due to improving technology (powerful processors, more memory, higher screen resolutions, faster network connections) and necessary due to competition (in some markets, the product with the larger checklist of features wins).
The pattern of increasing the complexity of an application has been with us since the very first PC applications (which I consider WordStar, Microsoft BASIC, and VisiCalc). I suspect that the pattern existed in the minicomputer and mainframe markets too.
The early era of microcomputers (before the IBM PC in 1981) had hard limits on resources. Processors were only so fast. You had only so much memory. Hobbyist computers such as the COSMAC ELF had tiny processors and 256 bytes of memory. (Notice that the word 'bytes' has no letter in front.) The complexity of applications was limited by hardware and the cleverness of programmers.
PC technology changed the rules. Hardware became powerful enough to handle complex tasks. Moreover, the design of the IBM PC was expandable, and manufacturers provided bigger and more capable machines. The limit to application growth was not the hardware but the capacity of the development team. Programmers needed not to be clever but to work in teams and write easily-understood code.
With the expansion of hardware and the expansion of development teams, the bounding factor for software was the capacity of the development team. This bounding factor eventually changed to the funding for the development team.
With funding as the limiting factor, a company could decide the level of complexity for software. A company could build an application as complex as it wanted. Yes, a company could specify a project more complex than the hardware would allow, but in general companies lived within the "envelope" of technology. That envelope was moving upwards, thanks to the PC's expandable design.
Mobile technology changes the rules again, and requires a new view towards complexity. Mobile phones are getting more powerful, but slowly, and their screens are at a practical maximum. A smart phone screen size can be a maximum of 5 inches, and that is not going to change. (A larger screen pushes the device into tablet territory.)
Tablets also are getting more powerful, but also slowly and their screens are also at a practical maximum. A tablet screen can be as large as 10 inches, and that is not going to change. (A larger screen makes for an unwieldy tablet.)
These practical maximums place limits on the complexity of the user interface, and those limits in turn constrain the complexity of the app.
More significantly, the limits in mobile apps come from hardware, not funding. A company cannot assume that it can expand an app forever. The envelope of technology is not moving upwards, and cannot move upwards: the limits are caused by human physiology.
All of this means that the normal process of application development must change. The old pattern of a "1.0 release" with limited functionality and subsequent releases with additional functionality (on a slope that continues upwards) cannot work for mobile apps. Once an app has a certain level of complexity, the process must change. New features are possible only at the expense of old features.
We are back in the situation of the early microcomputers: hard limits on application complexity. Like that earlier era, we will need cleverness to work within the limits. Unlike that earlier era, the cleverness is not in memory allocation or algorithmic sophistication, but in the design of user interfaces. We need cleverness in the user experience, and that will be the limiting factor for mobile apps.
Friday, May 31, 2013
The rise of the simple UI
User interfaces are about to become simpler.
This change is driven by the rise of mobile devices. The UI for mobile apps must be simpler. A cell phone has a small screen and (when needed) a virtual keyboard. The user interacts through the touchscreen, not a keyboard and mouse. Tablets, while larger and often accompanied by a real (small-form) keyboard, also interact through the touchscreen.
For years, PC applications have accumulated features and complexity. Consider Microsoft Word and Microsoft Excel. Each version has introduced new features. The 2007 versions introduced the "ribbon menu", an adjustment to the UI to accommodate the increase in features.
Mobile devices force us to simplify the user interface. Indirectly, they force us to simplify applications. In the desktop world, the application with the most features was (generally) considered the best. In the mobile world, that calculation changes. Instead of selecting an application on the raw number of features, we are selecting applications on simplicity and ease of use.
The trend is ironic, as the early versions of Microsoft Windows were advertised as easy to use (a common adjective was "intuitive"). Yet while "intuitive" and "easy", Windows was never designed to be simple; configuration and administration were always complex. That complexity remained even with networks and Active Directory -- the complexity was centralized but not eliminated.
Apps on mobile don't have to be simple, but simple apps are the better sellers. Simple apps fit better on small screens. Simple apps fit better into the mobile/cloud processing model. Even games demonstrate this trend (compare "Angry Birds" against PC games like "Doom" or even "Minesweeper").
The move to simple apps on mobile devices will flow back to web applications and PC applications. The trend of adding features will reverse. This will affect the development of applications and the use of technology in offices. Job requisitions will list user interface (UI) and user experience (UX) skills. Office workflows will become more granular. Large, enterprise systems (like ERP) will mutate into collections of apps and collections of services. This will allow mobile apps, web apps, and PC apps to access the corporate data and perform work.
Sellers of PC applications will have to simplify their current offerings. It is a change that will affect the user interface and the internal organization of their application. Such a change is non-trivial and requires some hard decisions. Some features may be dropped, others may be deferred to a future version. Every feature must be considered and placed in either the mobile client or the cloud back-end, and such decisions must account for many aspects of mobile/cloud design (network accessibility, storage, availability of data on multiple devices, among others).
Sunday, June 10, 2012
Limits to growth
The shift from desktop and web applications to mobile applications entails many changes. There are changes in technology, new services and capabilities, and integration of apps and the sharing of data. Yet one change that has seen little discussion is the limits of an app's size.
Desktop applications could be as large as we wanted (and sometimes were larger than we wanted). It was easy to add a control to a dialog, or even to add a whole new dialog full of controls. A desktop application could start small and grow, and grow, and grow. After a short time, it was large. And after a little more time, it was "an unmanageable monstrosity".
The desktop PC technology supported this kind of growth. Desktop screens were large, and got larger over time. (Both in terms of absolute dimensions and pixel count.) The Windows UI philosophy allowed for (and encouraged) the use of dialogs to present information used less frequently than the information in the main window. Thus, application settings could be tucked away and kept out of sight, and users could go about their business without distractions.
But the world of mobile apps is different. I see three constraints on the size of apps.
First is the screen. Mobile apps must run on devices with smaller screens. Cell phones and tablets have screens that are smaller than desktop PC monitors, in both absolute dimensions and in pixel count. One cannot simply transfer a desktop UI to the mobile world; the screen is too small to display everything.
Second is the philosophy of app UI. Mobile apps show a few key pieces of information; desktop apps present dozens of fields and can use multiple dialogs to show more information. Dialogs and settings, encouraged in desktop applications, are discouraged in mobile apps. One cannot simply port a desktop application to the mobile world; the technique of hiding information in dialogs works poorly.
Third is the turnover in technology. Mobile apps are generally client-server apps with heavy processing on servers and minimal presentation on clients. The mobile app platforms change frequently, with new versions of cell phones and new types of devices (tablets and Windows 8 Metro devices). While there is some upward compatibility within product lines (apps from the iPhone will run on the iPad) there is a fair amount of work to make an app run on multiple platforms (such as porting an app from iPhone to Android or Windows 8). Desktop applications had a long, steady technology set for their UI; mobile apps have a technology set that changes quickly.
This third constraint interests me. The frequent changes in mobile devices and their operating systems means that app developers have incentive to update and revise their applications frequently. Certainly one can write an app for the earliest platforms such as iOS 1.0, but then you lose later functions. And the rise of competing platforms (Android, Windows 8) means new development efforts or you lose those shares of the market.
I expect that technology for mobile devices will continue to evolve at its rapid pace. (As some might say, this rapid pace is "the new normal" for mobile development.)
If mobile devices and operating systems continue to change, then apps will have to change with them. If the changes to devices and operating systems are large (say, voice recognition and gesture detection) then apps will need significant changes.
These kinds of changes will limit the size of a mobile app. One cannot start with a small app and let it grow, and grow, and grow as we did with desktop PC applications. Every so often we have to re-design the app, re-think our basic assumptions, and re-build it. Mobile apps will remain small because we will be constantly re-writing them.
I recognize that I am building a house of cards here, with various assumptions depending on previous assumptions. So I give you fair warning that I may be wrong. But let's follow this line of thinking just a little further.
If mobile apps must remain small (the user interface portion, at least), and mobile apps become dominant (perhaps not unreasonable), then any programs that a business uses (word processing, spreadsheets, e-mail, etc.) will have to be small. The world of apps will consist of small UI programs and large server back-ends. (I have given little thought to the changes for server technology and applications, but let's assume that they can be large apps in a stable environment.)
If businesses use the dominant form of computing (mobile apps) and those apps must be small, then business processes must change to use information in small, app-sized chunks. We cannot expect the large, complex data entry applications from the desktop to move to mobile computing, and we cannot expect the business processes that use large, complex data structures to run on mobile devices.
Therefore, business processes must change, to simplify their data needs. They may split data into smaller pieces, with coordinated apps each handling small parts of a larger dataset. Cooperative apps will allow for work to be distributed to multiple workers. Instead of a loan officer that reviews the entire loan, a bank may have several loan analysts performing smaller tasks such as credit history analysis, loan risk, interest rate analysis, and such.
These business changes will shift work from expert-based work to process-based work. Instead of highly trained individuals who know the entire process, a business can use specialists that combine their efforts as needed for each case or business event.
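The loan example above can be sketched in code. This Python snippet is purely illustrative: the task names, thresholds, and rates are invented, and each small function stands in for a single-purpose, app-sized task whose results a coordinating service combines.

```python
# Hypothetical decomposition of a loan review into app-sized tasks.
# Each function is small enough to back a single-purpose mobile app;
# a coordinating step combines the specialists' results.

def credit_history_check(applicant):
    # Assumed rule: require a minimum credit score
    return applicant["credit_score"] >= 650

def loan_risk_check(applicant):
    # Assumed rule: cap the debt-to-income ratio
    return applicant["debt_to_income"] <= 0.4

def rate_analysis(applicant):
    # Assumed rule: better scores earn a lower rate
    return 4.5 if applicant["credit_score"] >= 750 else 6.0

def review_loan(applicant):
    """Coordinating step: combine the small, separate analyses."""
    approved = credit_history_check(applicant) and loan_risk_check(applicant)
    rate = rate_analysis(applicant) if approved else None
    return {"approved": approved, "rate": rate}

print(review_loan({"credit_score": 720, "debt_to_income": 0.3}))
# {'approved': True, 'rate': 6.0}
```

The point of the sketch is the shape, not the rules: each check is independent, so each could be a separate app operated by a separate specialist.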
That's quite a change, for a mobile device.
Wednesday, February 23, 2011
CPU time rides again!
A long time ago, when computers were large, hulking beasts (and I mean truly large, hulking beasts, the types that filled rooms), there was the notion of "CPU time". Not only was there "CPU time", but there was a cost associated with CPU usage. In dollars.
CPU time was expensive and computations were precious. So expensive and so precious, in fact, that early IBM programmers were taught that when performing a "multiply" operation, one should load registers with the larger number in one particular register and the smaller number in a different register. While the operations "3 times 5" and "5 times 3" yield the same results, the early processors did not consider them identical. The multiplication operation was a series of add operations, and "3 times 5" was performed as five "add" operations, while "5 times 3" was performed as three "add" operations. The difference was two "add" operations. Not much, but the difference was larger for larger numbers. Repeated through the program, the total difference was significant. (That is, measurable in dollars.)
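The cost difference is easy to see in a sketch. This Python snippet is a modern stand-in (not the original IBM machine code) that models multiplication as a series of add operations and counts the adds:

```python
def multiply_by_repeated_addition(multiplicand, multiplier):
    """Multiply as the early processors did: a series of adds.

    The number of add operations equals the multiplier, so
    "3 times 5" (five adds of 3) costs more than "5 times 3"
    (three adds of 5), even though the products are equal.
    """
    total = 0
    adds = 0
    for _ in range(multiplier):
        total += multiplicand
        adds += 1
    return total, adds

print(multiply_by_repeated_addition(3, 5))  # (15, 5) -- five adds
print(multiply_by_repeated_addition(5, 3))  # (15, 3) -- three adds
```

Putting the larger number in the multiplicand register and the smaller in the multiplier register minimizes the add count, which is exactly the habit those early programmers were taught.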
Advances in technology and the PC changed that mindset. Personal computers didn't have the notion of "CPU time". In part because the hardware didn't support the capture of CPU time, but also because the user didn't care. People cared about getting the job done, not about minimizing CPU time and maximizing the number of jobs run. There was only one job the user (who was also the system administrator) cared about -- the program that they were running.
For the past thirty years, people have not known or cared about CPU usage and program efficiency. I should rephrase that to "people in the PC/DOS/Windows world". Folks in the web world have cared about performance and still care about performance. But let's focus on the PC folks.
The PC folks have had a free ride for the past three decades, not worrying about performance. Oh, a few folks have worried: developers from the "old world" who learned frugality and programmers with really large data processing needs. But the vast majority of PC users have gotten by with the attitude of "if the program is slow, buy a faster PC".
This attitude is in for a change. The cause of the change? Virtualization.
With virtualization, PCs cease to be stand-alone machines. They become an "image" running under a virtualization engine. (That engine could be Virtual PC, VMware, VirtualBox, Xen, or a few others. The engine doesn't matter; this issue applies to all of them.)
By shifting from a stand-alone machine to a job in a virtualization host, the PC becomes a job in a datacenter. It also becomes someone else's headache. The PC user is no longer the administrator. (Actually, the role of administrator in corporations shifted long ago, with Windows NT, domain controllers, centralized authentication, and group policies. Virtualization shifts the burden of CPU management to the central support team.)
The system administrators for virtualized PCs are true administrators, not PC owners who have the role thrust upon them. Real sysadmins pay attention to lots of performance indicators, including CPU usage, disk activity, and network activity. They pay attention because the operations cost money.
With virtual PCs, the processing occurs in the datacenter, and sysadmins will quickly spot the inefficient applications. The programs that consume lots of CPU and I/O will make themselves known, by standing out from the others.
Here's what I see happening:
- The shift to virtual PCs will continue, with today's PC users migrating to low-cost PCs and using Remote Desktop Connection (for Windows) and Virtual Network Computing (for Linux) to connect to virtualized hosts. Users will keep their current applications.
- Some applications will exhibit poor response through RDP and VNC. These will be the applications with poorly written GUI routines, programs that require the virtualization software to perform extra work to make them work.
- Users will complain to the system administrators, who will tweak settings but in general be unable to fix the problem.
- Some applications will consume lots of CPU or I/O operations. System administrators will identify them and ask users to fix their applications. Users (for the most part) will have no clue about performance of their applications, either because they were written by someone else or because the user has no experience with performance programming.
- At this point, most folks (users and sysadmins) are frustrated with the changes enforced by management and the lack of fixes for performance issues. But folks will carry on.
- System administrators will provide reports on resource usage. Reports will be broken down by subunits within the organization, and show the cost of resources consumed by each subgroup.
- Some shops will introduce charge-back systems, to allocate usage charges to organization groups. The charged groups may ignore the charges at first, or consider them an uncontrollable cost of business. I expect pressure to reduce expenses will get managers looking at costs.
- Eventually, someone will observe that application Y performs well under virtualization (that is, more cheaply) while application X does not. Applications X and Y provide the same functions (say, word processing) and are mostly equivalent.
- Once the system administrators learn about the performance difference, they will push for the more efficient application. Armed with statistics and cost figures, they will be in a good position to advocate the adoption of application Y as an organization standard.
- User teams and managers will be willing to adopt the proposed application, to reduce their monthly charges.
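The reporting and charge-back steps above can be sketched briefly. In this Python snippet the usage records, group names, and the cost-per-CPU-second rate are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical usage records: (group, application, cpu_seconds)
usage_log = [
    ("accounting", "app_X", 5400),
    ("accounting", "app_Y", 1200),
    ("marketing",  "app_X", 4800),
    ("marketing",  "app_Y",  900),
]

COST_PER_CPU_SECOND = 0.0002  # assumed billing rate, in dollars

def charge_back(records):
    """Total the CPU cost per group -- the 'monthly charges' above."""
    charges = defaultdict(float)
    for group, _app, cpu_seconds in records:
        charges[group] += cpu_seconds * COST_PER_CPU_SECOND
    return dict(charges)

print(charge_back(usage_log))
```

A report like this is also what lets a sysadmin notice that app_X consumes far more CPU than app_Y for the same job -- the comparison that drives the standardization argument.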
And over time, the market will reward those applications that perform well under virtualization. Notice that this change occurs without marketing. It also forces the trade-off of features against performance, something that has been absent from the PC world.
Your job, if you are building applications, is to build the 'Y' version. You want an application that wins on performance. You do not want the 'X' version.
You have to measure your application and learn how to write programs that are efficient. You need the tools to measure your application's performance, environments in which to test, and the desire to run these tests and improve your application. You will have a new set of requirements for your application: performance requirements. All while meeting the same (unreduced) set of functional requirements.
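The post doesn't prescribe tools, but Python's standard timeit module illustrates the kind of measurement involved: take two implementations that meet the same functional requirement, confirm they agree, then compare their costs.

```python
import timeit

def manual_sum(values):
    # Hand-rolled loop: one interpreted add per element
    total = 0
    for v in values:
        total += v
    return total

data = list(range(100_000))

# Functional requirement first: both implementations must agree.
assert manual_sum(data) == sum(data)

# Performance requirement second: measure each candidate.
for name, fn in [("manual_sum", lambda: manual_sum(data)),
                 ("built-in sum", lambda: sum(data))]:
    elapsed = timeit.timeit(fn, number=50)
    print(f"{name}: {elapsed:.4f} s")
```

The built-in sum, implemented in C, typically wins here; the general lesson is that you cannot know which version is the cheap one until you measure.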
Remember, "3 times 5" is not the same as "5 times 3".