We tend to think about BYOD (Bring Your Own Device) as freeing the company from expensive software, and freeing employees to select the device and software that works best for them. Individuals can use desktop PCs, laptops, tablets, or smartphones. They can use Microsoft Office or LibreOffice -- or some other tool that lets them exchange files with the team.
One aspect that has been given little thought is upgrades.
Software changes over time. Vendors release new versions. (So do open source projects.)
An obvious question is: How does an organization coordinate changes to new versions?
But the question is not about software.
The real question is: How does an organization coordinate changes to file formats?
Traditional (that is, non-BYOD) shops distribute software to employees. The company buys software licenses and deploys it. Large companies have teams dedicated to this task. In general, everyone has the same version of software. Software is selected by a standards committee, or the office administrator. The file formats "come along for the ride".
With BYOD, the organization must pick the file formats and the employees pick the software.
An organization needs agreement on the formats used to exchange documents, and on the project-file formats used by development teams. (Microsoft Visual Studio has one format; the Eclipse IDE has another.) A single individual cannot enforce a choice upon the organization.
For an organization that supports BYOD, think about the formats of information. That standards committee that has nothing to do now that BYOD has been implemented? Assign them the task of file format standardization.
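That standardization task can even be partially automated. Here is a minimal sketch of a check against an organization's approved-format list; the categories, extensions, and file names are invented for illustration, not an actual standard.

```python
import os

# Hypothetical approved-format standard: each category maps to the
# file extensions the organization has agreed to exchange.
APPROVED_FORMATS = {
    "documents": {".odt", ".docx"},
    "spreadsheets": {".ods", ".xlsx"},
    "project files": {".sln", ".project"},  # Visual Studio, Eclipse
}

def is_approved(filename: str) -> bool:
    """Return True if the file's extension appears in any approved category."""
    ext = os.path.splitext(filename)[1].lower()
    return any(ext in exts for exts in APPROVED_FORMATS.values())

print(is_approved("budget.xlsx"))   # True: an approved spreadsheet format
print(is_approved("notes.pages"))   # False: not on the approved list
```

A script like this could run as a check on a shared repository, flagging files that fall outside the standard regardless of which tool produced them.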
Tuesday, July 16, 2013
Sunday, July 14, 2013
The politics of mobile apps
The politics of mobile apps are different from the politics of desktop and web apps.
Desktop and web apps can expand "forever". There are no practical limits to the number of dialogs one can add to a desktop application. There is no limit to the number of web pages one can include in a web application.
Mobile apps, unlike desktop and web apps, have limits on their functionality. These limits are set by the device and our expectations of mobile apps - an app on a smart phone can be only so complicated. Part of the limit comes from the screen size. The other limit comes from our ideas of mobile apps. We expect easy-to-use apps that "just do it".
What this means for development teams is that a mobile app may expand until it reaches that (nebulous) limit of complexity. At some point the app becomes Too Complicated. At that point, no features may be added -- unless you remove or simplify other features.
What this means for the managers of the development team is that the process to select features for a mobile app is very different from the process to select features for a desktop or web app.
The limits on development of these apps are not complexity, but team resources (and indirectly, money). A team can add features to desktop and web apps, and the constraint is the team size and the ability to test and deploy the new features. Development teams can be expanded, with additional funding. Test teams can be expanded. Deployment and support teams can be expanded. Funding can even overcome the limit of the building that houses the development team -- with enough money one can lease (or construct) another building.
The politics behind desktop and web applications are well known within organizations. The typical organization will have a committee (or perhaps a single person) who gathers requests for new features, assigns priorities, and feeds them to the development team. This committee acts as a regulator on requirements, and prevents the need for expansion of the teams (and hence limits the expense of the teams).
Discussions about feature selection and priorities may become heated as individuals disagree. Yet underneath the debate and the arguments is the understanding that all features can be accommodated, given enough time. If your request is not granted for one release, it is possible that it will be included in a later release. This understanding provides a level of civility in the discussions.
The politics of mobile apps are different. With hard limits on a feature set, there is no guarantee that a requested expansion will be included -- even eventually. Once a mobile app reaches a level of complexity that discourages customers, the app is essentially "full". Adding features to the app will drive away customers (and business).
The selection of features for a mobile app becomes a zero-sum game: features can be added only by removing others. This means that some requests will win and others will lose. And lose not in the "maybe for the next release" sense, but in the harder "not now and not ever" sense.
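The zero-sum game can be made concrete. This toy sketch (every feature name, value, and cost is invented) treats feature selection as a knapsack problem: pick the subset of features with the highest total business value whose combined complexity fits under the app's fixed complexity budget.

```python
import itertools

# Hypothetical feature list: name -> (business_value, complexity_cost)
features = {
    "search":    (8, 3),
    "sharing":   (5, 2),
    "offline":   (6, 4),
    "themes":    (2, 2),
    "analytics": (4, 3),
}

BUDGET = 8  # the app can absorb only this much complexity

# Exhaustive search is fine at this scale; real feature lists would
# need a smarter solver, but the zero-sum structure is the same.
best_value, best_set = 0, ()
for r in range(len(features) + 1):
    for subset in itertools.combinations(features, r):
        value = sum(features[f][0] for f in subset)
        cost = sum(features[f][1] for f in subset)
        if cost <= BUDGET and value > best_value:
            best_value, best_set = value, subset

print(sorted(best_set), best_value)   # -> ['analytics', 'search', 'sharing'] 17
```

Notice that "offline", despite a respectable value, loses outright: there is no budget left for it, and there never will be unless a winning feature is removed.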
Once people realize this, they will fight much harder for their features. The debates and arguments about features will be more heated and possibly unpleasant, depending on the culture of the organization.
Savvy organizations will prepare for this change in politics. I can see three changes:
- Designate individuals or committees to define the user experience. These individuals will oversee the app design and review all features.
- Change the processes for feature requests to specify business benefits, preferably in terms of revenue or expense reduction. Also, define processes to hold requesters to their earlier estimates.
- Provide training for negotiation and conflict resolution.
The organizations that prepare for this change will be in a better position to move to mobile apps.
Friday, July 12, 2013
In the cloud, simple will be big
The cloud uses virtualized computers, usually virtualized PCs or PC-based servers.
The temptation is to build (well, instantiate) larger virtualized PCs. More powerful processors, more cores, more memory, more storage. It is a temptation that is based on the ideas of the pre-cloud era, when computers stood alone.
In the mainframe era, bigger was better. It was also more expensive, which in addition to creating a tension between larger and smaller computers, defined a status ranking of computer owners. Similar thinking held in the PC era: a larger, more capable PC was better than a smaller one. (I suppose that similar thinking happens with car owners.)
In the cloud, the size of individual PCs is less important. The cloud is built of many (virtualized) computers, and more importantly, able to increase the number of these computers. This ability shifts the equation. Bigger is still better, but now the measure of bigger is the cloud, not an individual computer.
The desire to improve virtual PCs has merit. Our current virtual PCs duplicate the common PC architecture of several years ago. That design includes the virtual processor type, the virtual hard disk controller, and the virtual video card. They are copies of the common devices of the time, chosen for compatibility with existing software. As copies of those devices, they replicate not only the good attributes but the foibles as well. For example, the typical virtualized environment emulates IDE and SCSI disk controllers, but allows you to boot only from the IDE controllers. (Why? Because the real-world configurations of those devices worked that way.)
An improved PC for the cloud is not bigger but simpler. Cloud-based systems use multiple servers and "spin up" new instances of virtual servers when they need additional capacity. One does not need a larger server when one can create, on demand, more instances of that server and share work among them.
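The "scale out, not up" arithmetic is simple enough to sketch. The capacity and load figures below are invented; the point is that capacity planning becomes a question of instance count, not instance size.

```python
import math

CAPACITY_PER_INSTANCE = 250   # requests/second one small server handles (assumed)
MIN_INSTANCES = 2             # keep a floor for redundancy

def instances_needed(load_rps: float) -> int:
    """Number of identical small instances to run for the offered load."""
    return max(MIN_INSTANCES, math.ceil(load_rps / CAPACITY_PER_INSTANCE))

for load in (100, 900, 5000):
    print(load, "->", instances_needed(load))
```

A twenty-fold jump in load is answered by spinning up more of the same simple server, not by provisioning a machine twenty times as large.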
The design of cloud-based systems is subtle. I have asserted that simpler computers are better than complex ones. This is true, but only up to a point. A cloud of servers so simple that they cannot run the network stack would be useless. Clearly, a minimum level of computation is required.
The first generation of virtual computers consisted of clones of existing machines. Some vendors have explored the use of simpler systems running on a sophisticated virtualization environment. (VMware's ESX and ESXi offerings, for example.)
Future generations of cloud computers will blur the lines between the virtualization manager, the virtualized machine, the operating system, and what is now called the language run-time (the JVM or CLR).
The entire system will be complex, yet I believe the successful configurations will have simplicity in each of the layers.
Labels:
cloud computing,
complexity,
simplicity,
virtualization
Thursday, July 11, 2013
Computers are bricks that compute
Lenovo is marketing their "ThinkCentre M72e Tiny Desktop", a desktop PC that runs Windows. Yet the Lenovo M72e is quite different from the IBM model 5150, the original IBM PC.
The Lenovo is a more capable machine than its predecessor of 30-plus years ago. It has a processor that is faster and much more powerful. It has more memory and larger storage capacity. But those are not the differences that catch my attention.
Unlike the typical desktop PC, the Lenovo M72e is a small unit, just large enough to hold its contents. This small form factor is not new; it was used for the Apple Mac Mini. Small and sleek, it almost disappears. Indeed, one can mount it behind a flat-screen monitor where it is out of sight.
The original IBM PC was expandable. It had a motherboard with five expansion slots and room for internal devices like disk drives.
The IBM PC (and PC clones) needed expansion options. The basic unit was very basic: operable but not necessarily useful. The typical purchase included a video card, a display monitor, the floppy controller card, floppy disks, and PC-DOS. (Hard disks would be available with the IBM PC XT, released two years later.)
Other expansion options included additional memory, a real-time clock, terminal emulator cards, and network cards. The PC was a base, a platform, on which to build a tailored system. You added the devices for your specific needs.
IBM designed the PC to be expandable. The idea fit the sensibilities of the time. IBM had sold mainframes and minicomputers with expansion capabilities; its business model was to sell low-end equipment and then "grow" customers into more expensive configurations. Selling a basic PC with expansion options appealed to IBM and was recognized as "the way to buy computing equipment" by businesses. (The model was also used by other computer manufacturers.)
Today, we have different sensibilities for computing equipment. Instead of base units with expansion adapters, we purchase a complete system. Smartphones, tablets, and laptops are self-contained. Video is included in the device. Memory (in sufficient quantity) is included. Storage (also in sufficient quantity) is included. Additional devices, when needed, are handled with USB ports rather than adapter cards.
Desktop PCs, for the most part, still contain expansion slots. Expansion slots that most people, I suspect, never use.
Today's view of a PC is not a platform for expansion but a box that performs computations. The smaller form-factors for PCs fit with this sensibility.
That is the difference with the Mac Mini and the Lenovo M72e. They are not platforms to build systems. They are complete, self-contained systems. When we need additional capacity, we replace them; we do not expand them.
Computers today are bricks that compute.
Labels:
expandability,
IBM PC,
Lenovo,
Mac Mini,
small-form PCs
Tuesday, July 9, 2013
The Last Picture Show
The PC revolution brought many changes, but the biggest was distribution of computing power. Personal computers smashed the centralization of mainframe processing and allowed individuals to define when (and where, to the modest extent that PCs were portable) computing would be done.
Yet one application (a tool of the PC era, ironically) follows a centralized model. It requires people to come and meet in a single location, and to wait for a single individual to provide data. It is a mainframe model, with central control over the location and time of processing.
That application is PowerPoint.
Or, more generally, it is presentations.
Presentations are the one application that requires people to come together into a single space, at a specific time, and observe a speaker or group of speakers. The attendees have no control over the flow of information, no control over the timing, and no control over the content. (Although they usually have an idea of the content in advance.)
The presentation is a hold-over from an earlier era, when instructors had information, students desired information, and the technology made mass attendance the most efficient form of distribution. Whether it be a college lecture, a sermon at a religious service, or a review of corporate strategy, the model of "one person speaks and everyone listens" has remained unchanged.
Technology is in a position to change that. We're seeing it with online classes and new tools for presentations. Business meetings can be streamed to PCs and tablets, eliminating the need for employees to travel and meet in a large (probably rented) space. Lectures can be recorded and viewed at the leisure of the student. Sermons can be recorded and provided to those who are unable to attend, perhaps due to infirmities or other commitments.
We don't need presentations on large screens (and on large screens only). We need the information, on a large or small screen.
We don't need presentations in real time (and in real time only). We need information at the right time, with the ability to replay sections to clarify questions.
Look for successors to PowerPoint and its colleagues that combine presentation (video and audio) with time-shifting, multiple devices (PCs, web browsers, and tablets), annotations (written and oral), and indexing for private searches.
I think of the new presentation technologies as an enhanced notebook, one with multimedia capabilities.
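The enhanced notebook can be sketched as a data model. Everything here is hypothetical (the class names, fields, and sample talk are invented); it only illustrates how a recorded presentation might support time-shifting, annotations, and private search.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    timestamp: float   # seconds into the recording
    author: str
    text: str

@dataclass
class Presentation:
    title: str
    duration: float            # total length in seconds
    transcript: dict           # timestamp -> spoken text at that moment
    annotations: list = field(default_factory=list)

    def annotate(self, timestamp: float, author: str, text: str) -> None:
        """Attach a written note to a moment in the recording."""
        self.annotations.append(Annotation(timestamp, author, text))

    def search(self, term: str) -> list:
        """Return timestamps whose transcript text mentions the term."""
        term = term.lower()
        return [t for t, text in sorted(self.transcript.items())
                if term in text.lower()]

talk = Presentation("Q3 strategy", 1800.0,
                    {0.0: "Welcome everyone",
                     600.0: "Revenue grew this quarter"})
talk.annotate(600.0, "pat", "Ask about the revenue breakdown")
print(talk.search("revenue"))   # -> [600.0]
```

The search result is a timestamp, so a viewer can jump straight to the relevant moment instead of sitting through the whole talk -- the opposite of the mainframe model.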
Monday, July 8, 2013
Microsoft's Future is not Windows
Many Windows users are hostile to Windows 8 and the Modern interface. They want Windows to remain what it is.
I think most people have defined Windows as "the thing that runs Microsoft Office". Graphic artists have defined Windows as "the thing that runs Photoshop". Developers have defined Windows as "the thing that runs Visual Studio". A few shops build turnkey scheduling and billing systems for doctors' offices; a few others build turnkey point-of-sale systems. Those are the bounds of Windows.
Microsoft, on the other hand, views Windows as a product, a source of revenue.
A bounded market is limited. Microsoft knows this, and I suspect that the limited nature of Windows was a motivation for the Surface RT and Windows RT.
This difference between Microsoft and Windows users is the problem: Microsoft wants a growing market, and users want the existing technology. The two are not compatible.
As I see it, Microsoft's path forward is Windows RT, the new operating system that does not run existing Windows applications. Windows 8, with its support of classic Windows apps and its ability to run "Modern" Windows apps is a transition, a step from the old Windows to the new.
Abandoning the classic Windows API with its collection of applications is not a simple task. Many people rely on various applications. Users are unhappy with the idea of changing to a new platform.
But from Microsoft's view, the revenues (and profits) of classic Windows are limited. The sales of PCs are declining, as consumers switch from desktops to tablets. Businesses have PCs and may replace existing units, but they can also hang on to PCs and buy new ones only when necessary. And keeping the classic Windows product line (Windows, Office, Visual Studio, etc.) alive is expensive. The competition of products like Linux, LibreOffice, and Eclipse puts further limits on revenue. Classic Windows has declining revenue and constant (or growing) expenses.
Microsoft is moving its products to mobile/cloud not because it wants to, not because it delights in the torment of its customers, but because it must. The economics of the market are forcing this change.
Tuesday, July 2, 2013
Limits to growth in mobile apps
Application development, at least in the PC era, has followed a standard pattern: start with a basic application, then issue new versions with each new version containing more features. Once released, an application has only one direction in terms of complexity: up.
This increase in complexity was possible due to improving technology (powerful processors, more memory, higher screen resolutions, faster network connections) and necessary due to competition (in some markets, the product with the larger checklist of features wins).
The pattern of increasing the complexity of an application has been with us since the very first PC applications (which I consider WordStar, Microsoft BASIC, and VisiCalc). I suspect that the pattern existed in the minicomputer and mainframe markets too.
The early era of microcomputers (before the IBM PC in 1981) had hard limits on resources. Processors were only so fast. You had only so much memory. Hobbyist computers such as the COSMAC ELF had tiny processors and 256 bytes of memory. (Notice that the word 'bytes' has no letter in front.) The complexity of applications was limited by hardware and the cleverness of programmers.
PC technology changed the rules. Hardware became powerful enough to handle complex tasks. Moreover, the design of the IBM PC was expandable, and manufacturers provided bigger and more capable machines. The limit to application growth was not the hardware but the capacity of the development team. Programmers needed not to be clever but to work in teams and write easily-understood code.
With the expansion of hardware and the expansion of development teams, the bounding factor for software was the capacity of the development team. This bounding factor eventually changed to the funding for the development team.
With funding as the limiting factor, a company could decide the level of complexity for software. A company could build an application as complex as it wanted. Yes, a company could specify a project more complex than the hardware would allow, but in general companies lived within the "envelope" of technology. That envelope was moving upwards, thanks to the PC's expandable design.
Mobile technology changes the rules again, and requires a new view towards complexity. Mobile phones are getting more powerful, but slowly, and their screens are at a practical maximum. A smart phone screen size can be a maximum of 5 inches, and that is not going to change. (A larger screen pushes the device into tablet territory.)
Tablets also are getting more powerful, but also slowly and their screens are also at a practical maximum. A tablet screen can be as large as 10 inches, and that is not going to change. (A larger screen makes for an unwieldy tablet.)
These practical maximums place limits on the complexity of the user interface. Those limits enforce limits on the complexity of the app.
More significantly, the limits in mobile apps come from hardware, not funding. A company cannot assume that it can expand an app forever. The envelope of technology is not moving upwards, and cannot move upwards: the limits are caused by human physiology.
All of this means that the normal process of application development must change. The old pattern of a "1.0 release" with limited functionality and subsequent releases with additional functionality (on a slope that continues upwards) cannot work for mobile apps. Once an app has a certain level of complexity, the process must change. New features are possible only at the expense of old features.
We are back in the situation of the early microcomputers: hard limits on application complexity. Like that earlier era, we will need cleverness to work within the limits. Unlike that earlier era, the cleverness is not in memory allocation or algorithmic sophistication, but in the design of user interfaces. We need cleverness in the user experience, and that will be the limiting factor for mobile apps.
The pattern of increasing the complexity of an application has been with us since the very first PC applications (which I consider to be WordStar, Microsoft BASIC, and VisiCalc). I suspect that the pattern existed in the minicomputer and mainframe markets, too.
This increase in complexity was possible because of improving technology (more powerful processors, more memory, higher screen resolutions, faster network connections) and necessary because of competition (in some markets, the product with the longer checklist of features wins).
The early era of microcomputers (before the IBM PC in 1981) imposed hard limits on resources. Processors were only so fast. You had only so much memory. Hobbyist computers such as the COSMAC ELF had tiny processors and 256 bytes of memory. (Notice that 'bytes' has no 'K' or 'M' in front of it.) The complexity of applications was limited by the hardware and by the cleverness of programmers.
PC technology changed the rules. Hardware became powerful enough to handle complex tasks. Moreover, the design of the IBM PC was expandable, and manufacturers provided bigger and more capable machines. The limit to application growth was no longer the hardware but the capacity of the development team; programmers needed not so much to be clever as to work in teams and write easily understood code.
As hardware expanded and development teams grew, the bounding factor shifted from the capacity of the development team to the funding for it.
With funding as the limiting factor, a company could decide the level of complexity for software. A company could build an application as complex as it wanted. Yes, a company could specify a project more complex than the hardware would allow, but in general companies lived within the "envelope" of technology. That envelope was moving upwards, thanks to the PC's expandable design.
Mobile technology changes the rules again, and requires a new view of complexity. Mobile phones are getting more powerful, but slowly, and their screens are at a practical maximum. A smartphone screen can be at most about 5 inches, and that is not going to change. (A larger screen pushes the device into tablet territory.)
Tablets are also getting more powerful, but slowly, and their screens, too, are at a practical maximum. A tablet screen can be as large as 10 inches, and that is not going to change. (A larger screen makes for an unwieldy tablet.)
These practical maximums place limits on the complexity of the user interface, and those limits in turn constrain the complexity of the app.
More significantly, the limits on mobile apps come from hardware, not funding. A company cannot assume that it can expand an app forever. The envelope of technology is not moving upwards, and cannot move upwards: the limits are caused by human physiology.
All of this means that the normal process of application development must change. The old pattern of a "1.0 release" with limited functionality and subsequent releases with additional functionality (on a slope that continues upwards) cannot work for mobile apps. Once an app has a certain level of complexity, the process must change. New features are possible only at the expense of old features.
We are back in the situation of the early microcomputers: hard limits on application complexity. Like that earlier era, we will need cleverness to work within the limits. Unlike that earlier era, the cleverness is not in memory allocation or algorithmic sophistication, but in the design of user interfaces. We need cleverness in the user experience, and that will be the limiting factor for mobile apps.