At a recent conference, a fellow attendee asked about best practices for the live tiles on Windows 8.
Live tiles are different from the standard icons in that they can show information and change over time. Windows 8 comes with a number of live tiles: the clock, news, Bing search, and entertainment are a few.
For the question of best practices, my take is that we're too early in what I call the "style curve" of Windows live tiles. The style curve is similar to the "hype cycle", in which a new technology is born, receives some hype, is disparaged as it fails to cure all of our ills, and is finally accepted as useful. (See Wikipedia for more on the hype cycle.) The style curve has its own phases:
- Creation: The technology is created and made available.
- Experimentation: People try out the new technology and probe its limits.
- Overuse: People adopt the new technology but use it with poor judgement: for too many things, in too many situations, with too many combinations.
- Avoidance: People dislike the overuse (or the poor taste) and complain. Some actively avoid the new technology.
- Best practices: A few folks use the technology with good taste. They demonstrate that the technology can be used without offending people's sensibilities. The techniques they use are dubbed "best practices".
- Acceptance: The techniques of restrained use (the best practices) are adopted by most folks.
Previous technologies have followed this curve. Examples include typefaces and fonts in desktop publishing (and later word processing) and animated images in web pages.
Some readers will remember the early days of the web and some of the garish designs that were used. The memories of spinning icons and blinking text may still be painful. This was the "overuse" phase of the style curve for web pages. Several shops banned outright the use of the blink tag -- the "avoidance" phase. Now people understand good design principles for web pages. (Which do not include the blink tag, thankfully.)
Desktop publishing, powered by Windows and laser printers, allowed people to use a multitude of typefaces and fonts in their documents. And use them they did. Today we use a limited set of typefaces and fonts in any one document, and shops have style guides.
Coming back to live tiles, I think we are at the "experimentation" phase of the style curve. We don't know the limits of live tiles and we don't know the best practices. We have to go through the "overuse" and "avoidance" phases before we can get to "best practices". In other words, the best practices are a matter of knowing what not to do. But we have to try everything to see what works and what doesn't work.
Be prepared for some ugly, garish, and annoying live tiles. But know that style will arrive in the future.
Sunday, July 28, 2013
Friday, July 26, 2013
Projects must move and grow
Software development projects are like people, in that they grow over time. Technology changes around us, and we must allow for changes to our projects.
The things that change are:
- Tools, such as editors and compilers
- Processes, such as design reviews and code walk-throughs
- Source code
Tools evolve over time. New versions of compilers are released. Sometimes a new version will break existing code. The C++ standards committee works very hard to prevent such breakages. (Although they have made some changes that broke code, after long deliberations.)
A project that practices good hygiene will upgrade its tools. The change does not have to be immediate, but it should happen within a reasonable period of time. (Perhaps six months.)
The "best practices" for software development change over time. Often these changes are made possible with the invention of tools or the release of a new product. Past changes have included the use of version control, the use of lint utilities, code reviews, and automated testing. None of these were available (cheaply) in the 1990s, and they are today. Version control has undergone several generations, from the early PVCS and CVS systems to SourceSafe and Subversion and today's TFS and 'git'.
The source code changes over time, too -- not just for the addition of new features, but for improvements to the code itself. Programming techniques, like tools and best practices, change over time. We moved from procedural programming to object-oriented programming. We've developed patterns such as "Model View Controller" and "Model View ViewModel", which help organize our code and reduce complexity.
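As an illustration (mine, and not tied to any particular framework), here is a small sketch of the Model-View-Controller separation in Python; the class names and the to-do example are invented.

```python
class TaskModel:
    """Model: owns the data and the rules for changing it."""

    def __init__(self):
        self._tasks = []

    def add(self, title):
        if not title:
            raise ValueError("a task needs a title")
        self._tasks.append(title)

    def all(self):
        return list(self._tasks)


class TaskView:
    """View: presentation only; knows nothing about storage or rules."""

    def render(self, tasks):
        for number, title in enumerate(tasks, start=1):
            print(f"{number}. {title}")


class TaskController:
    """Controller: turns user actions into model updates and view refreshes."""

    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_task(self, title):
        self.model.add(title)
        self.view.render(self.model.all())


# Each piece can change independently: a GUI view could replace TaskView
# without touching the model or the controller.
controller = TaskController(TaskModel(), TaskView())
controller.add_task("Upgrade the compiler")
controller.add_task("Review the build scripts")
```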
Changes for tools, techniques, and source code take time and effort. They must be planned and incorporated into releases. They entail risk; any change can introduce a defect. To make matters worse, such changes are "internal" and offer no direct benefit to the users. The changes are for the benefit of the development team.
I have seen a number of projects start with the then-current set of tools and techniques, only to become established and stay with those tools and techniques. The once-modern project ages into a legacy effort. It is a trap that is all too easy to fall into: the demand for new features and bug fixes overwhelms the team, and there is no time for non-revenue improvements.
The "no return on investment" argument is difficult to counter. Given finite resources and the choice between a feature that provides revenue against a change that provides no revenue, it is sensible to go with the revenue feature.
Without these internal changes, the cost of the project rises. The increase comes from two factors: code complexity and the ability to hire staff.
Over time, changes (especially rushed changes) increase the complexity of the code and changes become much more difficult. The code, once neat and organized, becomes messy. Features are added quickly, with compromises made to quality for delivery time. Each additional change adds to the "mess" of the code.
The world of software development advances, but the project remains stuck in its original era. The tools, initially the most modern, age. They become yesterday's tools and techniques.
Another problem is staffing. Few developers are willing to work on a project that has hard-to-maintain code, old tools, and outdated processes. The few who are willing will do so only at elevated rates, which increases the cost of future maintenance.
Allocating time and effort (and perhaps money) to keep the project up to date is not easy. The payoff is in the long term. A good project manager balances the short-term needs and the long-term goals.
Labels: changes, programming techniques, project management, toolsets
Tuesday, July 23, 2013
The killer app for Microsoft Surface is collaboration
People brought PCs into the office because PCs let people become more effective. The early days were difficult, as we struggled with them. We didn't know how to use PCs well, and software was difficult to use.
Eventually, we found the right mix of hardware and software. Windows XP was powerful enough to be useful for corporations and individuals, and it was successful. (And still is.)
Now, people are struggling with tablets. We don't know how to use them well -- especially in business. But our transition from PC to tablet will be more difficult than the transition from typewriter to PC.
Apple and Google built a new experience, one oriented for consumers, into the iPad and Android tablet. They left the desktop experience behind and started fresh.
Microsoft, in targeting the commercial market, delivered word processing and spreadsheets. But the tablet versions of Word and Excel are poor cousins to their desktop versions. Microsoft has an uphill battle to convince people to switch -- even for short periods -- from the desktop to the tablet for word processing and spreadsheets.
In short, Apple and Google have green fields, and Microsoft is competing with its own applications. For the tablet, Microsoft has to go beyond the desktop experience. Word processing and spreadsheets are not enough; it has to deliver something more. It needs a "killer app", a compelling use for tablets.
I have a few ideas for compelling office applications:
- calendars and scheduling
- conference calls and video calls
- presentations not just on projectors but device-to-device
- multi-author documents and spreadsheets
The shift is one from individual work to collaborative work. Develop apps that help not individuals but teams become more effective.
If Microsoft can let people use tablets to work with other people, they will have something.
Wednesday, July 17, 2013
The Surface RT needs fanboys
The response to Microsoft's Surface tablets has been less than enthusiastic. Various people have speculated on reasons. Some blame the technology, others blame the price. I look at the Surface from a market viewpoint.
I start by asking the question: Who would want a Surface?
Before I get very far, I ask another question: What are the groups who would want (or not want) a Surface tablet?
I divide the market into three groups: the fanboys, the haters, and the pragmatists.
The Microsoft fans are the people who dig into anything that Microsoft produces. They would buy a Surface (and probably already have).
Fanboy groups are not limited to Microsoft. There are fanboys for Apple. There are fanboys for Linux (and Android, and Blackberry...).
Just as there are fans for each of the major vendors, there are also haters. There is the "Anyone But Microsoft" crowd. They will not be buying the Surface. If anything, they will go out of their way to buy another product. (There are "Anyone But Apple" and "Anyone But Linux" crowds, too.)
In between these groups are the pragmatists. They buy technology not because they like it but because it works, it is popular, and it is low-risk. For desktops and servers, they have purchased Microsoft technologies over other technologies -- by large margins.
The pragmatists are the majority. The fanboys and the haters are fringe groups. Vocal, perhaps, but small populations within the larger set.
It was not always this way.
In the pre-PC days, people were fanboys for hardware: the Radio Shack TRS-80, the Apple II, the Commodore 64... even the Timex Sinclair had fans. Microsoft was hardware-neutral: Microsoft BASIC ran on just about everything. Microsoft was part of the "rebel alliance" against big, expensive mainframe computers.
This loyalty continued in the PC-DOS era. With the PC, the empire of IBM was clearly present in the market. Microsoft was still viewed as "on our side".
Things changed with Windows and Microsoft's expansion into the software market. After Microsoft split Windows from OS/2 and started developing primary applications for Windows, it was Microsoft that became the empire. Microsoft's grinding competition destroyed Digital Research, Borland, WordPerfect, Netscape, and countless other companies -- and we saw Microsoft as the new evil. Microsoft was no longer one of "us".
Fanboys care if a vendor is one of "us"; pragmatists don't. Microsoft worked very hard to please the pragmatists, focussing on enterprise software and corporate customers. The result was that the pragmatist market share increased at the expense of the fanboys. (The "Anyone But Microsoft" crowd picked up some share, too.)
Over the years the pragmatists have served Microsoft well. Microsoft dominated the desktop market and had a large share of the server market. While Microsoft danced with the pragmatists, the fanboys migrated to other markets: Blackberry, Apple, Linux. Talk with Microsoft users and they generally fall into three categories: people who pick Microsoft products for corporate use, people who use Microsoft products because the job forces them to, or people who use Microsoft products at home because that is what came with the computer. Very few people go out of their way to purchase Microsoft products. (No one is erasing Linux and installing Windows.)
Microsoft's market base is pragmatists.
Pragmatists are a problem for Microsoft: they are only weakly loyal. Pragmatists are, well, pragmatic. They don't buy a vendor's technology because they like the vendor. They buy technology to achieve specific goals (perhaps running a company). They tend to follow the herd and buy what other folks buy. And the herd is not buying Surface tablets, especially Surface RT tablets.
Microsoft destroyed the fanboy market base. Or perhaps I should say "their fanboy market base", as Apple has retained (and grown) theirs.
Without a sufficiently large set of people willing to take chances with new technologies, a vendor is condemned to their existing product designs (or mild changes).
For Microsoft to sell tablets, they need fanboys.
Tuesday, July 16, 2013
BYOD is about file formats, not software
We tend to think about BYOD (Bring Your Own Device) as freeing the company from expensive software, and freeing employees to select the device and software that works best for them. Individuals can use desktop PCs, laptops, tablets, or smartphones. They can use Microsoft Office or LibreOffice -- or some other tool that lets them exchange files with the team.
One aspect that has been given little thought is upgrades.
Software changes over time. Vendors release new versions. (So do open source projects.)
An obvious question is: How does an organization coordinate changes to new versions?
But the question is not about software.
The real question is: How does an organization coordinate changes to file formats?
Traditional (that is, non-BYOD) shops distribute software to employees. The company buys software licenses and deploys the software. Large companies have teams dedicated to this task. In general, everyone has the same version of the software. Software is selected by a standards committee, or the office administrator. The file formats "come along for the ride".
With BYOD, the organization must pick the file formats and the employees pick the software.
An organization needs agreement on the formats used to exchange documents. There must be agreement on the project files used by development teams. (Microsoft Visual Studio has one format, the Eclipse IDE has another.) A single individual cannot enforce their choice upon the organization.
An organization that supports BYOD should think about the formats of its information. That standards committee that has nothing to do now that BYOD has been implemented? Assign it the task of file format standardization.
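As a sketch of what standardizing on formats (rather than software) might look like, here is a small, entirely hypothetical example in Python; the approved formats and file names are invented for illustration.

```python
# Hypothetical list of formats the organization agrees to exchange,
# independent of which application each employee uses to produce them.
APPROVED_FORMATS = {
    ".odt": "documents",
    ".ods": "spreadsheets",
    ".pdf": "read-only distribution",
    ".csv": "tabular data exchange",
}


def check_file(filename):
    """Return (ok, message) for a file an employee wants to share."""
    dot = filename.rfind(".")
    extension = filename[dot:].lower() if dot != -1 else ""
    if extension in APPROVED_FORMATS:
        return True, f"{filename}: approved ({APPROVED_FORMATS[extension]})"
    return False, f"{filename}: not an approved exchange format"


for name in ["roadmap.odt", "budget.xlsx"]:
    ok, message = check_file(name)
    print(message)
```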
Sunday, July 14, 2013
The politics of mobile apps
The politics of mobile apps are different from the politics of desktop and web apps.
Desktop and web apps can expand "forever". There are no practical limits to the number of dialogs one can add to a desktop application. There is no limit to the number of web pages one can include in a web application.
Mobile apps, unlike desktop and web apps, have limits on their functionality. These limits are set by the device and by our expectations of mobile apps -- an app on a smartphone can be only so complicated. Part of the limit comes from the screen size. The other part comes from our ideas of mobile apps: we expect easy-to-use apps that "just do it".
What this means for development teams is that a mobile app may expand until it reaches that (nebulous) limit of complexity. At some point the app becomes Too Complicated. At that point, no features may be added -- unless you remove or simplify other features.
What this means for the managers of the development team is that the process to select features for a mobile app is very different from the process to select features for a desktop or web app.
The limits on development of desktop and web apps are not complexity but team resources (and, indirectly, money). A team can add features to desktop and web apps, and the constraint is the team size and the ability to test and deploy the new features. Development teams can be expanded, with additional funding. Test teams can be expanded. Deployment and support teams can be expanded. Funding can even overcome the limit of the building that houses the development team -- with enough money one can lease (or construct) another building.
The politics behind desktop and web applications are well known within organizations. The typical organization will have a committee (or perhaps a single person) who gathers requests for new features, assigns priorities, and feeds them to the development team. This committee acts as a regulator on requirements, and prevents the need for expansion of the teams (and hence limits the expense of the teams).
Discussions about feature selection and priorities may become heated as individuals disagree. Yet underneath the debate and the arguments is the understanding that all features can be accommodated, given enough time. If your request is not granted for one release, it is possible that it will be included in a later release. This understanding provides a level of civility in the discussions.
The politics of mobile apps are different. With hard limits on the feature set, there is no guarantee that a requested expansion will be included -- even eventually. Once a mobile app reaches a level of complexity that discourages customers, the app is essentially "full". Adding features to the app will drive away customers (and business).
The selection of features for a mobile app becomes a zero-sum game: features can be added only by removing others. This means that some requests will win and others will lose. And lose not in the "maybe for the next release" sense, but in the harder "not now and not ever" sense.
Once people realize this, they will fight much harder for their features. The debates and arguments about features will be more heated and possibly unpleasant, depending on the culture of the organization.
Savvy organizations will prepare for this change in politics. I can see three changes:
- Designate individuals or committees to define the user experience. These individuals will oversee the app design and review all features.
- Change the processes for feature requests to specify business benefits, preferably in terms of revenue or expense reduction. Also, define processes to hold requesters to their earlier estimates.
- Provide training for negotiation and conflict resolution.
The organizations that prepare for this change will be in a better position to move to mobile apps.
Friday, July 12, 2013
In the cloud, simple will be big
The cloud uses virtualized computers, usually virtualized PCs or PC-based servers.
The temptation is to build (well, instantiate) larger virtualized PCs. More powerful processors, more cores, more memory, more storage. It is a temptation that is based on the ideas of the pre-cloud era, when computers stood alone.
In the mainframe era, bigger was better. It was also more expensive, which in addition to creating a tension between larger and smaller computers, defined a status ranking of computer owners. Similar thinking held in the PC era: a larger, more capable PC was better than a smaller one. (I suppose that similar thinking happens with car owners.)
In the cloud, the size of individual PCs is less important. The cloud is built of many (virtualized) computers, and, more importantly, it is able to increase the number of those computers. This ability shifts the equation. Bigger is still better, but now the measure of "bigger" is the cloud, not an individual computer.
The desire to improve virtual PCs has merit. Our current virtual PCs duplicate the common PC architecture of several years ago. That design includes the virtual processor type, the virtual hard disk controller, and the virtual video card. They are copies of the common devices of the time, chosen for compatibility with existing software. As copies of those devices, they replicate not only the good attributes but the foibles as well. For example, the typical virtualized environment emulates IDE and SCSI disk controllers, but allows you to boot only from the IDE controllers. (Why? Because the real-world configurations of those devices worked that way.)
An improved PC for the cloud is not bigger but simpler. Cloud-based systems use multiple servers and "spin up" new instances of virtual servers when they need additional capacity. One does not need a larger server when one can create, on demand, more instances of that server and share work among them.
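The "create more instances on demand" idea can be sketched in a few lines. The example below is hypothetical: FakeCloud and its methods stand in for a provider's API and are not a real library; the thresholds are invented.

```python
import random
import time


class FakeCloud:
    """Hypothetical stand-in for a cloud provider's API -- not a real library."""

    def __init__(self):
        self.instances = ["web-1", "web-2"]

    def average_load(self):
        # Pretend to measure load across the current instances.
        return random.uniform(0.0, 1.0)

    def create_instance(self, image):
        name = f"web-{len(self.instances) + 1}"
        self.instances.append(name)
        print(f"spun up {name} from image '{image}'")

    def destroy_instance(self):
        print(f"shut down {self.instances.pop()}")


def autoscale(cloud, image="small-web-server", low=0.25, high=0.75,
              minimum=2, rounds=5):
    """Add identical small servers under load; remove them when idle."""
    for _ in range(rounds):
        load = cloud.average_load()
        if load > high:
            cloud.create_instance(image)      # scale out, not up
        elif load < low and len(cloud.instances) > minimum:
            cloud.destroy_instance()          # scale back in
        time.sleep(1)


autoscale(FakeCloud())
```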
The design of cloud-based systems is subtle. I have asserted that simpler computers are better than complex ones. This is true, but only up to a point. A cloud of servers so simple that they cannot run the network stack would be useless. Clearly, a minimum level of computation is required.
Our first-generation virtual computers were clones of existing machines. Some vendors have explored the use of simpler systems running on a sophisticated virtualization environment. (VMware's ESX and ESXi offerings, for example.)
Future generations of cloud computers will blur the lines between the virtualization manager, the virtualized machine, the operating system, and what is now called the language run-time (the JVM or CLR).
The entire system will be complex, yet I believe the successful configurations will have simplicity in each of the layers.
Labels: cloud computing, complexity, simplicity, virtualization
Thursday, July 11, 2013
Computers are bricks that compute
Lenovo is marketing its "ThinkCentre M72e Tiny Desktop", a desktop PC that runs Windows. Yet the Lenovo M72e is quite different from the IBM model 5150, the original IBM PC.
The Lenovo is a more capable machine than its predecessor of 30-plus years ago. It has a processor that is faster and much more powerful, more memory, and larger storage capacity. But those are not the differences that catch my attention.
Unlike the typical desktop PC, the Lenovo M72e is a small unit, just large enough to hold its contents. This small form factor is not new; it was used for the Apple Mac Mini. Small and sleek, it almost disappears. Indeed, one can mount it behind a flat-screen monitor, where it is out of sight.
The original IBM PC was expandable. It had a motherboard with five expansion slots and room for internal devices like disk drives.
The IBM PC (and the PC clones) needed expansion options. The basic unit was very basic: operable but not necessarily useful. The typical purchase included a video card, a display monitor, a floppy controller card, floppy disk drives, and PC-DOS. (Hard disks would become available with the IBM PC XT, released two years later.)
Other expansion options included additional memory, a real-time clock, terminal emulator cards, and network cards. The PC was a base, a platform, on which to build a tailored system. You added the devices for your specific needs.
IBM designed the PC to be expandable. The idea fit the sensibilities of the time. IBM had sold mainframes and minicomputers with expansion capabilities; its business model was to sell low-end equipment and then "grow" customers into more expensive configurations. Selling a basic PC with expansion options appealed to IBM and was recognized as "the way to buy computing equipment" by businesses. (The model was also used by other computer manufacturers.)
Today, we have different sensibilities for computing equipment. Instead of base units with expansion adapters, we purchase a complete system. Smartphones, tablets, and laptops are self-contained. Video is included in the device. Memory (in sufficient quantity) is included. Storage (also in sufficient quantity) is included. Additional devices, when needed, are handled with USB ports rather than adapter cards.
Desktop PCs, for the most part, still contain expansion slots -- slots that most people, I suspect, never use.
Today's view of a PC is not a platform for expansion but a box that performs computations. The smaller form-factors for PCs fit with this sensibility.
That is the difference with the Mac Mini and the Lenovo M72e. They are not platforms to build systems. They are complete, self-contained systems. When we need additional capacity, we replace them; we do not expand them.
Computers today are bricks that compute.
Labels: expandability, IBM PC, Lenovo, Mac Mini, small-form PCs
Tuesday, July 9, 2013
The Last Picture Show
The PC revolution brought many changes, but the biggest was distribution of computing power. Personal computers smashed the centralization of mainframe processing and allowed individuals to define when (and where, to the modest extent that PCs were portable) computing would be done.
Yet one application (a tool of the PC era, ironically) follows a centralized model. It requires people to come and meet in a single location, and to wait for a single individual to provide data. It is a mainframe model, with central control over the location and time of processing.
That application is PowerPoint.
Or, more generally, it is presentations.
Presentations are the one application that requires people to come together into a single space, at a specific time, and observe a speaker or group of speakers. The attendees have no control over the flow of information, no control over the timing, and no control over the content. (Although they usually have an idea of the content in advance.)
The presentation is a hold-over from an earlier era, when instructors had information, students desired information, and the technology made mass attendance the most efficient form of distribution. Whether it be a college lecture, a sermon at a religious service, or a review of corporate strategy, the model of "one person speaks and everyone listens" has remained unchanged.
Technology is in a position to change that. We're seeing it with online classes and new tools for presentations. Business meetings can be streamed to PCs and tablets, eliminating the need for employees to travel and meet in a large (probably rented) space. Lectures can be recorded and viewed at the leisure of the student. Sermons can be recorded and provided to those who are unable to attend, perhaps due to infirmities or other commitments.
We don't need presentations on large screens (and on large screens only). We need the information, on a large or small screen.
We don't need presentations in real time (and in real time only). We need information at the right time, with the ability to replay sections to clarify questions.
Look for successors to PowerPoint and its colleagues that combine presentation (video and audio) with time-shifting, multiple devices (PCs, web browsers, and tablets), annotations (written and oral), and indexing for private searches.
I think of the new presentation technologies as an enhanced notebook, one with multimedia capabilities.
Monday, July 8, 2013
Microsoft's Future is not Windows
Many Windows users are hostile to Windows 8 and the Modern interface. They want Windows to remain what it is.
I think most people have defined Windows as "the thing that runs Microsoft Office". Graphic artists have defined Windows as "the thing that runs Photoshop". Developers have defined Windows as "the thing that runs Visual Studio". A few shops build turnkey scheduling and billing systems for doctors' offices; a few others build turnkey point-of-sale systems. Those are the bounds of Windows.
Microsoft, on the other hand, views Windows as a product, a source of revenue.
A bounded market is limited. Microsoft knows this, and I suspect that the limited nature of Windows was a motivation for the Surface RT and Windows RT.
This difference between Microsoft and Windows users is the problem: Microsoft wants a growing market, and users want the existing technology. The two are not compatible.
As I see it, Microsoft's path forward is Windows RT, the new operating system that does not run existing Windows applications. Windows 8, with its support for classic Windows apps and its ability to run "Modern" Windows apps, is a transition, a step from the old Windows to the new.
Abandoning the classic Windows API with its collection of applications is not a simple task. Many people rely on various applications. Users are unhappy with the idea of changing to a new platform.
But from Microsoft's view, the revenues (and profits) of classic Windows are limited. The sales of PCs are declining, as consumers switch from desktops to tablets. Businesses have PCs and may replace existing units, but they can also hang on to PCs and buy new ones only when necessary. And keeping the classic Windows product line (Windows, Office, Visual Studio, etc.) alive is expensive. The competition of products like Linux, LibreOffice, and Eclipse puts further limits on revenue. Classic Windows has declining revenue and constant (or growing) expenses.
Microsoft is moving its products to mobile/cloud not because it wants to, not because it delights in the torment of its customers, but because it must. The economics of the market are forcing this change.
Tuesday, July 2, 2013
Limits to growth in mobile apps
Application development, at least in the PC era, has followed a standard pattern: start with a basic application, then issue new versions, each containing more features. Once released, an application has only one direction in terms of complexity: up.
This increase in complexity was possible due to improving technology (powerful processors, more memory, higher screen resolutions, faster network connections) and necessary due to competition (in some markets, the product with the larger checklist of features wins).
The pattern of increasing the complexity of an application has been with us since the very first PC applications (which I consider to be WordStar, Microsoft BASIC, and VisiCalc). I suspect that the pattern existed in the minicomputer and mainframe markets, too.
The early era of microcomputers (before the IBM PC in 1981) had hard limits on resources. Processors were only so fast. You had only so much memory. Hobbyist computers such as the COSMAC ELF had tiny processors and 256 bytes of memory. (Notice that the word "bytes" has no "kilo" or "mega" in front.) The complexity of applications was limited by the hardware and the cleverness of programmers.
PC technology changed the rules. Hardware became powerful enough to handle complex tasks. Moreover, the design of the IBM PC was expandable, and manufacturers provided bigger and more capable machines. The limit to application growth was not the hardware but the capacity of the development team. Programmers needed not to be clever but to work in teams and write easily-understood code.
With the expansion of hardware and the expansion of development teams, the bounding factor for software was the capacity of the development team. This bounding factor eventually changed to the funding for the development team.
With funding as the limiting factor, a company could decide the level of complexity for software. A company could build an application as complex as it wanted. Yes, a company could specify a project more complex than the hardware would allow, but in general companies lived within the "envelope" of technology. That envelope was moving upwards, thanks to the PC's expandable design.
Mobile technology changes the rules again, and requires a new view towards complexity. Mobile phones are getting more powerful, but slowly, and their screens are at a practical maximum. A smartphone screen can be at most about 5 inches, and that is not going to change. (A larger screen pushes the device into tablet territory.)
Tablets, too, are getting more powerful, but slowly, and their screens are also at a practical maximum. A tablet screen can be as large as 10 inches, and that is not going to change. (A larger screen makes for an unwieldy tablet.)
These practical maximums place limits on the complexity of the user interface. Those limits enforce limits on the complexity of the app.
More significantly, the limits in mobile apps come from hardware, not funding. A company cannot assume that it can expand an app forever. The envelope of technology is not moving upwards, and cannot move upwards: the limits are caused by human physiology.
All of this means that the normal process of application development must change. The old pattern of a "1.0 release" with limited functionality and subsequent releases with additional functionality (on a slope that continues upwards) cannot work for mobile apps. Once an app has a certain level of complexity, the process must change. New features are possible only at the expense of old features.
We are back in the situation of the early microcomputers: hard limits on application complexity. Like that earlier era, we will need cleverness to work within the limits. Unlike that earlier era, the cleverness is not in memory allocation or algorithmic sophistication, but in the design of user interfaces. We need cleverness in the user experience, and that will be the limiting factor for mobile apps.
Labels: application design, hardware, limits, mobile apps, project funding