Biology has the theory of "punctuated equilibrium": long periods of stable configurations of species interspersed with short periods of rapid change. I think we can see similar patterns in technology.
Let's start with the IBM PC. It arrived in 1981 and introduced a new age of computing. While preceded by a number of microcomputer systems (the Apple II, the Radio Shack TRS-80, the Commodore PET, and others), the IBM PC gained wide acceptance and set a new standard for computing hardware. The combination of the IBM PC and PC-DOS was the norm from its introduction until 1990, when Microsoft Windows replaced PC-DOS and, more importantly, networks arrived.
The PC/Windows/network combination maintained dominance from 1990 until just recently. The PC mutated from a desktop device to a laptop device, and Windows changed from its early incarnation to the Windows 95 and later Windows Vista "skins". Networks were the thing that really defined this era of computing.
We now have a new transition, from PC/Windows/network to tablet/cloud/wireless. Each transition requires new ideas for processing, storage, and user interfaces, and this transition is no exception. In the PC/DOS era, the user interface was text, the storage was local, and the processing was local. In the PC/Windows/network era, the user interface was graphical, the storage was networked (reliably), and the processing was local.
In the tablet/cloud/wireless era, the user interface is graphical and oriented to touch, the storage is networked (over an unreliable wireless network), and the processing is remote.
The tech for tablet/cloud/wireless is different from the previous age, and requires a different approach to programming and systems design. Processing in the cloud gives us more capacity; communicating over an unreliable network means that our systems must be opportunistic (process when you can) and patient (wait while you cannot).
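As a sketch of what "opportunistic and patient" might look like in code, here is a hypothetical outbound queue that sends while the connection is up and backs off while it is down. The function names and the connectivity check are invented for illustration; any real system would hang these hooks on its actual network layer.

```python
import time
from collections import deque

def flush_queue(pending, send, is_online, max_wait=8.0):
    """Drain a queue of outbound items over an unreliable link:
    opportunistic (send while we can), patient (wait while we cannot)."""
    wait = 0.5
    while pending:
        if is_online():
            send(pending.popleft())   # opportunistic: process while the link is up
            wait = 0.5                # reset the backoff after a success
        else:
            time.sleep(wait)          # patient: wait, then try again
            wait = min(wait * 2, max_wait)
```

The exponential backoff is one common choice; the point is only that the program's structure must assume the network will come and go.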
The PC/DOS era stood for almost ten years, and the PC/Windows/network era for about twenty. If that trend continues, the tablet/cloud/wireless era will run for a comparable stretch: look for tablet/cloud/wireless to run from roughly 2010 to 2030.
Sunday, July 8, 2012
Wednesday, July 4, 2012
Twitter's mistake
Twitter this week decided to stop feeding tweets to LinkedIn. They still allow LinkedIn to feed tweets to Twitter. I think that these moves are mistakes for Twitter.
When providers of competing services cooperate, the race is to the bottom. That is, to the foundational level, the lowest layer of the system. Microsoft and IBM have been circling each other for years, attempting to cooperate on authentication services. IBM is willing to make RACF work with Active Directory, as long as RACF is the authoritative source and Active Directory is merely a client. Microsoft, in turn, is willing to make Active Directory work with RACF, as long as Active Directory is the authoritative source and RACF is the client. Both IBM and Microsoft want to be the base for authentication, and neither is willing to yield that position to the other.
Similar dances occur in the virtualization world. Microsoft is willing to host Linux in its environments (run by a Windows hypervisor) but is not willing to let Windows run on a foreign hypervisor. (In this specific situation, the dance is asymmetric, as Linux is more than happy to host others or run as a client.)
Twitter, in its move to kick out LinkedIn, has gotten the dance backwards. By turning off the feed to LinkedIn, it has removed itself from the bottom of the hierarchy. (Perhaps "center" is a better description of the desired position among social networks. Let's change our term.)
Twitter and LinkedIn compete, in some sense, in the social network realm. They have different client bases (although with a lot of overlap) and they have different execution models. Twitter users send short (hopefully pithy) blurbs to their followers. LinkedIn users describe themselves and look for business opportunities. Yet both sets of users look for attention, and both Twitter and LinkedIn compete for eyeballs to feed to advertisers.
Okay, that last sentence was a bit more picturesque than I expected. But let's press on.
As I see it, social networks live in an ocean of competitors. They cannot exist on their own -- witness Microsoft's "Live" network that is closed to others. I visit from time to time, but only because I need the Microsoft LiveID. It seems a lonely place.
Social networks like Facebook, Twitter, and LinkedIn tolerate each other. From LinkedIn, I can post messages to my LinkedIn and my Twitter networks. With the Seesmic app, I can post messages to Facebook and Twitter. Sometimes I post a message on only one network; it depends on the tone and content of the message.
The best place for a social network is at the center of a person's attention. (This is a game of attention, after all.) Twitter's move pushes me out of Twitter and encourages me to spend more time in LinkedIn, sending messages to LinkedIn and only occasionally to Twitter. The odds of my sending a Twitter-only message are lower than before.
And that's why I think it was a mistake for Twitter.
Labels: attention economy, linkedin, social networks, Twitter
Sunday, July 1, 2012
Our technology shapes our systems
In the old days, computer programs were fairly linear things. They processed data in a linear fashion, and the source code often appeared in a linear fashion.
Early business applications of computers were accounting applications: general ledger, accounts payable, payroll, inventory, and so on. These systems were often designed with a master file and one or more transaction files. The master file held information about customers, accounts, and inventory, and the transaction files held information about, well, transactions: discrete events that changed something in the master file. (For example, a bank's checking accounts would have balances in the master file, and records in the transaction file would adjust those balances. Deposits would increase a balance; checks and fees would decrease it.)
The files were not stored on modern devices such as USB drives or even floppy disks. In the early days, the master file was on magnetic tape, and the transactions were on punch cards.
The thing about magnetic tape is that you must run through it from beginning to end. (Much like a tour through an Ikea store.) You cannot jump around from one position to another; you must start with the first record, then process the second record, and in sequence process every record until the last.
The same holds for punch cards. Paper punch cards were placed in a hopper and read and processed one at a time.
You might wonder how you can handle processing of accounts with such restrictions in place. One pass through the master file? And only one pass through the transactions? How can we match transactions to master records if we cannot move to the proper record?
The trick was to align the input files, keeping the master file sorted and sorting the transactions before starting the update process. With a bit of thought, you can imagine a system that reads a master record and a transaction record, compares the account numbers on each (both records need a key for matching), and if they match, updates the master record and moves on to the next transaction. If they don't match, the system writes the master record to another tape (the output tape) and runs the comparison again. The algorithm does work (although I have simplified it somewhat), and this was a common model for program design.
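For the curious, here is a minimal sketch of that sequential update, in Python rather than the assembly or COBOL of the era. It keeps the same simplifications as the description above: both "tapes" are already sorted by account number, every transaction matches some master record, and records are hypothetical (key, amount) pairs invented for illustration.

```python
def update_master(masters, transactions):
    """One pass over a sorted master 'tape' and a sorted transaction
    'tape'; yields updated master records to the output 'tape'."""
    masters = iter(masters)
    txns = iter(transactions)
    master = next(masters, None)
    txn = next(txns, None)
    while master is not None:
        key, balance = master
        # Apply every transaction for this account (deposits positive,
        # checks and fees negative), then move on.
        while txn is not None and txn[0] == key:
            balance += txn[1]
            txn = next(txns, None)
        yield (key, balance)          # write to the output tape
        master = next(masters, None)  # advance to the next master record
```

Note that each input is read strictly front to back, exactly once, which is all a tape drive allows.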
The rise of direct-access storage devices and complex data structures has changed programming. As processors became less expensive and more powerful, as programming languages became more expressive and allowed complex data structures such as lists and trees, as memory became available to hold complex data structures in their entirety, our model for programming became more complex. No longer were programs limited to the simple cycle of "read-master, read-transaction, compare, update, write-master, repeat".
Programming in that model (perhaps we could call it the "Transaction Pattern") was easy and low-risk because clever people figured out the algorithm and other people could copy it.
This notion of a common system model is not unique to 1960s-style programming. Microsoft Windows programs, at the API level, follow a specific pattern: a "message pump" loop that retrieves messages from the Windows core and dispatches them to the program's window procedures. Android programs use a similar technique.
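The message-pump shape is easy to sketch. The example below is a deliberately simplified imitation in Python, not the real Win32 API (the actual Windows loop calls GetMessage and DispatchMessage against a thread's message queue); the message names and handlers here are invented for illustration.

```python
import queue

def message_pump(messages, handlers):
    """Fetch a message, dispatch it to its handler, repeat until quit --
    the same loop shape as a Windows program's message pump."""
    log = []
    while True:
        msg, payload = messages.get()      # roughly: GetMessage()
        if msg == "QUIT":                  # roughly: WM_QUIT ends the loop
            break
        handler = handlers.get(msg)
        if handler:
            log.append(handler(payload))   # roughly: DispatchMessage()
    return log
```

As with the 1960s transaction model, the clever part (the loop) is written once, and each program supplies only its handlers.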
Tablet/cloud systems will probably develop one (or perhaps a handful) of common patterns, repeated (perhaps with some variations) for the majority of applications. The trick will be to identify the patterns that let us leverage the platform with minimal thought and risk. Keep your eyes open for common templates for systems. When you find one that works, when you find one that lets lots of people leverage the cleverness of a few individuals, stick with it.
I'm guessing that the system model will not be a purely linear one, as we had in the 1960s. But it may have linear aspects, with message queues serializing transactions and updates.
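One possible shape for those linear aspects: a single worker draining a queue, so that updates to shared state are applied one at a time no matter how many producers feed it. This is a hypothetical sketch of the idea, not any particular platform's API; the store and update format are invented for illustration.

```python
import queue
import threading

def serialized_updater(store):
    """Start a worker that applies (key, delta) updates one at a time,
    serializing all writers through a single queue."""
    updates = queue.Queue()

    def worker():
        while True:
            item = updates.get()
            if item is None:          # sentinel: no more updates
                break
            key, delta = item
            # Only this thread touches the store, so no locks are
            # needed in the producers.
            store[key] = store.get(key, 0) + delta

    t = threading.Thread(target=worker)
    t.start()
    return updates, t
```

Producers simply `put()` updates; the queue imposes the linear order that tapes once imposed for free.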
Labels: data structures, disciplined thought, program design
Wednesday, June 20, 2012
The death of PC brands
We're about to see the death of PC brands.
Let's be honest, do we really care about the brand of PC? Does it matter if the box is made by HP or Dell, or Asus? We merely want a box to perform computations.
Some people are sensitive to brands. Some are loyal to specific companies, but most of the sensitive-to-brand people are anti-loyal. That is, they pick certain brands of PCs and avoid them. You can identify them by their call: "I will never use a brand ABC PC again." Most likely they had a bad experience with the specific brand at some point in the past.
When choosing to buy a PC, brand, if considered at all, is usually a proxy for quality. If someone has good experiences with a specific brand, they will continue to buy that brand. If they have poor experiences, they will choose other brands.
People can make decisions along these lines because PCs are commodities. Each PC brand (Apple being the exception) is virtually identical. They all run Windows. They all have processors and memory to run Windows and the popular applications. I suspect that when people select a PC, they use the following criteria:
- Will it run my software (present and future software)?
- Is it a good value (reliable, economical)?
- Do I like the color, size, and shape?
I think that these have always been the criteria for selecting computing equipment. But in the early PC days, there was more thought given to the first question.
With today's commodity PCs, the first question is rarely considered. It is assumed that the PC will run the desired software. A few folks with specific processing needs may be sensitive to memory and disk space, and certainly anyone provisioning a server room will ask lots of questions. But for the average consumer (and the average end-user in corporate environments) the average PC will do the job.
In contrast, the early days of PCs saw great variations in hardware. Prior to the IBM PC, each manufacturer had their own architecture and their own software base. An Apple II used a 6502 processor, was able to display color and graphics, and ran Apple DOS. A Radio Shack TRS-80 used a Z-80 (or a 68000), displayed black-and-white text and graphics, and ran TRS-DOS (or Xenix). A Commodore PET used a 6502, displayed black-and-white text and text-based graphics, and ran Microsoft BASIC (called "Commodore BASIC").
The differences between computer brands were significant. Some brands offered cassette storage, others disk. Some offered serial ports, others parallel ports, and yet others none. The keyboards varied from vendor to vendor (and even model to model).
When the hardware varied, committing to a brand committed you to a lot of other capabilities.
Things began to change when IBM introduced the PC. Actually, it was not the introduction of the IBM PC that changed the market, but the IBM PC clones. When IBM introduced their PC, it was another offering from another vendor with specific capabilities.
It was only the PC clones that made PCs commodities. Once vendors began cloning the PC design, variation between brands dropped. For a while, there was variation in included equipment and portability. (Compaq made its success with portable PC clones.)
And apparently, once the consumers in a market decide that hardware is a commodity, they are loath to return to vendor-specific equipment. IBM tried to reintroduce vendor-specific equipment with the PS/2, but the special features of the PS/2 were quickly adopted by other manufacturers in their traditional designs. The PS/2 keyboard and mouse connectors were adopted as standards, but the Micro Channel bus was not.
Fast-forward to today, with the rise of smartphones and tablets. Apple has its offerings of the iPhone, iPod, and iPad. Google has pushed Android onto a number of branded devices. Microsoft is pushing Windows 8 onto devices, but the tablets will be brand-less; like the iPhone and iPad, the Windows 8 tablets will simply say "Microsoft". The hardware brand has been absorbed into the software, except for Android.
I expect that the brandlessness of hardware will spill over into the desktop PC world. It already has for Apple desktop devices. Full-size desktop PCs are commoditized enough, are bland enough, that the manufacturer makes no difference. (Again, this is for the average user. Users with specific needs will be conscious of the brand.) Full-size desktop PCs will lose their brand identity. Small-format desktop PCs, which can be as small as an Altoids tin or even a pack of gum, will lose their brand identity too, or perhaps never gain one to start.
The questions for selecting computing equipment will remain:
- Will it run my software?
- Is it a good value?
- Do I like the color, size, and shape?
But the answer will be one of Apple, Microsoft, or Google.
Sunday, June 17, 2012
Why the Microsoft tablet for Windows 8 is significant
This past week Microsoft announced a tablet for Windows 8.
Microsoft has a checkered history with hardware. Its successful devices include the Kinect, the XBOX, the "melted" keyboard, and the Microsoft Mouse. Failures include the Zune and the Kin, and possibly the Surface. Many people want to know: will the tablet be a success or a failure?
This is the wrong question.
Microsoft had to release a tablet. The market is changing, and the rules of the new market dictate that vendors provide hardware and software.
Microsoft must succeed with the tablet. Perhaps not this particular tablet, but they must succeed with *a* tablet, and probably several tablets.
The old PC market had a model that separated hardware, operating systems, and applications. Various vendors built PCs, Microsoft supplied the operating system, and multiple (non-hardware) vendors supplied application programs. In the early days, one purchased an IBM PC and PC-DOS (supplied to IBM by Microsoft), and Lotus 1-2-3 and WordPerfect. Once manufacturers figured out ways of (legally) building PC clones, one could buy a PC made by IBM, or Compaq, or Dell, or a host of other companies, but the operating system was supplied by Microsoft.
People who know the history of computing will recognize that the separation of hardware and software originates in an anti-trust lawsuit against IBM -- for mainframe software, of all things. Yet that decision influenced the marketing arrangements for the original IBM PC, and those arrangements (software separate from hardware) influenced the entire PC market. (To be fair, the pre-IBM PC market had similar arrangements: Radio Shack TRS-80 computers, Apple II computers, and others let you add software -- any compatible software -- to the computer.)
That model endures with today's desktop and laptop PCs. We buy the PC, an operating system (usually Windows) is included, and we add our separately-acquired software. That software may come from any source, be it another company or our internal development shop. We are responsible for configuration and upkeep, and for problems due to incompatible software.
Apple, with the iPhone and iPad, uses a different model: they supply the hardware and the operating system, and while other vendors supply the applications, Apple limits the freedom of users to select application providers. With iTunes, only those apps approved by Apple can get onto an iPhone or iPad. Users cannot select any application, or any provider, or even have them custom-written. They must go through iTunes (and therefore Apple).
The phrase for this arrangement is "walled garden". The environment is a pretty place, but one cannot leave easily. The vendor has erected a wall around the garden, and occupants must remain within the constructed garden.
Walled gardens, especially in tech, have a downside. When leaving one garden for another, one often loses content. You can buy a dozen books for your Kindle. Buy a replacement Kindle and your books are available. Buy instead a Nook, and your books are not available. Barnes and Noble knows nothing about your purchases with Amazon.com, and Amazon.com has no incentive to make it easy (or even possible) to transfer your purchases to another device. While we can leave the garden, we do so only by leaving things behind.
This walled garden model is used by game consoles. Games are made specific to consoles; the XBOX version of "Diablo 3" will run only on XBOX systems, not on Playstations. If you purchased an XBOX and lots of games, and then you decide to switch to the Playstation console, you don't get all the games available in their Playstation form.
We are entering an age of walled gardens. Apple has their iPhone/iPad garden, Amazon.com has theirs, Barnes and Noble is building theirs. With the introduction of the Microsoft tablet, we can see that Microsoft is building theirs. Google has built some infrastructure with Google Docs and the Chromebook laptop/browser.
This new age of walled gardens, of separate kingdoms, requires developers, users, and companies to make choices.
Developers must select the platforms to support. They can focus their efforts onto a limited number of platforms, or they can develop for all of them. But development for each platform requires tools, skills, and time. A large company can invest in multi-platform efforts; a small company with limited resources must choose a subset, possibly forgoing revenue from the omitted market segment.
Users must select their platforms carefully. The cost of changing is high, much higher than changing from a Dell PC to an Asus PC, or changing from a Ford to a Chevy.
Companies must select their platform. They have done so in the past, usually picking Microsoft, the safe choice. (How many times have you heard the phrase "we're a Microsoft shop"?) But now the choice is riskier; there is no safe choice, no one dominant provider (and no one provider that appears ready to become dominant). A company's success will depend not only on its talent and ability to execute business plans, but also upon the success of its platform. A company may do well in its market, yet fail when its technology provider fails.
These are difficult decisions, and they must be made. One cannot defer the selection of platforms; others in the world are moving to new platforms.
To wait is to be left behind.
Microsoft has a checkered history of hardware. Their successful devices include the Kinect, the XBOX, the "melted" keyboard, and the Microsoft Mouse. Failures include the Zune and the Kin, and possibly the Surface. Many people want to know: will the tablet be a success or a failure?
This is the wrong question.
Microsoft had to release a tablet. The market is changing, and the rules of the new market dictate that vendors provide hardware and software.
Microsoft must succeed with the tablet. Perhaps not this particular tablet, but they must succeed with *a* tablet, and probably several tablets.
The old PC market had a model that separated hardware, operating systems, and applications. Various vendors built PCs, Microsoft supplied the operating system, and multiple (non-hardware) vendors supplied application programs. In the early days, one purchased an IBM PC and PC-DOS (supplied to IBM by Microsoft), and Lotus 1-2-3 and WordPerfect. Once manufacturers figured out ways of (legally) building PC clones, one could buy a PC made by IBM, or Compaq, or Dell, or a host of other companies, but the operating system was supplied by Microsoft.
People who are knowledgeable of the history of computing will recognize that the separation of hardware and software originates in an anti-trust lawsuit against IBM, for mainframe software of all things. Yet that decision influenced the marketing arrangements for the original IBM PC, and those arrangements (software separate from hardware) influenced the entire PC market. (To be fair, the pre-IBM PC market had similar arrangements: Radio Shack TRS-80 computers, Apple II computers, and others let you add software -- any compatible software -- to the computer.)
That model endures with today's desktop and laptop PCs. We buy the PC, an operating system (usually Windows) is included, and we add our separately-acquired software. That software may come from any source, be it another company or our internal development shop. We are responsible for configuration and upkeep, and for problems due to incompatible software.
Apple, with the iPhone and iPad, uses a different model: they supply the hardware and the operating system, and while other vendors supply the applications, Apple limits the freedom of users to select application providers. With iTunes, only those apps approved by Apple can get onto an iPhone or iPad. Users cannot select any application, or any provider, or even have them custom-written. They must go through iTunes (and therefore Apple).
The phrase for this arrangement is "walled garden". The environment is a pretty place, but one cannot leave easily. The vendor has erected a wall around the garden, and occupants must remain within the constructed garden.
Walled gardens, especially in tech, have a downside. When leaving one garden for another, one often loses content. You can buy a dozen books for your Kindle. Buy a replacement Kindle and your books are available. Buy instead a Nook, and your books are not available. Barnes and Noble knows nothing about your purchases with Amazon.com, and Amazon.com has no incentive to make it easy (or even possible) to transfer your purchases to another device. While we can leave the garden, we do so only by leaving things behind.
This walled garden model is used by game consoles. Games are tied to specific consoles; the Xbox version of "Diablo 3" will run only on Xbox systems, not on PlayStations. If you purchased an Xbox and lots of games, and then decide to switch to the PlayStation console, your Xbox games do not come with you; you must buy them again in their PlayStation versions.
We are entering an age of walled gardens. Apple has their iPhone/iPad garden, Amazon.com has theirs, Barnes and Noble is building theirs. With the introduction of the Microsoft tablet, we can see that Microsoft is building theirs. Google has built some infrastructure with Google Docs and the Chromebook laptop/browser.
This new age of walled gardens, of separate kingdoms, requires developers, users, and companies to make choices.
Developers must select the platforms to support. They can focus their efforts onto a limited number of platforms, or they can develop for all of them. But development for each platform requires tools, skills, and time. A large company can invest in multi-platform efforts; a small company with limited resources must choose a subset, possibly forgoing revenue from the omitted market segment.
Users must select their platforms carefully. The cost of changing is high, much higher than changing from a Dell PC to an Asus PC, or changing from a Ford to a Chevy.
Companies must select their platform. They have done so in the past, usually picking Microsoft, the safe choice. (How many times have you heard the phrase "we're a Microsoft shop"?) But now the choice is riskier; there is no safe choice, no one dominant provider (and no one provider that appears ready to become dominant). A company's success will depend not only on its talent and ability to execute business plans, but also upon the success of its platform. A company may do well in its market, yet fail when its technology provider fails.
These are difficult decisions, and they must be made. One cannot defer the selection of platforms; others in the world are moving to new platforms.
To wait is to be left behind.
Labels: anti-trust, business models, competition, Microsoft, tablet, walled garden
Sunday, June 10, 2012
Limits to growth
The shift from desktop and web applications to mobile applications entails many changes. There are changes in technology, new services and capabilities, and integration of apps and the sharing of data. Yet one change that has seen little discussion is the limits of an app's size.
Desktop applications could be as large as we wanted (and sometimes were larger than we wanted). It was easy to add a control to a dialog, or even to add a whole new dialog full of controls. A desktop application could start small and grow, and grow, and grow. After a short time, it was large. And after a little more time, it was "an unmanageable monstrosity".
The desktop PC technology supported this kind of growth. Desktop screens were large, and got larger over time. (Both in terms of absolute dimensions and pixel count.) The Windows UI philosophy allowed for (and encouraged) the use of dialogs to present information used less frequently than the information in the main window. Thus, application settings could be tucked away and kept out of sight, and users could go about their business without distractions.
But the world of mobile apps is different. I see three constraints on the size of apps.
First is the screen. Mobile apps must run on devices with smaller screens. Cell phones and tablets have screens that are smaller than desktop PC monitors, in both absolute dimensions and in pixel count. One cannot simply transfer a desktop UI to the mobile world; the screen is too small to display everything.
Second is the philosophy of app UI. Mobile apps show a few key pieces of information; desktop apps present dozens of fields and can use multiple dialogs to show more information. Dialogs and settings, encouraged in desktop applications, are discouraged in mobile apps. One cannot simply port a desktop application to the mobile world; the technique of hiding information in dialogs works poorly.
Third is the turnover in technology. Mobile apps are generally client-server apps with heavy processing on servers and minimal presentation on clients. The mobile app platforms change frequently, with new versions of cell phones and new types of devices (tablets and Windows 8 Metro devices). While there is some upward compatibility within product lines (apps from the iPhone will run on the iPad) there is a fair amount of work to make an app run on multiple platforms (such as porting an app from iPhone to Android or Windows 8). Desktop applications had a long, steady technology set for their UI; mobile apps have a technology set that changes quickly.
This third constraint interests me. The frequent changes in mobile devices and their operating systems mean that app developers have incentive to update and revise their applications frequently. Certainly one can write an app for the earliest platforms such as iOS 1.0, but then you forgo later functions. And the rise of competing platforms (Android, Windows 8) means new development efforts, or you lose those shares of the market.
I expect that technology for mobile devices will continue to evolve at its rapid pace. (As some might say, this rapid pace is "the new normal" for mobile development.)
If mobile devices and operating systems continue to change, then apps will have to change with them. If the changes to devices and operating systems are large (say, voice recognition and gesture detection) then apps will need significant changes.
These kinds of changes will limit the size of a mobile app. One cannot start with a small app and let it grow, and grow, and grow as we did with desktop PC applications. Every so often we have to re-design the app, re-think our basic assumptions, and re-build it. Mobile apps will remain small because we will be constantly re-writing them.
I recognize that I am building a house of cards here, with various assumptions depending on previous assumptions. So I give you fair warning that I may be wrong. But let's follow this line of thinking just a little further.
If mobile apps must remain small (the user interface portion at least), and mobile apps become dominant (perhaps not unreasonable), then any programs that a business uses (word processing, spreadsheets, e-mail, etc.) will have to be small. The world of apps will consist of small UI programs and large server back-ends. (Although I have given little thought to the changes for server technology and applications. But let's assume that they can be large apps in a stable environment.)
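A sketch may make that split concrete. The example below is purely illustrative, written in Python with invented function names (`server_summarize`, `client_render`); in a real system the two halves would sit on opposite sides of an unreliable wireless link, not in one process.

```python
# Hypothetical sketch of the "small client, large server" split.
# The server side owns all business logic and is free to grow;
# the client only formats a request and renders the response.

def server_summarize(document: str) -> dict:
    """Server back-end: the 'large' part of the app."""
    words = document.split()
    return {
        "words": len(words),
        "longest": max(words, key=len) if words else "",
    }

def client_render(summary: dict) -> str:
    """Mobile client: the 'small' part -- presentation only."""
    return f"{summary['words']} words (longest: {summary['longest']})"

# In a real app the dict would travel as JSON over the network;
# here the call is local so the sketch stays runnable.
print(client_render(server_summarize("the quick brown fox")))
```

Note that all future growth lands in `server_summarize`; the client-side function can stay a few lines no matter how elaborate the back-end becomes.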
If businesses use the dominant form of computing (mobile apps) and those apps must be small, then business processes must change to use information in small, app-sized chunks. We cannot expect the large, complex data entry applications from the desktop to move to mobile computing, and we cannot expect the business processes that use large, complex data structures to run on mobile devices.
Therefore, business processes must change, to simplify their data needs. They may split data into smaller pieces, with coordinated apps each handling small parts of a larger dataset. Cooperative apps will allow for work to be distributed to multiple workers. Instead of a loan officer who reviews the entire loan, a bank may have several loan analysts performing smaller tasks such as credit history analysis, loan risk, interest rate analysis, and such.
These business changes will shift work from expert-based work to process-based work. Instead of highly trained individuals who know the entire process, a business can use specialists that combine their efforts as needed for each case or business event.
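To picture that decomposition, here is a hypothetical sketch in Python. Every task name, threshold, and field (the 650 credit cutoff, the 4x income ratio, the rates) is invented for illustration; the point is only the shape: several small, app-sized tasks combined by a server-side coordinator.

```python
# Hypothetical decomposition of a loan review into small tasks.
# Each function is the kind of narrow job a single specialist's
# mobile app could present and complete on its own.

def credit_history_ok(score: int) -> bool:
    return score >= 650            # invented threshold

def loan_risk_ok(amount: float, income: float) -> bool:
    return amount <= income * 4    # invented ratio

def interest_rate(score: int) -> float:
    return 7.0 if score < 700 else 5.5  # invented rates

def review_loan(applicant: dict) -> dict:
    """Server-side coordinator that combines the small tasks."""
    approved = (credit_history_ok(applicant["score"])
                and loan_risk_ok(applicant["amount"],
                                 applicant["income"]))
    return {
        "approved": approved,
        "rate": interest_rate(applicant["score"]) if approved else None,
    }
```

The coordinator, not any one app, holds the whole picture; each analyst's app needs only the inputs and output of its own small function.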
That's quite a change, for a mobile device.
Monday, June 4, 2012
Pendulum or ratchet?
In the beginning, Altair made the 8800, and it was good.
Actually, it was *usable* only by determined hobbyists, and usable for very little. But it was available and purchasable. The computers were stand-alone, and owner/users had to do everything for themselves. It was similar to being part of the first party in a colony. Where the first colonists had to chop wood, carry water, grow their own crops, make their own tools, and care for themselves and their families, the early computer owner/users had to build their own equipment and write their own software.
Later came the manufactured units: the Apple II, the Radio Shack TRS-80, the Commodore PET. These were easier to use (just take them out of the box and plug them in) yet you still had to write your own software.
The IBM PC and MS-DOS made things a bit easier (lots of software available on the market), yet the owner/user was still responsible and life was perhaps not a colony but a house on the prairie. And programs (purchased or constructed) could do anything to the computer, including disrupting other programs.
A big advance was made with IBM OS/2 and Windows NT, which were "real" operating systems that truly controlled "user programs". We had left the prairie and were in an actual town!
The next advance was with Java (and later, C#) which created managed environments for programs. Now we were in Dodge City, and you had to check your firearms when you came into town.
Apple gave us the next step, with iOS and iTunes. In this world, all programs must be reviewed and approved by Apple. You can no longer write any program and release it to the market. You cannot even install it on your own equipment! You must go through Apple's gateway, iTunes. Microsoft is following suit with Windows 8 and the Microsoft App store. (Apps in Metro must go through Microsoft, and you can install only operating systems that have been signed by Microsoft.)
All of these changes have been made to improve security. (And let us recognize that Microsoft has been consistently pummeled for exploits against Windows and applications. The incentive for these changes has been the market.)
Yet all of these changes have been moving in one direction: away from the open range and towards the nanny state.
My question is: Are these changes part of the swing of a pendulum, or are they part of a ratchet mechanism? If they are the former, then we can expect a swing back towards freedom (and security problems). If they are part of the latter, then they are here to stay with possibly more restrictions in the future.
Relying on Microsoft (or Apple) to filter out the malware and the bad actors is easy, but it also limits our choices. By allowing a vendor to act as gatekeeper, we give up a degree of control. It is possible that they may choose to restrict other software in the future, such as software that competes with their products. (Microsoft may restrict the Chrome browser, Apple may restrict office suites. Or anything else they desire to restrict, in favor of their own offerings.)