Wednesday, June 20, 2012

The death of PC brands

We're about to see the death of PC brands.

Let's be honest: do we really care about the brand of a PC? Does it matter whether the box is made by HP, Dell, or Asus? We merely want a box that performs computations.

Some people are sensitive to brands. Some are loyal to specific companies, but most of the sensitive-to-brand people are anti-loyal. That is, they single out certain brands of PCs to avoid. You can identify them by their call: "I will never use a brand ABC PC again." Most likely they had a bad experience with that brand at some point in the past.

When choosing to buy a PC, brand, if considered at all, is usually a proxy for quality. If someone has good experiences with a specific brand, they will continue to buy that brand. If they have poor experiences, they will choose other brands.

People can make decisions along these lines because PCs are commodities. Each PC brand (with the notable exception of Apple equipment) is virtually identical. They all run Windows. They all have processors and memory to run Windows and the popular applications. I suspect that when people select a PC, they use the following criteria:

  1. Will it run my software (present and future software)?
  2. Is it a good value (reliable, economical)?
  3. Do I like the color, size, and shape?

I think that these have always been the criteria for selecting computing equipment. But in the early PC days, there was more thought given to the first question.


With today's commodity PCs, the first question is rarely considered. It is assumed that the PC will run the desired software. A few folks with specific processing needs may be sensitive to memory and disk space, and certainly anyone provisioning a server room will ask lots of questions. But for the average consumer (and the average end-user in corporate environments) the average PC will do the job.


In contrast, the early days of PCs saw great variations in hardware. Prior to the IBM PC, each manufacturer had their own architecture and their own software base. An Apple II used a 6502 processor, was able to display color and graphics, and ran Apple DOS. A Radio Shack TRS-80 used a Z-80 (or a 68000), displayed black-and-white text and graphics, and ran TRS-DOS (or Xenix). A Commodore PET used a 6502, displayed black-and-white text and text-based graphics, and ran Microsoft BASIC (called "Commodore BASIC").


The differences between computer brands were significant. Some brands offered cassette storage, others disk. Some offered serial ports, others parallel ports, and yet others none. The keyboards varied from vendor to vendor (and even model to model).


When the hardware varied, committing to a brand committed you to a lot of other capabilities.

Things began to change when IBM introduced the PC. Actually, it was not the introduction of the IBM PC that changed the market, but the IBM PC clones. When IBM introduced their PC, it was just another offering from another vendor, with its own specific capabilities.

It was only the PC clones that made PCs commodities. Once vendors began cloning the PC design, variation between brands dropped. For a while, there was variation in included equipment and portability. (Compaq made its success with portable PC clones.)

And apparently, once the consumers in the market decide that hardware is a commodity, they are loath to return to vendor-specific equipment. IBM tried to introduce vendor-specific equipment with the PS/2, but the special features of the PS/2 were quickly adopted by other manufacturers in their traditional designs. The PS/2 keyboard and mouse connectors were adopted as standards, but the Micro Channel bus was not.

Fast-forward to today, with the rise of smartphones and tablets. Apple has its offerings of the iPhone, iPod, and iPad. Google has pushed Android onto a number of branded devices. Microsoft is pushing Windows 8 onto devices, but the tablets will be brand-less; like the iPhone and iPad, the Windows 8 tablets will simply say "Microsoft". Except for Android, the hardware brand has been absorbed into the software.


I expect that the brandlessness of hardware will spill over into the desktop PC world. It already has for Apple desktop devices. Full-size desktop PCs are commoditized enough, are bland enough, that the manufacturer makes no difference. (Again, this is for the average user. Users with specific needs will be conscious of the brand.) Full-size desktop PCs will lose their brand identity. Small-format desktop PCs, which can be as small as an Altoids tin or even a pack of gum, will lose their brand identity too, or perhaps never gain one to start.


The questions for selecting computing equipment will remain:

  1. Will it run my software?
  2. Is it a good value?
  3. Do I like the color, size, and shape?


But the answer will be one of Apple, Microsoft, or Google.

Sunday, June 17, 2012

Why the Microsoft tablet for Windows 8 is significant

This past week Microsoft announced a tablet for Windows 8.

Microsoft has a checkered history with hardware. Their successful devices include the Kinect, the XBOX, the "melted" keyboard, and the Microsoft Mouse. Failures include the Zune and the Kin, and possibly the original Surface table-top computer. Many people want to know: will the tablet be a success or a failure?

This is the wrong question.

Microsoft had to release a tablet. The market is changing, and the rules of the new market dictate that vendors provide hardware and software.

Microsoft must succeed with the tablet. Perhaps not this particular tablet, but they must succeed with *a* tablet, and probably several tablets.

The old PC market had a model that separated hardware, operating systems, and applications. Various vendors built PCs, Microsoft supplied the operating system, and multiple (non-hardware) vendors supplied application programs. In the early days, one purchased an IBM PC and PC-DOS (supplied to IBM by Microsoft), and Lotus 1-2-3 and WordPerfect. Once manufacturers figured out ways of (legally) building PC clones, one could buy a PC made by IBM, or Compaq, or Dell, or a host of other companies, but the operating system was supplied by Microsoft.

People who are knowledgeable about the history of computing will recognize that the separation of hardware and software originates in an anti-trust lawsuit against IBM, for mainframe software of all things. Yet that decision influenced the marketing arrangements for the original IBM PC, and those arrangements (software separate from hardware) influenced the entire PC market. (To be fair, the pre-IBM PC market had similar arrangements: Radio Shack TRS-80 computers, Apple II computers, and others let you add software -- any compatible software -- to the computer.)

That model endures with today's desktop and laptop PCs. We buy the PC, an operating system (usually Windows) is included, and we add our separately-acquired software. That software may come from any source, be it another company or our internal development shop. We are responsible for configuration and upkeep, and for problems due to incompatible software.

Apple, with the iPhone and iPad, uses a different model: they supply the hardware and the operating system, and while other vendors supply the applications, Apple limits the freedom of users to select application providers. With iTunes, only those apps approved by Apple can get onto an iPhone or iPad. Users cannot select any application, or any provider, or even have them custom-written. They must go through iTunes (and therefore Apple).

The phrase for this arrangement is "walled garden". The environment is a pretty place, but one cannot leave easily. The vendor has erected a wall around the garden, and occupants must remain within the constructed garden.

Walled gardens, especially in tech, have a downside. When leaving one garden for another, one often loses content. You can buy a dozen books for your Kindle. Buy a replacement Kindle and your books are available. Buy instead a Nook, and your books are not available. Barnes and Noble knows nothing about your purchases with Amazon.com, and Amazon.com has no incentive to make it easy (or even possible) to transfer your purchases to another device. While we can leave the garden, we do so only by leaving things behind.

This walled garden model is used by game consoles. Games are made specific to consoles; the XBOX version of "Diablo 3" will run only on XBOX systems, not on Playstations. If you purchase an XBOX and lots of games, and then decide to switch to the Playstation console, you don't get your games in their Playstation form.

We are entering an age of walled gardens. Apple has their iPhone/iPad garden, Amazon.com has theirs, Barnes and Noble is building theirs. With the introduction of the Microsoft tablet, we can see that Microsoft is building theirs. Google has built some infrastructure with Google Docs and the Chromebook laptop/browser.

This new age of walled gardens, of separate kingdoms, requires developers, users, and companies to make choices.

Developers must select the platforms to support. They can focus their efforts onto a limited number of platforms, or they can develop for all of them. But development for each platform requires tools, skills, and time. A large company can invest in multi-platform efforts; a small company with limited resources must choose a subset, possibly forgoing revenue from the omitted market segment.

Users must select their platforms carefully. The cost of changing is high, much higher than changing from a Dell PC to an Asus PC, or changing from a Ford to a Chevy.

Companies must select their platform. They have done so in the past, usually picking Microsoft, the safe choice. (How many times have you heard the phrase "we're a Microsoft shop"?) But now the choice is riskier; there is no safe choice, no one dominant provider (and no one provider that appears ready to become dominant). A company's success will depend not only on its talent and ability to execute business plans, but also upon the success of its platform. A company may do well in its market, yet fail when its technology provider fails.

These are difficult decisions, and they must be made. One cannot defer the selection of platforms; others in the world are moving to new platforms.

To wait is to be left behind.

Sunday, June 10, 2012

Limits to growth

The shift from desktop and web applications to mobile applications entails many changes. There are changes in technology, new services and capabilities, and the integration of apps and the sharing of data. Yet one change that has seen little discussion is the limit on an app's size.

Desktop applications could be as large as we wanted (and sometimes were larger than we wanted). It was easy to add a control to a dialog, or even to add a whole new dialog full of controls. A desktop application could start small and grow, and grow, and grow. After a short time, it was large. And after a little more time, it was "an unmanageable monstrosity".

The desktop PC technology supported this kind of growth. Desktop screens were large, and got larger over time, both in absolute dimensions and in pixel count. The Windows UI philosophy allowed for (and encouraged) the use of dialogs to present information used less frequently than the information in the main window. Thus, application settings could be tucked away and kept out of sight, and users could go about their business without distractions.

But the world of mobile apps is different. I see three constraints on the size of apps.

First is the screen. Mobile apps must run on devices with smaller screens. Cell phones and tablets have screens that are smaller than desktop PC monitors, in both absolute dimensions and in pixel count. One cannot simply transfer a desktop UI to the mobile world; the screen is too small to display everything.

Second is the philosophy of app UI. Mobile apps show a few key pieces of information; desktop apps present dozens of fields and can use multiple dialogs to show more information. Dialogs and settings, encouraged in desktop applications, are discouraged in mobile apps. One cannot simply port a desktop application to the mobile world; the technique of hiding information in dialogs works poorly.

Third is the turnover in technology. Mobile apps are generally client-server apps with heavy processing on servers and minimal presentation on clients. The mobile app platforms change frequently, with new versions of cell phones and new types of devices (tablets and Windows 8 Metro devices). While there is some upward compatibility within product lines (apps from the iPhone will run on the iPad) there is a fair amount of work to make an app run on multiple platforms (such as porting an app from iPhone to Android or Windows 8). Desktop applications had a long, steady technology set for their UI; mobile apps have a technology set that changes quickly.
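To make that client-server split concrete, here is a minimal sketch; the division of labor is the point, and all of the names and data are invented for illustration:

    # A minimal sketch of the thin-client model; names and data are invented.
    # The server owns the data and the heavy computation; the client merely
    # renders a few fields.

    def server_compute_summary(account_id, transactions):
        # Server side: heavy processing over the full dataset.
        balance = sum(amount for _, amount in transactions)
        recent = [description for description, _ in transactions[-3:]]
        return {"account": account_id, "balance": balance, "recent": recent}

    def client_render(summary):
        # Client side: minimal presentation of a few key pieces of information.
        print("Account %s: balance %.2f" % (summary["account"], summary["balance"]))
        for description in summary["recent"]:
            print("  recent: %s" % description)

    history = [("groceries", -54.20), ("salary", 2500.00), ("rent", -900.00)]
    client_render(server_compute_summary("A-100", history))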

This third constraint interests me. The frequent changes in mobile devices and their operating systems mean that app developers have an incentive to update and revise their applications frequently. Certainly one can write an app for an early platform such as iOS 1.0, but then one loses the later functions. And the rise of competing platforms (Android, Windows 8) demands new development efforts, lest you lose those shares of the market.

I expect that technology for mobile devices will continue to evolve at its rapid pace. (As some might say, this rapid pace is "the new normal" for mobile development.)

If mobile devices and operating systems continue to change, then apps will have to change with them. If the changes to devices and operating systems are large (say, voice recognition and gesture detection) then apps will need significant changes.

These kinds of changes will limit the size of a mobile app. One cannot start with a small app and let it grow, and grow, and grow as we did with desktop PC applications. Every so often we will have to re-design the app, re-think our basic assumptions, and re-build it. Mobile apps will remain small because we will be constantly re-writing them.

I recognize that I am building a house of cards here, with various assumptions depending on previous assumptions. So I give you fair warning that I may be wrong. But let's follow this line of thinking just a little further.

If mobile apps must remain small (the user interface portion, at least), and mobile apps become dominant (perhaps not an unreasonable assumption), then any programs that a business uses (word processing, spreadsheets, e-mail, etc.) will have to be small. The world of apps will consist of small UI programs and large server back-ends. (I have given little thought to the changes for server technology and applications, but let's assume that servers can remain large applications in a stable environment.)

If businesses use the dominant form of computing (mobile apps) and those apps must be small, then business processes must change to use information in small, app-sized chunks. We cannot expect the large, complex data entry applications from the desktop to move to mobile computing, and we cannot expect the business processes that use large, complex data structures to run on mobile devices.

Therefore, business processes must change to simplify their data needs. They may split data into smaller pieces, with coordinated apps each handling a small part of a larger dataset. Cooperative apps will allow work to be distributed to multiple workers. Instead of a loan officer who reviews the entire loan, a bank may have several loan analysts performing smaller tasks such as credit history analysis, risk assessment, and interest rate analysis.
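As a sketch of what that decomposition might look like (the check names and thresholds here are invented for illustration), each small check could back its own app, with the server coordinating the individual verdicts:

    # A hypothetical decomposition of one large review into small,
    # app-sized tasks; the names and thresholds are invented.

    LOAN_CHECKS = {
        "credit history": lambda a: a["credit_score"] >= 650,
        "loan risk": lambda a: a["amount"] <= a["income"] * 4,
        "rate analysis": lambda a: a["requested_rate"] >= 0.03,
    }

    def review(application):
        # Each check could be a separate small app used by a separate
        # analyst; the server combines the individual verdicts.
        results = dict((name, check(application))
                       for name, check in LOAN_CHECKS.items())
        return all(results.values()), results

    application = {"credit_score": 700, "amount": 120000,
                   "income": 40000, "requested_rate": 0.035}
    approved, detail = review(application)
    print("approved" if approved else "declined", detail)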

These business changes will shift work from expert-based work to process-based work. Instead of highly trained individuals who know the entire process, a business can use specialists who combine their efforts as needed for each case or business event.

That's quite a change, for a mobile device.


Monday, June 4, 2012

Pendulum or ratchet?

In the beginning, MITS made the Altair 8800, and it was good.

Actually, it was *usable* only by determined hobbyists, and useful for very little. But it was available and purchasable. The computers were stand-alone, and owner/users had to do everything for themselves. It was like being in the first party of colonists: where the first colonists had to chop wood, carry water, grow their own crops, make their own tools, and care for themselves and their families, the early computer owner/users had to build their own equipment and write their own software.

Later came the manufactured units: the Apple II, the Radio Shack TRS-80, the Commodore PET. These were easier to use (just take them out of the box and plug them in) yet you still had to write your own software.

The IBM PC and MS-DOS made things a bit easier (lots of software available on the market), yet the owner/user was still responsible and life was perhaps not a colony but a house on the prairie. And programs (purchased or constructed) could do anything to the computer, including disrupting other programs.

A big advance was made with IBM OS/2 and Windows NT, which were "real" operating systems that truly controlled "user programs". We had left the prairie and were in an actual town!

The next advance was with Java (and later, C#) which created managed environments for programs. Now we were in Dodge City, and you had to check your firearms when you came into town.

Apple gave us the next step, with iOS and iTunes. In this world, all programs must be reviewed and approved by Apple. You can no longer write any program and release it to the market. You cannot even install it on your own equipment! You must go through Apple's gateway, iTunes. Microsoft is following suit with Windows 8 and the Microsoft App store. (Apps in Metro must go through Microsoft, and you can install only operating systems that have been signed by Microsoft.)

All of these changes have been made to improve security. (And let us recognize that Microsoft has been consistently pummeled for exploits against Windows and applications. The incentive for these changes has been the market.)

Yet all of these changes have been moving in one direction: away from the open range and towards the nanny state.

My question is: Are these changes part of the swing of a pendulum, or are they part of a ratchet mechanism? If they are the former, then we can expect a swing back towards freedom (and security problems). If they are part of the latter, then they are here to stay with possibly more restrictions in the future.

Relying on Microsoft (or Apple) to filter out the malware and the bad actors is easy, but it also limits our choices. By allowing a vendor to act as gatekeeper, we give up a degree of control. They may choose to restrict other software in the future, such as software that competes with their products. (Microsoft may restrict the Chrome browser; Apple may restrict office suites. Or anything else they desire to restrict, in favor of their own offerings.)