Wednesday, April 8, 2009

The EDVAC leap

New technologies often allow us new ways of doing work. Many times, we see the new way only after the technology is developed.

Two early computing devices were ENIAC and EDVAC. Both were created in the nineteen-forties and used to compute ballistics tables for the US Army. ENIAC was created first, as an electronic calculating machine. Previous calculating machines were either mechanical (all gears, rods, and levers such as Babbage's Difference Engine) or electromechanical (electrical relays but not vacuum tubes).

EDVAC followed ENIAC and was quite different.

ENIAC was an electronic version of a mechanical calculating machine. Mechanical calculators used rings to hold digit values. Think of them as fancy versions of old-style car odometers. ENIAC used electronic circuits to duplicate the physical rings of those machines. It used decimal arithmetic, with each digit having one of ten possible values. It was a "direct port" of the electromechanical design into electronics.

EDVAC used a different design, one made possible by electronics. EDVAC did not duplicate the ten-state rings; it used binary values and two-state memory digits (today known as 'bits'). It had other innovations too. One can draw a sharp line between ENIAC and EDVAC and label the early part "calculators" and the later part "computers".
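As a rough illustration of the difference between the two storage schemes (a modern sketch, of course, not anything resembling how either machine was actually built or programmed):

```python
# A rough, modern sketch of the two storage schemes -- not actual
# ENIAC or EDVAC code, just an illustration of the idea.

def to_decimal_rings(n, num_rings=10):
    """ENIAC-style: each 'ring' holds one of ten values (a decimal digit)."""
    rings = []
    for _ in range(num_rings):
        rings.append(n % 10)
        n //= 10
    return rings  # least significant digit first

def to_binary_bits(n, num_bits=12):
    """EDVAC-style: each memory element holds one of two values (a bit)."""
    bits = []
    for _ in range(num_bits):
        bits.append(n % 2)
        n //= 2
    return bits  # least significant bit first

# Storing the number 1945: ten ten-state rings versus twelve two-state bits.
print(to_decimal_rings(1945))  # [5, 4, 9, 1, 0, 0, 0, 0, 0, 0]
print(to_binary_bits(1945))    # [1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0]
```

The point of the sketch: a ten-state element is easy to build with gears but awkward with vacuum tubes, while a two-state element is exactly what a tube (on or off) provides naturally.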

But perhaps this experience tells us about our inventive skills. We use a new technology to build a new version of something, duplicating the design. Then we invent new ways to use the technology, creating new designs that were not possible with the old technology. With electronic calculators, we had to build ENIAC first. We could not go straight to EDVAC.

Another example is automobiles. When first created, they were close replications of carriages. The term "horseless carriage" really meant a horseless carriage! The initial design was a carriage that could move about without the aid of a horse. We gained knowledge about motors and their application to carriages, and made design changes. The automobile grew out of the horseless carriage, providing benefits that we could not see prior to the invention of the horseless carriage.

When confronted with a new technology, we do not see the range of possible applications and the potential uses. (OK, that's a pretty trite observation.) But here's the thing: We need the new technology in place, and then we can have the vision of new devices, devices that break from traditional design. The new designs are very different from the old.

This is the EDVAC leap. (The name is somewhat arbitrary. I wanted to give EDVAC some recognition.) We can understand a new technology and then leap to a new set of designs, devices, and uses.

Electronic calculators have been done. Automobiles have been done. Which is not to say that they are finished, but that we've had the EDVAC leap for them and learned how to use those technologies. Other new technologies may lead to other EDVAC leaps:

Electric/battery-powered cars
Segways
Social networks
Twitter
Wearable computers
WiMax and ubiquitous connectivity

Cell phones were originally cordless phones with a much larger range. Now they let us send and receive text messages and pictures, store information, and play TV shows. Who knows what new things these technologies will let us do? All we need is the EDVAC leap.


Tuesday, April 7, 2009

Little Earthquakes (part II)

When introducing a set of changes to a web site, one can add them all at once, or slowly over a period of time. The former I consider the "big earthquake" method: nothing changes for a long time, and then suddenly everything moves!

The other approach is what I call "little earthquakes": small changes over a period of time. The end result is the same set of changes, but each change is smaller and more easily absorbed.

Your selection between these two methods has an effect on advertising.

Forgoing a large set of changes means forgoing the fanfare of the "new look". You cannot stick a label on your site with the words "new and improved" when you make minor changes to navigation. Or the next week when you fix some defects in your web application. If you do not have a large set of changes, the "new look" tag makes no sense.

A fair amount of advertising is based on "new and improved". It works for some things. I'm not sure that it works for web sites, or that it is even helpful to web sites.

With physical products like cars and detergent, the phrase "new and improved" may be effective in gaining new customers. Cars are durable goods and are purchased infrequently. The phrase "new" may spur people to purchase a car earlier than necessary. Detergent is a commodity. One soap is pretty much the same as another, and we tend to use the same amount of soap over time -- we don't increase our soap consumption because the soap improves. The label "improved" may encourage a customer to switch brands but it won't increase overall consumption.

Web sites are neither durable goods nor commodities. They are services, not products, and tend to be complex and non-substitutable. (LiveJournal is not the same as FaceBook, and neither are the same as LinkedIn.) The phrase "new and improved" works poorly for web sites: it doesn't make an existing customer purchase more of the service, nor does it convert a customer from another web site to yours. It *may* encourage people who are not using web sites to sign up with yours, but I suspect new sign-ups come from any advertising, not specifically "new and improved" advertising.

Exception: If you add a feature to your web site, something new, you may want to provide some "new services" fanfare. But only in the case of truly new services. Don't cry "new and improved" because you changed the colors or the shape of buttons.

Don't get me wrong: I am not advocating stasis. I am not saying that you should make no changes. I am recommending small changes over time.

If big earthquakes do nothing to gain new customers or increase sales, is there any harm to them? I think there is, in that they make things difficult for your existing customers. I find it easier to work with smaller changes; I learn them one at a time and I retain confidence in my abilities to use the web site. I find large, sweeping changes harder to work with.

So if big earthquakes gain you no business and possibly irritate your customers, and little earthquakes gain you no business (you aren't advertising them) but cost you no customers, which do you use -- and more importantly, what actions will gain you customers?

More on the "gaining customers" later. For now, I'm looking for little earthquakes.

Wednesday, April 1, 2009

Little Earthquakes (part I)

Those of us who use Google (which may be everyone) and those of us who pay close attention to Google's web site (which may be a smaller number) know that from time to time Google adjusts their logo. The "Google" that appears on the web page is often decorated in the style of the day: green on St. Patrick's Day, fireworks on July 4th, and so on.

While these changes appear whimsical and without purpose, that may not be the case. Changing the design of the Google logo can serve a purpose beyond the amusement of their users.

Changing the logo (really, changing the image file that holds the logo) requires a process to update their web site. At the very least, they must replace one image file with a second image file. (There are other ways of updating the web site, such as changing the HTML on the home page to refer to a different image file. The result, a process to update files on their web server, is the same.)
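The update step itself can be tiny. A minimal sketch of such a file swap (all file names and the "web root" here are invented for illustration; Google's actual deployment process is certainly far more elaborate):

```python
# Hypothetical logo-swap step: replace the live logo image with a
# decorated holiday version, keeping a copy for rollback.
# All file names and the "web root" are invented for illustration.
import shutil
from pathlib import Path

webroot = Path("/tmp/demo-webroot/images")
webroot.mkdir(parents=True, exist_ok=True)

# Stand-ins for the everyday logo and the decorated one.
(webroot / "logo.png").write_text("plain logo")
holiday = Path("/tmp/logo-holiday.png")
holiday.write_text("holiday logo")

# The update step: keep a rollback copy, then publish the new image.
shutil.copy(webroot / "logo.png", webroot / "logo.png.bak")
shutil.copy(holiday, webroot / "logo.png")
```

The swap is trivial; what matters is that doing it safely on a production server requires a tested, repeatable process.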

Most organizations isolate their production web servers (the ones that customers actually see, not their internal test servers) and carefully guard against changes. They limit access to one or a few teams. They put procedures in place to review all changes and test them prior to "moving them to production". They create an environment which discourages changes to web servers, probably out of fear that a change may cause a failure.

The result is that their web sites change infrequently, and when they do change, the changes usually arrive as a batch that has been tested in advance and has been patiently waiting for "update day".

This kind of thinking was (and is) used with other computer systems: PCs, minicomputers, and big mainframes. It is not tied to a technology but derived from managers' confidence (or lack thereof) in their people.

Indeed, this kind of thinking extends beyond the IT industry. Magazines will revise their layout, changing the "look" of the publication. They spiff up (or slim down) the table of contents, add (or remove) graphic elements, and change the typeface used for articles. They make these changes all at once, usually with some fanfare about their new look. Even the magazine Fast Company, which espouses 'fast' principles and decries the 'old' way of doing business, uses this "all redesign at once" philosophy.

Google takes a different approach. Rather than hold all changes for one big update, Google makes small changes over a period of time. It updates a single application one week, and then updates another the following week. Google changes their web site slowly, in small increments. A magazine could do the same thing, by changing the typeface in one issue, adding graphic elements in a subsequent issue, and revising the table of contents in a third issue.

Google's approach is the "little earthquakes" method. Rather than one big earthquake of short duration but large magnitude, Google accepts smaller shifts over a period of time. I think that this works well for them, and for users. (I am a user of their applications.) Google benefits by getting feedback on changes earlier and with fewer risks per update. I benefit by having to learn fewer changes at any one time.

To make this happen, Google needs a reliable process for updates. It must have a process that it can count on. And the best way to create a process that one can count on is to run the process frequently.

So Google's "whimsical" changes to its logo (each of which requires an update process) are a way for Google to test its process and see that updates are easy to deploy. And maybe that is not such a whimsical idea.


Sunday, March 22, 2009

New tech and new hazards

With the invention of any new technology, we have the invention of new hazards. The invention of the train is coupled to the invention of the train wreck. The invention of the airplane is coupled with the invention of the plane crash.

These inventions occur at exactly the same time. We focus on the "good" aspect of the invention and tend to ignore the "bad" aspect of an invention.

But the "good" and "bad" attributes of an invention are our construction. The invention is a change in technology, a change in our use of tools. We assign the notions of "good" and "bad" to different uses and use cases. So the invention of e-mail is "good", but the use of e-mail for spam is "bad".

If the inventor of a technology is credited with the goodness that comes from an invention, should he be blamed for the badness that comes from it? For the most part, we cheer the inventor. Thomas Edison is praised for his work on incandescent light and power distribution grids (among other things) but not blamed for electrocutions, power failures, or longer hours in the office (possible with cheap, efficient light sources).

We have a bias towards the inventor, at least when the invention has benefits, and that may be a good thing. We want inventors. We want innovation. Our ability to use tools, starting with the plough, has made life more comfortable for many people, and allowed us to expand the human population on the planet. (Some will consider this a bad thing. I argue that comfort and increased population are good things, if done in a sustainable manner.)

We also have an almost automatic disdain for people who create things that have few or no perceived benefits for society. Robert Morris is remembered as the author of a nasty computer worm program, regardless of his other work. His construct did much damage to computer systems, but more horrifying than that, he brought the specters from science fiction into real life. The event destroyed our perception of computers as reliable and safe, and we Do Not Like Having Our Delusions Removed.

Morris and Edison did the same thing: they created new things. The asymmetry is in our perception of the effects of their work on society. Edison created things that individuals find useful and we as a society agree are good things. Morris created something that most folks do not find useful and we as a society have agreed is a bad thing. (Malware writers may find his work useful and even inspiring, but they are a small segment of society.)

With any new software, we construct the thing and the hazards of the thing. With e-mail comes unwanted spam e-mail. With the invention of the "Reply All" button we have the hazard of foolishly worded responses sent to more people than we intend. The spreadsheet brings us fast calculations and fast incorrect calculations.

If we're lucky, we create software with more good things than bad things.


Sunday, February 22, 2009

Lock-in and customer value

I recently read Breaking Windows by David Bank. It tells the story of the Microsoft anti-trust trial and some of the activities that led up to it.

One of the strategies that Microsoft used (and still uses, perhaps) is the interlocking of products. That is, designing Microsoft products to work well with other Microsoft products and poorly (or not at all) with competing products. You can see the result of this strategy with the compatibility of Microsoft products. MS SourceSafe uses MS SQL Server (but not other databases), MS Outlook connects to MS Exchange but not other e-mail servers, and Windows uses ActiveDirectory for authentication but not generic LDAP servers (at least not without third-party products).

In Breaking Windows, Bank claims that this strategy is pushed by Bill Gates. Gates was not satisfied with the strategy of winning market share by having the best product; he feared that a newer, better product could take market share. Gates wanted lock-in, a strategy that made it expensive (and difficult) for a customer to switch to a competing product, whether that product was the browser, the word processor, the database, or anything.

And the strategy seems to have worked. Microsoft gained (and still has) a dominant market share. MS Office leads the field, despite competing products offering compatible file formats. Its weakest area might be the browser, and even there Microsoft still has eighty percent of the market.

But it strikes me that Microsoft may have missed some nuances of this strategy. I can see situations in which Microsoft loses market share. The problem is that using Microsoft technologies is an all-or-nothing deal. You have to buy into the entire stack, or leave the entire suite of Microsoft products out in the cold. Even shops that use multiple technologies (Microsoft and IBM, or Microsoft and Sun) are usually two well-separated operations, not a single unified shop.

Here's how the interlocking, all-or-nothing strategy hurts Microsoft:

First, non-Microsoft shops are reluctant to switch to Microsoft. Reluctant to switch anything, since any product becomes the "camel's nose under the tent". Not surprisingly, Microsoft has used this approach to get themselves into non-Microsoft shops. But I'm sure that some shops have issued dictums to the effect of "no Microsoft technology... at all". Microsoft, by sticking to their approach, has alienated these shops -- and lost their business.

Second, Microsoft shops, once they start migrating to another technology (despite the interlocking effort by Microsoft), are likely to dump all of the Microsoft technology. This depends a great deal on the specific starting point: a shop can more easily replace MS Office with OpenOffice than it can replace ActiveDirectory with LDAP. But once the "new tech" has been accepted, and people see that other Microsoft products do not work with it, I think the tendency is to consider the Microsoft technologies as "legacy" and less important than the shiny new stuff. (Probably not a rational approach, but much in our industry is not rational.)

Once people see that OpenOffice does the job, they are more willing to accept Linux. And then LDAP for authentication. And MySQL or SQLite databases.

And the whole Microsoft pyramid crashes to the ground.

Microsoft, by focusing on "hard to switch", lost sight of "better customer value". Customers are aware of "hard to switch", but they also keep their eyes on "value for me" -- if they don't, they go out of business.

Microsoft with their interlocking strategy is shooting for the whole pie, or nothing at all.

And they may just get it.

Friday, February 13, 2009

Silver and Golden ages

Does history repeat itself? Perhaps it does.

Let's look back at the "Great PC Event" of 1981 and the ages immediately prior and after. In August of 1981, IBM introduced its model 5150, otherwise known as the "PC", and the world changed.

Prior to the release of the PC, there was the "silver age" of microcomputers. This ran from 1977 to 1980, by my arbitrary standards. This is the age of the Apple II, the TRS-80, the Commodore PET, and my personal favorite the Heathkit H-89. I use the adjective "silver" because to many of us, these microcomputers were magical, and could solve just about any problem. They were the new silver bullets. (Yes, they were clunky. Yes, they had problems. And yes, they did *not* solve every problem, or even many problems. But we thought of them as magical.)

The golden age of IBM PCs started with the IBM PC and lasted until 1986, when Compaq started innovating beyond IBM. It was certainly over in 1987, when IBM introduced the PS/2; many in the industry were disappointed and did not follow IBM's lead. I use the adjective "golden" for this age because the IBM PC computers made a lot of gold for people, mostly IBM.

There are differences between these two ages. The silver age saw a collection of different computers, each with their own architectures, keyboards, disk formats, memory layouts, and operating systems. The golden age saw a standard architecture, a standard keyboard, a standard set of disk formats, a standard memory layout, and a standard operating system.

One big difference between the two ages was the level of corporate acceptance. Before IBM gave the stamp of legitimacy to PCs, corporations didn't really trust small computers. (Oh, there were a few that experimented, but in general microcomputers were considered fancy typewriters.)

After IBM defined the hardware and Lotus provided the "killer app" of the spreadsheet, corporations were willing to accept the PC. The earlier computers had no chance at success. The new legitimate standard was "the thing".

Corporations accepted the PC as a unit of computation, a place for work. The PC went from "fancy typewriter" to "small, stand-alone computing station". And it has stayed there since.

Let's fast-forward to the year 2009 and look at the state of PCs. The internals of the PC have changed: new processors, new bus technologies, better graphics, faster (and much more) memory, CDs and not floppy drives. But much has not changed.

Despite the advances in networking, the corporate mindset of the PC is still a small, stand-alone computing station. Microsoft and others have made some progress at collaborative tools, but their acceptance has been mild. The PC performs a specific task in the corporation, and the corporation is not ready to expand that role. I predict that PCs will keep that assigned role.

Something else that we have in 2009 are the portable devices, or mobile internet devices (MIDs). These devices include cell phones, smart phones, iPods, iPhones, internet tablets, Zunes, PalmPilots, and the like. You can see them everywhere: in the office, at the gym, even in the library. People have accepted them as small, portable, networked units that provide information and entertainment. The important notions are "portable" and "networked".

MIDs are different from PCs. PCs are large, not easily carried, and attached to things like power and network connections. Laptop PCs are smaller than PCs but still inconvenient to carry. Battery life is short, so you need power connections. And many corporations don't want wireless networking, so you have to use cables to attach to network points. MIDs are mobile; you can take them with you easily, and they are aware of their location. Because of these traits, MIDs will have different uses.

When corporations adopt MIDs, they will adopt them with the traits of "portable" and "networked". These ideas will be "baked in" to the corporate notion of a MID.

The new functions for MIDs will probably be based on some combination of location awareness (GPS), constant internet connectivity (a Wi-Fi connection that can move like a cell phone), fast messaging (Twitter), and the old PDA functions (calendar, reminders, address book) but on-line to a central database. MIDs will allow people to share information in real-time.

Currently, every MID is its own thing. An iPhone is an iPhone, with its screen and software. A Nokia N810 is a Nokia N810 with a different screen and different software.

People use MIDs but corporations do not. (The one exception may be the Blackberry, but in the corporate mind that is simply a channel to e-mail.) There is no corporate notion of a MID; MIDs are not part of the corporation mindset.

History is repeating itself. Or perhaps, we are repeating a behavior. We are in the silver age.

I expect a standard to emerge, a combination of hardware and a "killer app". Corporations will accept MIDs (because of the killer app), and assign them a role different from the role of PCs. They will find a use for the MID and accept them.

The technology is almost ready. The applications have yet to appear.

I'm pretty sure that MIDs will *not* be used for e-mail, word processing, and spreadsheets. Their form serves those tasks poorly. In the corporate mind, the "proper" device for those tasks is a PC, and the corporate mind changes slowly.

The new applications for MIDs will use the "network effect" to gain popularity. Once your friend has it, you will have a use for it. Once you have it, all of your friends will have a need for it. In the corporate world, once your boss has it, you will have a use for it.

So I think that we are in the silver age of MIDs. We have different MIDs that can do different things. When we have the right application (the "killer app"), we will enter the golden age. At that point, one manufacturer may become dominant (like IBM with the PC) or many may become successful (such as the situation with cell phones).

And a new golden age will begin.


Tuesday, February 3, 2009

The new dinosaurs

Way back in the day, we were the young upstarts, the revolutionaries, the misfits. We were the users of microcomputers, which were later known as personal computers. We were the people who would change the world.

We derided the mainframe users. We considered their legacy hardware bulky and clunky, hard to use, and encumbered with design decisions that favored backward compatibility. Their software was awkward too, and their languages were clumsy, lacking the modern style of our languages. "Who wants to work on those old things?" we would ask ourselves, and anyone willing to listen to revolutionaries. We wanted the new, the shiny, the modern. We wanted MS-DOS and dBase II and Lotus 1-2-3, not COBOL or DB2 or CICS.

Our systems were sleek and efficient, with new designs and flexible architectures. Our languages (C and Pascal) were nimble and powerful. We were the new kings of the world, although perhaps the world did not know it. We left the dinosaurs at their table and set up our own table, and had conversations in the newspeak of PCs.

Today, I find myself in an interesting situation. Today, it is the almost thirty-year-old PC that is the clumsy beast, unable to keep up with the times. Today, the sleek and modern equipment is the iPod, the iPhone, and the netbook computers. The PC is the legacy beast, old and clumsy compared to the new equipment. Languages too have changed. The C and Pascal we considered modern are now relics. The super-modern C++ is a legacy language, the "COBOL of the nineties". Microsoft Windows is a bear, tolerated only because large corporations use it. The "new" languages of the past are now old, and languages such as Ruby, Lua, and Haskell are in the ascendant.

"Who wants to work on those old beasts?" the young revolutionaries ask. "Why use those old languages and those old PCs with their legacy compromises? They're not portable and lots of them don't even have wi-fi!"

The revolutionaries have left our table, leaving us to talk about the glory days of PCs and the conquests we made with our software. They are setting up their own table, with wi-fi and mobility and pocket-sized devices.

We have met the dinosaurs and they are us!