Friday, February 13, 2009

Silver and Golden ages

Does history repeat itself? Perhaps it does.

Let's look back at the "Great PC Event" of 1981 and the ages immediately prior and after. In September of 1981, IBM introduced its model 5150, otherwise known as the "PC", and the world changed.

Prior to the release of the PC, there was the "silver age" of microcomputers. This ran from 1977 to 1980, by my arbitrary standards. This is the age of the Apple II, the TRS-80, the Commodore PET, and my personal favorite, the Heathkit H-89. I use the adjective "silver" because to many of us, these microcomputers were magical, and could solve just about any problem. They were the new silver bullets. (Yes, they were clunky. Yes, they had problems. And yes, they did *not* solve every problem, or even many problems. But we thought of them as magical.)

The golden age of IBM PCs started with the IBM PC and lasted until 1986, when Compaq started innovating beyond IBM. It was certainly over in 1987, when IBM introduced the PS/2 and much of the industry, disappointed, did not follow IBM's lead. I use the adjective "golden" for this age because the IBM PC computers made a lot of gold for people, mostly IBM.

There are differences between these two ages. The silver age saw a collection of different computers, each with their own architectures, keyboards, disk formats, memory layouts, and operating systems. The golden age saw a standard architecture, a standard keyboard, a standard set of disk formats, a standard memory layout, and a standard operating system.

One big difference between the two ages was the level of corporate acceptance. Before IBM gave the stamp of legitimacy to PCs, corporations didn't really trust small computers. (Oh, there were a few that experimented, but in general microcomputers were considered fancy typewriters.)

After IBM defined the hardware and Lotus provided the "killer app" of the spreadsheet, corporations were willing to accept the PC. The earlier microcomputers had no chance at that kind of success. The new, legitimate standard was "the thing".

Corporations accepted the PC as a unit of computation, a place for work. The PC went from "fancy typewriter" to "small, stand-alone computing station". And it has stayed there since.

Let's fast-forward to the year 2009 and look at the state of PCs. The internals of the PC have changed: new processors, new bus technologies, better graphics, faster (and much more) memory, CD drives instead of floppy drives. But much has not changed.

Despite the advances in networking, the corporate mindset of the PC is still a small, stand-alone computing station. Microsoft and others have made some progress at collaborative tools, but their acceptance has been mild. The PC performs a specific task in the corporation, and the corporation is not ready to expand that role. I predict that PCs will keep that assigned role.

Something else we have in 2009 is the collection of portable devices, or mobile internet devices (MIDs). These devices include cell phones, smart phones, iPods, iPhones, internet tablets, Zunes, PalmPilots, and the like. You can see them everywhere: in the office, at the gym, even in the library. People have accepted them as small, portable, networked units that provide information and entertainment. The important notions are "portable" and "networked".

MIDs are different from PCs. PCs are large, not easily carried, and attached to things like power and network connections. Laptop PCs are smaller than PCs but still inconvenient to carry. Battery life is short, so you need power connections. And many corporations don't want wireless networking, so you have to use cables to attach to network points. MIDs are mobile; you can take them with you easily, and they are aware of their location. Because of these traits, MIDs will have different uses.

When corporations adopt MIDs, they will adopt them with the traits of "portable" and "networked". These ideas will be "baked in" to the corporate notion of a MID.

The new functions for MIDs will probably be based on some combination of location awareness (GPS), constant internet connectivity (a Wi-Fi connection that can move like a cell phone), fast messaging (Twitter), and the old PDA functions (calendar, reminders, address book) but on-line to a central database. MIDs will allow people to share information in real-time.

Currently, every MID is its own thing. An iPhone is an iPhone, with its screen and software. A Nokia N810 is a Nokia N810, with a different screen and different software.

People use MIDs but corporations do not. (The one exception may be the Blackberry, but in the corporate mind that is simply a channel to e-mail.) There is no corporate notion of a MID; MIDs are not part of the corporation mindset.

History is repeating itself. Or perhaps, we are repeating a behavior. We are in the silver age.

I expect a standard to emerge, a combination of hardware and a "killer app". Corporations will accept MIDs (because of the killer app) and assign them a role different from the role of PCs. They will find a use for MIDs and accept them.

The technology is almost ready. The applications have yet to appear.

I'm pretty sure that MIDs will *not* be used for e-mail, word processing, and spreadsheets. Their form serves those tasks poorly. In the corporate mind, the "proper" device for those tasks is a PC, and the corporate mind changes slowly.

The new applications for MIDs will use the "network effect" to gain popularity. Once your friend has it, you will have a use for it. Once you have it, all of your friends will have a need for it. In the corporate world, once your boss has it, you will have a use for it.

So I think that we are in the silver age of MIDs. We have different MIDs that can do different things. When we have the right application (the "killer app"), we will enter the golden age. At that point, one manufacturer may become dominant (like IBM with the PC) or many may become successful (such as the situation with cell phones).

And a new golden age will begin.


Tuesday, February 3, 2009

The new dinosaurs

Way back in the day, we were the young upstarts, the revolutionaries, the misfits. We were the users of microcomputers, which were later known as personal computers. We were the people who would change the world.

We derided the mainframe users. We considered their legacy hardware bulky and clunky, hard to use, and encumbered with design decisions that favored backward compatibility. Their software was awkward too, and their languages were clumsy, lacking the modern style of our languages. "Who wants to work on those old things?" we would ask ourselves, and anyone willing to listen to revolutionaries. We wanted the new, the shiny, the modern. We wanted MS-DOS and dBase II and Lotus 1-2-3, not COBOL or DB2 or CICS.

Our systems were sleek and efficient, with new designs and flexible architectures. Our languages (C and Pascal) were nimble and powerful. We were the new kings of the world, although perhaps the world did not know it. We left the dinosaurs at their table and set up our own table, and had conversations in the newspeak of PCs.

Today, I find myself in an interesting situation. Today, it is the almost thirty-year-old PC that is the clumsy beast, unable to keep up with the times. Today, the sleek and modern equipment is the iPod, the iPhone, and the netbook computers. The PC is the legacy beast, old and clumsy compared to the new equipment. Languages too have changed. The C and Pascal we considered modern are now relics. The super-modern C++ is a legacy language, the "COBOL of the nineties". Microsoft Windows is a bear, tolerated only because large corporations use it. The "new" languages of the past are now old, and languages such as Ruby, Lua, and Haskell are in the ascendant.

"Who wants to work on those old beasts?" the young revolutionaries ask. "Why use those old languages and those old PCs with their legacy compromises? They're not portable, and lots of them don't even have wi-fi!"

The revolutionaries have left our table, leaving us to talk about the glory days of PCs and the conquests we made with our software. They are setting up their own table, with wi-fi and mobility and pocket-sized devices.

We have met the dinosaurs and they are us!

Thursday, January 1, 2009

The Year of the Linux Desktop

A new year begins, and with it, many predictions that this will be the year of "The Linux Desktop".

For some reason, folks in the open source press want to predict the "Year of the Linux Desktop". This is the year (they think) that Linux will "take over" the desktop and become the dominant operating system for users around the world. Some even think that Windows will disappear completely.

To all of these people, I say: hogwash.

Linux is usable (I certainly use it at home) and reliable and economically feasible. But none of those (or even all of those) factors is enough to dislodge Microsoft Windows from its monopoly of the desktop, home or corporate. Microsoft has a good foothold for desktop computing, and I think they will retain it, despite Windows Vista.

Linux and its associated applications (Open Office, The Gimp, etc.) are good, but they are not good enough (or better enough) to convince people to replace Windows. In the battle for the desktop, being equal to Windows, or even ten percent better than Windows, is not good enough to cause a revolution. You have to be significantly better, an order of magnitude better, and even then there will be hold-outs.

Microsoft is not winning friends with Windows Vista, but the majority of users will keep using Windows (Vista, Windows 7, or whatever) because it is close to what they have. Inertia will hold the Microsoft dominance in place.

That doesn't mean that Linux is doomed. I think Linux has a bright future. I think that its market share will grow (as will Apple's) and people will use Linux. Some will be home users, others will be corporations. I suspect that Apple will grow faster than Linux, simply because corporations like to buy things that have litigable parties. (Also, Apple does a smart job with the user interface.)

So I'm not holding out for the "year of the Linux desktop". I'm thinking instead: "new equilibrium".

Sunday, October 19, 2008

Sometimes the truth is not enough

I want computers (and web pages, and people, come to think of it) to be truthful.

That is, when I sign on to my credit card web page, I want it to give me an accurate and truthful list of transactions. I want to see the numbers for my account, not someone else's, and I want current information. And yes, I recognize that some information is not available in real time.

But in addition to truth, I want useful information. And sometimes the exact truth is not enough to be useful.

Case in point: a large bank's "find a branch" web page. I have accounts with said large bank (OK, the bank is Chase) and I can handle most transactions on-line. But not all transactions. For some, I need a branch office.

Now, I live outside of Chase's operating area. The history of my account explains this oddity. I opened the account with a bank called "Wingspan", which was Bank One's foray into online banking. After a few years, Bank One realized that online banking was simply banking, and folded Wingspan into its regular operations. A few years later, Chase bought Bank One. Voila! I am a Chase customer.

So having a transaction that requires a visit to a branch, I decide to use Chase's web site to find a branch. I call up the page, type in my ZIP Code, and the result page says the following:

"There are no branches within thirty miles of you. If you change your criteria, you may get different results."

I will ignore the "change your criteria" statement, since all I can provide is a ZIP Code. Let's look at the first statement: "There are no branches within thirty miles of you." (The phrasing may be somewhat inexact, but that is the gist of the message.)

I'm sure that this is a true statement. I have no doubt that all of Chase's branch offices are farther than thirty miles away.

But the statement is not useful. It gives me no indication of where I can find a branch office, close to me or not. It leaves me to guess.

Now, I think I understand how this came to be. Chase designed the "find a branch" web page to report a subset of its branches, not the entire list. And that makes sense. I don't need to see the entire list, just the close ones. And Chase picked thirty miles as a cut-off for the meaning of "close". So a person in "Chase territory" gets a manageable list of perhaps a dozen branches. This behavior is useful.

But when the result list is empty, the behavior is not useful. It leaves me in the cold, with no "lead", nothing to follow, no clue about a branch location.

Truthful, but not useful.
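One way the page could stay truthful and become useful is to fall back to the nearest few branches when the radius search comes up empty. Here is a minimal sketch of that idea; the function name and the branch data are hypothetical, not anything from Chase's actual system:

```python
# Hypothetical sketch: each branch is a (name, distance-in-miles) pair,
# with distances already computed from the customer's ZIP Code.

def find_branches(branches, radius=30, fallback_count=3):
    """Return branches within `radius` miles. If there are none,
    return the `fallback_count` nearest branches instead, so the
    customer always gets a lead to follow."""
    nearby = [b for b in branches if b[1] <= radius]
    if nearby:
        return nearby
    # Truthful *and* useful: no branch is close, but show the nearest few.
    return sorted(branches, key=lambda b: b[1])[:fallback_count]
```

With this behavior, a customer outside the operating area still sees "your nearest branches are..." rather than a dead end.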

Sunday, October 12, 2008

Standards

Do we, in the programming profession, hold ourselves to high enough standards?

Here's what Christopher Alexander writes:

"In my life as an architect, I find that the single thing which inhibits young professionals, new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, "Of course not, that isn't what I am trying to do... I could never do that."

Then, I express my disagreement, and tell her: "That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. That is what I expect of myself in my own buildings, and it is what I expect of my students.""

-- from the foreword to Patterns of Software by Richard P. Gabriel --

I submit that we (in the programming community) have, for the most part, accepted standards that are too low. I submit that we have "sold out" to those who believe that "good enough" is good enough. We tolerate poor managers, poor developers, and poor techniques, and we congratulate ourselves on our mediocre efforts. We consider programming a job, an activity that occurs between the hours of 9:00 AM and 5:00 PM, with breaks for lunch and interruptions for meetings.

I submit that we have no standard to look to, no Chartres, in our profession. We build a thing and proclaim it to be wonderful, or at least what the user wanted (and with few major defects). We do not consider what could have been. We do not even consider what the "other guy" is building. We look at our own constructs (and not too closely, lest we see the warts) and think "that is a worthy accomplishment and therefore we are worthy".

Our efforts are expensive. Our processes are inefficient. Our participants are poorly educated, self-centered, and ignorant. Our results are unsatisfying. Users dislike, no, hate, that which we produce. We consider them beneath us and their opinions unimportant.

We think that we know what it is that we do, yet we do not know our profession's history.

One day we shall be weighed in the balance. On that day, I fear, we shall be found wanting.

Sunday, September 7, 2008

Test first, then code

One of the tenets of Agile Development is test-driven development. Test-driven development means using tests (usually automated) to lead the development. In short, you write the tests before you write the code.

This is backwards from the usual methodologies, which place tests (and the creation of tests) after the design and code phases.

I have used test-driven development for small programs, and it works. (At least for small programs written by me.) I find that I am much more productive with test-driven development. Here's how it works:

1. I start with an idea for a program.

2. I create some use cases, and translate them into test cases. This usually entails creating input and output data for each use case.

3. I create a small program that works, sort of, and produces some (not necessarily correct) output.

4. I run the program against the test cases, and observe failures. I don't get too upset about failures. I rarely write a program and get it correct the first time.

5. I modify the program and re-run the test cases. I usually pick one or two failing tests and correct the program for those. I don't try to fix all problems at once.

6. As I run the tests, I often think of more use cases. I create tests for those use cases as I discover them.

7. I stay in the cycle of testing and modifying code until the code performs correctly.

8. Once I have the code working (that is, passing tests), I revise the code. I combine common sections and clean messy sections. After each change, I run the tests (which the program passed when I started the 'clean-up' work). If a test fails, I know that the failure is due to my most recent change.

9. I stay in the 'clean-up' cycle until I am happy with the code.

In the end, I have a program that has clean code and works. I know it works because it passes the tests. I know that the code is clean because I keep reviewing it.
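The cycle above can be sketched in a few lines of Python. This is a toy example of my own invention, not anyone's canonical method: the function and its test cases are hypothetical, and the tests (the pairs of input and expected output) are written down before the function takes its final form.

```python
# Hypothetical example: the test cases come first (steps 1-2),
# then the function is written and revised until every test passes.

def word_count(text):
    """Count the whitespace-separated words in a string."""
    return len(text.split())

# Tests prepared in advance of coding; each pair is (input, expected output).
test_cases = [
    ("the quick brown fox", 4),
    ("", 0),                   # edge case discovered while testing (step 6)
    ("  spaced   out  ", 2),   # extra whitespace
]

def run_tests():
    """Run every test case; report failures and return True if all pass."""
    failures = [(t, e, word_count(t)) for t, e in test_cases if word_count(t) != e]
    for text, expected, got in failures:
        print(f"FAIL: word_count({text!r}) = {got}, expected {expected}")
    return not failures

if __name__ == "__main__":
    print("all tests pass" if run_tests() else "some tests fail")
```

Each time I change `word_count` (steps 5 and 8), I re-run `run_tests`; a new failure points directly at my most recent change.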

The idea of testing before coding is not new. I recently found a reference to it in The Psychology of Computer Programming by Gerald Weinberg.

Another well-known psychological bias in observations is the overdependence on "early" data returns. In program testing, the programmer who gets early "success" with his program is likely to stop testing too soon. One way to guard against this mistake is to prepare the tests in advance of testing and, if possible *in advance of coding*.

                                   -- Gerald Weinberg, The Psychology of Computer Programming

The emphasis is mine.

The Psychology of Computer Programming was written in 1971.

So testing before coding is not a new concept. You may want to try it.

Sunday, August 24, 2008

Three aspects of programming

Many people think that there is one aspect of programming: creating code. Some might recognize two (creating new code and maintaining old code). I'm not counting tasks such as testing, deployment, or support.

I think there are three aspects of programming: creation, maintenance, and destruction.

The choice of destruction often surprises people. Why would one want to destroy code? At the very least, we can simply stop using it and leave it as 'dead' code. Why would we risk removing it?

As I see it, all three tasks must occur in the proper balance. We can create a new application (and, by definition, create new code). We can improve the application by maintaining code: making changes to existing code and correcting defects.

Adding features to an application is generally a mix of creation and maintenance. Some code can be completely new, and often some existing code is modified.

So where does destruction fit in?

Destruction can be viewed not as the wholesale obliteration of code, but as the careful removal of sections of code. Perhaps two modules can be combined into a single general module. In this case, the two old modules are removed (destroyed) and a new module is created. (Or maybe one module is removed and the second is made more general.)
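A minimal sketch of that kind of merge, with hypothetical function names of my own choosing:

```python
# Before: two near-duplicate routines, each summing a field over records.
def total_invoice_amounts(invoices):
    total = 0
    for inv in invoices:
        total += inv["amount"]
    return total

def total_payment_amounts(payments):
    total = 0
    for pmt in payments:
        total += pmt["amount"]
    return total

# After: both routines are destroyed, and one general routine is created.
def total_amounts(records, field="amount"):
    """Sum the named field over a list of record dictionaries."""
    return sum(rec[field] for rec in records)
```

The result is less code to maintain, and the destruction of the two originals is as much a part of the improvement as the creation of the replacement.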

Destruction can also occur with redesign. If a set of classes or functions is particularly difficult to work with, one may wish to replace them. (This assumes that the difficulty is in the implementation and not part of the problem at hand.) Replacing a set of functions with another (better-designed) set is destroying one set and creating a new set. Some might argue that this is aggressive maintenance, and not necessarily destruction and creation. Perhaps it is. That is a philosophical discussion.

Sometimes destruction is useful by itself. If an application has a feature that is not used, or perhaps considered harmful, then removing the feature can be an improvement.

Destruction is useful in that it simplifies and opens the way for new features.

Destruction is an aspect of programming. Used in the right amount, it is a benefit.