Sunday, October 19, 2008

Sometimes the truth is not enough

I want computers (and web pages, and people, come to think of it) to be truthful.

That is, when I sign on to my credit card web page, I want it to give me an accurate and truthful list of transactions. I want to see the numbers for my account, not someone else's, and I want current information. And yes, I recognize that some information is not available in real time.

But in addition to truth, I want useful information. And sometimes the exact truth is not enough to be useful.

Case in point: a large bank's "find a branch" web page. I have accounts with said large bank (OK, the bank is Chase) and I can handle most transactions on-line. But not all transactions. For some, I need a branch office.

Now, I live outside of Chase's operating area. The history of my account explains this oddity. I opened the account with a bank called "Wingspan", which was Bank One's foray into online banking. After a few years, Bank One realized that online banking was banking and folded the operation into its regular banking business. A few years later, Chase bought Bank One. Voila! I am a Chase customer.

So having a transaction that requires a visit to a branch, I decide to use Chase's web site to find a branch. I call up the page, type in my ZIP Code, and the result page says the following:

"There are no branches within thirty miles of you. If you change your criteria, you may get different results."

I will ignore the "change your criteria" statement, since all I can provide is a ZIP Code. Let's look at the first statement: "There are no branches within thirty miles of you." (The phrasing may be somewhat inexact, but that is the gist of the message.)

I'm sure that this is a true statement. I have no doubt that all of Chase's branch offices are more than thirty miles away.

But the statement is not useful. It gives me no indication of where I can find a branch office, close to me or not. It leaves me to guess.

Now, I think I understand how this came to be. Chase designed the "find a branch" web page to report a subset of its branches, not the entire list. And that makes sense. I don't need to see the entire list, just the close ones. And Chase picked thirty miles as a cut-off for the meaning of "close". So a person in "Chase territory" gets a manageable list of perhaps a dozen branches. This behavior is useful.

But when the result list is empty, the behavior is not useful. It leaves me in the cold, with no "lead", nothing to follow, no clue about a branch location.
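A friendlier design would fall back to the nearest branches when the radius search comes up empty, so the customer always gets a lead. A minimal sketch of that behavior (the branch names, distances, and function are my own invention, not Chase's actual system):

```ruby
# Hypothetical branch list: name and straight-line distance in miles
# from the customer's ZIP Code. A real system would compute distances
# from coordinates.
BRANCHES = [
  { name: "Downtown",  miles: 45.0 },
  { name: "Northside", miles: 52.5 },
  { name: "Airport",   miles: 61.0 },
]

# Return branches within the radius; if none qualify, return the
# nearest few instead of an empty (truthful but useless) result.
def find_branches(branches, radius_miles, fallback_count = 3)
  nearby = branches.select { |b| b[:miles] <= radius_miles }
  return nearby unless nearby.empty?
  branches.sort_by { |b| b[:miles] }.first(fallback_count)
end

find_branches(BRANCHES, 30).each do |b|
  puts "#{b[:name]} (#{b[:miles]} miles)"
end
```

With this design, a search that finds nothing inside thirty miles still reports the three closest branches, which is the useful answer.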

Truthful, but not useful.

Sunday, October 12, 2008

Standards

Do we, in the programming profession, hold ourselves to high enough standards?

Here's what Christopher Alexander writes:

"In my life as an architect, I find that the single thing which inhibits young professionals,new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, "Of course not, that isn't what I am trying to do... I could never do that."

Then, I express my disagreement, and tell her: "That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. That is what I expect of myself in my own buildings, and it is what I expect of my students.""

-- Christopher Alexander, from the foreword to Patterns of Software by Richard P. Gabriel

I submit that we (in the programming community) have, for the most part, accepted standards that are too low. I submit that we have "sold out" to those who believe that "good enough" is good enough. We tolerate poor managers, poor developers, and poor techniques, and we congratulate ourselves on our mediocre efforts. We consider programming a job, an activity that occurs between the hours of 9:00 AM and 5:00 PM, with breaks for lunch and interruptions for meetings.

I submit that we have no standard to look to, no Chartres, in our profession. We build a thing and proclaim it to be wonderful, or at least what the user wanted (and with few major defects). We do not consider what could have been. We do not even consider what the "other guy" is building. We look at our own constructs (and not too closely, lest we see the warts) and think "that is a worthy accomplishment and therefore we are worthy".

Our efforts are expensive. Our processes are inefficient. Our participants are poorly educated, self-centered, and ignorant. Our results are unsatisfying. Users dislike, no, hate, that which we produce. We consider them beneath us and their opinions unimportant.

We think that we know what it is that we do, yet we do not know our profession's history.

One day we shall be weighed in the balance. On that day, I fear, we shall be found wanting.

Sunday, September 7, 2008

Test first, then code

One of the tenets of Agile Development is test-driven development. Test-driven development means using tests (usually automated) to lead the development. In short, you write the tests before you write the code.

This is backwards from the usual methodologies, which place tests (and the creation of tests) after the design and code phases.

I have used test-driven development for small programs, and it works. (At least for small programs written by me.) I find that I am much more productive with test-driven development. Here's how it works:

1. I start with an idea for a program.

2. I create some use cases, and translate them into test cases. This usually entails creating input and output data for each use case.

3. I create a small program that works, sort of, and produces some (not necessarily correct) output.

4. I run the program against the test cases, and observe failures. I don't get too upset about failures. I rarely write a program and get it correct the first time.

5. I modify the program and re-run the test cases. I usually pick one or two failing tests and correct the program for those. I don't try to fix all problems at once.

6. As I run the tests, I often think of more use cases. I create tests for those use cases as I discover them.

7. I stay in the cycle of testing and modifying code until the code performs correctly.

8. Once I have the code working (that is, passing tests), I revise the code. I combine common sections and clean messy sections. After each change, I run the tests (which the program passed when I started the 'clean-up' work). If a test fails, I know that the failure is due to my most recent change.

9. I stay in the 'clean-up' cycle until I am happy with the code.
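The cycle above can be sketched in miniature. Here the tests exist before the method does; the problem (a leap-year check) is my choice for illustration, not from the text:

```ruby
# Step 2: use cases translated into test cases, written first.
# Each entry pairs an input year with the expected answer.
TESTS = {
  1996 => true,   # divisible by 4
  1900 => false,  # divisible by 100 but not by 400
  2000 => true,   # divisible by 400
  2001 => false,  # not divisible by 4
}

# Steps 3-7: write the code, run the tests, fix the failures one or
# two at a time until everything passes.
def leap_year?(year)
  (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
end

# Step 4 onward: run the program against the test cases.
TESTS.each do |year, expected|
  status = leap_year?(year) == expected ? "pass" : "FAIL"
  puts "#{year}: #{status}"
end
```

The test harness here is deliberately bare; the same rhythm works with a proper framework such as Ruby's minitest. The point is the order of events: the expected answers exist before the first line of `leap_year?` is written.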

In the end, I have a program that has clean code and works. I know it works because it passes the tests. I know that the code is clean because I keep reviewing it.

The idea of testing before coding is not new. I recently found a reference to it in The Psychology of Computer Programming by Gerald Weinberg.

Another well-known psychological bias in observations is the overdependence on "early" data returns. In program testing, the programmer who gets early "success" with his program is likely to stop testing too soon. One way to guard against this mistake is to prepare the tests in advance of testing and, if possible, *in advance of coding*.

                                   -- Gerald Weinberg, The Psychology of Computer Programming

The emphasis is mine.

The Psychology of Computer Programming was written in 1971.

So testing before coding is not a new concept. You may want to try it.

Sunday, August 24, 2008

Three aspects of programming

Many people think that there is one aspect of programming: creating code. Some might recognize two (creating new code and maintaining old code). I'm not counting tasks such as testing, deployment, or support.

I think there are three aspects of programming: creation, maintenance, and destruction.

The choice of destruction often surprises people. Why would one want to destroy code? At the very least, we can simply stop using it and leave it as 'dead' code. Why would we risk removing it?

As I see it, all three tasks must occur in the proper balance. We can create a new application (and by definition create new code). We can improve the application by maintaining code - making changes to existing code, fixing problems, and correcting defects.

Adding features to an application is generally a mix of creation and maintenance. Some code can be completely new, and often some existing code is modified.

So where does destruction fit in?

Destruction can be viewed not as the wholesale obliteration of code, but as the careful removal of sections of code. Perhaps two modules can be combined into a single general module. In this case, the two old modules are removed (destroyed) and a new module is created. (Or maybe one module is removed and the second is made more general.)
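A small illustration of that careful removal (the functions are invented for this sketch): two near-duplicate routines are destroyed and one general routine takes their place.

```ruby
# Before: two near-duplicate functions, differing only in the tax
# rate they bake in. These are the destruction candidates.
#
#   def total_with_state_tax(prices)
#     prices.sum * 1.06
#   end
#
#   def total_with_city_tax(prices)
#     prices.sum * 1.08
#   end

# After: both old functions are removed (destroyed) and replaced by
# one general function that takes the rate as a parameter.
def total_with_tax(prices, rate)
  prices.sum * (1 + rate)
end

puts total_with_tax([10.0, 20.0], 0.06)  # replaces total_with_state_tax
puts total_with_tax([10.0, 20.0], 0.08)  # replaces total_with_city_tax
```

The creation here is small; the benefit comes mostly from the destruction, which leaves one routine to test and maintain instead of two.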

Destruction can also occur with redesign. If a set of classes or functions is particularly difficult to work with, one may wish to replace them. (This assumes that the difficulty is in the implementation and not part of the problem at hand.) Replacing a set of functions with another (better-designed) set of functions is destroying one set and creating a new set. Some might argue that this is aggressive maintenance, and not necessarily destruction and creation. Perhaps it is. That is a philosophical discussion.

Sometimes destruction is useful by itself. If an application has a feature that is not used, or perhaps considered harmful, then removing the feature can be an improvement.

Destruction is useful in that it simplifies and opens the way for new features.

Destruction is an aspect of programming. Used in the right amount, it is a benefit.

Sunday, June 29, 2008

A worthy book: Learning Ruby

It's been a long time since I have read a good book about a programming language.

The book Learning Ruby is a good book. Possibly the best book on a programming language that I have read. Ever. (Although Que's "Programming C" from the 1980s was pretty good.)

Learning Ruby starts with the basics, ramps up quickly, and gives you the right information at the right time. I find it a much better introduction than Ruby in a Nutshell and Programming Ruby. (Both good books, but not for beginners.)

If you're programming in Ruby (or want to program in Ruby), Learning Ruby is an excellent choice. If you're not interested in Ruby, this book will make you interested. (And possibly jealous, when you compare the syntax and features of Ruby to the language of your choice.)

Sunday, May 11, 2008

How much is that XP in the Windows?

For years, Microsoft has sold Windows. And for each version, Microsoft specified minimum hardware requirements. So for any version of Windows, you need so much memory and so much disk and so much video power.

Now, Microsoft is turning the rules upside-down. Instead of specifying the minimum hardware, Microsoft is specifying the maximum hardware.

This is part of Microsoft's strategy for the ULPC (Ultra Low-cost PC) market. New PCs like the ASUS Eee PC and others are gaining interest from consumers, and market share. Normally, interest in PCs is a good thing for Microsoft, but this market isn't. Microsoft has staked its future on Windows Vista, and the ULPCs can't run Vista. (ULPCs are too small and underpowered for Windows Vista. They can run a stripped-down version of Windows XP. Many of them run Linux.)

It appears that Microsoft was surprised by the interest in ULPCs. I suspect that Microsoft's strategy assumed that hardware would continue to get a little bit faster and more capable each year. They didn't see a new 'technology curve' for hardware, namely the ULPC.

Microsoft is now between a rock and a hard place. They cannot ignore the ULPC market. Doing so would give Linux a 'foot in the door' at home and quite possibly in the business market. But they cannot distribute Windows Vista for the ULPCs because it just won't run, and if they extend the life of Windows XP then they lose the income from the upgrades to Windows Vista.

So in a move to have their cake and eat it too, Microsoft is extending the life of Windows XP, but now with limits on its use. The limits are not the typical "minimum system requirements" but a new concept: maximum system requirements. Or maybe, instead of "requirements", we should say "supported", since the rules set the upper bounds on hardware.

This lets Microsoft get into the ULPC market but keep Windows Vista as their premium operating system. The limits are artificial (1 GB RAM, 80 GB DASD, 10.5 inch display) and Windows XP can handle more -- it's been doing it for years.

It strikes me that if customers saw value in Windows Vista, the limits would not be necessary. If Windows Vista had value, people would be using it for their new desktops, upgrading their old desktops to use Vista, and would be clamoring for ULPCs that run Windows Vista.

Sunday, March 30, 2008

The new old thing

The new old thing is e-mail.

As today's new users (read 'kids') use computers, they adopt new technologies faster than the 'old guard'. Not only do they adopt new technologies, they also drop old technologies.

People have been adopting new technologies and dropping old technologies for years. Decades. Centuries. Who uses buggy whips nowadays? Or builds pyramids? Or uses carbon paper?

The new old thing is e-mail.

The new new things are instant messaging (IM), text messaging, social networking sites (MySpace and Facebook), and sites like Twitter. Kids today use these technologies for communicating with friends. They grudgingly use e-mail to talk with their parents.

The new old thing is e-mail.

This blog used to be distributed via e-mail. Today it is hosted on the Google site called 'Blogger'. The blog site is easier for me to administer and easier for users. Google supplies the storage space and provides the RSS plumbing.

The new old thing is e-mail. For now.