Sunday, September 7, 2008

Test first, then code

One of the tenets of Agile Development is test-driven development. Test-driven development means using tests (usually automated) to drive the development. In short, you write the tests before you write the code.

This is backwards from the usual methodologies, which place tests (and the creation of tests) after the design and code phases.

I have used test-driven development for small programs, and it works. (At least for small programs written by me.) I find that I am much more productive with test-driven development. Here's how it works:

1. I start with an idea for a program.

2. I create some use cases, and translate them into test cases. This usually entails creating input and output data for each use case.

3. I create a small program that works, sort of, and produces some (not necessarily correct) output.

4. I run the program against the test cases, and observe failures. I don't get too upset about failures. I rarely write a program and get it correct the first time.

5. I modify the program and re-run the test cases. I usually pick one or two failing tests and correct the program for those. I don't try to fix all problems at once.

6. As I run the tests, I often think of more use cases. I create tests for those use cases as I discover them.

7. I stay in the cycle of testing and modifying code until the code performs correctly.

8. Once I have the code working (that is, passing tests), I revise the code. I combine common sections and clean messy sections. After each change, I run the tests (which the program passed when I started the 'clean-up' work). If a test fails, I know that the failure is due to my most recent change.

9. I stay in the 'clean-up' cycle until I am happy with the code.

In the end, I have a program that has clean code and works. I know it works because it passes the tests. I know that the code is clean because I keep reviewing it.
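The cycle above can be sketched in a few lines of Ruby. (The example, the program, and the test data are all mine, invented for illustration; the point is only the shape of the workflow: tests first, then a small program, then a test-and-fix loop.)

```ruby
# Step 2: translate use cases into test cases -- input and expected output.
TESTS = [
  ["",                     0],  # empty input
  ["test first then code", 4],  # a simple sentence
  ["  spaced   out  ",     2],  # extra whitespace (a case discovered while testing)
]

# Step 3: a small program that works, sort of.
def word_count(text)
  text.split.size
end

# Steps 4-7: run every test, observe failures, fix the program, repeat.
TESTS.each do |input, expected|
  actual = word_count(input)
  puts "FAIL: word_count(#{input.inspect}) => #{actual}, expected #{expected}" unless actual == expected
end
```

Each pass through the loop either shows failures (which point at the next fix) or shows silence (which means it is time to think of more use cases, or to start the clean-up cycle).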

The idea of testing before coding is not new. I recently found a reference to it in The Psychology of Computer Programming by Gerald Weinberg.

Another well-known psychological bias in observations is the overdependence on "early" data returns. In program testing, the programmer who gets early "success" with his program is likely to stop testing too soon. One way to guard against this mistake is to prepare the tests in advance of testing and, if possible *in advance of coding*.

                                   -- Gerald Weinberg, The Psychology of Computer Programming

The emphasis is mine.

The Psychology of Computer Programming was written in 1971.

So testing before coding is not a new concept. You may want to try it.

Sunday, August 24, 2008

Three aspects of programming

Many people think that there is one aspect of programming: creating code. Some might recognize two (creating new code and maintaining old code). I'm not counting tasks such as testing, deployment, or support.

I think there are three aspects of programming: creation, maintenance, and destruction.

The choice of destruction often surprises people. Why would one want to destroy code? At the very least, we can simply stop using it and leave it as 'dead' code. Why would we risk removing it?

As I see it, all three tasks must occur in the proper balance. We can create a new application (and by definition create new code). We can improve the application by maintaining code - changing existing code to fix problems and correct defects.

Adding features to an application is generally a mix of creation and maintenance. Some code can be completely new, and often some existing code is modified.

So where does destruction fit in?

Destruction can be viewed not as the wholesale obliteration of code, but as the careful removal of sections of code. Perhaps two modules can be combined into a single general module. In this case, the two old modules are removed (destroyed) and a new module is created. (Or maybe one module is removed and the second is made more general.)
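A small, invented Ruby example of this kind of destruction (the methods and their purpose are hypothetical, chosen only to show the shape of the change):

```ruby
# Before: two near-duplicate methods doing almost the same work.
def format_invoice(lines)
  "INVOICE\n" + lines.map { |line| "  #{line}" }.join("\n")
end

def format_receipt(lines)
  "RECEIPT\n" + lines.map { |line| "  #{line}" }.join("\n")
end

# After: both old methods are removed (destroyed) and one general
# method is created in their place.
def format_document(title, lines)
  "#{title.upcase}\n" + lines.map { |line| "  #{line}" }.join("\n")
end
```

The application ends up with less code, not more, and the one remaining method is easier to maintain than the two it replaced.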

Destruction can also occur with redesign. If a set of classes or functions is particularly difficult to work with, one may wish to replace them. (This assumes that the difficulty is in the implementation and not part of the problem at hand.) Replacing a set of functions with another (better-designed) set of functions is destroying one set and creating a new set. Some might argue that this is aggressive maintenance, and not necessarily destruction and creation. Perhaps it is. That is a philosophical discussion.

Sometimes destruction is useful by itself. If an application has a feature that is not used, or perhaps considered harmful, then removing the feature can be an improvement.

Destruction is useful in that it simplifies and opens the way for new features.

Destruction is an aspect of programming. Used in the right amount, it is a benefit.

Sunday, June 29, 2008

A worthy book: Learning Ruby

It's been a long time since I have read a good book about a programming language.

The book Learning Ruby is a good book. Possibly the best book on a programming language that I have read. Ever. (Although Que's "Programming C" from the 1980s was pretty good.)

Learning Ruby starts with the basics, ramps up quickly, and gives you the right information at the right time. I find it a much better introduction than Ruby in a Nutshell and Programming Ruby. (Both good books, but not for beginners.)

If you're programming in Ruby (or want to program in Ruby), Learning Ruby is an excellent choice. If you're not interested in Ruby, this book will make you interested. (And possibly jealous, when you compare the syntax and features of Ruby to the language of your choice.)
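To give a taste of the syntax and features in question (this snippet is mine, not taken from the book):

```ruby
# Blocks and iterators: transform a range in one expressive line.
squares = (1..5).map { |n| n * n }
# squares is [1, 4, 9, 16, 25]

# Open classes: add a method to a built-in type.
class String
  def shout
    upcase + "!"
  end
end
# "hello".shout is "HELLO!"
```

Compare that with the loop-and-temporary-variable version in the language of your choice.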

Sunday, May 11, 2008

How much is that XP in the Windows?

For years, Microsoft has sold Windows. And for each version, Microsoft specified minimum hardware requirements. So for any version of Windows, you need so much memory and so much disk and so much video power.

Now, Microsoft is turning the rules upside-down. Instead of specifying the minimum hardware, Microsoft is specifying the maximum hardware.

This is part of Microsoft's strategy for the ULPC (Ultra Low-cost PC) market. New PCs like the ASUS 'eee' PC and others are gaining consumer interest and market share. Normally, interest in PCs is a good thing for Microsoft, but this market isn't. Microsoft has staked its future on Windows Vista, and the ULPCs can't run Vista. (ULPCs are too small and underpowered for Windows Vista. They can run a stripped-down version of Windows XP. Many of them run Linux.)

It appears that Microsoft was surprised by the interest in ULPCs. I suspect that Microsoft's strategy assumed that hardware would continue to get a little bit faster and more capable each year. They didn't see a new 'technology curve' for hardware, namely the ULPC.

Microsoft is now between a rock and a hard place. They cannot ignore the ULPC market. Doing so would give Linux a 'foot in the door' at home and quite possibly in the business market. But they cannot distribute Windows Vista for the ULPCs because it just won't run, and if they extend the life of Windows XP then they lose the income from the upgrades to Windows Vista.

So in a move to have their cake and eat it too, Microsoft is extending the life of Windows XP, but now with limits on its use. The limits are not the typical "minimum system requirements" but a new concept: maximum system requirements. Or maybe, instead of "requirements", we should say "supported", since the rules set the upper bounds on hardware.

This lets Microsoft get into the ULPC market but keep Windows Vista as their premium operating system. The limits are artificial (1 GB RAM, 80 GB DASD, 10.5 inch display) and Windows XP can handle more -- it's been doing it for years.

It strikes me that if customers saw value in Windows Vista, the limits would not be necessary. If Windows Vista had value, people would be using it for their new desktops, upgrading their old desktops to use Vista, and would be clamoring for ULPCs that run Windows Vista.

Sunday, March 30, 2008

The new old thing

The new old thing is e-mail.

As today's new users (read 'kids') use computers, they adopt new technologies faster than the 'old guard'. Not only do they adopt new technologies, they also drop old technologies.

People have been adopting new technologies and dropping old technologies for years. Decades. Centuries. Who uses buggy whips nowadays? Or builds pyramids? Or uses carbon paper?

The new old thing is e-mail.

The new new things are instant messaging (IM), text messaging, social networking sites (MySpace and FaceBook), and sites like Twitter. Kids today use these technologies for communicating with friends. They grudgingly use e-mail to talk with their parents.

The new old thing is e-mail.

This blog used to be distributed via e-mail. Today it is hosted on the Google site called 'Blogger'. The blog site is easier for me to administer and easier for users. Google supplies the storage space and provides the RSS plumbing.

The new old thing is e-mail. For now.

Sunday, March 16, 2008

The Third Age

I think that we are entering (have entered?) a third age of computing. The technologies coming on-line (XML, RIA, Web 2.0, and such) are giving us the capabilities to create new types of applications.

The first age is what many folks think of as 'the mainframe age'. Some folks will call it 'the bad old days', and other folks will wax nostalgic over it. I look at the ages in terms of data and how that data was organized, who organized it, and who could access it.

In the first age, data was structured and centralized. (Just like mainframe systems like it to be.) The typical applications were business applications: general ledger, payroll, inventory, and billing systems. The technology used was mainframes and COBOL, but then, that was all we had. (I am conveniently ignoring FORTRAN, but while FORTRAN applications were for science folks the data was still structured and not really shared.) The system owners organized the data and the users had to comply with the imposed structure. Data was not shared, at least not very much. And data was limited to the business problem at hand. The 'killer app' for the first age was the 'accounting set' of the aforementioned typical business applications.

The second age is 'the PC revolution'. Data was non-structured and distributed - everyone had their own computer and stored data as they saw fit. The main applications were word processors and spreadsheets, running on PCs with either MS-DOS or Windows. (I'm ignoring the pre-PC microcomputers such as Apple II and Commodore PET. I'm also ignoring minicomputers and Unix.) The primary users in the second age were businesses, as in the first age. The data did not follow the first-age flow of 'input-process-output' but was composed and then distributed, usually on paper but sometimes on floppy disk or BBS. There were many originators of data but each originator had few recipients. The killer app was the spreadsheet, but the word processor helped.

The third age changes the rules again. The data is not the rigidly structured datasets of the first age or the non-structured documents of the second age; data is semi-structured with markup tags. Data is stored on central servers but accessible from anywhere (well, anywhere on the internet).  The technologies include web servers, web services, AJAX, XML, and other 'new stuff'. Typical applications include LiveJournal, MySpace, FaceBook, LinkedIn, and DOPPLR. I don't know that we've seen the 'killer app' for the third age (yet).
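A small illustration of what 'semi-structured with markup tags' looks like, using Ruby's standard REXML library. (The data and element names here are invented; a real third-age application would define its own vocabulary.)

```ruby
require "rexml/document"

# An invented third-age 'entry': self-describing markup, unlike a
# fixed-layout mainframe record or a free-form word-processor document.
xml = <<~XML
  <entry>
    <title>Trip to Berlin</title>
    <date>2008-03-16</date>
    <tags><tag>travel</tag><tag>DOPPLR</tag></tags>
  </entry>
XML

doc   = REXML::Document.new(xml)
title = doc.root.elements["title"].text
date  = doc.root.elements["date"].text
```

The structure is loose enough that an entry may add or omit elements, yet tagged well enough that any program (or any web service) can pick out the parts it understands.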

The big difference in the third age applications is the user set. The third age is driven by customers, not businesses. Individuals enter the data (journal entries for LiveJournal, contacts for LinkedIn, and travel plans for DOPPLR) and share it with others; the sites do not create the data and provide it to users. In third-age applications, the users who enter data are also the users who share the data and retrieve the data. (Not always in proportion to how much they create.)  The site provides the API and the storage area; the users provide the content and organize it themselves.

I don't see the applications of the third age replacing earlier applications. Instead, I see the new applications opening up new functions and features. Just as the second age expanded the computing universe (we still have accounting running on mainframes, don't we?), the third age apps (OK, if you insist 'web 2.0 apps') will create new capabilities that do not replace our beloved word processing and spreadsheet applications.

There may be some overlap, for example collaborative document composition, and I predict that some applications will move from the PC to the web because the data sharing model is a better fit. (Project management, for example. MS-Project will die as a Windows application and move to the web. The data sharing needs of team members will force this change.)

The lesson here is to embrace the new technology and more importantly, embrace the new capabilities for sharing data. Not all applications have to use them, but some will, and we will all be more effective with the change.

Sunday, March 9, 2008

Clever people

I'm back from the O'Reilly 'Emerging Technology' conference. Wow, what a show!

I really like the O'Reilly conferences. The folks know how to run a con; everything went smoothly from check-in to keynotes to individual sessions. They can get a good set of speakers with an eclectic set of topics. And they can attract clever people to attend.

The big ideas from this show, the ideas that I took away, are these:

 - Cell phones can be used for a lot more than phone calls and text messages. They can be used for interactive, multiplayer games. They can be used to track individuals and feed analytics web sites. (How you use the data from cell phones is the clever bit.)

 - Web 2.0 is coming on-line. Web 2.0 is more than just Google maps; it is mashing information from one web site into another. Web sites are adapting to use Web 2.0 techniques; they are exposing APIs for other web sites to use.

 - Lots of applications can be built in the web space, and these applications are different from the traditional PC applications just as PC applications were different from traditional mainframe applications. PC apps didn't kill mainframe apps, they created a new space for applications. (Visicalc didn't replace General Ledger but complemented it.)  The new Web 2.0 apps won't kill PC apps (MS-Word will be with us for a while, I'm afraid) but they will extend the usefulness of computers and create new opportunities for developers.

 - Clever people are still out there. There are a number of clever people available, and you can find them if you try. Clever people are, well, clever, and don't fall for the stupid tricks that work on average folks. If you are looking for clever people you must work at it.

The future is happening now. New web sites with Web 2.0 tech are available now. People are developing them now. If you are not developing them now, you will be in the late-comer set. Which is fine if you are comfortable with the idea. Not everyone is a leader. But if you want to be a leader, Web 2.0 is happening.

All you need is an idea... and some clever people.