Sunday, October 19, 2008

Sometimes the truth is not enough

I want computers (and web pages, and people, come to think of it) to be truthful.

That is, when I sign on to my credit card web page, I want it to give me an accurate and truthful list of transactions. I want to see the numbers for my account, not someone else's, and I want current information. And yes, I recognize that some information is not available in real time.

But in addition to truth, I want useful information. And sometimes the exact truth is not enough to be useful.

Case in point: a large bank's "find a branch" web page. I have accounts with said large bank (OK, the bank is Chase) and I can handle most transactions on-line. But not all transactions. For some, I need a branch office.

Now, I live outside of Chase's operating area. The history of my account explains this oddity. I opened the account with a bank called "Wingspan", which was Bank One's foray into online banking. After a few years, Bank One realized that online banking was banking and folded Wingspan into its regular banking operation. A few years later, Chase bought Bank One. Voila! I am a Chase customer.

So, having a transaction that requires a visit to a branch, I decide to use Chase's web site to find one. I call up the page, type in my ZIP Code, and the result page says the following:

"There are no branches within thirty miles of you. If you change your criteria, you may get different results."

I will ignore the "change your criteria" statement, since all I can provide is a ZIP Code. Let's look at the first statement: "There are no branches within thirty miles of you." (The phrasing may be somewhat inexact, but that is the gist of the message.)

I'm sure that this is a true statement. I have no doubt that all of Chase's branch offices are more than thirty miles away.

But the statement is not useful. It gives me no indication of where I can find a branch office, close to me or not. It leaves me to guess.

Now, I think I understand how this came to be. Chase designed the "find a branch" web page to report a subset of its branches, not the entire list. And that makes sense. I don't need to see the entire list, just the close ones. And Chase picked thirty miles as a cut-off for the meaning of "close". So a person in "Chase territory" gets a manageable list of perhaps a dozen branches. This behavior is useful.

But when the result list is empty, the behavior is not useful. It leaves me in the cold, with no "lead", nothing to follow, no clue about a branch location.

Truthful, but not useful.
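For what it's worth, a more useful design would fall back to the nearest branch when the radius search comes up empty. Here is a minimal sketch of that idea in Ruby; the Branch structure, the nearby_branches method, and the sample data are hypothetical names invented for illustration, not anything from Chase's actual system.

    Branch = Struct.new(:name, :distance)

    def nearby_branches(branches, radius_miles = 30)
      close = branches.select { |b| b.distance <= radius_miles }
      return close unless close.empty?

      # Nothing within the radius: rather than return an empty (truthful
      # but useless) result, fall back to the single nearest branch.
      [branches.min_by(&:distance)].compact
    end

    branches = [Branch.new("Downtown", 42.5), Branch.new("Airport", 55.0)]
    nearby_branches(branches).each do |b|
      puts "#{b.name} (#{b.distance} miles away)"
    end

A customer outside the thirty-mile radius still gets a lead to follow, which is the whole point.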

Sunday, October 12, 2008

Standards

Do we, in the programming profession, hold ourselves to high enough standards?

Here's what Christopher Alexander writes:

"In my life as an architect, I find that the single thing which inhibits young professionals,new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, "Of course not, that isn't what I am trying to do... I could never do that."

Then, I express my disagreement, and tell her: "That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. That is what I expect of myself in my own buildings, and it is what I expect of my students.""

-- Christopher Alexander, from his foreword to Patterns of Software by Richard P. Gabriel --

I submit that we (in the programming community) have, for the most part, accepted standards that are too low. I submit that we have "sold out" to those who believe that "good enough" is good enough. We tolerate poor managers, poor developers, and poor techniques, and we congratulate ourselves on our mediocre efforts. We consider programming a job, an activity that occurs between the hours of 9:00 AM and 5:00 PM, with breaks for lunch and interruptions for meetings.

I submit that we have no standard to look to, no Chartres, in our profession. We build a thing and proclaim it to be wonderful, or at least what the user wanted (and with few major defects). We do not consider what could have been. We do not even consider what the "other guy" is building. We look at our own constructs (and not too closely, lest we see the warts) and think "that is a worthy accomplishment and therefore we are worthy".

Our efforts are expensive. Our processes are inefficient. Our participants are poorly educated, self-centered, and ignorant. Our results are unsatisfying. Users dislike, no, hate, that which we produce. We consider them beneath us and their opinions unimportant.

We think that we know what it is that we do, yet we do not know our profession's history.

One day we shall be weighed in the balance. On that day, I fear, we shall be found wanting.

Sunday, September 7, 2008

Test first, then code

One of the tenets of Agile Development is test-driven development. Test-driven development means using tests (usually automated) to lead the development. In short, you write the tests before you write the code.

This is backwards from the usual methodologies, which place tests (and the creation of tests) after the design and code phases.

I have used test-driven development for small programs, and it works. (At least for small programs written by me.) I find that I am much more productive with test-driven development. Here's how it works:

1. I start with an idea for a program.

2. I create some use cases, and translate them into test cases. This usually entails creating input and output data for each use case.

3. I create a small program that works, sort of, and produces some (not necessarily correct) output.

4. I run the program against the test cases, and observe failures. I don't get too upset about failures. I rarely write a program and get it correct the first time.

5. I modify the program and re-run the test cases. I usually pick one or two failing tests and correct the program for those. I don't try to fix all problems at once.

6. As I run the tests, I often think of more use cases. I create tests for those use cases as I discover them.

7. I stay in the cycle of testing and modifying code until the code performs correctly.

8. Once I have the code working (that is, passing tests), I revise the code. I combine common sections and clean messy sections. After each change, I run the tests (which the program passed when I started the 'clean-up' work). If a test fails, I know that the failure is due to my most recent change.

9. I stay in the 'clean-up' cycle until I am happy with the code.

In the end, I have a program that has clean code and works. I know it works because it passes the tests. I know that the code is clean because I keep reviewing it.
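To make the cycle concrete, here is a minimal sketch in Ruby using the standard test/unit library. The word_count method and its tests are invented purely for this illustration; the point is that the test cases exist before (or at least alongside) the code they exercise.

    require 'test/unit'

    # The code under test: deliberately tiny.
    def word_count(text)
      text.split.size
    end

    # Use cases, translated into test cases.
    class TestWordCount < Test::Unit::TestCase
      def test_empty_string
        assert_equal 0, word_count("")
      end

      def test_simple_sentence
        assert_equal 4, word_count("the truth is enough")
      end

      def test_extra_whitespace
        assert_equal 2, word_count("  hello   world  ")
      end
    end

Running the file runs the tests. Later, during the 'clean-up' cycle, a failing test points directly at my most recent change.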

The idea of testing before coding is not new. I recently found a reference to it in The Psychology of Computer Programming by Gerald Weinberg.

Another well-known psychological bias in observations is the overdependence on "early" data returns. In program testing, the programmer who gets early "success" with his program is likely to stop testing too soon. One way to guard against this mistake is to prepare the tests in advance of testing and, if possible, in advance of coding.

                                   -- Gerald Weinberg, The Psychology of Computer Programming

The emphasis is mine.

The Psychology of Computer Programming was written in 1971.

So testing before coding is not a new concept. You may want to try it.

Sunday, August 24, 2008

Three aspects of programming

Many people think that there is one aspect of programming: creating code. Some might recognize two (creating new code and maintaining old code). I'm not counting tasks such as testing, deployment, or support.

I think there are three aspects of programming: creation, maintenance, and destruction.

The choice of destruction often surprises people. Why would one want to destroy code? At the very least, we can simply stop using it and leave it as 'dead' code. Why would we risk removing it?

As I see it, all three tasks must occur in the proper balance. We can create a new application (and by definition create new code). We can improve the application by maintaining code - making changes to existing code and correcting defects.

Adding features to an application is generally a mix of creation and maintenance. Some code can be completely new, and often some existing code is modified.

So where does destruction fit in?

Destruction can be viewed not as the wholesale obliteration of code, but as the careful removal of sections of code. Perhaps two modules can be combined into a single general module. In this case, the two old modules are removed (destroyed) and a new module is created. (Or maybe one module is removed and the second is made more general.)
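Here is a small, hypothetical Ruby example of that kind of destruction: two near-duplicate report methods are removed, and one more general method takes their place. The method names and data are invented for illustration.

    # Before: two near-duplicate methods.
    def report_csv(rows)
      rows.map { |r| r.join(",") }.join("\n")
    end

    def report_tsv(rows)
      rows.map { |r| r.join("\t") }.join("\n")
    end

    # After: both methods are destroyed and one general method is created.
    def report(rows, separator = ",")
      rows.map { |r| r.join(separator) }.join("\n")
    end

    rows = [["name", "qty"], ["widgets", "3"]]
    puts report(rows)          # comma-separated output
    puts report(rows, "\t")    # tab-separated output

There is less code to maintain, and callers get a more general tool.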

Destruction can also occur with redesign. If a set of classes or functions is particularly difficult to work with, one may wish to replace them. (This assumes that the difficulty is in the implementation and not part of the problem at hand.) Replacing a set of functions with another (better-designed) set of functions is destroying one set and creating a new set. Some might argue that this is aggressive maintenance, and not necessarily destruction and creation. Perhaps it is. That is a philosophical discussion.

Sometimes destruction is useful by itself. If an application has a feature that is not used, or perhaps considered harmful, then removing the feature can be an improvement.

Destruction is useful in that it simplifies and opens the way for new features.

Destruction is an aspect of programming. Used in the right amount, it is a benefit.

Sunday, June 29, 2008

A worthy book: Learning Ruby

It's been a long time since I have read a good book about a programming language.

The book Learning Ruby is a good book. Possibly the best book on a programming language that I have read. Ever. (Although Que's "Programming C" from the 1980s was pretty good.)

Learning Ruby starts with the basics, ramps up quickly, and gives you the right information at the right time. I find it a much better introduction than Ruby in a Nutshell and Programming Ruby. (Both good books, but not for beginners.)

If you're programming in Ruby (or want to program in Ruby), Learning Ruby is an excellent choice. If you're not interested in Ruby, this book will make you interested. (And possibly jealous, when you compare the syntax and features of Ruby to the language of your choice.)

Sunday, May 11, 2008

How much is that XP in the Windows?

For years, Microsoft has sold Windows. And for each version, Microsoft specified minimum hardware requirements. So for any version of Windows, you need so much memory and so much disk and so much video power.

Now, Microsoft is turning the rules upside-down. Instead of specifying the minimum hardware, Microsoft is specifying the maximum hardware.

This is part of Microsoft's strategy for the ULPC (Ultra Low-cost PC) market. New PCs like the ASUS 'eee' PC and others are gaining interest from consumers, and market share. Normally, interest in PCs is a good thing for Microsoft, but not in this market. Microsoft has staked its future on Windows Vista, and the ULPCs can't run Vista. (ULPCs are too small and underpowered for Windows Vista. They can run a stripped-down version of Windows XP. Many of them run Linux.)

It appears that Microsoft was surprised by the interest in ULPCs. I suspect that Microsoft's strategy assumed that hardware would continue to get a little bit faster and more capable each year. They didn't see a new 'technology curve' for hardware, namely the ULPC.

Microsoft is now between a rock and a hard place. They cannot ignore the ULPC market. Doing so would give Linux a 'foot in the door' at home and quite possibly in the business market. But they cannot distribute Windows Vista for the ULPCs because it just won't run, and if they extend the life of Windows XP then they lose the income from the upgrades to Windows Vista.

So in a move to have their cake and eat it too, Microsoft is extending the life of Windows XP, but now with limits on its use. The limits are not the typical "minimum system requirements" but a new concept: maximum system requirements. Or maybe, instead of "requirements", we should say "supported", since the rules set the upper bounds on hardware.

This lets Microsoft get into the ULPC market but keep Windows Vista as their premium operating system. The limits are artificial (1 GB RAM, 80 GB DASD, 10.5 inch display) and Windows XP can handle more -- it's been doing it for years.

It strikes me that if customers saw value in Windows Vista, the limits would not be necessary. If Windows Vista had value, people would be using it for their new desktops, upgrading their old desktops to use Vista, and would be clamoring for ULPCs that run Windows Vista.

Sunday, March 30, 2008

The new old thing

The new old thing is e-mail.

As today's new users (read 'kids') use computers, they adopt new technologies faster than the 'old guard'. Not only do they adopt new technologies, they also drop old technologies.

People have been adopting new technologies and dropping old technologies for years. Decades. Centuries. Who uses buggy whips nowadays? Or builds pyramids? Or uses carbon paper?

The new old thing is e-mail.

The new new things are instant messaging (IM), text messaging, social networking sites (MySpace and FaceBook), and sites like Twitter. Kids today use these technologies for communicating with friends. They grudgingly use e-mail, to talk with their parents.

The new old thing is e-mail.

This blog used to be distributed via e-mail. Today it is hosted on the Google site called 'Blogger'. The blog site is easier for me to administer and easier for users. Google supplies the storage space and provides the RSS plumbing.

The new old thing is e-mail. For now.

Sunday, March 16, 2008

The Third Age

I think that we are entering (have entered?) a third age of computing. The technologies coming on-line (XML, RIA, Web 2.0, and such) are giving us the capabilities to create new types of applications.

The first age is what many folks think of as 'the mainframe age'. Some folks will call it 'the bad old days', and other folks will wax nostalgic over it. I look at the ages in terms of data and how that data was organized, who organized it, and who could access it.

In the first age, data was structured and centralized. (Just like mainframe systems like it to be.) The typical applications were business applications: general ledger, payroll, inventory, and billing systems. The technologies were mainframes and COBOL, but then, that was all we had. (I am conveniently ignoring FORTRAN, but while FORTRAN applications were for science folks, the data was still structured and not really shared.) The system owners organized the data and the users had to comply with the imposed structure. Data was not shared, at least not very much. And data was limited to the business problem at hand. The 'killer app' for the first age was the 'accounting set' of the aforementioned typical business applications.

The second age is 'the PC revolution'. Data was non-structured and distributed - everyone had their own computer and stored data as they saw fit. The main applications were word processors and spreadsheets, running on PCs with either MS-DOS or Windows. (I'm ignoring the pre-PC microcomputers such as Apple II and Commodore PET. I'm also ignoring minicomputers and Unix.) The primary users in the second age were businesses, as in the first age. The data did not follow the first-age flow of 'input-process-output' but was composed and then distributed, usually on paper but sometimes on floppy disk or BBS. There were many originators of data but each originator had few recipients. The killer app was the spreadsheet, but the word processor helped.

The third age changes the rules again. The data is not the rigidly structured datasets of the first age or the non-structured documents of the second age; data is semi-structured with markup tags. Data is stored on central servers but accessible from anywhere (well, anywhere on the internet).  The technologies include web servers, web services, AJAX, XML, and other 'new stuff'. Typical applications include LiveJournal, MySpace, FaceBook, LinkedIn, and DOPPLR. I don't know that we've seen the 'killer app' for the third age (yet).

The big difference in the third age applications is the user set. The third age is driven by customers, not businesses. Individuals enter the data (journal entries for LiveJournal, contacts for LinkedIn, and travel plans for DOPPLR) and share it with others; the sites do not create the data and provide it to users. In third-age applications, the users who enter data are also the users who share the data and retrieve the data. (Not always in proportion to how much they create.)  The site provides the API and the storage area; the users provide the content and organize it themselves.

I don't see the applications of the third age replacing earlier applications. Instead, I see the new applications opening up new functions and features. Just as the second age expanded the computing universe (we still have accounting running on mainframes, don't we?), the third age apps (OK, if you insist 'web 2.0 apps') will create new capabilities that do not replace our beloved word processing and spreadsheet applications.

There may be some overlap, for example collaborative document composition, and I predict that some applications will move from the PC to the web because the data sharing model is a better fit. (Project management, for example. MS-Project will die as a Windows application and move to the web. The data sharing needs of team members will force this change.)

The lesson here is to embrace the new technology and more importantly, embrace the new capabilities for sharing data. Not all applications have to use them, but some will, and we will all be more effective with the change.

Sunday, March 9, 2008

Clever people

I'm back from the O'Reilly 'Emerging Technology' conference. Wow, what a show!

I really like the O'Reilly conferences. The folks know how to run a con; everything went smoothly from check-in to keynotes to individual sessions. They can get a good set of speakers with an eclectic set of topics. And they can attract clever people to attend.

The big ideas from this show, the ideas that I took away, are these:

 - Cell phones can be used for a lot more than phone calls and text messages. They can be used for interactive, multiplayer games. They can be used to track individuals and feed analytics web sites. (How you use the data from cell phones is the clever bit.)

 - Web 2.0 is coming on-line. Web 2.0 is more than just Google maps; it is mashing information from one web site into another. Web sites are adapting to use Web 2.0 techniques; they are exposing APIs for other web sites to use.

 - Lots of applications can be built in the web space, and these applications are different from the traditional PC applications just as PC applications were different from traditional mainframe applications. PC apps didn't kill mainframe apps, they created a new space for applications. (Visicalc didn't replace General Ledger but complemented it.)  The new Web 2.0 apps won't kill PC apps (MS-Word will be with us for a while, I'm afraid) but they will extend the usefulness of computers and create new opportunities for developers.

 - Clever people are still out there, and you can find them if you try. Clever people are, well, clever, and don't fall for the stupid tricks that work on average folks. If you are looking for clever people, you must work at it.

The future is happening now. New web sites with Web 2.0 tech are available now. People are developing them now. If you are not developing them now, you will be in the late-comer set. Which is fine if you are comfortable with the idea. Not everyone is a leader. But if you want to be a leader, Web 2.0 is happening.

All you need is an idea... and some clever people.

Sunday, February 17, 2008

EEE! It's a PC!

In the nineteen-fifties, Volkswagen introduced its 'Beetle' model to the US market. The car proved popular, despite being the opposite of a 'proper' car. (It was small, economical, and, possibly worst of all, without fins.)

Asus has released their new EEE PC, an inexpensive sub-notebook computer. Like the Beetle, it is small, economical, and also without the fins that accompany modern computers.

The EEE PC is a no-frills device. Asus has limited the hardware, keeping the computer portable. It has a seven-inch LCD screen, not the typical fifteen- or twenty-one-inch display. Storage is handled by a solid-state disk (SSD), so there is no spinning disk. There is no built-in CD or DVD drive. The keyboard is small and requires delicate fingers.

Despite the limited capabilities, the EEE has some advantages. First, it is small. The sub-notebook is about the size of a small hard-cover book and can be conveniently stored in a purse or backpack. (It's a bit large to fit in your pocket.)

Second, it is inexpensive. Priced at four hundred dollars, it is in the range of an Apple iPod Touch. (There are several models of the EEE, with different storage configurations.) But more importantly, it is a reasonable price for people who want a simple computer to surf the web, read and send e-mail, and write a few documents.

Third, it is designed for non-techies. The software runs by default in 'basic' mode. (The opposite of 'advanced' mode. Neither mode has anything to do with the programming language known as 'BASIC'.) This is a computer for people who want to do things, not fiddle with computer settings.

The EEE has network connections: a CAT-5 socket and built-in Wi-Fi. It also has three USB ports, allowing you to attach external drives, keyboards, mice, and other devices.

The EEE PC runs Linux. Asus selected a variant of Debian Linux for the EEE; there are other distros of Linux available for it. (The Asus web page claims that the EEE is compatible with Windows XP, almost as an afterthought.) The selection of Linux made it possible to create the 'basic' and 'advanced' modes.

I've seen the EEE, and I like it. The keyboard is a bit crowded, but it is serviceable. The display is just large enough. The EEE does not have everything, but it does have enough to work. It is the 'Volkswagen Beetle' of PCs - small, economical, easy to use, and easy to care for.

Asus just might have something here.

Sunday, January 13, 2008

Kindle

Amazon.com has released the Kindle, a hand-held reader. Kindle works with electronic versions of books. Amazon.com would like you to purchase books through them, but Kindle allows for books in Mobipocket and plain text formats, and Amazon.com provides utilities to convert HTML, MS-Word, JPEG, GIF, PNG, and Microsoft BMP to its proprietary format.

Now, let me say that the name 'Kindle' strikes me as a poor choice for anything to do with books, and an especially poor choice for a device meant to replace paper books. Yes, I know that Amazon.com wants to 'kindle new ideas', but this is not the market for that name. The name 'Kindle' conjures up visions of book-burning.

While Kindle includes The New Oxford American Dictionary, it's not clear that it comes with a copy of Bradbury's Fahrenheit 451. (It's also not clear that F451 is even available for the Kindle. Amazon.com has set up the Kindle web site to allow searches by bestselling, price, customer review, or publication date, but not by title or author. Possibly searching by title or author would emphasize the limited number of titles available. Amazon.com needs to fix this.)

I suspect that the true audiences for the Kindle are college students and textbook publishers. Certainly the younger crowd is faster to use new tech and adopts it more readily. (But they also may have higher expectations, be more critical, and be more likely to reject a 'dud' technology.)

Textbook publishers win in several ways. They have a shorter time to market and reduced costs. They can sell new versions of their books every year. They also shut down the used textbook market, driving up annual sales.

Losers in this scenario will be the students who buy used textbooks. Since you are not allowed to transfer Kindle books (at least the Amazon.com-proprietary ones), students will have to pony up for the full cost of a new book.

Losers may also include society in general: People other than students buy used textbooks. I have purchased old, used textbooks, because they present information better than today's textbooks. (And because they are cheap.) Textbooks (and other books, and printed items in general) are our civilization's collective memory. If textbooks move to the electronic format, we lose a bit of that memory. Paper lasts longer than electronic patterns and requires less maintenance. I can read books from ten, twenty, fifty, and even one hundred years ago. Will we be able to read Kindle-2007 format books ten, twenty, or fifty years from today?

Sunday, January 6, 2008

Microsoft's view

One of Microsoft's advertisements for its Visual Studio product bothers me. For a while, I didn't know why. After a bit of thought, I may have identified the cause.

The advertisement (a two-page ad) has in large, all-capital letters: "IT TOOK A THOUSAND YEARS TO BUILD ROME; YOUR DEV TEAM HAS A MONTH" In smaller type, it says "DEFY ALL CHALLENGES" "Your challenge: finish big projects eons faster." "Defy it: communicate and collaborate better with Visual Studio Team System."

I will overlook the grammatical and idiomatic errors in these claims. Instead, let's see what this advertisement tells us about the thinking in Redmond.

The ad shows a view of four modern-day individuals (one assumes that they are project leaders or "software architects") overlooking the construction of two large buildings in a city that one is apparently supposed to believe is Rome.

The "project leaders" are viewing the work from an elevated platform. They are high enough to see all of the work, or a broad scope of the work. They cannot see details. While Microsoft has armed them with a laptop and a cell phone, they have no telescope or binoculars to get a detailed view of the work.

The work is performed with wooden scaffolding, ramps, ropes and pulleys to move heavy stones up the ramps, and cranes with large-scale hamster wheels in which men walk and thereby power the crane. Individuals doing the work are small, numerous, and indistinguishable.

Here's what I get from this advertisement:

- Projects are big
- Managers are important
- Managers must see the work being done (but they don't need to see details)
- Managers use modern tools
- Managers ought to be at high levels

OK so far? Here's the next batch:

- Workers use primitive tools and methods
- You need few managers for many workers
- Workers are interchangeable
- Individual workers are unimportant

The ad repeats the 'think big' philosophy that Microsoft has had for some time. Microsoft has focused on solutions for large companies and has lost the mindset (and the ability) to deliver solutions for small teams.

I thought that this attitude was driven by greed and arrogance. Greed, in that larger companies can afford larger and more expensive technologies. Arrogance, in that Microsoft was walking away from the individuals, the hobbyists, and the small companies that made them successful.

But perhaps Microsoft did not discard the ability to provide solutions for small teams. Perhaps it was taken away from them. Small teams can use open-source tools and technologies (Linux; Apache; MySQL; and Perl, Python, or PHP) and deliver effective solutions. They don't need Microsoft, and Microsoft is hard-pressed to compete with those technologies. The small guys may have been the ones to walk away, leaving Microsoft with nothing but the big guys.

If that's the case, then it is not greed and arrogance. It is desperation.