Thursday, August 27, 2009

If at first...

The movie "The Maltese Falcon" (with Humphery Bogart, Mary Astor, Peter Lorre, and Sydney Greenstreet) is widely recognized as a classic.

Yet the 1941 movie did not simply spring into existence. There were two predecessors: a 1931 version also called "The Maltese Falcon" and a 1936 remake called "Satan Met a Lady". (Both were based on the novel by Dashiell Hammett.)

All three of the movies were made by Warner Brothers. The first two have been cast into oblivion; the third remains with us.

The third movie was a success for a number of reasons:

- Warner Brothers tried different approaches with each movie. The first movie was a serious drama. The second movie was light-hearted, almost to the point of comedy. The third movie, like Goldilocks' porridge, was "just right".

- They had the right technologies. The first version used modern (for the time) equipment, but Hollywood was still feeling its way with the new-fangled "sound" pictures. The first version relied on dialog alone; the classic version used sound and music to its advantage.

- They took the best dialog from the first two versions. The third movie was dramatic and witty, combining the straight drama of the first and the comic aspects of the second.

- They had better talent. The acting, screenwriting, and camerawork of the third movie are significantly better than in the first two efforts.

In the end, Warner Brothers was successful with the movie, but only after trying and learning from earlier efforts.

Perhaps there is a lesson here for software development.

Monday, August 24, 2009

Dependencies

One of the differences between Windows and Linux is how they handle dependencies between software packages.

Linux distros have a long history of managing dependencies between packages. Distros include package managers such as 'aptitude' or 'YaST' which, when you select products for installation, identify the necessary components and install them too.
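
To make the idea concrete, here is a toy sketch (in C++, with made-up package names -- this is not how 'aptitude' or 'YaST' actually work, just the general shape of the job): starting from the package you asked for, walk the dependency graph and collect everything it pulls in.

    // Toy dependency resolver: compute the set of packages to install,
    // given a table of "package -> direct dependencies".
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical repository metadata (made-up names).
        std::map<std::string, std::vector<std::string>> deps = {
            {"myapp",  {"libgui", "libnet"}},
            {"libgui", {"libc"}},
            {"libnet", {"libc", "libssl"}},
            {"libssl", {"libc"}},
            {"libc",   {}}
        };

        // Walk the graph from the requested package, collecting
        // every package reachable through the dependency links.
        std::set<std::string> to_install;
        std::vector<std::string> work = {"myapp"};
        while (!work.empty()) {
            std::string pkg = work.back();
            work.pop_back();
            if (!to_install.insert(pkg).second)
                continue;                      // already scheduled
            for (const auto& dep : deps[pkg])
                work.push_back(dep);
        }

        std::cout << "To install:";
        for (const auto& pkg : to_install)
            std::cout << " " << pkg;
        std::cout << std::endl;
        return 0;
    }

A real package manager layers version constraints, conflict rules, and install ordering on top of this, but the transitive walk is the essential step.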

Windows, on the other hand, has practically no mechanism for handling dependencies. In Windows, every install package is a self-contained thing, or at least views itself that way.

This difference is possibly due to the history of product development. Windows (up until the .NET age) has had a fairly flat stack for dependencies. To install a product, you had to have Windows, MFC, and... that was about it. Everything was in either the original Windows box or the new product box.

Linux has a larger stack. In Linux, there is the kernel, the C run-time libraries, the X windowing system, Qt, the desktop manager (usually KDE or Gnome), and possibly other packages such as Perl, Python, Apache, and MySQL. It's not uncommon for a single package to require a half-dozen other packages.

The difference in dependency models may be due to the cost model. In Linux, and open source in particular, there is no licensing cost to use a package in your product. I can build a system on top of Linux, Apache, MySQL, and Perl (the "LAMP stack") and distribute all components. (Or assume that the user can get the components.) Building a similar system in Microsoft technologies would mean that the customer must have (or acquire) Windows, IIS, SQL Server, and, umm... there is no direct Microsoft replacement for Perl or another scripting language. (But that's not the point.) The customer would have to have all of those components in place, or buy them. It's a lot easier to leverage sub-packages when they are freely available.

Differences in dependency management affect more than just package installation.

Open source developers have a better handle on dependencies than developers of proprietary software. They have to -- the cost of not using sub-packages is too high, and they have to deal with new versions of those packages. In the proprietary world, the typical approach I have seen is to select a base platform and then freeze the specification of it.

Some groups carry the "freeze the platform" method too far. They freeze everything and allow no changes (except possibly for security updates). They stick to the originally selected configuration and prevent updates to their compilers, IDEs, database managers, ... anything they use.

The problem with this "freeze the platform" method is that it doesn't work forever. At some point, you have to upgrade. A lot of shops are buying new PCs and downgrading from Windows Vista to Windows XP. (Not just development shops, but let's focus on them.) That's a strategy that buys a little time, but eventually Microsoft will pull the plug on Windows XP. When the time comes, the effort to update is large -- usually a big, "get everyone on the new version" project that delays coding. (If you're in such a shop, ask folks about their strategy for migrating to Windows Vista or Windows 7. If the answer is "we'll stay on Windows XP until we have to change", you may want to think about your options.)

Open source, with its distributed development model and loose specification for platforms, allows developers to move from one version of a sub-package to another. They follow the "little earthquakes" model, absorbing changes in smaller doses. (I'm thinking that the use of automated tests can ease the adoption of new versions of sub-packages, but have no experience there.)

A process that develops software on a fixed platform will yield fragile software -- any change could break it. A process that handles dependencies will yield a more robust product.

Which would you want?

Thursday, August 20, 2009

Systems are interfaces, not implementations

When building a program, the implementation is important. It must perform a specific task. Otherwise, the program has little value.

When building a system, it is the interfaces that are important. Interfaces exist between the components (the implementations) that perform specific tasks.

Interfaces define the system's architecture. Good interfaces will make the system; poor interfaces will break it.

It is much easier to fix a poorly-designed component than a poorly-designed interface. A component hides behind an interface; its implementation is not visible to the other components. (By definition, anything that is visible to other components is part of the interface.) Since the other components have no knowledge of the innards, changing the innards will not affect them.

On the other hand, an interface is visible. Changing an interface requires changes to the one component *and* changes to (potentially) any module that uses the component. (Some components have large, complex interfaces; a minor change may affect many or few consuming components.) Changes to interfaces are more expensive (and riskier) than changes to implementations.
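
A contrived sketch (in C++, with hypothetical names -- the same point holds in any language): the consumer below depends on the interface, so the implementation behind it can change freely, but any change to the interface itself ripples out to every caller.

    #include <iostream>
    #include <string>

    // The interface: the only thing other components are allowed to see.
    class Logger {
    public:
        virtual ~Logger() {}
        virtual void log(const std::string& message) = 0;
    };

    // One implementation, hidden behind the interface. Rewriting its
    // innards (buffering, formatting, destination) affects no caller.
    class ConsoleLogger : public Logger {
    public:
        virtual void log(const std::string& message) {
            std::cout << "[log] " << message << std::endl;
        }
    };

    // A consumer component: it knows Logger, not ConsoleLogger.
    void process_order(Logger& logger) {
        logger.log("order processed");
    }

    int main() {
        ConsoleLogger logger;
        process_order(logger);    // any Logger will do
        return 0;
    }

Swapping ConsoleLogger for a file-based or network-based logger touches one class. Changing the signature of log() -- say, adding a severity parameter -- touches every component that calls it.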

Which is why you should pay attention to interfaces. As you build your system, you will get some right and some wrong. The wrong ones will cost you time and quality. You need a way to fix them.

Which doesn't mean that you can ignore implementations. Implementations are important; they do the actual work. Often they have requirements for functionality, accuracy, precision, or performance. You have to get them right.

Too many times we focus on the requirements of the implementations and ignore the interfaces. When a development process is driven by "business requirements" or "functional requirements" then the focus is on implementations. Interfaces become a residual artifact, something that "goes along for the ride" but isn't really important.

If you spend all of your time implementing business requirements and give no thought to interfaces, you will build a system that is hard to maintain, difficult to expand, and that provides a poor user experience.

Monday, August 17, 2009

Teaching does not cause learning

Our society has done a pretty good job at Taylorizing the learning experience. It's structured, it's ordered, it's routine, and it's efficient. And it's quite ineffective. (For proof, compare the results of US children against those of other nations.)

I learn best when I can explore and make mistakes. The mistakes are the important part. I learn only when I make a mistake. When I do something and get the wrong results, I try different methods until one works. That's how I learn. Apparently I'm not alone in this.

In the book Bringing Design to Software, the essay by Schön and Bennett describes how students fared with two computer programs. One (named "McCavity") was designed to be a tutoring program and the other ("GrowlTiger") was created as a simple design tool. Both were for engineering students.

The results were surprising. The students found McCavity (the tutoring program) boring and were more interested in GrowlTiger.

Maybe the results were not so surprising. The tutoring program provided students with information but they had little control over the delivery. The design program, on the other hand, let students explore. They could examine the problems foremost in their minds.

Exploring, trying things, and making mistakes. That's how I learn. I don't learn on a schedule.

You can lead a horse to water, but you can't make him drink. And you can lead students to knowledge, but you can't stuff it down their throats.

Sunday, August 16, 2009

Why leave C++?

A recruiter asked me why I wanted to move away from C++. The short answer is that it is no longer the shiny new thing. The longer answer is more complex.

Why move away from C++?

For starters, let's decide on where I'm moving *to*. It's easy to leave C++ but harder to pick the destination. In my mind, the brighter future lies with C#/.NET, Java, Python, and Ruby.

First reason: C++ is tied to hardware. That is, C++ is the last of the big languages to compile to the processor level. C#, Java, Perl, Python, and Ruby all compile to a pseudo-machine and run on interpreters. C# runs in the Microsoft CLR, Java runs on the JVM, and so on. By itself, this is not a problem for C++ but an advantage: C++ programs run more efficiently than programs in the later languages. Unfortunately for C++, the run-time efficiency is not enough to make up for development costs.

Second reason: Easier languages. C++ is a hard language to learn. It has lots of rules, and you as a developer must know them all. The later languages have backed off and use fewer rules. (A note for C#: you're getting complicated, and while you're not at C++'s level of complexity, you do have a lot to remember.)

Third reason: Garbage collection. The later languages all have it; C++ does not (unless you use an add-in library). In C++ you must delete() everything that you new(); in the later languages you can new() and never worry about delete(). Not worrying about deleting objects lets me focus on the business problem.
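
A small illustration of the bookkeeping (in C++, with a made-up object -- this is the pattern, not code from any particular project): every allocation carries the obligation to free it, on every path out of the function.

    #include <stdexcept>
    #include <string>

    struct Customer {                       // hypothetical example type
        std::string name;
    };

    void manual_memory(const std::string& name) {
        Customer* c = new Customer();       // you allocated it...
        c->name = name;
        if (c->name.empty()) {
            delete c;                       // ...so every exit must free it...
            throw std::runtime_error("customer needs a name");
        }
        // ... do some work with c ...
        delete c;                           // ...including the normal one.
    }

    int main() {
        manual_memory("Alice");
        return 0;
    }

In C#, Java, or Python the equivalent function simply drops the reference and the collector reclaims the object. Smart-pointer libraries get C++ close to that, but they are one more thing to remember.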

Fourth reason: Better tools. Debugging and testing tools can take advantage of the interpreter layer. Similar tools are available in C++ but their developers have to work harder.

Fifth reason: Platform independence isn't that important. The big advantage of C++ is platform independence; all of the major platforms (Windows, Mac, Linux, Solaris) have ANSI-compliant compilers. And the platform independence works, at the command-line level. It doesn't extend to the GUI level. Microsoft has its API for Windows, the Mac has its own API, Solaris usually uses X, and Linux uses X but often has Gnome or KDE on top of it.

Sixth reason: Developer efficiency. I'm much more effective with Perl than with C#, and more effective with C# than C++. C++ is at the bottom of the pile, the programming language that takes me the longest time to implement a solution. It's usually better for me (and my clients) to get a program done quickly. I can complete the assignment in Perl in half a day, in C#.NET in a day, and in C++ in two or more days. (This does depend on the specifics of the task.)

Seventh reason: Fit with web technologies. C++ fits poorly with the web frameworks that are emerging, especially for cloud computing. Yes, you can make it work with enough effort. But the later languages make it work with less effort.

Eighth reason: Applications in later languages have less cruft. This is probably a function of time and not language design. Cruft accumulates over time, and the applications written in later languages have had less time to accumulate cruft. I'm sure that they will. But by then, the older C++ applications will have accumulated even more cruft. And cruft makes maintenance harder.

Ninth reason: Management support. I've observed that managers support projects with the newer languages better than projects in C++. This is possibly because the applications in newer languages are newer, and the management team supports the newer applications. By 'support', I mean 'provide resources'. New applications are given people, money, and technology; older applications are put into 'maintenance mode' with limited resources.

So there are my reasons for leaving C++. None of these reasons are tied directly to C++; in fact, I expect to see many of the same problems with newer applications in the next few years. Look for another article, say a few years hence, on why I want to leave C#.

Monday, August 10, 2009

Consumers drive tech

I'm not sure when it happened, but some time in the past few years consumers have become the drivers for technology.

In the "good old days", technology such as recording equipment, communication gear, and computing machinery followed a specific path. First, government adopted equipment (and possibly funded the research), then corporations adopted it, and finally consumers used watered-down versions of the equipment. Computers certainly followed this path. (All computers, not just microcomputers or PC variants.)

The result was that government and large corporations had a fairly big say in the design and cost of equipment. When IBM was selling mainframes to big companies (and before they sold PCs), they would have to respond to the needs of the market. (Yes, IBM was a bit of a monopoly and had market power, so they could decide some things.) But the end result was that equipment was designed for large organizations, with diminutive PCs being introduced after the "big" equipment. Since the PCs came later, they had to play with the standards set by the big equipment: PC screens followed the 3270 terminals' convention of 80-character lines, the original discs for CP/M were IBM 3740 compatible, and the original PC keyboard was left-over parts from the IBM System/23. CP/M took its design from DEC's RT-11 and RSX-11 operating systems, and PC-DOS was a clone of CP/M.

But the world has changed. In the twenty-first century, consumers decide the equipment design. Cell phones, internet tablets, and iPhones are designed and marketed to individuals, not companies. (The one exception is possibly the Blackberry devices, which are designed to integrate into the corporate environment.)

The typical PC purchased for home use is more powerful than the typical corporate PC. I myself saw this effect when I purchased a laptop PC. It had a faster processor, more memory, and a bigger screen than my corporate-issued desktop PC. And it stayed in front for several years. Eventually an upgrade at the office surpassed my home PC... but it took a while.

Corporations are buying the bargain equipment, and consumers are buying the premium stuff. But it's more than hardware.

Individuals are adopting software, specifically social networking and web applications, much faster than companies and government agencies are. If you consider Facebook, Twitter, Dopplr, and LiveJournal, it is clear that the major design efforts are for the consumer market. People use these applications and companies do not. The common office story is often about the new hire just out of college, who looks around and declares the office to be medieval, since corporate policies prevent him from checking his personal e-mail or using Twitter.

With consumers in the driver's seat, corporations now have to use equipment that is first designed for people and somehow tame it for corporate use. They tamed PCs, but that was an easy task since PCs were derived from the bigger equipment. New items like iPhones have always been designed for consumers; integrating them will be much harder.

And there's more. With consumers getting the best and corporations using the bargain equipment, individuals will have an edge. Smaller companies (say, two guys in a garage) will have the better equipment. They've always been able to move faster; now they will have two advantages. I predict that smaller, nimbler companies will arise and challenge the existing companies.

OK, that's always happening. No surprise there. I think there will be more challengers than before.