Thursday, August 20, 2009

Systems are interfaces, not implementations

When building a program, the implementation is important. It must perform a specific task. Otherwise, the program has little value.

When building a system, it is the interfaces that are important. Interfaces exist between the components (the implementations) that perform specific tasks.

Interfaces define the system's architecture. Good interfaces will make the system; poor interfaces will break it.

It is much easier to fix a poorly-designed component than a poorly-designed interface. A component hides behind an interface; its implementation is not visible to the other components. (By definition, anything that is visible to other components is part of the interface.) Since the other components have no knowledge of the innards, changing the innards will not affect them.
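The idea can be sketched in a few lines of Python (the names here are mine, purely for illustration): the consumer component knows only the interface, so any implementation can sit behind it without the consumer changing.

```python
class ConsoleLogger:
    """One implementation; its innards are invisible to callers."""
    def log(self, msg):
        print("[console]", msg)

class ListLogger:
    """A different implementation behind the same interface."""
    def __init__(self):
        self.messages = []
    def log(self, msg):
        self.messages.append(msg)

def run_job(logger):
    # This consumer sees only the interface: "an object with a log() method".
    # It cannot tell (and does not care) which implementation it was given.
    logger.log("job started")
    logger.log("job finished")

# Swapping implementations requires no change to run_job:
recorder = ListLogger()
run_job(recorder)
```

Rewriting ListLogger's innards would not touch run_job; renaming log() would, because the method name is part of the interface.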

On the other hand, an interface is visible. Changing an interface requires changes to the component itself *and* changes to (potentially) any module that uses it. (Some components have large, complex interfaces; a minor change may affect many consumer components, or only a few.) Changes to interfaces are more expensive (and riskier) than changes to implementations.

Which is why you should pay attention to interfaces. As you build your system, you will get some right and some wrong. The wrong ones will cost you time and quality. You need a way to fix them.

Which doesn't mean that you can ignore implementations. Implementations are important; they do the actual work. Often they have requirements for functionality, accuracy, precision, or performance. You have to get them right.

Too many times we focus on the requirements of the implementations and ignore the interfaces. When a development process is driven by "business requirements" or "functional requirements" then the focus is on implementations. Interfaces become a residual artifact, something that "goes along for the ride" but isn't really important.

If you spend all of your time implementing business requirements and give no thought to interfaces, you will build a system that is hard to maintain, difficult to expand, and frustrating to use.

Monday, August 17, 2009

Teaching does not cause learning

Our society has done a pretty good job at Taylorizing the learning experience. It's structured, it's ordered, it's routine, and it's efficient. And it's quite ineffective. (For proof, compare the test results of US children with those of children in other nations.)

I learn best when I can explore and make mistakes. The mistakes are the important part: I learn only when I make a mistake. When I do something and get the wrong results, I try different methods until one works. That's how I learn. Apparently I'm not alone in this.

In the book Bringing Design to Software, an essay by Schön and Bennett describes the experiences of students with two computer programs. One (named "McCavity") was designed to be a tutoring program and the other ("GrowlTiger") was created as a simple design tool. Both were for engineering students.

The results were surprising. The students found McCavity (the tutoring program) boring and were more interested in GrowlTiger.

Maybe the results were not so surprising. The tutoring program provided students with information but they had little control over the delivery. The design program, on the other hand, let students explore. They could examine the problems foremost in their minds.

Exploring, trying things, and making mistakes. That's how I learn. I don't learn on a schedule.

You can lead a horse to water, but you can't make him drink. And you can lead students to knowledge, but you can't stuff it down their throats.

Sunday, August 16, 2009

Why leave C++?

A recruiter asked me why I wanted to move away from C++. The short answer is that it is no longer the shiny new thing. The longer answer is more complex.

Why move away from C++?

For starters, let's decide on where I'm moving *to*. It's easy to leave C++ but harder to pick the destination. In my mind, the brighter future lies with C#/.NET, Java, Python, and Ruby.

First reason: C++ is tied to hardware. That is, C++ is the last of the big languages to compile to the processor level. C#, Java, Perl, Python, and Ruby all compile to a pseudo-machine and run on interpreters. C# runs in the Microsoft CLR, Java runs on the JVM, and so on. By itself, this is not a problem for C++ but an advantage: C++ programs run more efficiently than programs in the later languages. Unfortunately for C++, the run-time efficiency is not enough to make up for the development costs.

Second reason: Easier languages. C++ is a hard language to learn. It has lots of rules, and you as a developer must know them all. The later languages have backed off and use fewer rules. (A note for C#: you're getting complicated, and while you're not at C++'s level of complexity, you do have a lot to remember.)

Third reason: Garbage collection. The later languages all have it; C++ does not (unless you use an add-in library). In C++ you must delete() everything that you new(); in the later languages you can new() and never worry about delete(). Not worrying about deleting objects lets me focus on the business problem.
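As a small illustration (Python here; in CPython the reclamation happens immediately via reference counting, though other collectors may defer it), you can watch the runtime reclaim an object with no delete() anywhere in sight:

```python
import weakref

class Report:
    pass

r = Report()
ref = weakref.ref(r)    # watch the object without keeping it alive
assert ref() is r       # the object is still alive here

del r                   # drop the only strong reference...
assert ref() is None    # ...and the runtime reclaims it for us
```

The same lifecycle in C++ (without smart pointers or a GC library) would require a matching delete for the new, and forgetting it would leak.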

Fourth reason: Better tools. Debugging and testing tools can take advantage of the interpreter layer. Similar tools are available in C++ but their developers have to work harder.

Fifth reason: Platform independence isn't that important. The big advantage of C++ is platform independence; all of the major platforms (Windows, Mac, Linux, Solaris) have ANSI-compliant compilers. And the platform independence works, at the command-line level. It doesn't extend to the GUI level. Microsoft has its API for Windows, Mac has its API, Solaris usually uses X, and Linux uses X but often has Gnome or KDE on top of X.

Sixth reason: Developer efficiency. I'm much more effective with Perl than with C#, and more effective with C# than with C++. C++ is at the bottom of the pile, the language in which a solution takes me the longest to implement. It's usually better for me (and my clients) to get a program done quickly. I can complete an assignment in Perl in half a day, in C#/.NET in a day, and in C++ in two or more days. (This does depend on the specifics of the task.)

Seventh reason: Fit with web technologies. C++ fits poorly with the emerging web frameworks, especially for cloud computing. Yes, you can make it work with enough effort. But the later languages make it work with less effort.

Eighth reason: Applications in later languages have less cruft. This is probably a function of time and not language design. Cruft accumulates over time, and the applications written in later languages have had less time to accumulate cruft. I'm sure that they will. But by then, the older C++ applications will have accumulated even more cruft. And cruft makes maintenance harder.

Ninth reason: Management support. I've observed that managers support projects with the newer languages better than projects in C++. This is possibly because the applications in newer languages are newer, and the management team supports the newer applications. By 'support', I mean 'provide resources'. New applications are given people, money, and technology; older applications are put into 'maintenance mode' with limited resources.

So there are my reasons for leaving C++. None of these reasons are tied directly to C++; in fact I expect to see many of the same problems with newer applications in the next few years. Look to see another article, say a few years hence, of why I want to leave C#.

Monday, August 10, 2009

Consumers drive tech

I'm not sure when it happened, but some time in the past few years consumers have become the drivers for technology.

In the "good old days", technology such as recording equipment, communication gear, and computing machinery followed a specific path. First, government adopted equipment (and possibly funded the research), then corporations adopted it, and finally consumers used watered-down versions of the equipment. Computers certainly followed this path. (All computers, not just microcomputers or PC variants.)

The result was that government and large corporations had a fairly big say in the design and cost of equipment. When IBM was selling mainframes to big companies (and before they sold PCs), they would have to respond to the needs of the market. (Yes, IBM was a bit of a monopoly and had market power, so they could decide some things.) But the end result was that equipment was designed for large organizations, with diminutive PCs being introduced after the "big" equipment. Since the PCs came later, they had to play with the standards set by the big equipment: PC screens followed the 3270 convention of 25 lines and 80 characters, the original discs for CP/M were IBM 3740 compatible, and the original PC keyboard was left-over parts from the IBM System/23. CP/M took its design from DEC's RT-11 and RSX-11 operating systems, and PC-DOS was a clone of CP/M.

But the world has changed. In the twenty-first century, consumers decide the equipment design. Cell phones, internet tablets, and iPhones are designed and marketed to individuals, not companies. (The one exception is possibly the Blackberry devices, which are designed to integrate into the corporate environment.)

The typical PC purchased for home use is more powerful than the typical corporate PC. I myself saw this effect when I purchased a laptop PC. It had a faster processor, more memory, and a bigger screen than my corporate-issued desktop PC. And it stayed in front for several years. Eventually an upgrade at the office surpassed my home PC... but it took a while.

Corporations are buying the bargain equipment, and consumers are buying the premium stuff. But it's more than hardware.

Individuals are adopting software, specifically social networking and web applications, much faster than companies and government agencies are. If you consider Facebook, Twitter, Dopplr, and LiveJournal, it is clear that the major design efforts are for the consumer market. People use these applications and companies do not. The common office story is often about the new hire just out of college, who looks around and declares the office to be medieval, since corporate policies prevent him from checking his personal e-mail or using Twitter.

With consumers in the driver's seat, corporations now have to use equipment that is first designed for people and somehow tame it for corporate use. They tamed PCs, but that was an easy task since PCs were derived from the bigger equipment. New items like iPhones have always been designed for consumers; integrating them will be much harder.

And there's more. With consumers getting the best and corporations using the bargain equipment, individuals will have an edge. Smaller companies (say, two guys in a garage) will have the better equipment. They've always been able to move faster; now they will have two advantages. I predict that smaller, nimbler companies will arise and challenge the existing companies.

OK, that's always happening. No surprise there. I think there will be more challengers than before.

Friday, July 31, 2009

RIP Software Development Conference

I am behind the times. Not in the loop. Uninformed.

Techweb killed the SD conferences. These were the "Software Development" conferences that I liked. (So much that I would pay my own way to attend them.)

Techweb killed them back in March, shortly after the "SD West 2009" con.

Here's an excerpt of the announcement that Techweb sent to exhibitors.

Due to the current economic situation, TechWeb has made the difficult decision to discontinue the Software Development events, including SD West, SD Best Practices and Architecture & Design World. We are grateful for your support during SD's twenty-four year history and are disappointed to see the events end.

Developers remain important to TechWeb, and we encourage you to participate in other TechWeb brands, online and face-to-face, which include vibrant developer communities:
...
Again, please accept our sincerest gratitude for your time, effort and contributions over the years. It is much appreciated.


The full text is posted on Alan Zeichick's blog.

The SD shows were inspirational. They brought together people with the one common interest of writing software. The shows were not sponsored by a single company, nor did they focus on one technology. People came from different industries to discuss and learn about all aspects of software. As one fellow-attendee said: the conferences were "ecumenical".

While I'm saddened at the loss (and bemused at my ignorance of their demise), I'm disappointed with Techweb's approach. Their announcement is bland and uninspiring. The brutal utility of the message tells us of Techweb's view of its mission: running conferences efficiently (and profitably). Their announcement can be paraphrased: "SD was not profitable, so we killed it. We've got these other shows; please spend money on them."

In contrast, the O'Reilly folks have a very different mission: building a community. They run conferences for the community, not as their means of existence. (They also publish books, host web sites, and run other events.) If a conference should become unprofitable, then it becomes a drag on their mission of building community and I would expect them to cancel it. But here's the difference: I would expect O'Reilly to provide another means for people to meet and discuss and learn, and I would expect O'Reilly to phrase their announcement in a more positive and inspirational light. Something along the lines of:

We've been running the (fill in the name) conference for (number) years, bringing people together and building the community. In recent years, the technology and the environment have changed, and the conference is no longer the best way to meet the needs of the practitioners, presenters, and exhibitors. We're changing our approach, and creating a new (whatever the new thing is) to exchange experiences and learn from each other. We invite you to participate in this new aspect of our community.

OK, that's not the perfect announcement, but it's much closer to what I want from conference organizers.

I'm not a marketing expert; I'm a programmer. But I know what I want: Someone who listens. O'Reilly does that. I'm not sure that Techweb does.

Tuesday, July 28, 2009

Last century's model?

When it comes to processes, commercial software development is living in the industrial age. And it doesn't have to be that way.

Lots of development shops use the standard "integration" approach to building software. Small teams build components, which are then integrated into assemblies, which are then integrated into subsystems, which then become systems, which are then assembled into the final deliverable. At each step, the component/assembly/subsystem/system is tested and then "released" to the next higher team. Nothing is released until it passes the group's quality assurance process.

This model resembles (one might say "duplicates") the process used for physical products by defense contractors, automobile assembly plants, and other manufacturers. It leads to a long "step" schedule, with each stage dependent on the release of the pieces below it.

But does it really have to be that way?

For the folks building a new fighter plane, the model makes sense. If I'm working on a new jet engine, you can't have it, because it can be in only one physical place at any given time. I need it until I'm done. There's no way I can work on it and let you have a copy. (Oh, we could build two of them, but that would add a great deal of expense and the synchronization problems are significant.) We are limited to the "build the pieces and then bring them together" process.

Once the complete prototype is proven, we can change to an asynchronous assembly process, one that creates a large number of the components and assembles them as needed. But for software, that amounts to duplicating the "golden CD" or posting the files on the internet.

Software is different from physical assemblies. You *can* have a copy while I work on it. Software is bits, and easily copied. Rather than hold a component in hiding, you can make the current version available to everyone else on the project. Other teams can integrate the latest version into their component, test it, and give you feedback. You would get feedback faster, since you don't have to wait for the "integration phase" (when you've committed to the design of your component).

And this is in fact what many open source projects do. They make their latest code available... to anyone. The software projects that I have seen have two sets of available code: the latest build (usually marked as "development" or "unstable") and the most recent "good" release (marked as "stable").

If you're on a project (or running a project) that uses the old "build pieces independently and then bring them together" process, you may want to think about it.

Sunday, July 26, 2009

Just what is a cloud, anyway?

The dominant theme at last week's OSCON conference was cloud computing. So what can I say about cloud computing?

As I see it, "cloud computing" is a step toward the commoditization of computing power. The cloud model moves server hardware and base software out of corporate data centers and into provider data centers. Clients (mostly corporations) can use cloud computing services "on demand", paying more as they use more and less as they use less.

Cloud computing services fill the second tier, between the front-end browser and the legacy back-end processing systems. This is where web processing is today.

The big players have signed on to this new model. Microsoft has its "Azure" offering, Amazon.com has EC2 and S3, and Google has its App Engine.

Cloud providers are similar to the early electric companies. They build and operate the generators and transmission lines, and insist on meters and bills.

Moving into the cloud requires change. Your applications must be ready to work in the cloud. They have to talk to the cloud APIs and be designed to run in multiple instances. Indeed, one of the features of the cloud is that new instances of your application can come on-line as you request more computing power.

Like the early electricity companies, each provider has its own API. Apps for the Amazon.com platform cannot be (easily) transferred to Google. And Microsoft not only has its own API but uses the .NET platform with its development languages. I suspect that common APIs will emerge, but only after time and possibly with government assistance. (Just as the government set standards for control pedals in automobiles.)

I suspect that apps on the cloud will be different from today's apps. Mainframes had their standard apps: accounting and finance, mainly. When minicomputers arrived, people ported the accounting apps to minis with some success but also created new applications such as word processing. Later, PCs arrived and absorbed the word processing market but also saw new apps such as spreadsheets. Networked PCs created e-mail but left the old apps in stand-alone mode. The web saw new applications like LiveJournal and Facebook. (OK, yes, I know that e-mail existed prior to networked PCs. But networked PCs made e-mail possible for most people. It was the killer app for networks.)

Each new platform sees new applications. Maybe the new platform is created to serve the new apps; maybe the new apps are created as a response to the new environment. I don't know the direction of causality. With cloud computing, expect to see new applications, things that don't work on PCs or on the current web. The apps will meet new needs and have advantages (and risks) beyond today's apps.