Sunday, December 27, 2009

The "nook" should be called the "no"

The Kindle and the Nook are intriguing devices. Marketed as e-book readers, they are intended to replace paper books with electronic ones. They both look interesting. I've seen people using the Kindle, and this weekend I stopped by the local Barnes and Noble to check out the Nook.

I find it nice, but not compelling.

My biggest complaint is probably the name. Barnes and Noble gets the "dumbest name of a consumer product" award. (The previous holder of said award was Amazon.com, for "Kindle". So the unappealing names of the two devices cancel.)

Despite the name, I find I prefer the Nook over the Kindle. Partly because of the virtual keyboard, partly because of the Android operating system, and partly because of the feel of the device. The virtual keyboard is more flexible, and the Nook uses the space for other menus and commands. I like the idea of an open-source operating system. And the Nook feels nice in my hands -- it's a comfortable fit.

But the Nook web site has too many "no" answers. It plays MP3 audio files, but not Ogg Vorbis audio files. (And on an Android O/S!) It doesn't let me share notes with friends. (I can lend books to friends, but not my annotations.) It doesn't let me update my Facebook or LiveJournal pages. It doesn't let me surf the web. It's a device with narrow functionality.

I'm not sure that the Kindle is any better. (I've looked at the Nook more than the Kindle, so I can't really speak about the Kindle.)

I understand the reasoning for limiting the devices. They (Amazon.com and B&N) want to use the devices to drive book sales (e-book sales directly and paper book sales indirectly) and also want to minimize network traffic. Users don't pay for airtime and connections; B&N and Amazon.com pay for them. Also, a complex device is hard for a lot of non-techies to use. Most people don't care about the format of their music, or Facebook updates, or sharing notes. At least, not yet.

Barnes and Noble and Amazon.com are missing the point. They have designed their e-readers for the baby-boomer generation, people focused on themselves. The Kindle and the Nook are excellent devices for sitting in a corner, reading to yourself, and not interacting with others. But that's not the device for me. I want to share (think of LiveJournal, DOPPLR, Facebook, and Twitter) and have little use for a "me only" device.


Thursday, December 17, 2009

e-Books are not books

The statement "e-Books are not books" is, on its face, a tautology. Of course they're not books. They exist as bits and can be viewed only by a "reader", a device or program that renders pixels to a person.

My point is beyond the immediate observation. e-Books are not books, and will have capabilities we do not associate with books today. e-Books are a new form, just as the automobile was not a horseless carriage and a word processor is not a typewriter.

We humans need time to understand a new thing. We didn't "get" electronic computing right away. ENIAC was an electronic version of a mechanical adding machine; a few years later, EDVAC was a true stored-program computer.

Shifts in technology can be big or small. Music was distributed on paper; the transition to 78s and LPs was a major shift. It took us some time to fully appreciate the possibilities of recorded music. The shift to compact discs (small shiny plastic instead of large, warping vinyl) was a small one; little changed in our consumption or in the economic model. The shift to digital music on forms other than shiny plastic discs is a big one, and the usage and economic model will change.

An on-line newspaper is not a newspaper. It will become a form of journalism -- but not called a newspaper, nor will it have the same capabilities or limitations as ink slapped onto dead trees.

e-Books are not books. The stories and information presented to us on Kindle and Nook readers are the same as in printed books, but that will change. For example, I expect that annotations will become the norm for e-books. Multiple readers will provide annotations, with comments for themselves and for others. (Think of it as today's e-book with Twitter and Google.) One person has blogged about their method for reading books (http://www.freestylemind.com/how-to-read-a-book) and how they keep notes and re-read portions of books for better understanding. Their system uses post-it notes. I predict that future e-Book readers will allow for the creation and storage of personal notes, and the sharing of notes with friends and the world.

Or perhaps e-Books will let us revise books and make corrections. (Think "e-Books today combined with Wikipedia".)

And that is why an e-book is not a book.


Sunday, December 13, 2009

Code first, then design, and then architecture

Actually, the sequence is: Tests first, then code, then design, and finally architecture.

On a recent project, I worked the traditional project sequence backwards. Rather than starting with an architecture and developing a design and then code, we built the code and then formed a design, and later evolved an architecture. We used an agile process, so we had tests supporting us as we worked, and those tests were valuable.

Working backwards seems wrong to many project managers. It breaks with the metaphor of programming as building a house. It goes against the training in project management classes. It goes against the big SDLC processes.

Yet it worked for us. We started with a rudimentary set of requirements. Very rudimentary. Something along the lines of "the program must read these files and produce this output", and nothing more formal or detailed. Rather than put the details in a document, we left the details in the sample files.

Our first task was to create a set of tests. Given the nature of the program, we could use simple scripts to run the program and compare output against a set of expected files. The 'diff' program was our test engine.
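
A harness like that can be very small. Here's a sketch of the idea in Python; the program name, directories, and file names are only illustrative placeholders, not our actual project files:

    # run_tests.py -- minimal sketch of a diff-style test harness.
    # "./myprogram" and the tests/ directories are illustrative placeholders.
    import subprocess
    import sys
    from pathlib import Path

    def run_case(input_file, expected_file):
        """Run the program on one input and compare its output to the expected file."""
        result = subprocess.run(["./myprogram", str(input_file)],
                                capture_output=True, text=True)
        return result.stdout == expected_file.read_text()

    def main():
        failures = 0
        for input_file in sorted(Path("tests/input").glob("*.txt")):
            expected_file = Path("tests/expected") / input_file.name
            passed = run_case(input_file, expected_file)
            print(("PASS" if passed else "FAIL"), input_file.name)
            failures += not passed
        return 1 if failures else 0

    if __name__ == "__main__":
        sys.exit(main())

Adding a test case is just a matter of dropping a new input file and its expected output into the directories.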

After we had some tests, we wrote some code and ran the tests. Some passed, but most failed. We weren't discouraged; we expected most of them to fail. We slowly added features to the code and got our tests to pass. As we coded, we thought of more tests and added them to our scripts.

Eventually, we had a program that worked. The code wasn't pretty -- we had made several design compromises as we coded -- but it did provide the desired output. The "standard" process would advance at this point, to formal testing and then deployment. But we had other plans.

We wanted to improve the code. There were several classes that were big and hard to maintain. We knew this by looking at the code. (Even during our coding sessions, we told ourselves "this is ugly".) So we set out to improve the code.

Managers of typical software development efforts might cringe at such an effort. They've probably seen efforts to improve code, many of which fail without delivering any improvement. Or perhaps the programmers say that the code is better, but the manager has no evidence of improvement.

We had two things that helped us. The first was our tests. We were re-factoring the code, so we knew that the behavior would not change. (If you're re-factoring code and you want the behavior to change, then you are not re-factoring the code -- you're changing the program.) Our tests kept us honest, by finding changes to behavior. When we were done, we had new code that passed all of the old tests.
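
To make "the behavior does not change" concrete, here's a toy illustration (not code from the project): the implementation is reworked, the test is not, and a green test after the rework is evidence that the behavior held still.

    # Before: parsing and summing are tangled together in one function.
    def total_quantity(lines):
        total = 0
        for line in lines:
            fields = line.split(",")
            total = total + int(fields[2])
        return total

    # After: the parsing is extracted into a named helper. Same inputs, same outputs.
    def parse_quantity(line):
        return int(line.split(",")[2])

    def total_quantity_refactored(lines):
        return sum(parse_quantity(line) for line in lines)

    # The test does not change; it passes before and after the refactoring.
    def test_total_quantity():
        lines = ["widget,blue,3", "gadget,red,4"]
        assert total_quantity(lines) == 7
        assert total_quantity_refactored(lines) == 7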

The second thing we had was class reference diagrams. Not class hierarchy diagrams, but reference diagrams. Class hierarchy diagrams show you the inheritance and container relationships of classes. Reference diagrams give you a different picture, showing you which classes are used by other classes. The difference is subtle but important. The reference diagrams gave us a view of the design. They showed all of our classes, with arrows diagramming the connections between classes. We had several knots of code -- sets of classes with tightly-coupled relationships -- and we wanted a cleaner design.
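
If your tools won't draw reference diagrams for you, the edges aren't hard to extract. Here's a rough sketch in Python, using its 'ast' module on a hypothetical module file (a C++ shop would reach for its own parser or a tool that builds collaboration graphs); the resulting edges can be fed to a graphing tool such as Graphviz:

    # reference_edges.py -- list "class A uses class B" edges for one module.
    # A crude scan: any mention of another class's name inside a class body
    # counts as a reference (this also picks up base classes).
    import ast
    from pathlib import Path

    def reference_edges(source_file):
        tree = ast.parse(Path(source_file).read_text())
        class_names = {node.name for node in ast.walk(tree)
                       if isinstance(node, ast.ClassDef)}
        edges = set()
        for cls in ast.walk(tree):
            if not isinstance(cls, ast.ClassDef):
                continue
            for node in ast.walk(cls):
                if (isinstance(node, ast.Name)
                        and node.id in class_names
                        and node.id != cls.name):
                    edges.add((cls.name, node.id))
        return edges

    if __name__ == "__main__":
        for user, used in sorted(reference_edges("myapp.py")):  # placeholder file name
            print(user, "->", used)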

We got our cleaner design, and we kept the "before" and "after" diagrams. The non-technical managers could see the difference, and commented that the "after" design was a better one.

We repeated this cycle of code-some, refactor-some, and an architecture evolved. We're pretty happy with it. It's easy to understand, allows for changes, and gives us the performance that we need.

Funny thing, though. At the start, a few of us had definite ideas about the "right" architecture for the programs. (Full disclosure: I was one such individual.) Our final architecture, the one that evolved to meet the specific needs of the program as we went along and learned about the task, looked quite different from the initial ideas. If we had picked the initial architecture and stayed with it, our resulting program would be complicated and hard to maintain. Instead, by working backwards, we ended with a better design and better code.

Sometimes, the way forward is to go in reverse.


Saturday, December 5, 2009

Limits to App Growth

Long ago (when dinosaurs roamed the Earth), applications were limited in size. Today, applications are still limited in size, but for different reasons.

Old constraints were hardware and software: the physical size of the computer (memory and disk), the capabilities of the operating system, and the capacities of the compiler. For example, some compilers had a fixed-size symbol table.

Over the decades, physical machines became more capable, and the limits from operating systems and compilers have become less constraining. So much so that they no longer limit the size of applications. Instead, a different factor is the limiting one. That factor is upgrades to tools.

How can upgrades limit the size of an application? After all, new versions of compilers are always "better" than the old. New operating systems give us more features, not fewer.

The problem comes not from the release of new tools, but from the deprecation of the old ones.

New versions of tools often break compatibility with the old version. Anyone who programmed in Microsoft's Visual Basic saw this as Microsoft rolled out version 4, which broke a lot of code. And then again as version 5 broke a lot of code. And then again as VB.NET broke a ... well, you get the idea.

Some shops avoid compiler upgrades. But you can't avoid the future, and at some point you must upgrade. Possibly because you cannot buy new copies of the old compiler. Possibly because another tool (like the operating system) forces you to the new compiler. Sometimes a new operating system requires the use of new features (Windows NT, for example, with its "Ready for Windows NT" logo requirements).

Such upgrades are problematic for project managers. They divert development resources from other initiatives with no increase in business capabilities. They're also hard to predict, since they occur infrequently. One can see that the effort is related to the size of the code, but little beyond that. Will all modules have to change, or only a few? Does the code use a language feature or library call that has changed? Are all of the third-party libraries compatible with the new compiler?
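
A rough first answer to those questions is to scan the code for the things that changed. Here's a sketch in Python; the identifiers and file extensions below are placeholders for whatever library or language feature the upgrade actually touches:

    # count_affected.py -- rough count of source files that touch a changed dependency.
    # The identifier list and file extensions are placeholders.
    import re
    from pathlib import Path

    CHANGED_IDENTIFIERS = ["CString", "CWinApp", "AfxMessageBox"]  # e.g., a few MFC names
    PATTERN = re.compile("|".join(re.escape(name) for name in CHANGED_IDENTIFIERS))

    def affected_files(root, extensions=(".cpp", ".h")):
        """Yield source files under 'root' that mention any changed identifier."""
        for path in Path(root).rglob("*"):
            if path.suffix in extensions and PATTERN.search(path.read_text(errors="ignore")):
                yield path

    if __name__ == "__main__":
        hits = list(affected_files("src"))  # "src" is a placeholder source tree
        print(len(hits), "files reference the changed identifiers")

Such a count won't tell you how hard each change is, but it does separate "a few modules" from "all of them."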

The project team is especially challenged when there is a hard deadline. This can come from the release of a new platform ("we want to be ready for Windows 7 on its release date!") or the expiration of an old component ("Visual Studio 6 won't be supported on Windows Vista"). In these situations, you *have* to convert your system to the new component/compiler/platform by a specific date.

This is the factor that limits your system size. Small systems can be adapted to a new compiler or platform with some effort. Larger systems require more effort. Systems of a certain size will require so much effort that they cannot be converted in time. What's the crossover point? That depends on your code, your tools, and your team's talent. I think that every shop has its own factors. But make no mistake: in every shop there is a maximum system size, and a system that crosses it will be too large to upgrade before the deadline.

What are the deadlines? That's the evil part to this situation. You're not in control of these deadlines; your vendors create them. For most shops, that's Microsoft, or Sun, or IBM.

Here's the problem for Microsoft shops: MFC libraries.

Lots of applications use MFC. Big systems and small. Commonly used systems and rarely-used ones. All of them dependent on the MFC libraries.

At some point, Microsoft will drop support for MFC. After they drop support, their new tools will not support MFC, and using MFC will become harder. Shops will try to keep the old tools, or try to drag the libraries into new platforms, but the effort won't be small and won't be pretty.

The sunset of MFC won't be a surprise. I'm sure that Microsoft will announce it well in advance. (They've made similar announcements for other tools.) The announcement will give people notice and let them plan for a transition.

But here's the thing: Some shops won't make the deadline. Some applications are so big that their maintainers will be unable to convert them in time. Even if they start on the day Microsoft announces their intent to "sunset" MFC. Their systems are too large to meet the deadline.

That's the limit to systems. Not the size of the physical machine, not the size of the compiler's symbol table, but the effort to "stay on the treadmill" of new versions. Or rather, the ability of the development team to keep from falling off the end of the treadmill.

I've picked MFC as the bogeyman in this essay, but there are other dependencies. Compilers, operating systems, third-party libraries, IPv4, Unicode, mobile-ness in apps, the iPhone, Microsoft Office file formats, Windows as a dominant platform, ... the list goes on.

All projects are built on foundations. These foundations can change. You must be prepared to adapt to changes. Are you and your team ready?