Tuesday, June 29, 2010

Rugged individualism

What will hardware and software look like in the future? I don't know for certain, but I can make a few guesses. Let's look at the past:

The common wisdom for the evolution of hardware runs this way: from the early tabulators we built large computers (later named "mainframes"), then we learned to make minicomputers, then in 1976 we introduced the pre-PC microcomputers, followed by the IBM PC, workstations, servers, and finally the iPhone and its kin. It's not completely accurate, but it's pretty good.

Common wisdom for the evolution of software runs along similar lines: from the early days of direct programming in machine language, we moved to monitors (operating systems) and the early compilers (FORTRAN and COBOL), batch processing, time-share systems and interactive processing (UNIX), virtual machines, a reversion to direct programming with the early microcomputers followed by "real" operating systems for PCs (Windows NT), client-server systems, web applications, and a re-invention of virtual computers, leading to today's modern world of smartphone apps and cloud computing.

Aside from the obvious trend of smaller yet more powerful hardware, we can see that advances in computing -- especially in software -- have favored the individual user. Starting with the microcomputers of 1976, advances in technology have given the advantage to individual users. That trend is clear in the iPhone and its kin.

Advances in hardware have favored the individual, and so have advances in software. Not only are iPhones made for individuals, but the software is tailored to individuals. iPhones do not plug in to enterprise-level control systems (such as Microsoft Exchange servers). They remain controlled by the user, who selects the software (apps) to install and run.

BlackBerry devices do plug in to the corporate systems, and while lots of people carry them, the vast majority don't pay for them -- their employer does. BlackBerry phones are not the choice of individuals; they are the choice of IT managers.

I expect the trend toward the individual to continue. I expect that new cloud-based apps will be designed for individuals (Facebook, anyone?) and not for corporations. The corporate applications (general ledger, billing, personnel, and marketing) all fit in the mainframe world (with a possible web front-end). It is individuals who will drive the new tech.

Areas to look for improvements: pervasive computing (in which small computers are everywhere), automatic authentication (eliminating the need for passwords), and collaboration. The last is an interesting one, since it applies not to the individual but to a group. Yet it won't be the formal, structured group of the corporation; it will be a group of like-minded individuals with specific common goals.


Saturday, June 26, 2010

Books have the advantage in the new world order

Books, magazines, newspapers, musicians, and movie-makers must all learn to live in the new world order of the internet. All of the key players must abandon the old ways and learn the new. Book publishers may be in the best position of the crowd, due to the public domain.

Book publishers, magazine publishers, newspaper publishers, recording labels, and film studios all face the same problems of the internet: it is easy to copy a digital good. The news web sites and blogs are awash in articles about the downward trend in revenues, the upward trend in piracy, and the imminent collapse of their business models. And for a number of specific publishers (I'm using the larger definition that includes all media) the demise is indeed imminent. But not all; some will figure out the rules of the brave new world. (Just which ones, though, is a matter that must wait until we arrive in the new world.)

Of the media, book publishers have an advantage: the public domain. Or rather, a collection of works that is in the public domain. There is a large number of books available for free -- that is, without the encumbrance of copyright. These works were published some time ago and have fallen "out of copyright": books such as "Pride and Prejudice" and "Alice in Wonderland".

Such a collection is useful to book publishers and to the makers of e-book readers. E-reader makers can use the collection as an enticement; in fact, the Kobo reader comes with one hundred such classics.

Book publishers can use the collection too, indirectly. They can let the e-reader makers include this collection and train the customers in the use of e-book formats. Rather than buying a single copy of a book and passing it on to friends, a customer can become accustomed to downloading books to their personal device. From there it is a small matter of paying a modest fee (and the modesty and reasonableness of the fee are important) for a current book. This gives the publishers a business model.

Other media lack this collection of free works. The music industry has been quite good at keeping all of its works in copyright; consequently it has no free collection to use for training customers. Everything must be purchased: every song or album requires a transaction. By being grabby, the recording and movie industries have gained in the short term but are losing in the long term.

The book industry is making gains with the e-readers such as the Kindle and the Nook. These gains are due in part to the free collection of materials. Good materials. If the music and film industries want to make similar gains with customers, they may have to consider a similar strategy. Sadly, I think that they will be unable to do so. They will cling tightly to every work in their collection, demanding payment for every use, every viewing, and every excerpt. And they may pay the price for making customers pay the price.


Monday, June 21, 2010

Caveat emptor -- and measurements

We focus on the development of systems (programs, applications, or apps, depending on the environment) but we must not forget the buyer, the client, or the person who requests the system. They too have responsibilities in the development of the solution.

One overlooked responsibility is the specification for the durability of the system, or its expected life span. A program designed as a quick fix to an immediate problem can be a wonderful thing, but it is not necessarily designed for a long life span. Platforms and development environments change, and programs can be designed for a narrow or general need.

With ownership comes responsibility. A system owner must understand the durability of their systems. We often hear of a temporary fix living for years. Sometimes this is acceptable, yet many times it entails risks. Quick, temporary solutions are just that: hastily assembled solutions to specific problems. There is no guarantee that they will work for all future data.

Yet owners can think that a quick fix today can serve for a long period of time. This is a mistake. It is equivalent to fixing a flat tire with "fix-flat" goo and expecting the result to be as good as a new tire. The goo does repair the flat, but it does not provide you with a new tire. If you want a new tire, you have to pay for it.

Similarly, if you want a durable software solution, you have to pay for it. If you are willing to accept a temporary solution, you must acknowledge that the solution is temporary and you will have to replace it in the future. You cannot pay for a temporary solution and receive a permanent one. Yet system owners rarely accept this responsibility.

Part of the problem is that we have no way to measure the durability of a solution. A system owner cannot tell the difference between a temporary fix and a permanent solution, other than the project duration and cost. Unlike a physical good, software is unobservable and unmeasurable. With no ability to rate a solution, all solutions look the same.

All solutions are not the same. We need a way to measure solutions and identify their quality.


Sunday, June 20, 2010

Forecast: mostly cloudy

Microsoft started its journey in 1975 with a simple mission: become the dominant software manufacturer. (Their mission statement was "A computer in every office and in every home... and all of them running Microsoft software.") It is safe to say that they have accomplished that mission.

Apple started about the same time. Their product was hardware, not software. I don't know if they had a mission statement, but "Computers that regular people can use" would not be a bad approximation. They too have accomplished their mission.

The computing world is bigger than office software and consumer gadgets. It reaches to embedded systems, high-performance computing, games, and the World Wide Web. In these areas, neither Microsoft nor Apple has become dominant.

The frontier is on the web, with cloud computing on the back end and browsers and smart phones on the front end. Apple has a commanding lead on the front end with their iPhone, but RIM and Google are pushing hard with BlackBerry and Android devices. Microsoft is pushing with Windows Phone 7, but by sticking to software they have all but given the market away.

The back end is the more interesting area. With a long legacy of open source (Linux and Apache), the old back end of web servers is growing into cloud processing. Vendors want to be there, but the competition is difficult. Microsoft is developing their Azure platform, and has even changed their C# language so that it can live in the cloud. (The scaling that makes the cloud attractive places requirements on the programs that run there, and those requirements demand specific features in language design. Not just any program can move to the cloud.)

I'm pretty sure that Oracle wants to move into the cloud, or at least ride it with their applications. Yet they are in a difficult position -- the NoSQL databases fit better in the cloud, and Oracle's namesake database fits in the cloud about as well as an anvil fits in a real-life cloud.

The most successful cloud folks have leveraged open source and advanced languages like Ruby and Python. And the folks who are working in the cloud are the bright young things just out of school or from creativity-supporting IT shops. Folks from the stodgy, conservative organizations tend to worry more about politics than technology, and cloud tech has a disruptive influence on politics. You cannot keep the old organization chart and expect the cloud to fit into the existing structure.

So here is where we see Microsoft and Apple: Microsoft has fit into the organization, making PCs operate much like big mainframes. Apple has targeted the top of the food chain (the user) with little attention to the plumbing underneath.

My guess is that Apple will weather the incoming clouds a bit easier than Microsoft, and open source will have a big win with application development.


Tuesday, June 15, 2010

The Model T and software

We can learn a lot from the past. Many industries have dealt with issues that seem new to the software industry, and we can use the knowledge gained by those older industries.

Let's apply the manufacturing model to software. In this model, we consider software a manufactured good, like a car or a pair of shoes.

In the manufacturing model, you have the following costs: raw materials, facilities and equipment, manufacturing (operating the equipment and transforming the raw materials into finished product), and distribution. One could add post-sales support (since software has a lot of support), disposal of waste products, training for workers, marketing, industrial engineering and operations research, and management overhead, but let's stick to the basics.

For PC software (the kind sold on floppies or a CD), there were no raw materials, the equipment consisted of the computers and tools for the development and testing teams, the distribution consisted of producing CDs in packages, and the post-sales support could be significant.

Web software and cloud applications have a similar model. Again, there are no raw materials and the equipment consists of computers (perhaps servers) and software. Distribution costs are lower, since you don't have to ship shiny round plastic discs to users.

In both types of application, the big costs are for development, testing, and support. Businesses recognize this, and are attempting to reduce these costs with outsourcing. It is a short-term, quick-fix solution and one that will provide little reduction in cost.

Many programs are complex. As Fred Brooks explains, software consists of essential and accidental complexity.

As an industry, we have two problems. First, we have no way to measure complexity. Second, we don't care about complexity.

The first problem is not completely true. We do have ways of measuring complexity, some better than others. All involve work and thinking, and they can produce inconvenient results. But none that I am aware of can differentiate between the essential and the accidental aspects of complexity.
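
As a concrete illustration, one of the oldest measures -- cyclomatic complexity -- can be approximated by simply counting branch points in source code. Here is a minimal sketch in Ruby, under my own simplifying assumptions (a hand-picked keyword list, whole-file scoring, no handling of comments or strings); true to the point above, it cannot tell essential complexity from accidental complexity.

    # Rough cyclomatic-complexity proxy: count branch points in each file.
    # This is a sketch, not a rigorous metric: it scans for a hand-picked
    # set of branching keywords and operators, ignores comments and strings,
    # and says nothing about essential versus accidental complexity.
    BRANCH_PATTERN = /\b(?:if|elsif|unless|while|until|when|rescue|case)\b|&&|\|\|/

    def complexity_score(path)
      source = File.read(path)
      # Start at 1 (a single path through the code), add one per branch point.
      1 + source.scan(BRANCH_PATTERN).size
    end

    Dir.glob('lib/**/*.rb').sort.each do |file|
      puts format('%-50s %5d', file, complexity_score(file))
    end

Even a crude number like this, tracked over time, would show an owner whether a system is getting simpler or more tangled.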

The second problem is the bigger one. We accept complexity. Given deadlines and the need to ship something -- anything, as long as it works -- we make no effort to reduce the complexity of our systems. As a result, our systems grow in complexity.

This complexity drives up cost. It is harder (read that as "more expensive") to change a complex program than a simple one.

It's worse than that. The typical application life cycle process limits changes to those that achieve specific business needs. These processes prevent members of the team from improving areas of the code. As a result, the typical application life cycle process ensures that complex code will remain unchanged until a specific business need arises, and then (since no one knows the code) the programmers will make the minimum set of changes that satisfy the requirements, in fear of breaking some other feature.

So the system grows in complexity.

Yet the complexity of the system drives the development cost, and (I'm pretty sure) the support cost, the two costs that worry managers.

The way out requires lots of work and a change in our thinking. First, we must measure the complexity of our systems. (Or perhaps I should put this as the zeroth item: we must decide that we *want* to measure the complexity of our systems and use that measurement in the management of projects. Then we can decide on measurements.)

We have to expand our project management skills to account for complexity. We must project the complexity of proposed solutions and include that measure in our decision to implement a specific solution. We may have to decline some changes based on the increase in complexity (and therefore the increase in subsequent development and support costs), as sketched below.
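
What might that decision look like? A toy sketch follows; the complexity budget and the cost-per-point figure are numbers I invented for illustration, standing in for whatever an organization would actually calibrate.

    # Hypothetical gate for a proposed change: accept it only if the projected
    # complexity (and therefore the projected maintenance cost) stays within a
    # budget. The threshold and cost factor are invented for illustration.
    COMPLEXITY_BUDGET = 500       # made-up ceiling for the whole system
    COST_PER_POINT    = 120.0     # made-up yearly maintenance dollars per point

    def evaluate_change(current_complexity, projected_delta)
      projected = current_complexity + projected_delta
      {
        projected_complexity:  projected,
        projected_yearly_cost: projected * COST_PER_POINT,
        accept:                projected <= COMPLEXITY_BUDGET
      }
    end

    p evaluate_change(430, 45)   # within budget: accept
    p evaluate_change(480, 45)   # budget exceeded: decline (or simplify first)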

Not everyone will like the analysis of complexity. Marketers will be unhappy that their latest request was denied based on future costs. Developers may feel that they are rated on the quality (and complexity) of the code they write. Managers will see another task in their (already overflowing) inbox.

Yet measuring and managing complexity is the only way to tame the software monster. Without the proper discipline, our systems will become large and expensive to maintain. More expensive than the next company's system, perhaps.


Sunday, June 13, 2010

Debating foolish consistency

For the past thirty years, the model for databases has been the relational database with its SQL language for access. The tenets of relational databases are summarized by the acronym ACID, referring to the concepts of Atomicity, Consistency, Isolation, and Durability.

We are now debating those ideas. And high time, in my opinion.

The debate has been spurred by technology and the desire for performance. Applications like Twitter and Facebook have raised the bar for database size and performance. Facebook is approaching 500 million users, many of whom are on-line at the same time. The old model of centralized control and absolute consistency simply does not work for applications of this size.

Which is not to say that we must abandon the old ACID/SQL/centralized model for databases. It works for applications of a certain size and a certain transaction volume.

It used to be that ACID/SQL worked for the "high end" applications and the low end could be handled by plain text files. This was a simple model, and vendors such as Microsoft and Oracle used it to their advantage.

The definition of "high end" has changed. It's been pushed far higher, and the new high end is beyond the capabilities of the old model. This has spurred people to develop the NoSQL model which which moves away from structured data and relaxes the requirement for consistency.

This issue is not without debate. Michael Stonebraker, in his article in the Communications of the ACM, questions the relaxation of consistency requirements.

I'm not sure of the outcome of this debate. But I like the fact that we are having the debate. We selected the relational database model thirty years ago. The selection was made on little more than the say-so of E. F. Codd and IBM. Let's review the possibilities and discuss the merits of different approaches.


Wednesday, June 9, 2010

Requiem for power users

This blog talks about settings, but says something important about power users: ignore them.

At first, I was angry. Then sad.

But I have seen the future, and power users have no place in it. With the demise of the home computer (replaced by smart phones and cool tablets) and the organizational support of the work computer, power users have nothing left to do.

Power users came into existence with the first corporate use of PCs. Power users were the folks who were the smartest kids on the block (or in the office) and knew how to make the computers work. In the days of PC-DOS, Lotus 1-2-3, screen drivers, and FORMAT commands, power users had a place in the corporate world. They helped the neophytes with these strange new devices.

Power users used magic to make the mysterious box do a person's bidding.

They existed only in corporations. Home users had the smart neighborhood child to make things work, but no one ever called him a "power user". You have power users only in a community of users, and you have users only in corporations. (Or perhaps schools.)

The days of power users are long gone. Corporations now have support organizations to answer the questions of regular users, but they are staffed by technicians. The web and search engines allow anyone to get answers quickly. Software (especially software for Apple devices) has gotten quite easy to use. You plug in the device and it works. Gone are the days of configuration files, custom settings, and poring through printed (and often poorly written) manuals.

It is time for us power users to gracefully retire, to quietly leave the scene and let folks enjoy their shiny hand-sized magical boxes. Corporations have no need for magicians.


Saturday, June 5, 2010

Creativity and open source

The contrast between open source and closed source can be seen in the creativity of the users. Open source communities have more focus on tool-building. This was clear at a recent Ruby meet-up, in which attendees listed their favorite Ruby gems and tools. (A Ruby "gem" is an add-on. The gem is a specific format for distribution and unpacking on your computer, much like an RPM file for Red Hat Linux or an MSI for Windows.)
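
For readers outside the Ruby world, the packaging is lightweight. A gem is described by a gemspec; the one below is a hypothetical, minimal example (the gem name, file, and author are invented).

    # hello_tool.gemspec -- a minimal, hypothetical gem specification.
    # Building it ("gem build hello_tool.gemspec") packages the listed files
    # into hello_tool-0.1.0.gem, which others can install with "gem install".
    Gem::Specification.new do |spec|
      spec.name    = 'hello_tool'
      spec.version = '0.1.0'
      spec.summary = 'A tiny example command-line helper'
      spec.authors = ['A. Developer']
      spec.files   = ['lib/hello_tool.rb']
    end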

At the meet-up, people came up with a list of thirty or so tools and plug-ins for Ruby, in about thirty minutes. The tools were diverse, from debuggers to code formatters.

In contrast, the Windows C#/.NET world would be hard-pressed to come up with thirty add-ons for Visual Studio, much less thirty add-ons that were "must-haves".

In part, this difference is due to the size and capabilities of Visual Studio. It has a lot of things built in. When you buy Visual Studio, you get all of the goodies and don't need much in the way of extras. Ruby, on the other hand, is just the interpreter and some libraries, so all of the goodies must be added in. The two models are different, and so lead to two different ecosystems.

I tend to think that the Ruby ecosystem is a bit stronger, since it involves more developers (who must develop tools to meet their needs). One could argue the opposite: that Visual Studio is the stronger community, since its users (that is, developers) can focus on the job at hand and be more productive. It's not an argument without merit.

So I will pick another example of creativity with open source. A local MSDN user group has a web site and wants to increase its capabilities. They want more collaboration among group members. So the idea floated by one of the senior folks in the group: add capabilities with WordPress (which happens to be an open source CMS).

Ironic, isn't it?

Thursday, June 3, 2010

Wired doesn't get it

Wired sent me an e-mail, advertising their new Wired-on-iPad application.

I don't particularly mind the e-mail, as I am a subscriber to Wired and have given them my e-mail address. I expect they will send e-mails. And they have sent e-mails. None have interested me.

But I had heard about their app-for-iPad, and how it was poorly designed. I was not impressed with their e-mail either (although I have no specific fault with it).

So I decided to unsubscribe from the e-mail list. Their e-mail had the necessary "unsubscribe" link, and I used it. The link brought up a confirmation page (used by many unsubscribe mechanisms) and after clicking on the "confirm" button, I got a new page that read:

"Thanks! Please allow 10 business days for your subscription for Wired to be removed."

This message is amusing and disappointing. It tells me that Wired has an inefficient process, probably a manual one. The key is the phrase 'business days' in the text. Computers don't have the notion of business days; only people do. Ironic, as Wired is all about the new tech and shiny toys.

I suspect that I will be dropping my subscription to Wired. If they can't use the tech that they talk about, then they are demoted to the level of marketing. And I don't need marketing, I need information.

Virtualization, stage two

Now that products such as VMware have shown that there is a demand for virtualization, we're ready for the next stage. Our current use of virtualization is clumsy and inefficient. The next stage will provide solutions that are more effective.

The virtualization technologies that we have (VMware, Xen, Microsoft's Hyper-V) have a common problem: their child machines run real operating systems. We thought that little virtual PCs, all existing in a pretend environment, would help us. (And they do, up to a point.)

A fake PC, run in software inside a real PC, is an inefficient solution. It's inefficient because the fake PC expects to be a real PC, with all of the quirks of a real PC. For example, a fake PC expects to deal with hardware like the PIC (programmable interrupt controller), video cards, and network cards.

Creating all of the pretend low-level hardware for a virtual PC is work. Lots of work. Most of VMware's effort goes into creating the fake hardware for the fake PC to talk to -- hardware that the guest operating system then ignores or abstracts away at its higher levels.

What we really want is not a complete fake PC but just the operating system API. We're long past the days of writing directly to the hardware as we did with the IBM PC. The operating system API has become opaque.

Since we cannot see beyond the API, we don't care about the innards of the operating system. So we don't care if we are talking to a "real" operating system that is driving the hardware or a "virtual" operating system that is a thin layer translating requests to the hypervisor for execution.

A good point to make this "cut" is the hardware abstraction layer. Let the hypervisor engine handle everything below the abstraction layer, and let the operating system worry about the things above it.

The second stage of virtualization will be this stub layer of operating systems. With a bottom layer of the hypervisor and a thin layer of OS API, we will be able to easily create virtual machines. ("Provisioning", in the jargon.) Creating an instance of a stub OS API will be faster than creating an instance of a full OS, since you can ignore all of the hardware details. Run-time performance of virtual machines will improve, since we can eliminate the emulation of low-level hardware.
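
Here is a conceptual sketch of the idea, in Ruby purely for brevity (the class and method names are invented; a real paravirtualized interface, such as Xen's, lives in C and is far more detailed). The "operating system" that the application sees is nothing but a thin translator sitting on the hypervisor.

    # Conceptual sketch only: a stub OS layer that offers a familiar API to
    # applications but forwards every request to the hypervisor instead of
    # driving emulated hardware. All names here are invented for illustration.
    class Hypervisor
      def block_read(volume, offset, length)
        "#{length} bytes from #{volume} at offset #{offset}"   # pretend disk I/O
      end

      def net_send(interface, payload)
        payload.bytesize                                        # pretend transmit
      end
    end

    class StubOS
      def initialize(hypervisor)
        @hv = hypervisor
      end

      # Looks like an ordinary read call to the application...
      def read_blocks(volume, offset, length)
        @hv.block_read(volume, offset, length)   # ...but is a thin pass-through
      end

      def send_packet(interface, payload)
        @hv.net_send(interface, payload)
      end
    end

    os = StubOS.new(Hypervisor.new)
    puts os.read_blocks('vol0', 4096, 512)

With no PIC, video card, or network card to emulate, creating a new instance is mostly a matter of constructing one of these thin layers -- which is why provisioning should get faster.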

I suspect that Linux will get there first, since anyone around the world can start on the project. Apple and Microsoft will probably remain unconvinced, until the Linux project shows efficiencies and market demand. But once demonstrated, Microsoft will have to provide an answer of their own, since they cannot afford to lose the market for large-scale computing.