Sunday, June 20, 2010

Forecast: mostly cloudy

Microsoft started its journey in 1975 with a simple mission: become the dominant software manufacturer. (Their mission statement was "A computer in every office and in every home... and all of them running Microsoft software.") It is safe to say that they have accomplished that mission.

Apple started about the same time. Their product was hardware, not software. I don't know if they had a mission statement, but "Computers that regular people can use" would not be a bad approximation. They too have accomplished their mission.

The computing world is bigger than office software and consumer gadgets. It reaches to embedded systems, high performance computing, games, and the World Wide Web. In these areas, neither Microsoft nor Apple has become dominant.

The frontier is on the web, with cloud computing on the back end and browsers and smart phones on the front end. Apple has a commanding lead on the front end with their iPhone, but RIM and Google are pushing hard with Blackberry and Droid devices. Microsoft is pushing with Windows Phone 7, but by sticking to software they have all but given the market away.

The back end is the interesting area. With a long legacy of open source (Linux and Apache) the old back end of web servers is growing into cloud processing. Vendors want to be there, but the competition is difficult. Microsoft is developing their Azure platform, and has even changed their C# language so that it can live in the cloud. (The scaling that makes the cloud work places requirements on the programs that run in the cloud, and those requirements demand specific features in language design. Not just any program can move to the cloud.)
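The constraint is easy to see in miniature: work that will be spread across many machines must be written as stateless units, so that any machine can pick up any piece. A toy Python sketch (the function name is hypothetical) of the shape that cloud-friendly code takes:

```python
from concurrent.futures import ThreadPoolExecutor

def render_thumbnail(image_id):
    # Stateless: everything the worker needs arrives as an argument,
    # and it touches no shared mutable state, so any node could run it.
    return f"thumb-{image_id}"

# The pool stands in for a fleet of cloud machines; because the
# function is stateless, the work can be farmed out freely.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(render_thumbnail, range(4)))

print(results)   # → ['thumb-0', 'thumb-1', 'thumb-2', 'thumb-3']
```

A program built around shared global state cannot be split up this way, which is the sense in which "not just any program can move to the cloud."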

I'm pretty sure that Oracle wants to move into the cloud, or at least ride it with their applications. Yet they are in a difficult position -- the NoSQL databases fit better in the cloud, and Oracle's namesake DB fits in the cloud as well as an anvil fits in a real-life cloud.

The most successful cloud folks have leveraged open source and advanced languages like Ruby and Python. And the folks who are working in the cloud are the bright young things out of school or from creativity-supporting IT shops. Folks from the stodgy, conservative organizations tend to worry more about politics than technology, and cloud tech has a disturbing influence on politics. You cannot keep the old organization chart and expect the cloud to fit into the existing structure.

So here is where we see Microsoft and Apple: Microsoft has fit into the organization, making PCs operate much like big mainframes. Apple has targeted the top of the food chain (the user) with little attention to the plumbing underneath.

My guess is that Apple will weather the incoming clouds a bit easier than Microsoft, and open source will have a big win with application development.


Tuesday, June 15, 2010

The Model T and software

We can learn a lot from the past. Many industries have dealt with issues that seem new to the development industry, yet we can use the knowledge from older industries.

Let's apply the manufacturing model to software. In this model, we can consider software a manufactured good, like a car or a pair of shoes.

In the manufacturing model, you have the following costs: raw materials, facilities and equipment, manufacturing (operating the equipment and transforming the raw materials into finished product), and distribution. One could add post-sales support (since software has a lot of support), disposal of waste products, training for workers, marketing, industrial engineering and operations research, and management overhead, but let's stick to the basics.

For PC software (the kind sold on floppies or a CD), there were no raw materials, the equipment consisted of the computers and tools for the development and testing teams, the distribution consisted of producing CDs in packages, and the post-sales support could be significant.

Web software and cloud applications have a similar model. Again, there are no raw materials and the equipment consists of computers (perhaps servers) and software. Distribution costs are lower, since you don't have to ship shiny round plastic discs to users.

In both types of application, the big costs are for development, testing, and support. Businesses recognize this, and are attempting to reduce these costs with outsourcing. It is a short-term, quick-fix solution and one that will provide little reduction in cost.

Many programs are complex. As Fred Brooks explains, software consists of essential and accidental complexity.

As an industry, we have two problems. First, we have no way to measure complexity. Second, we don't care about complexity.

The first claim is not completely true. We do have ways of measuring complexity, some better than others. All involve work and thinking, and can produce inconvenient results. But none (of which I am aware) can differentiate between the essential and the accidental aspects of complexity.
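One of the better-known measures is McCabe's cyclomatic complexity, which counts independent paths through the code. A minimal sketch using Python's ast module (a rough approximation, counting only a few branch-node types) shows how mechanical the measurement is -- and also why it says nothing about which complexity is essential:

```python
import ast

# Node types that introduce a branch point (a simplification;
# real tools count more constructs).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    """McCabe-style count: one path, plus one per branch point."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = ("def g(x):\n"
           "    if x > 0:\n"
           "        return x\n"
           "    for i in range(x):\n"
           "        x -= 1\n"
           "    return x\n")

print(cyclomatic_complexity(simple))    # → 1
print(cyclomatic_complexity(branchy))   # → 3
```

The number tells you a branchy function costs more to test and change; it cannot tell you whether those branches reflect the problem (essential) or the implementation (accidental).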

The second problem is the bigger one. We accept complexity. Given deadlines and the need to ship something -- anything, as long as it works -- we make no effort to reduce the complexity of our systems. As a result, our systems grow in complexity.

This complexity drives up cost. It is harder (read that as "more expensive") to change a complex program than a simple one.

It's worse than that. The typical application life cycle process limits changes to those which achieve specific business needs. These processes prevent members of the team from improving areas of the code. As a result, the typical application life cycle process ensures that complex code will remain unchanged until a specific business need arises, and then (since no one knows the code) the programmers will make the minimum set of changes to satisfy the requirements (in fear of breaking some other feature).

So the system grows in complexity.

Yet the complexity of the system drives the development cost, and (I'm pretty sure) the support cost, the two costs that worry managers.

The way out requires lots of work and a change in our thinking. First, we must measure the complexity of our systems. (Or perhaps I should put as the zeroth item: we must decide that we *want* to measure the complexity of our systems and use that measurement in the management of projects. Then we can decide on measurements.)

We have to expand our project management skills to account for complexity. We must project the complexity of solutions and include that measure in our decision to implement a specific solution. We may have to decline some changes based on the increase in complexity (and therefore the increase in subsequent development and support costs).

Not everyone will like the analysis of complexity. Marketers will be unhappy that their latest request was denied based on future costs. Developers may feel that they are rated on the quality (and complexity) of the code they write. Managers will see another task in their (already overflowing) inbox.

Yet measuring and managing complexity is the only way to tame the software monster. Without the proper discipline, our systems will become large and expensive to maintain. More expensive than the next company's system, perhaps.


Sunday, June 13, 2010

Debating foolish consistency

For the past thirty years, the model for databases has been the relational database with its SQL language for access. The tenets of relational databases have been noted with the acronym ACID, referring to the concepts of Atomicity, Consistency, Isolation, and Durability.

We are now debating those ideas. And high time, in my opinion.

The debate has been spurred by technology and the desire for performance. Applications like Twitter and Facebook have raised the bar for database size and performance. Facebook is approaching 500 million users, many of whom are on-line at the same time. The old model of centralized control and absolute consistency simply does not work for applications of this size.

Which is not to say that we must abandon the old ACID/SQL/centralized model for databases. They work for applications of certain sizes and certain transaction volume.

It used to be that ACID/SQL worked for the "high end" applications and the low end could be handled by plain text files.  This was a simple model, and vendors such as Microsoft and Oracle used it to their advantage.

The definition of "high end" has changed. It's been pushed far higher, and the new high end is beyond the capabilities of the old model. This has spurred people to develop the NoSQL model, which moves away from structured data and relaxes the requirement for consistency.
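"Relaxing consistency" usually means replicas accept writes independently and reconcile later -- eventual consistency. A toy Python sketch (all names hypothetical) of one common reconciliation rule, last-write-wins:

```python
class Replica:
    """A toy key-value replica: accepts writes locally, without
    coordinating with peers, and reconciles with them later."""

    def __init__(self):
        self.store = {}   # key -> (timestamp, value)

    def put(self, key, value, ts):
        self.store[key] = (ts, value)

    def get(self, key):
        entry = self.store.get(key)
        return entry[1] if entry else None

    def merge(self, other):
        # Last-write-wins: for each key, keep the newest timestamp.
        for key, (ts, value) in other.store.items():
            if key not in self.store or ts > self.store[key][0]:
                self.store[key] = (ts, value)

a, b = Replica(), Replica()
a.put("user:1", "alice", ts=1)
b.put("user:1", "alicia", ts=2)   # concurrent write on another node
a.merge(b)                         # reconciliation, some time later
print(a.get("user:1"))             # → alicia
```

Between the write and the merge, the two replicas disagree -- exactly the window that ACID forbids, and exactly the relaxation that lets a system scale to hundreds of millions of users.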

This issue is not without debate. Michael Stonebraker, in his article in the Communications of the ACM, debates the relaxation of consistency requirements.

I'm not sure of the outcome of this debate. But I like the fact that we are having the debate. We selected the relational database model thirty years ago. The selection was made on little more than the say-so of E. F. Codd and IBM. Let's review the possibilities and discuss the merits of different approaches.


Wednesday, June 9, 2010

Requiem for power users

This blog talks about settings, but says something important about power users: ignore them.

At first, I was angry. Then sad.

But I have seen the future, and power users have no place in it. With the demise of the home computer (replaced by smart phones and cool tablets) and the organizational support of the work computer, power users have no place.

Power users came into existence with the first corporate use of PCs. Power users were the folks who were the smartest kids on the block (or in the office) and knew how to make the computers work. In the days of PC-DOS, Lotus 1-2-3, screen drivers, and FORMAT commands, power users had a place in the corporate world. They helped the neophytes with these strange new devices.

Power users used magic to make the mysterious box do a person's bidding.

They existed only in corporations. Home users had the smart neighborhood child to make things work, but no one ever called him a "power user". You have power users only in a community of users, and you have users only in corporations. (Or perhaps schools.)

The days of power users are long gone. Corporations now have support organizations to answer the questions of regular users, but they are staffed by technicians. The web and search engines allow anyone to get answers quickly. Software (especially software for Apple devices) has gotten quite easy to use. You plug in the device and it works. Gone are the days of configuration files, custom settings, and poring through printed (and often poorly written) manuals.

It is time for us power users to gracefully retire, to quietly leave the scene and let folks enjoy their shiny hand-sized magical boxes. Corporations have no need for magicians.


Saturday, June 5, 2010

Creativity and open source

The contrast between open source and closed source can be seen in the creativity of the users. Open source communities have more focus on tool-building. This was clear at a recent Ruby meet-up, in which attendees listed their favorite Ruby gems and tools. (A Ruby "gem" is an add-on. The gem is a specific format for distribution and unpacking on your computer, much like an RPM file for Red Hat Linux or an MSI for Windows.)

At the meet-up, people came up with a list of thirty or so tools and plug-ins for Ruby, in about thirty minutes. The tools were diverse, from debuggers to code formatters.

In contrast, the Windows C#/.NET world would be hard-pressed to come up with thirty add-ons for Visual Studio, much less thirty add-ons that were "must-haves".

In part, this difference is due to the size and capabilities of Visual Studio. It has a lot of things built in. When you buy Visual Studio, you get all of the goodies and don't need much in the way of extras. Ruby, on the other hand, is just the interpreter and some libraries, so all of the goodies must be added in. The two models are different, and so lead to two different ecosystems.

I tend to think that the Ruby ecosystem is a bit stronger, since it involves more developers (who must develop tools to meet their needs). One could argue the opposite, saying that Visual Studio is the stronger community since users (of Visual Studio, that is, developers) can focus on the job at hand and be more productive. It's not an argument without merit.

So I will pick another example of creativity with open source. A local MSDN user group has a web site, and wants to increase its capabilities. They want more collaboration among group members. So one of the senior folks in the group floated an idea: add capabilities with WordPress (which happens to be an open source CMS).

Ironic, isn't it?

Thursday, June 3, 2010

Wired doesn't get it

Wired sent me an e-mail, advertising their new Wired-on-iPad application.

I don't particularly mind the e-mail, as I am a subscriber to Wired and have given them my e-mail address. I expect they will send e-mails. And they have sent e-mails. None have interested me.

But I had heard about their app-for-iPad, and how it was poorly designed. I was not impressed with their e-mail either (although I have no specific fault with it).

So I decided to unsubscribe from the e-mail list. Their e-mail had the necessary "unsubscribe" link, and I used it. The link brought up a confirmation page (used by many unsubscribe mechanisms) and after clicking on the "confirm" button, I got a new page that read:

"Thanks! Please allow 10 business days for your subscription for Wired to be removed."

This message is amusing and disappointing. It tells me that Wired has an inefficient process, probably a manual one. The key is the phrase 'business days' in the text. Computers don't have the notion of business days, only people do. Ironic, as Wired is all about the new tech and shiny toys.

I suspect that I will be dropping my subscription to Wired. If they can't use the tech that they talk about, then they are demoted to the level of marketing. And I don't need marketing, I need information.

Virtualization, stage two

Now that products such as VMware have shown that there is a demand for virtualization, we're ready for the next stage. Our current use of virtualization is clumsy and inefficient. The next stage will provide solutions that are more effective.

The virtualization technologies that we have (VMware, Xen, Microsoft's Hypervisor) have a common problem: their child machines run real operating systems. We thought that little virtual PCs, all existing in a pretend environment, would help us. (And they do, up to a point.)

A fake PC, run in software inside a real PC, is an inefficient solution. It's inefficient because the fake PC expects to be a real PC with all of the quirks of a real PC. For example, a fake PC expects to deal with hardware like the PIC, video cards, and network cards.

Creating all of the pretend low-level hardware for a virtual PC is work. Lots of work. Most of VMware's effort goes into creating the fake hardware that the fake PC can talk to, and then either ignore or abstract away for higher levels of the operating system.

What we really want is not a complete fake PC but just the operating system API. We're long past the days of writing directly to the hardware as we did with the IBM PC. The operating system API has become opaque.

Since we cannot see beyond the API, we don't care about the innards of the operating system. So we don't care if we are talking to a "real" operating system that is driving the hardware or a "virtual" operating system that is a thin layer that translates requests to the hypervisor for execution.

A good point to make this "cut" is the hardware abstraction layer. Let the hypervisor engine handle everything below the abstraction layer, and let the operating system worry about the things above it.

The second stage of virtualization will be this stub layer of operating systems. With a bottom layer of the hypervisor and a thin layer of OS API, we will be able to easily create virtual machines. ("Provisioning", in the jargon.) Creating an instance of a stub OS API will be faster than creating an instance of a full OS, since you can ignore all of the hardware details. Run-time performance of virtual machines will improve, since we can eliminate the emulation of low-level hardware.
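The "stub OS API" idea can be sketched in a few lines of Python (all class and method names are hypothetical, purely for illustration): the guest exposes only an API, holds no fake hardware, and forwards every call to the one real kernel, which keeps guests apart by namespacing:

```python
class HostKernel:
    """The one real kernel, owning the actual resources."""

    def __init__(self):
        self.files = {}   # (guest_id, path) -> data

    def write(self, guest_id, path, data):
        # Namespace by guest so guests cannot see each other's files.
        self.files[(guest_id, path)] = data

    def read(self, guest_id, path):
        return self.files.get((guest_id, path))


class GuestOS:
    """A thin API layer: no emulated hardware, just forwarded calls."""

    def __init__(self, guest_id, host):
        self.guest_id, self.host = guest_id, host

    def write(self, path, data):
        return self.host.write(self.guest_id, path, data)

    def read(self, path):
        return self.host.read(self.guest_id, path)


host = HostKernel()
vm1, vm2 = GuestOS("vm1", host), GuestOS("vm2", host)   # "provisioning"
vm1.write("/etc/motd", "hello from vm1")
print(vm1.read("/etc/motd"))   # → hello from vm1
print(vm2.read("/etc/motd"))   # → None (isolated namespace)
```

Note that creating a guest is just constructing an object -- no disks to image, no hardware to emulate -- which is why provisioning a stub OS should be so much faster than booting a full virtual PC.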

I suspect that Linux will get there first, since anyone around the world can start on the project. Apple and Microsoft will probably remain unconvinced, until the Linux project shows efficiencies and market demand. But once demonstrated, Microsoft will have to provide an answer of their own, since they cannot afford to lose the market for large-scale computing.