Friday, December 28, 2012

Yahoo's opportunity

In all of the news about Apple and Google and Amazon.com and Microsoft, we forget the other players. The one that I am pondering is Yahoo.

Can Yahoo succeed in the new land of mobile/cloud computing?

People like Yahoo's products. Its e-mail and Flickr offerings are capable and reliable. (I use the "pro" versions of both, so I pay a nominal annual fee for them.)

But Yahoo's success has been on the web. Can it move to mobile/cloud?

One challenge for Yahoo will be living in the world of combined (or at least coordinated) hardware and software. Apple, Microsoft, Google, and Amazon.com all sell solutions that encompass hardware and software, and this is reinforced with the DRM-enabled walled garden for each. Yahoo's products live in the software realm, and Yahoo has no hardware to augment its offerings. (This may be a good thing. The market is crowded with iOS and Android devices, and even Microsoft is having a difficult time getting its Surface tablet into the market.)

Yahoo does partner with Microsoft for search -- Yahoo search is driven by Microsoft's Bing engine. But I think that Yahoo can survive on its other offerings.

Yahoo's biggest advantage may be its reputation. It doesn't have rabid followers like Apple or Microsoft; instead of fanboys it has what might best be described as a loyal following. Yahoo has been a quiet, well-behaved member of the software world. It has not "un-sold" books to people, or arbitrarily rejected applications, or treated developers poorly.

One person asked of Marissa Mayer: "Please, make Yahoo cool again." I agree. We need a company that makes computing cool. And Yahoo may just be the company that can do it.

Thursday, December 20, 2012

The Cheapening of IT

The prices for computing equipment, over the years, have moved in one direction: down. I believe that the decrease in prices for hardware has an effect on our willingness to pay for software.

In the early 1960s, a memory expansion for the IBM 1401 provided 8K of what we today call RAM, at a price of $258,000. That was only the expansion pack of memory; the entire system cost several times that amount. With an investment of over a million dollars for hardware, an additional investment of several tens of thousands of dollars for software was quite the bargain.

In 1977, a Heathkit 8-bit microcomputer with an 8080 processor, 4K of RAM, and a cassette tape recorder/player (used for long-term storage prior to floppy disks), cost almost $1500. Software for such a computer ran from $20 (for a simple text editor) to $400 (for the Microsoft COBOL compiler).

Today, smartphone and tablet prices range from $200 to $1000. (Significantly less than the Heathkit 8-bit system, once you account for inflation.) Tablet apps can cost as much as $10; some are more, and some are free.
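For the curious, here is a rough back-of-the-envelope comparison (a sketch in Python, using an approximate CPI factor of 3.8 between 1977 and 2012; the exact figure varies by source):

    INFLATION_1977_TO_2012 = 3.8          # approximate CPI factor; exact values vary by source

    heathkit_1977 = 1500                  # 8-bit Heathkit system, 1977 dollars
    cobol_compiler_1977 = 400             # Microsoft COBOL compiler, 1977 dollars

    print("Heathkit system in 2012 dollars: about $%d" % (heathkit_1977 * INFLATION_1977_TO_2012))
    print("COBOL compiler in 2012 dollars: about $%d" % (cobol_compiler_1977 * INFLATION_1977_TO_2012))
    # Roughly $5700 of hardware and about $1500 of software then, versus a $500 tablet
    # and a $10 app today.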

What effect does this decrease in the hardware cost have on the cost of software?

Here's my theory: as the cost of hardware decreases, the amount that we are willing to pay for software also decreases. I can justify spending $400 for software when the hardware costs several times that amount. But I have a harder time spending $400 on software when the hardware costs less than that. My bias is for hardware, and I am assigning higher intrinsic value to the hardware than to the software. (The reasons behind this are varied, from the physical nature of hardware to the relationship with the vendor. I'm pretty sure that one could find a Master's thesis in this line of study.)

But if a cheapening of the hardware leads to a cheapening of the software, how does that change the industry? Assuming that the theory is true, we should see downward pressure on the cost of applications. And I think that we have seen this. The typical phone and tablet app holds a retail price that is significantly less than the price for a typical desktop PC application. "Angry Birds" costs only a fraction of the price of Microsoft Office.

I expect that this cost bias will extend to PC apps that move to tablets. Microsoft Word on the Surface will be priced at under $40 (perhaps as an annual subscription) and possibly less. The initial release of the Surface includes a copy of Word, although it is restricted to non-commercial use.

I also expect that the price of desktop PC apps will fall, keeping close to the prices of tablet apps. Why spend $400 for Word on the PC when one can get it for $40 on the tablet? The reduced price of apps on one platform drives down the price of apps on all platforms.

The cheapening effect may go beyond off-the-shelf PC applications. As the prices of desktop applications fall, we may see pressure to reduce the price of server-based systems, or server components of multiplatform systems. Again, this will be driven not by technology but by psychology: I cannot justify a multi-thousand dollar cost for a server component when the corresponding desktop applications have low costs. The reduced prices of desktop applications drive down the prices of equivalent server applications. Not all server applications, mind you; only the server applications that have desktop equivalents, and only then when those desktop equivalents are reduced in price to match tablet apps.

The general reduction of prices for desktop and server applications may create difficulties for the big consulting shops. These shops charge high prices for the development of custom applications for businesses. Psychology may cause headaches for their sales teams: why should I spend hundreds of thousands of dollars on a custom app (which includes clients for desktop PCs, tablets, and smartphones, of course) when I can see that powerful, competent apps are marketed for less than $10 per user? While there is value in a custom application, and while a large company may need many "downloads" for their many users, the argument for such high prices becomes difficult. Is a custom app really adding that much value?

Look for the large consulting houses to move into new technologies such as cloud and "big data" as ways of keeping their rates high. By selling these new technologies, the consulting houses can offer something that is not readily apparent in the off-the-shelf apps. (At least until their customers figure out that the off-the-shelf apps are also using cloud and "big data" tech.)

All of this leads to downward pressure on the prices of apps, whether they are simple games or complex systems. That pressure, in turn, will put downward pressure on development costs and upward pressure for productivity. Where a project was run with a project manager, three tech leads, ten developers, three testers, two analysts, and a technical writer, future projects may be run with a significantly smaller team. Perhaps the team will consist of one project manager, one tech lead, three developers, and one analyst. I'm afraid the "do more with less" exhortation will be with us for a while.

Tuesday, December 18, 2012

Windows RT drops some tech, and it hurts

Say what you will about Microsoft's innovation, licenses, or product quality, but one must admit that Microsoft has been quite good at supporting products and providing graceful upgrades. Just about every application that ran on Windows 3.1 will run on later versions of Windows, and most DOS programs will run in the "Command Prompt" application. Microsoft has provided continuity for applications.

The introduction of Windows RT breaks that pattern. With this new version of Windows, Microsoft has deliberately sorted technologies into "keep" and "discard" piles -- and it has done so without even a "deprecated" phase to give people time to adjust.

The technologies in the "discard" pile are not insignificant. The biggest technology may be Silverlight, Microsoft's answer to Adobe's Flash. It is allowed in Windows 8 but not in Windows RT.

Such a loss is not unprecedented in the Microsoft community, but it is infrequent. Previous losses have included things like Microsoft Bob and Visual J#, but these were minor products and never gained much popularity.

The most significant losses may have been FoxPro and the pre-.NET version of Visual Basic. These were popular products and the replacements (Microsoft Access and VB.NET) were significantly different.

The loss of technologies hurts. We become attached to our favorite tech, whether it be Silverlight, Visual Basic, or earlier technologies such as Microsoft's BASIC interpreter (the one with line numbers), the 6502 processor, or DEC's PDP-11 systems.

Microsoft fans (with the exception of the FoxPro and Visual Basic enthusiasts) have not experienced a loss. Until Windows RT. Microsoft's strong support for backwards-compatibility in its operating systems, languages, and applications has sheltered its users.

Those of us from certain graduating classes, those of us who were around before the introduction of the IBM PC, have experienced loss. Just about everyone from those classes lost their favorite tech as the "new kid" of the IBM PC became popular, set standards, and drove out the other designs. The Apple II, the TRS-80, the Commodore systems, (and my favorite, the Heathkit H-89) were all lost to us. We had formed our loyalties and had to cope with the market-driven choices of new technology.

Folks who joined the tech world after the IBM PC have experienced no such loss. One may have started with PC-DOS and followed a chain of improved versions of DOS to Windows 3.1 to Windows NT to Windows XP, and a chain of upgrades for Word, from Multiplan to Excel, and from Access to SQL Server.

Windows RT marks the beginning of a new era, one in which Microsoft drops the emphasis on backwards-compatibility. The new emphasis will be on profitability, on selling Surface units and (more importantly) apps and content for Windows RT tablets.

To the Windows developers and users: I'm sorry for your loss, but I have gone through such losses and I can tell you that you will survive. It may seem like a betrayal -- and it is. But these betrayals happen in the tech world; companies make decisions on profit, not your happiness.

Wednesday, December 12, 2012

Windows blues

Windows "Blue" is the new version of Windows, and it is disruptive.

I expect Windows "Blue" to integrate with Microsoft's cloud services. This is an easy prediction: the industry is moving to cloud services, and Microsoft will move with the industry, as it has in the past. Also, Microsoft brands its cloud services with the name "Azure". (Get it? "Blue" and "Azure"?)

The more disruptive aspect of Windows Blue is the release schedule. Microsoft will be releasing this version of Windows one year after Windows 8. In doing so, Microsoft increases the frequency of Windows releases and matches Apple and Linux distributions. (The Ubuntu Linux folks release two versions each year!)

It may be that this is a fluke, a one-time occurrence of back-to-back releases of Windows. But it's more fun to assume that this is the new pace for Windows. What does such a change mean?

For starters, the Microsoft teams must be prepared to select features, implement them, test them, and prepare a release (including marketing, advertising, and fulfillment) on a much shorter schedule than before. A faster release schedule forces Microsoft to limit the new features of a release, which in turn means smaller changes from release to release. (Smaller "jumps" in features may be good for the users of Windows, giving them smaller "shocks" from one version to another.)

A faster release cycle lets Microsoft track changes in hardware. Microsoft can add new features (voice recognition, handwriting recognition, support for tablets, support for wireless tech) without resorting to the service packs and developer kits of the past. Keeping up with hardware lets Microsoft compete with Apple and Linux.

More frequent releases also let Microsoft track changes in software. They can include later versions of browsers in their operating systems, and assume that people who install other browsers will install the latest version. The days of an "IE 6 browser" (outdated but popular from sheer inertia) may be over.

Consumers will probably change their view of PCs and computing devices (tablets, phones) and consider them more like small appliances and less like major purchases. They may be willing to upgrade their equipment every two years, as they do with cell phones.

Corporations and government agencies may dislike the increased frequency of releases. Larger organizations tend to avoid upgrades, trying to get the maximum life out of devices. I suspect that corporations prefer computing equipment to be like water or electricity: present all of the time and never changing. What these long-term users want is computing services, not computing equipment. Cloud computing should appeal to them. Fast releases of Windows (and browsers, and development IDEs) will not. (But where can they go? Microsoft, Apple, and Linux are all playing this game.)

Windows "Blue" will be a step forward in Microsoft's strategy. It may not be the step that everyone wants, and it may cause a few people some discomfort. We can complain, we can cling to old versions, or we can acknowledge the change and prepare for it.

Saturday, December 1, 2012

Unexpected success

George Lucas is known for the "Star Wars" movies, some of the most successful movies of all time.

Yet I suspect it started differently.

In 1977, Lucas had the movies "THX-1138" and "American Graffiti" behind him. "THX-1138" is an obscure movie, now most famous for being one of Lucas' creations. It is a decent movie, and respected by science fiction fans, but not known outside of fandom. "American Graffiti" was a successful movie: popular in its day but now more of a fond memory. (When was the last time you watched it?) There is nothing in either movie that says "genius movie maker".

I suspect that George Lucas made "Star Wars" and was hoping for a reasonable amount of success, and that he was not expecting the movie to become the foundation of a franchise and marketing empire.

I believe that Apple, with the first iPhone, was, like George Lucas, hoping for a reasonable amount of success. I also believe that the tremendous response far surpassed Apple's expectations. (I suspect it also surpassed AT&T's expectations, which conveniently explains the difficulties encountered by so many new iPhone customers when they activated their accounts.)

Their successes were due, in part, to the gambles that each made. Lucas used computers to control the models of X-wing fighters, selected classical music, and released in the summer. Apple created an elegant design quite different from contemporary cell phones, leveraged its "easy to install" ideas for apps from Mac OSX, and built an interface that was different from the traditional Windows (and even Mac) OS.

Lucas' work stamped itself onto our culture, with "The Force" and even the quote "I've got a bad feeling about this".

Apple's work changed the course of the industry, such that Microsoft Windows and the "windows, icons, mouse, and pointer" theme is no longer the design leader. Microsoft's introduction of Windows RT and the "Modern" UI shows the effect of the iPhone success.

All of which perhaps is evidence that success is something that cannot be planned, timed, or scheduled, and that success can come from taking risks and ignoring established ideas.

Thursday, November 29, 2012

How buildings learn... and automakers do not

Stewart Brand, in his book How Buildings Learn, analyzed buildings ... and identified ideas that are applicable to all sorts of systems. He noticed that buildings consisted of multiple layers, some of which change easily and some of which are difficult to change. (For example: the furniture inside a building is easy to change, the plumbing fixtures less so, and the locations of plumbing fixtures are harder still to change.)

The idea of "layers of change" works for systems other than buildings. Just about any non-trivial system has layers that change at different rates.

Automakers have, I think, failed to learn this lesson.

Recent models of automobiles now include wonderful electronic goodies such as navigation systems and interfaces to cell phones. But I fear that the automakers have designed these goodies in a low-change layer -- they expect their electronic additions to last the lifetime of the car.

The problem is that electronic gadgets are "cycled" on a more frequent basis than automobiles. The average life of a cell phone is something under two years; the average life of a car is quite a bit longer. (My automobile is twelve years old, and still in good shape. But my driving is less than the norm.)

The disparity of these two cycles means that a few years after the purchase of an automobile (yet well before the end of its useful life), the "goodies" will seem outdated. (Think about the usefulness of an iPhone 3G in today's world.)

I suspect few car owners will be happy with out-of-date electronics, especially if the (then new) phones cannot talk to their cars. I also suspect that automakers will upgrade systems only in exchange for large sums of money, and possibly not at all.

The lesson here applies to software. Most software systems exist in layers, some easy to change and others less so. Simply tacking on another layer is not enough. We must consider the technologies involved and our customers' desire for new generations of technology. Failing to do so will give us the software equivalent of a modern-day automobile with outdated electronic gadgets -- whatever that equivalent turns out to be -- along with the unhappy customers that go with it.

Saturday, November 17, 2012

Off the top of the screen

In a recent conversation with a (somewhat younger) colleague, I uttered the phrase "scrolled off the top of the screen". I wanted to convey the notion of information that was not available at the present, but was available in the past.

As I used the metaphor, I realized that my (somewhat younger) colleague had most likely never used an honest-to-goodness text-only terminal. With today's large-memory computers and graphic environments, the ability to scroll up and see previous content (for some types of windows) is taken for granted. But in the old days, when computers were large boxes locked away in secure rooms, users interacted with terminals, and terminals had limited memory and limited display capabilities. So limited that they displayed text in only one font and had no scrolling. (What you saw was all that there was.)

On these terminals, new text (usually) appeared at the bottom. When the screen was full of text and new text arrived, the top line was removed, all remaining lines were moved up one line, and the new text appeared in the now-blank line at the bottom. This action was called "scrolling", so named because it mimicked the action of terminals that printed on long rolls of paper.
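A minimal sketch of that behavior, in Python (the screen height is the typical 24 lines):

    SCREEN_HEIGHT = 24                      # a typical terminal showed 24 lines

    def write_line(screen, text):
        """Append a line; when the screen is full, the top line scrolls off for good."""
        screen.append(text)
        if len(screen) > SCREEN_HEIGHT:
            del screen[0]                   # no scrollback buffer -- the line is gone

    screen = []
    for i in range(30):
        write_line(screen, "line %d" % i)

    print(screen[0])                        # "line 6" -- lines 0 through 5 have scrolled away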

It strikes me that the verb "scroll" was kept when windowing environments displayed text that could be moved up and down.

But I digress. Let us return to the notion of information that is not available at present, but was available in the past.

Consider the programming skills needed to build data structures. I spent a good deal of time in college developing, testing, and debugging programs that built data structures and sorting algorithms. Today, such structures are baked into languages. Perl, Python, Ruby, C#, Java, and even C++ with its STL extensions all have data structures and sorting, available to the developer essentially "for free".
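A quick illustration in Python (any of the languages above would serve): the structures and the sorting arrive with the language, with nothing to hand-build or debug.

    prices = {"editor": 20, "compiler": 400, "game": 1}     # hash table, built in
    names = ["Wirth", "Kernighan", "Plauger", "Hoare"]      # dynamic array, built in

    names.sort()                            # library sort -- nothing to write or debug
    cheapest = min(prices, key=prices.get)  # search, also for free

    print(names)                            # ['Hoare', 'Kernighan', 'Plauger', 'Wirth']
    print(cheapest)                         # game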

Asking an aspiring programmer to build such structures seems a waste of time, similar to asking an author to build their own typewriter or a taxi driver to refine their own gasoline. We have better things to do than to re-invent elementary constructs that are available (and debugged).

These skills of data structures have scrolled off the top of our collective screen. At least for the general developers.

General developers do not need to learn these skills. (Nor do they need to learn assembly language, or the ASCII code, or proper writing for programming sheets.)

But someone has to remember them. Languages like Python have these constructs, but they are available because someone put them in the general library. The Python language rests on the Python library, which in turn rests on libraries common to multiple languages, which in turn rest on functions provided by the operating system which in turn rest on functions private to the operating system. But there is an end, a definite stopping point. It is not "elephants all the way down".

Some language engineer must be available to add basic constructs to the language, and that means that some engineer must know the concepts. Skills may scroll off the top of the screen for the typical developer, but as an industry, we must retain them.

We're going to need an attic.

Tuesday, November 13, 2012

Which slice of the market can you afford to ignore?

Things are not the same in computer-land. The nice, almost-uniform world of Windows on every desktop was a simple place in which to live. With one set of development tools, one could build an application that everyone could run.

Everyone except for those Apple folks, but they had less than five percent of the market, and one could afford to ignore them. In fact, the economics dictated that one did ignore them -- the cost of developing a Mac-specific version of the application was larger than the expected revenue.

Those were simple days.

Today is different. Instead of a single operating system we have several. And instead of a single dominant version of an operating system, we have multiple.

Windows still rules the desktop -- but in multiple versions. Windows 8 may be the "brand new thing", yet most people have either Windows 7 or Windows XP. And a few have Windows Vista!

Outside of Windows, Apple's OSX has a strong showing. (Linux has a minor presence, and can probably be safely ignored.)

The browser world is fragmented among Microsoft's Internet Explorer, Google's Chrome, Apple's Safari, and Mozilla's Firefox.

Apple has become powerful in mobile phones and dominant in tablets. The iOS operating system has a major market share, and one cannot easily ignore it. But there are different versions of iOS. Which ones should be supported and which ones can be ignored?

Of course, Google's Android has no small market share either. And Android exists in multiple versions. (Although most Android users want free apps, so perhaps it is possible to ignore them.)

Don't forget the Kindle and Nook e-reader/tablets!

None of these markets are completely dominant. Yet none are small. You cannot build one app that runs on all of them. Yet building multiple apps is expensive. (Lots of tools to buy, lots of techniques to learn, and lots of tests to run!)

What to do?

My suggestions:

  • Include as many markets as possible
  • Keep the client part of your app small
  • Design your application with processing on the server, not the client

Targeting multiple markets gives you more exposure. It also forces you to keep your application somewhat platform-agnostic, which means that you are not tied to a single platform. (Being tied to a platform is okay until the platform sinks, in which case you sink with it.)

Keeping the client part of your app small forces your application toward a minimal user interface and a detached back end.

Pushing the processing to the server insulates you from changes to the client (or the GUI, in 1990-speak). It also reduces the development and testing effort for your apps, by centralizing the processing.
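As a rough sketch of the idea (the URL and the "quote" operation are hypothetical placeholders; Python is used only for brevity), the client calls the server and displays the result:

    import json
    from urllib import request

    def get_quote(customer_id):
        # All of the pricing logic lives on the server; the URL is a made-up placeholder.
        url = "https://api.example.com/quotes/%s" % customer_id
        with request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))

    quote = get_quote("12345")
    print("Total: %s" % quote["total"])     # the client only formats and displays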

This technique has no surprises, perhaps. But then, it also requires no magic.

After all, which market segment can you afford to ignore?

Tuesday, October 30, 2012

BYOD can be easy with tablets

The "bring your own device" movement has caused quite a bit of heartburn among the corporate IT and security folks. More than is necessary, I think.

For those unfamiliar with the term "bring your own device" (BYOD), it means this: employees select their own devices, bring them to the office, and use them for work. Such a notion causes panic for IT. It upsets the well-balanced apple cart of company-supplied PCs and laptops. Corporations have invested in large efforts to minimize the costs (purchase costs and support costs) of PCs and laptops. If employees were allowed to bring their own hardware, the following would happen (in the thinking of the corporate cost-minimizers):

  • Lots of employees would have problems connecting to the company network, and would call the help desk, driving up support costs
  • Employee-selected hardware would vary from the corporate standard, increase the number of hardware and software combinations, and drive up support costs

And in the minds of IT security:

  • Employee-selected hardware would be vulnerable to viruses and other malware, allowing such things behind the corporate firewall

But these ideas are caused by misconceptions. The first is that employees want to bring their own PCs (or laptops). But employees don't. (Or at least not the folks with whom I have spoken.) Employees want to bring cell phones and tablets, not laptops and certainly not desktop PCs.

The second misconception is that smartphones and tablets are the same as PCs, except smaller. This is also false. Yes, smartphones and tablets have processors and memory and operating systems, just like PCs (and mainframes, if you want to get technical). But we use tablets and smartphones differently than we use PCs and laptops.

We use laptops and PCs as members of a network with shared resources. These laptops and PCs are granted access to various network resources (printers, NAS units, databases, etc.) based on the membership of the PC (or laptop) in a domain and the membership of the logged-in user in domain-controlled groups. The membership of the PC within a domain gives it certain privileges, and these privileges can create vectors for malware.

Smartphones and tablets are different. We don't make them members of a domain. They are much closer to a browser on a home PC, used for shopping or online banking. My bank allows me to sign on, view balances, pay bills, and request information, all without being part of their domain or their security network.

How is this possible? I'm sure that banks (and other companies) have security policies that specify that only corporate-owned equipment can be connected to the corporate-owned network. I'm also sure that they have lots of customers, some of whom have infected PCs. Yet I can connect to their computers with my non-approved, non-certified, non-domained laptop and perform work.

The arrangement works because my PC is never directly connected to their network, and my work is limited to the capabilities of the banking web pages. Once I sign in, I have a limited set of possibilities, not the entire member-of-a-network smorgasbord.

We should think of smartphones and tablets as devices that can run apps, not as small PCs that are members of a domain. Let the devices run apps that connect to back-end servers; let those servers offer a limited set of functions. In other words, convert all business applications to smartphone apps.
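Here is a minimal sketch of that "limited set of functions" idea, using only Python's standard library (the endpoints and data are made up): the server offers a short, fixed menu of operations and refuses everything else, much like the banking site.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # The short, fixed menu of operations offered to devices; the data is made up.
    ALLOWED = {
        "/balance":    lambda: {"balance": 1234.56},
        "/statements": lambda: {"statements": ["2012-10", "2012-11"]},
    }

    class LimitedHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            operation = ALLOWED.get(self.path)
            if operation is None:
                self.send_error(404, "Not one of the offered functions")   # nothing off the menu
                return
            body = json.dumps(operation()).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LimitedHandler).serve_forever()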

I recognize that changing the current (large) suite of business applications to smartphone apps is a daunting task. Lots of applications have been architected for large, multi-window screens. Many business processes assume that users can store files on their own PCs. Moving these applications and processes to smartphone apps (or tablet apps) requires thought, planning, and expertise. It is a large job, larger than installing "mobile device management" packages and adding new layers of security bureaucracy for mobile devices.

A large job, yet a necessary one. Going the route of "device management" locks us into the existing architecture of domain-controlled devices. In the current scheme, all new devices and innovations must be added to the model of centralized security.

Better to keep security through user authentication and isolate corporate hardware from the user hardware. Moving applications and business processes to tablet apps, separating the business task from the underlying hardware, gives us flexibility and freedom to move to other devices in the future.

And that is how we can get to "bring your own device".

Friday, October 26, 2012

The Microsoft Surface is an Office tablet

People compare Microsoft's Surface tablet to Apple's iPad. Some favor the Surface, others favor the iPad. But this is a false comparison.

Both the Surface and iPad are tablets. But they serve two different markets. The iPad is designed for consumers; the Surface is designed for businessmen.

The iPad is easy to use. It has lots of games and consumer-oriented apps. One can play music, read books, and play games. One can get apps for banking, for reminders, and for adjusting photographs. But it doesn't have apps for the workhorses of today's office: the Microsoft Office programs.

The Surface is designed for the office. It comes with 'home' versions of the Microsoft Office products, and 'real' versions are promised. (I suspect that the difference is mostly one of licenses.)

The Surface is less a general tablet and more a "tablet for running Office". It also happens to run other programs, but don't let that fool you. Its primary purpose is to be the tablet platform for MS Office. The innards of Windows RT use a number of old Microsoft technologies -- just the things needed to run the Microsoft Office suite.

As a specialized tablet, I expect that MS Office and the MS Surface will grow in parallel. New versions of MS Office will take advantage of new features in Windows RT and Surface tablets. Enhancements to tablets will be designed to support new versions of MS Office.

Heck, using this logic, Windows RT is simply a platform on which to run MS Office. It is an "Office operating system".

Saturday, October 20, 2012

Tablet fever? Take two tablets and call me in the morning

Microsoft announced their first tablet offerings, and the response has been positive. Enough people have submitted orders for the low-end model to sell out.

This shows two things:

First, it puts Microsoft in the top tier of the tablet market, and shows that people take Microsoft seriously. A number of manufacturers have produced tablets (Motorola, Acer, Dell, and even HP), but the sales of non-Apple tablets have been tepid. By providing a tablet that sells out (even before it is for sale), Microsoft joins Apple and Google in the ranks of "suppliers of desired devices".

Second, it shows that the interest in tablets is not an Apple phenomenon. Apple fanboys may lead the way for iPad sell-outs, but they had nothing to do with the Microsoft Surface sales. That interest is coming from a different part of the market.

Perhaps I am putting too much weight on this one event. It may be that Microsoft produced a small number of the low-end Surface tablets (the higher-end Surface tablets are still available). It may be that people are buying the Surface tablets as an experiment, or for corporate pilot projects, and the initial demand is higher than the "true market".

People (rightfully) point out the failures of Microsoft's phones, Zune, and Kin offerings. But those products never gained traction in the market, and none sold out -- much less prior to shipping.

There is interest in the Microsoft Surface tablets, and, I believe, in all tablets. Apple and Microsoft get a lot of attention from brand recognition. Google does too, when it ships devices.

I think we are about to undergo a case of "tablet fever", with tablets becoming the most desired device. It should make for an interesting ride.

Thursday, October 18, 2012

No more PCs for me

This week I decided to never buy a PC again. I currently have four, plus a laptop. They will be enough for me to transition to the new world of tablets and virtual servers.

This is almost a bittersweet moment. I was there prior to the PC, when the Apple II and the Radio Shack TRS-80 were dominant (and my favorite, the Heathkit H-89, was ignored). I was there at the beginning of the PC age, when IBM introduced the PC (the model 5150) and everyone absolutely had to have one. PCs were the new thing, a rebel force against the established (IBM) mainframes and batch processing and centralized control. I resented IBM's power in the market.

I saw the rise of the PC clones, first the Compaq and later, everyone. I saw IBM's attempt to re-define the standard with the PS/2 and the market's reaction (buy into Compaq and other PC clones).

I saw the rise of Windows, and the change from command-line programs to GUI programs.

Now, I have seen the (Microsoft Windows) PC become the established computer, complete with centralized control. The new rebel force uses tablets and virtual servers.

I am uncomfortable with Apple's power over app suppliers for iOS devices, and Microsoft's power over app suppliers for Windows RT devices. I am leery of Google's power, although the Android ecosystem is a bit more open than iOS and Windows RT.

Yet I am swept along with the changes. I use an Android tablet in the morning, to check Facebook, Twitter, and news sites. (A Viewsonic gTablet, which received poor reviews for its non-standard interface, yet I am coming to like it.) I use an Android smartphone during the day. I use a different Android tablet in the evening. (Although I am typing this on my Lenovo laptop with a Matias USB keyboard.) While I have not moved everything to the tablets, I have moved a lot and I expect to switch completely within a few months.

My existing PCs have been converted to the server version of Ubuntu Linux, with the one exception of a PC running Windows 7. I suspect that I will never convert that PC to Windows 8, but instead let it die with dignity.

I was there at the beginning, and I am here at the end. (Of the PC age.) Oh, I recognize that desktop and laptop PCs will be with us for a while, just as mainframes stayed with us. But the Cool New Stuff will be on tablets, not on PCs.

Monday, October 15, 2012

Unstoppable cannon balls and immovable posts

There is an old riddle: what happens when an unstoppable cannon ball strikes an immovable post?

The unstoppable cannon ball is a hypothetical construct. It is a cannon ball that, once in motion, stays in motion. (Its ability to defy friction comes from its hypothetical nature.)

The immovable post is also a hypothetical construct. It is a post that, once placed, cannot be moved.

While the riddle is entertaining, a real-world version of it is emerging with software development methodologies. Instead of an unstoppable cannon ball and an immovable post we have the waterfall methodology and the agile methodology. Perhaps the agile method is the equivalent of the cannon ball, as both emphasize motion. It doesn't really matter, though.

What matters is that the two methodologies are incompatible. One can develop software using one methodology, or the other, but one cannot use both. For small shops that have only one project (say a start-up), this is not an issue. Larger shops (any shop with multiple projects) must choose a methodology; you cannot use waterfall for some projects and agile methods for others. (Well, you can, provided that the two sets of projects have no interaction. But that is unlikely.)

The waterfall and the agile methods make two different promises. Waterfall promises a specific deliverable, with a specific set of features, and a specific quality, on a specific date. Agile promises that the product is always shippable (that is, very high quality), and promises to implement the most important features first, but makes no promises about a complete set of features on a specific date.

It is easy to dismiss these differences if we think of these approaches as methods of developing software. It is harder to dismiss them when we realize the truth: waterfall and agile are not methods for building software but philosophies for managing projects.

Managing projects is serious business, and is the core of most IT shops. (Projects can be managed by non-IT shops, too, but I will stick with the IT world.)

One cannot run a business (a successful business) with multiple coordinated projects that use project practices which differ to the extent that waterfall and agile differ. Waterfall, with its "big design up front" and locked-in delivery dates, cannot mesh with agile, with its "choose as you go" method for selecting features. The rise of agile methods -- agile project management -- means that companies must choose one or the other. Which one they choose is important, too, and depends on how the managers of the company want to run their company.

This schism has far-reaching effects. It can be felt in the acquisition of other companies, when one merges the management of project teams. It affects the hiring of new employees: are they familiar with the practices of the hiring company? It can even affect joint projects between companies.

I don't claim that one is better than the other. My preference is for agile project management, based on my experience and projects. It may be the right methodology for you. Or waterfall may be the better methodology for you. Managers must evaluate both and consider the benefits and costs.

My advice is to learn about both and make an informed decision.

Sunday, October 7, 2012

How I fix old code: I use the wisdom of those before me

I am often called in to a project to improve the code -- to make it more efficient, or more maintainable.  It seems that some programmers write code that is difficult to understand. (And luckily for me, a large number of them.)

I use the maxims provided by Kernighan and Plauger in their book "The Elements of Programming Style". Written in the 1970s, it contains wisdom that can be used by programmers today.

One of my favorites (possibly because so many programmers do not follow it) is "Each function should do one thing well."

Actually, Kernighan and Plauger wrote "Each module should do one thing well"; I am taking a (reasonable, in my mind) liberty to focus on functions. With today's object-oriented programming languages, the meaning of "module" is somewhat vague. Does it refer to the class (data, member functions, and all?) or does it refer to a separate compiled file (which may contain a single class, multiple classes, a portion of a class, or all three)? But such questions are distractions.

When I write code, I strive to make each function simple and easy to understand. I try to build functions of limited size. When I find that I have written a long function, I re-factor it into a set of smaller functions.
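A small (made-up) example of the practice, in Python: a "do everything" report function re-factored into functions that each do one thing well.

    # Before: one function that parses, filters, totals, and formats.
    def report(lines):
        total = 0
        for line in lines:
            fields = line.split(",")
            if fields[1] == "ACTIVE":
                total += float(fields[2])
        return "Active total: %.2f" % total

    # After: smaller functions, each doing one thing well.
    def parse(lines):
        return [line.split(",") for line in lines]

    def active_only(records):
        return [r for r in records if r[1] == "ACTIVE"]

    def total_amount(records):
        return sum(float(r[2]) for r in records)

    def report(lines):
        return "Active total: %.2f" % total_amount(active_only(parse(lines)))

    print(report(["101,ACTIVE,19.95", "102,CLOSED,5.00"]))   # Active total: 19.95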

But these techniques are not used by all programmers. I have encountered no small number of programs which contain unreadable code. Some programs have large functions. Some programs have poor names of variables and functions. Some programs have complicated interactions between functions and data, otherwise known as "shared mutable state". It is these programs that I can improve.

Many times, I find that the earlier programmers have done a lot of the work for me, by organizing the code into reasonable chunks that just happen to be grouped into a single function. This makes it easy for me to create smaller functions, each doing one thing well.

I do more than reduce functions to maintainable sizes. I also move functions to their proper class. I find many functions have been placed in improper classes. Moving them to the right class makes the code simpler.

How does one know the proper class? I use this rule: When the function modifies data, the class that holds the data is the class that should hold the function. (In other words, only class functions can modify class data. Functions in one class cannot modify data in other classes.)

Functions that do not modify data but only read data belong to the class that holds the data.

If a function reads data from two classes, it is most likely doing too much and should be re-factored. If a function is changing data in two classes, it is definitely doing too much, and should definitely be re-factored. (These rules are for direct access of data. I have no problem with functions that invoke methods on multiple objects to retrieve or modify their data.)
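A sketch of that rule, with hypothetical classes: the function that changes the balance moves into the class that owns the balance.

    # Before: a free function reaches into another class's data.
    class Account:
        def __init__(self, balance):
            self.balance = balance

    def apply_interest(account, rate):          # modifies Account's data from the outside
        account.balance *= (1 + rate)

    # After: the class that holds the data holds the function.
    class Account:
        def __init__(self, balance):
            self.balance = balance

        def apply_interest(self, rate):         # only Account touches Account's balance
            self.balance *= (1 + rate)

    acct = Account(100.0)
    acct.apply_interest(0.05)
    print(acct.balance)                         # 105.0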

These two simple rules ("each function should do one thing well" and "each function in its proper class"), give me guidance for most of my improvements. They have served me well.

I succeed because I simplify code. I simplify code by using not my own rules, but the maxims laid out by those who came before me.

Tuesday, October 2, 2012

The return of thin client for desktops, brought to you by zero-configuration

Some time ago, the notion of a "thin client" for desktop computers (mostly Windows PCs, as this was before the rise of Apple) got some attention but little traction. The "thin client" applications of the time evolved into "web applications" and people kept their desktop applications as "thick client" apps.

We may see the return of "thin client" apps for desktop computers, combined with a return of "zero-configuration" applications. Apple's iOS manages apps with close to zero configuration, as does Android and Windows RT. Apps are easy to install -- so easy that non-technical people can purchase and install apps. The experience is a far cry from the experience of the typical Windows install program, which asks for locations, menu groups, and possibly registration codes.

The success of smartphones and tablets is also due to the managed environment. Apps on smartphones have less control than their desktop PC counterparts. An app on a smartphone responds to a select set of events, and the operating system can shut down the app at any time (say, to allocate memory to a different app). On the PC (Windows, Mac, or Linux), the application, once running, has control and "takes over" the computer until it decides that it has finished. (Yes, all modern operating systems can shut down applications, but in general the operating system attempts to keep the program running.)

Will we see a managed environment for desktop PCs? Such an environment would be a program running on the computer that performs the same tasks as iOS, Android, and Windows RT. It would allow apps to be "installed" in its environment (not on the native OS) and these apps would respond to events just as smartphone apps respond to events.
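As a rough sketch of what such a managed environment might look like (everything here is hypothetical), apps are bundles of event handlers, and the host decides when they run and when they stop:

    class App:
        # Apps are bundles of handlers; they never own the machine.
        def on_start(self):        print("app started")
        def on_event(self, event): print("handling", event)
        def on_stop(self):         print("app stopped -- state was already saved")

    class Host:
        def __init__(self):
            self.apps = []

        def install(self, app):                 # "zero configuration" install
            self.apps.append(app)
            app.on_start()

        def dispatch(self, event):              # the host decides when apps run
            for app in self.apps:
                app.on_event(event)

        def reclaim_memory(self):               # ...and when they go away
            for app in self.apps:
                app.on_stop()
            self.apps = []

    host = Host()
    host.install(App())
    host.dispatch("tap")
    host.reclaim_memory()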

Who would create such a product (and ecosystem)?

Well, not Apple. Apple has nothing to gain by extending iOS apps to the Mac OSX (or Windows) desktop. Nor do they gain with a new (competing) environment.

And I suspect not Google. Google also has nothing to gain by extending Android to the desktop.

Microsoft? In a sense, they already have built an app ecosystem for Windows: their recent Windows 8 product. It runs apps in "Windows New UI" mode and classic applications in "Windows 7" mode. But I'm not thinking about Windows 8, I'm thinking of a different environment, possibly an Android simulator or something new.

Canonical, the providers of Ubuntu Linux, are working on their "Unity" interface and they may change Linux from an application-based system to an app-based system, but that doesn't get anything onto Mac OSX or Windows desktops.

To be sure, such an effort has challenges and uncertain payoffs. The product is a combination of virtual machine, environment manager, and distribution network. Beyond the software, one has to create the ecosystem of developers and users; no small task now that Apple, Android, Microsoft, RIM/Blackberry, and Amazon.com/Kindle have started such efforts.

Yet it could be possible, for the right players. I see them as HP and VMware. The former has WebOS, a platform that had decent acceptance at the technical level. The latter has the technology for virtual machines and a good reputation. I could see a joint project to create an app environment, with each partner contributing. Add Salesforce.com with its development environment, and one could have a powerful combination.

Sunday, September 23, 2012

Project management is more than Microsoft Project

Microsoft Project is a nice tool, but I find that it is insufficient for managing projects. That is, it has incomplete information about the project and everything that I must manage.

When I manage projects, I think about the tasks for the project, delivery dates, and total effort. (All these items are in Microsoft Project.) I also worry about the skills of people on the team, including their technical skills, their familiarity with the code, and their people skills. (These items are not in Microsoft Project.)

Microsoft Project has elaborate information on the computed aspects of the project (total hours, etc.) but it does not know about the people. It assumes that people are interchangeable units.

Yet people are not interchangeable cogs. Individuals have specific knowledge about technology; some programmers know C++ and others PHP (and a few know both). Team members know the requirements for parts of the system but it is rare that a person knows the requirements for all areas. Some people get along with others, some are better at negotiating than others, and some have excellent skills at finding defects. I have to balance these skills and ensure that the different subteams have the right mix for their objectives (and I suspect that you have to balance those skills and needs too).

Yet Microsoft Project (and other project management software) provides no support for those kinds of analysis.

I find that simple techniques can support my decisions. For example, I often create a grid (by hand, on a blank piece of paper, with nothing more than a pen) with system subcomponents along one axis and people on the other. I then place an 'X' in each box for the match of a person with a subcomponent. When a person knows the subcomponent, I add an 'X'; when the person does not I leave the cell blank. (One can perform this exercise in a spreadsheet, if one desires a high-tech solution.)

This simple tool provides a summary of coverage for the system. I can quickly see which subcomponents have knowledge spread across the team and which can be supported by only a few team members (or perhaps even only one). I use that knowledge when re-assigning or transferring team members.
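The high-tech version is nearly as simple; here is a sketch in Python (the names and subcomponents are made up):

    people = ["Alice", "Bob", "Carol"]                     # made-up names
    knows = {
        "billing":   {"Alice"},
        "reporting": {"Alice", "Bob"},
        "importer":  {"Carol"},
    }

    for component, experts in knows.items():
        row = ["X" if person in experts else "." for person in people]
        print("%-10s %s" % (component, "  ".join(row)))

    # Flag the risky subcomponents: those known by only one person.
    for component, experts in knows.items():
        if len(experts) == 1:
            print("Risk: only %s knows %s" % (next(iter(experts)), component))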

Microsoft Project is clearly "necessary but not sufficient" for effective project management, yet I meet many managers who use Microsoft Project and only Microsoft Project. They perform no additional analyses. Consequently, they treat team members as interchangeable units and they act on that belief. And they often have difficulties meeting deadlines.

I sometimes think that Microsoft Project (and other project management software) and its blindness to individual skills and talents causes us managers to think of people as interchangeable units. If we have no way to note the differences between team members and our software cannot distinguish them, then we will think along those lines. Our tools shape our thoughts.

I also think, from time to time, that many managers want individuals to be interchangeable. Such a condition is easier to manage, requiring less thought. Changing project plans is easier, as there are fewer constraints.

But just because the management of a project is easier does not mean that we are more effective. Ignoring the tech and people skills of our team members does not mean that those skills go away or become meaningless. They may be ignored by the tool, but they still exist.

Wednesday, September 19, 2012

How to improve spreadsheets

Spreadsheets are the sharks of the IT world. They evolved before the IBM PC (the first spreadsheet ran on an Apple II) and have, for the most part, remained unchanged. They have migrated from the 6502 processor to the 8080, the Z-80, the 8086, and today's processors. They have added graphs and fonts and extravagant file formats. They have grown from 256 rows to a rather large number. But they remain engines of data and formulas, with the data in well-defined grids and formulas with well-defined characteristics. They are sharks, evolved to a level of efficiency that has yet to be surpassed.

Spreadsheets get a number of things right:

- The syntax is easy to learn and consistent
- The data types are limited: numeric values, string values, and formulas
- Feedback for changes is immediate (no compiling or test scripts)
- Cells can only read values from other cells; they cannot assign a value to another cell

Immediate feedback is important. Teams using dynamic languages are at risk of a slew of problems; they use automated tests to identify errors. Agile processes emphasize the frequent use of tests. Spreadsheets provide this feedback without the need for tests.

The last item is most important. Cells can assign values to themselves, but they cannot assign values to other cells. This means that spreadsheets have no shared mutable state, no update collisions, no race conditions. That gives them stability.
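A toy model of that rule (a sketch, not a real spreadsheet engine): each cell may read other cells to compute its own value, but never writes to them.

    # Each cell is a formula that may READ other cells but assigns only its own value.
    cells = {
        "A1": lambda get: 10,
        "A2": lambda get: 32,
        "A3": lambda get: get("A1") + get("A2"),
    }

    def get(name):
        return cells[name](get)     # recompute on demand: immediate feedback, no test scripts

    print(get("A3"))                # 42; change A1 or A2 and A3 is correct on the next read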

Yet spreadsheets get a number of things wrong:

- All data is global
- Organization of data is dependent on the composer's skills and discipline

Our current spreadsheets are like the C language: fast, powerful, and dangerous. In C, one can do just about anything with the underlying machine. The C language lets you convert data from one form to another, and point to just about anything. It is quite powerful, but you have to know what you are doing.

Spreadsheets are not that dangerous. They don't have pointers, and the only things you can reference are cells within the spreadsheet (type-safe, because they are all of one type).

But spreadsheets have the element of "you have to know what you are doing". The global nature of the data allows for formulas to refer to any cell (initialized or not) with little or no warning about nonsensical operations. In this sense, spreadsheet programming (in formulas) is much like C.

At first, I thought that the concepts of structured programming would improve spreadsheets. This is a false lead. Structured programming organizes code into sequences, iterations, and alternate paths. It clarifies code, but the formulas in spreadsheets are not a Turing-complete programming language. Structured programming can offer little to spreadsheets.

Instead, I think the concept of data encapsulation (from object-oriented programming) may help us advance the spreadsheet.

Spreadsheet authors tend to organize data into ranges. They may provide names for these ranges or leave them unnamed, but they will cluster their data into areas. Subsections of the grid will be used for specific types of data (for example, contact names in one range, regional sales in another).

For small spreadsheets, the "everything on one grid" concept works. Larger spreadsheets can see data split across pages (or tabs, depending on your spreadsheet manufacturer).

The problem with the spreadsheet grid is that it is, for all practical purposes, infinite. We can add data to it without concern for the organization or structure. This becomes a problem over time; after a number of updates and revisions the effort to keep data organized becomes large.

An advanced spreadsheet would recognize that data is not stored in grids but in ranges, and would provide ranges as the key building block. Current spreadsheets let you define ranges, but the ability to operate on ranges is limited. In my new species of spreadsheet, the range would be the organizational unit, and ranges would not be infinite, empty grids. (They could expand, but only as a result of conscious action.)

Ranges are closer to tables in a database. Just as one can define a table and provide data for that table, one could define a range and provide data for that range. Unlike databases, ranges can be easily extended horizontally (more columns), re-sequenced, formatted, and edited. Unlike grids, ranges can be separated or re-combined to build new applications. Ranges must provide for local addresses (within the range) and external addresses (data within other ranges). Formulas must be able to read values from the current range and also from other ranges.
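A rough sketch of what a range-as-building-block might look like (purely speculative; the names are invented):

    class Range:
        """A named block of data -- the building block, instead of an infinite grid."""
        def __init__(self, name, columns, rows):
            self.name = name
            self.columns = columns
            self.rows = rows                        # a list of dicts, one per row

        def column(self, column_name):
            return [row[column_name] for row in self.rows]

        def extend(self, new_rows):
            self.rows.extend(new_rows)              # growth is a conscious action, not an accident

    sales = Range("regional_sales", ["region", "amount"],
                  [{"region": "east", "amount": 100},
                   {"region": "west", "amount": 250}])

    # A "formula" reads from its own range or, by explicit reference, from another range.
    total_sales = sum(sales.column("amount"))
    print(total_sales)                              # 350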

If we do it right, ranges will be able to live in the cloud, being called in when needed and stored when not. Ranges will also be members of one application (or multiple applications), serving data to whatever application needs it.

Any improved spreadsheet will have to retain the advantages of the current tools. The immediacy of spreadsheets is a big advantage, allowing users of all skill levels to become proficient in a short time. Changing from grid-based spreadsheets to range-based spreadsheets must allow for this immediacy. This is a function of the UI, something that must be designed carefully.

I think that this new form of spreadsheet is possible, and offers some advantages. Now all I need is some time to implement it.

Sunday, September 16, 2012

The Agile and Waterfall Impedance Mismatch

During a lunchtime conversation with a colleague, we chatted about project management and gained some insights into the Waterfall and Agile project management techniques.

Our first insight was that an organization cannot split its project management methods. It cannot run some projects with a Waterfall process and other projects with Agile processes. (At least not if the projects must coordinate deliverables.) A company that uses Waterfall processes may be tempted to run a pilot project with Agile processes -- but if that pilot project "plugs in" to other projects, the result will be failure.

The problem is that Waterfall and Agile make two different promises. Waterfall promises a specific set of functionality, with a specific level of quality, delivered on a specific date. Agile makes no such promise; it promises to add functionality and have a product that is always ready to ship (that is, a high level of quality), albeit with an unknown set of functionality. The Agile process adds small bits of functionality and waits to get them correct before adding others -- thus ensuring that everything that has been added is working as expected, but not promising to deliver everything desired by a specific date. (I am simplifying things here. Agile enthusiasts will point out that there is quite a bit more to Agile processes.)

Waterfall processes make promises that are very specific in terms of feature set, quality, and delivery time -- but are not that good at keeping them. Hence, we have a large number of projects that are late or have low quality. Agile makes promises that are specific in terms of quality, and are good at keeping them. But the promises of the Agile processes are limited to quality; they do not propose the specifics that are promised by Waterfall.

With two different promises, it is no surprise that waterfall and agile processes don't co-exist. (There are other reasons that the two methods fail to cooperate, including the "design up front" of Waterfall and the "design as you go" of Agile.)

Our second insight was that transitioning an IT shop from Waterfall to Agile methods should not be accomplished by pilot projects.

Pilot projects are suitable to introduce people to the methods -- but those pilot projects must exist in isolation from the "regular" projects. Such projects were easy to establish in the earlier age of separate systems -- "islands of automation" -- that gave each system a measure of independence. Today, it is rare to see an IT system that exists in isolation.

Rather than use pilot projects, we like the idea of introducing ideas from Agile into the standard process used within the company. Our first choice is automated testing, the unit-level automated testing that can be performed by developers. Giving each developer the ability to run automated tests introduces them to a core practice of Agile, without creating an impedance mismatch.
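For teams that have never used them, developer-level tests need not be elaborate; here is a minimal sketch with Python's built-in unittest module (the function under test is a made-up example):

    import unittest

    def net_price(gross, discount):                 # the function under test is a made-up example
        return round(gross * (1 - discount), 2)

    class NetPriceTests(unittest.TestCase):
        def test_no_discount(self):
            self.assertEqual(net_price(100.0, 0.0), 100.0)

        def test_ten_percent_discount(self):
            self.assertEqual(net_price(100.0, 0.10), 90.0)

    if __name__ == "__main__":
        unittest.main()     # run on every change -- the safety net for refactoring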

After automated testing, we like the idea of allowing refactoring. Often omitted from plans for Waterfall projects, refactoring is another key practice of Agile development. Once unit tests are in place, developers can refactor with confidence.

Our third insight relates to project methods and project size. We think (and this is speculation) that Agile is better suited to small projects (and small systems) and Waterfall may be better suited to large systems. Thus, if you have large, complex systems, you may want to stay with Waterfall; if you have small systems (or even small applications) then you may want to use Agile.

We think that this relationship is correlation, not causation. That is, you can pick one side of the equation and drive the other. If you have large systems, you will end up with Waterfall. But the equation works in both directions, and you don't have to start with the size of your systems. If you use Agile methods, you will end up with smaller, collaborating systems.

Now, we realize that a lot of companies have large systems. We don't believe that switching to Agile methods will lead to smaller systems overnight. A base of large systems contains a certain amount of inertia, and requires a certain effort to redesign into smaller systems.

What we do believe is that you have a choice, that you can choose to use Agile (or Waterfall) methods (carefully, avoiding impedance mismatches), and that you can change the size of your systems. You can manage the size and complexity of your systems by selecting your processes.

These are choices that managers should accept and make carefully. They should not assume that "the system is the system" and "things are big and complicated and will always be so".

Thursday, September 13, 2012

The ultimate desktop OS

The phrase "ultimate desktop OS" is inspiring and attention-getting. While we might think that the "ultimate" desktop operating system is an unreachable dream, it is possible that it can exist, that it does exist, and that we have seen it.

That ultimate desktop operating system may be, in fact, Windows 7.

It is quite possible that Windows 7 is the peak of desktop operating systems. Its successor, Windows 8, is geared for tablets, not desktops. (And now you see why I have been carefully using the phrase "desktop operating system".)

Some might argue that it is not Windows 7 that is the "bestest" operating system for desktops, that the award for "best desktop operating system" should go to Windows XP, or perhaps Ubuntu Linux 10.04. These are worthy contenders for the title.

I won't quibble about the selection.

Instead, I will observe that desktop PCs have peaked, that they have reached their potential, and the future belongs to another device. (In my mind, that device is the tablet.)

Should you dispute this idea, let me ask you this: If you were to build a new app, something from scratch (not a re-hash of e-mail or word processing), would you build it for the desktop or for the tablet? I would build it for the tablet, and I think a majority of developers would agree.

And that is why I say that desktop operating systems have peaked. The future belongs to the tablet. (And the cloud, for back-end processing.)

If tablets are the future -- and I believe that they are -- then it really doesn't matter that Microsoft releases a new version of Windows for desktops. (Who gets excited when IBM releases a new version of MVS?) Yes, some folks will welcome the new version of Windows, but they will be a minority.

Instead of new versions of Windows, we will be looking for new versions of iOS and Android.

Tuesday, September 11, 2012

Apple got patch management right, at a cost

Looking at Apple's iOS and Microsoft's update services for Windows, one can see that Apple developed the more effective answer.

Apple's solution for iOS devices (iPads, iPhones, and iPods) covers all apps. With iTunes as the single gateway, every app can be updated through it. One solution covers everything.

In contrast, Microsoft's Windows has a messier solution. Windows update services cover Microsoft products, but other (non-Microsoft) products must provide their own solutions. The result is a hodge-podge of update methods, and while most use InstallShield they each have their eccentricities. Small-scale shops can handle the variance, but large shops have large headaches with patch management.

Patch management (or "update management", or "version management") is the headache that was solved with iOS and iTunes. It is also solved with Microsoft's new WinRT app manager (the Microsoft App Store) and is mostly solved with the major Linux distributions. (I say "mostly solved" for the Linux distributions since one can install software from outside the known repositories.)

The grand patch management solutions of iTunes and App Store provide a central location, a single method, for updating apps. The traditional Windows environment allowed for multiple applications from multiple vendors, and there was no one update manager that handled all of them.

But the neat solutions offered by Apple's iTunes and Microsoft's App Store come at a price. That price is part variety and part control. The centralized iTunes and App Store methods limit apps to those approved by the service administrators. Apple can reject any app for any reason (and has been accused of rejecting apps for trivial reasons). I'm certain that Microsoft's App Store will have similar challenges. Apple and Microsoft have "gatekeeper" control over their new distribution mechanisms: apps they like are permitted, apps they dislike are omitted.

Linux, with its strong emphasis on system administrators, has taken a different route. The Linux distros have central repositories for installing and updating packages, but the "repository" model is open: software packages (which must conform to certain technical specifications) are stored in repositories, and anyone can operate a repository. That is, you can select the repositories for your software and receive updates from multiple sources. (The construction and maintenance of a repository takes some work, but the rules for building one are available and open.)

Patch management is important. Software is vulnerable to exploits, and patches can minimize one's exposure. It is something that has worried and frustrated Windows administrators since the beginning of Windows.

In the future, when we look back, I think that the big contribution of iOS and Windows 8 will not be the new UI with its swipes and taps, nor the small form factor, but the patch management. A unified system for updates will be considered necessary for any "modern" system. Apple's iOS delivered that.


Friday, September 7, 2012

Windows 8 is version 1.0

Lots of people have critiqued Windows 8, and they seem to have forgotten the "rule of three" for Microsoft products. (The "rule of three" states that the early versions of new products are lame and that the third version of the product is the potent one. This rule also applies to the movie "The Maltese Falcon".)

Windows 8 is a step in a new direction for Microsoft. It defines a new UI and a new method for distributing software. It makes older versions of Windows obsolete.

There are a lot of changes, and I don't expect Microsoft to get it right on the first release. (They didn't with Windows, or with Visual Basic, or with .NET, either.)

What I *do* expect is that Microsoft will learn from their experience and release new versions. I expect a follow-on to Windows 8. Perhaps it will be called Windows 9. Perhaps it will be called Windows 8.5. Or Windows 8.1. Or maybe "Windows Tablet" or "Windows Mobile". Or maybe something prosaic like "Windows Touch". I don't know the exact name.

What I do know is that Microsoft releases new versions of its software. I expect a new version of its mobile device software, suitable for phones and tablets. I expect that Microsoft will fix the problems of the first release (what we call "Windows 8").

So I am not judging Microsoft solely on its Windows 8 product. I believe that they will release another version, and that later version will fix problems -- and change things.

Laud Microsoft or laugh at them. But don't expect them to stand still.

Wednesday, September 5, 2012

Microsoft's new revenue model

Windows 8 brings a number of changes to software. A new UI, a new Windows API, and a spiffy new logo. But there is one other change that has gotten little discussion.

Windows 8 brings a new revenue model for Microsoft.

Microsoft's former revenue model was to charge end users, server operators, and developers. That is, Microsoft charged fees for software that they provided (think Office), for programs running on servers (think SQL Server and Exchange), and for development tools (think Visual Studio). It is a system that has worked since the beginning of the PC era -- but it has a hole.

The hole in this revenue model has been third-party software. Were I to build software and sell it, I would pay Microsoft for the development tools (third-party development tools have long been pushed aside), but after that I would be free to sell my software to anyone, with no royalties to Microsoft. The revenue collection point (the "tollbooth") was on the development tools.

Windows 8 moves the tollbooth for third-party software. Microsoft has moved it from the receiving dock of the third-party manufacturing facility to the front door of the retail center (the Microsoft App Store, conveniently operated only by Microsoft).

This is the model used by Apple. Apple charges nominal fees for development tools and mandates that all apps be distributed through iTunes, where it can collect a piece of the action.

One can assume that Microsoft will reduce the cost of development tools, perhaps even open-sourcing parts of them. Doing so is in their best interest. The more people they have developing software for Windows 8 and selling it (through the Microsoft App Store), the better for Microsoft. It is better for Microsoft to collect 30% on the sale of $5 retail units (sold in millions) than 100% on the sale of $500 development kits (sold in thousands).
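To make the arithmetic concrete (with round numbers of my own choosing): 30% of a $5 app sold one million times is $1.5 million, while 100% of a $500 development kit sold one thousand times is $500,000 -- a three-fold difference in Microsoft's favor.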

Likewise, look to see Microsoft reduce the complexity of development for Windows 8. Instead of a large API with intricate calls, look to see a simplified API with easy-to-understand calls. A complicated API is good when you are selling the development tools; a simple API is better when you are selling (or collecting on) the retail units. You want to encourage the development of new apps.

I expect that the Windows 8 API will undergo a number of changes in the first few versions. Unlike the classic Windows API, the changes will simplify the API, not make it more complex. Microsoft has strong incentives to make it easy for the development of Windows 8 apps. (Besides the revenue model, Microsoft must compete with Apple and Android.)

The new model holds for the Windows 8 retail apps, the ones sold through the Microsoft App Store. For applications that run on servers, the old revenue model applies. Expect a bifurcation of Microsoft products and pricing, with cheap and easy-to-use tools for the development of retail apps and expensive and hard-to-use tools for server-based applications. (Until Microsoft releases a Microsoft Server App Store, which would let it collect revenue on the sale and use of server-based apps.)

Friday, August 31, 2012

Microsoft is serious about WinRT

The month of August taught us one thing: Microsoft is serious about WinRT and the new Win 8 UI.

I suspect that most Windows developers were secretly hoping that the Windows 8 UI (formerly known as "Metro") was a grand joke, a big bluff by Microsoft. But the release of Windows 8, complete with UI makeover, has shown that Microsoft was not bluffing. Microsoft is serious about this Windows 8 thing.

The new Windows 8 UI is quite a departure from "good old Windows". It is a bigger change than the change from Windows 3 to Windows 95. Windows 8 introduces "tiles" (bigger and better app icons), swipes, taps, mouseless operation, and even keyboardless operation.

The changes in Windows 8 are not limited to the UI. Windows 8, in its "RT" flavor, boasts a new API, a smaller and more focused API that breaks many current programs. (Programs that use the "classic" Windows API are permitted to run under "Windows desktop" mode on full-blown Windows 8, but cannot run under the more limited Windows 8 RT environment.)

Worst of all, Windows 8 (in the new UI) eliminates the "Start" button. This change, I think, surpasses all others in terms of shock value. People will tolerate new APIs and new tiles, but they know and love their Start button.

But Microsoft is serious about these changes, and -- perhaps more shocking than anything Microsoft has done -- I agree with them.

Microsoft has to move into the tablet space. They have to move into mobile/cloud computing. The reason is simple: mobile/cloud is where the growth is.

The Windows platform (the classic Windows desktop platform) has become stagnant. Think about it: When was the last time that you purchased a new Windows application? I'm not talking about upgrades to Microsoft Office or Adobe Acrobat, but a purchase of a new application, one that you have not used in the past. If you're like me, the answer is: a long time ago. I have been maintaining a Windows platform and set of applications, but not expanding it.

The Windows platform (the classic desktop platform) has achieved its potential, and has nowhere to grow. The web took away a lot of the growth of Windows applications (why buy or create a Windows-only app when I can buy or create a web app?) and the mobile/cloud world is taking away the rest of Windows desktop potential. (It's also taking away the rest of Mac OSX potential and Linux desktop potential. The web and mobile/cloud are equal-opportunity paradigm shifts.)

Microsoft recognizes this change, and they are adapting. With Windows 8, they have created a path forward for their developers and customers. This path is different from previous Windows upgrades, in that Windows 8 does not guarantee to run all previous applications. (At least the Windows 8 RT path does not -- it has the reduced API that restricts apps to a limited set of operations.)

Windows 8 RT is a big "reset" for the Microsoft development community. It introduces a new API and a new toolset (JavaScript and HTML5). It discards a number of older technologies (a big departure from Microsoft's previous policy of maintaining backwards compatibility). It forces developers to the new tools and API, and knocks lots of experienced developers down to the junior level. In effect, it sets all developers on the same "starting line" and starts a new race.

But the tablet and mobile/cloud worlds are the worlds of growth. Microsoft has to move there. They cannot ignore it, nor can they move there in gentle, easy steps. Apple is there today. Google is there today. Amazon.com is there today. Microsoft must move there today, and must force its developers there today.

I see this move as a good thing for Microsoft. It will cause a lot of change (and a lot of pain) but it keeps them competitive.

Tuesday, August 28, 2012

The deception of C++'s 'continue' and 'break'

Pick up any C++ reference book, visit any C++ web site, and you will see that the 'continue' and 'break' keywords are grouped with the loop constructs. In many ways it makes sense, since you can use these keywords with only those constructs.

But the more I think about 'continue' and 'break', the more I realize that they are not loop constructs. Yes, they are closely associated with 'while' and 'for' and 'switch' statements, but they are not really loop constructs.

Instead, 'continue' and 'break' are variations on a different construct: the 'goto' keyword.

The 'continue' and 'break' statements in loops bypass blocks of code. 'continue' transfers control to the end of the loop body and allows the next iteration to begin. 'break' transfers control past the end of the loop and forces the loop to end (allowing the code after the loop to execute). These are not loop operations but 'transfer of control' operations, or 'goto' operations.

Now, modern programmers have declared that 'goto' operations are evil and must never, ever be used. Therefore, 'continue' and 'break', as 'goto' in disguise, are evil and must never, ever be used.

(The 'break' keyword can be used in 'switch/case' statements, however. In that context, a 'goto' is exactly the construct that we want.)

Back to 'continue' and 'break'.

If 'continue' and 'break' are merely cloaked forms of 'goto', then we should strive to avoid their use. We should seek out the use of 'continue' and 'break' in loops and re-factor the code to remove them.
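As a sketch of what such a refactoring might look like (an invented example, not from any production codebase), here is a loop that exits early with 'break', followed by an equivalent version where the loop condition carries the full exit logic:

    #include <cstddef>
    #include <vector>

    // Before: the early exit is buried in the loop body as a 'break'.
    int index_of_first_negative_v1(const std::vector<int>& values)
    {
        int result = -1;
        for (std::size_t i = 0; i < values.size(); ++i)
        {
            if (values[i] < 0)
            {
                result = static_cast<int>(i);
                break;
            }
        }
        return result;
    }

    // After: the loop condition states both reasons for stopping,
    // and there is a single exit point after the loop.
    int index_of_first_negative_v2(const std::vector<int>& values)
    {
        std::size_t i = 0;
        while (i < values.size() && values[i] >= 0)
        {
            ++i;
        }
        return (i < values.size()) ? static_cast<int>(i) : -1;
    }

The second version is no longer, and the reason for stopping is visible in one place rather than hidden inside the loop body.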

I will be looking at code in this light, and searching for the 'continue' and 'break' keywords. When working on systems, I will make their removal one of my metrics for the improvement of the code.

Sunday, August 26, 2012

Linux in the post-PC world

The advent of tablets and mobile computing devices has generated much discussion. The post-PC world offers convenience and reliability, and a stirring of the marketplace that could re-arrange the major players.

One topic that I have not seen is the viability of Linux. Can Linux survive in the post-PC world?

The PC world was defined by hardware. The "IBM PC standard" was set in 1981, with the introduction of the IBM PC.

The post-PC world is also defined by devices. It is a world in which the primary (and possibly only) devices we use (directly) are not PCs but tablets and smartphones (and possibly a few other devices).

What does this have to do with Linux?

Linux was -- and is -- a parasite in the PC world. It runs on PCs, and we can run it on PCs for two reasons. First, Linux is written to be compatible with the PC standard. Second, the PC standard is open and we can run anything on it. (That is, we can boot any operating system.)

The tablet world is different. Apple's iPads and Microsoft's Surface tablets are locked down: they run only approved software. An iPad will boot only iOS and a Microsoft Surface tablet will boot only a signed operating system. (It doesn't have to be Windows, but it does have to be signed with a specific key.) The lock-down is not limited to iPads and Surface tablets; Amazon.com Kindles and Barnes and Noble Nooks have the same restrictions.

This lock-down in the tablet world means that we are limited in our choice of operating systems. We cannot boot anything that we want; we can boot only the approved operating systems.

(I know that one can jail-break devices. One can put a "real" Linux on a Kindle or a Nook. iPads can be broken. I suspect that Surface tablets will be broken, too. But it takes extra effort, voids your warranty, and casts doubt over any future problem. (Is the problem caused by jail-breaking?) I suspect few people will jail-break their devices.)

Linux was able to thrive because it was easy to install. In the post-PC world, it will not be easy to install Linux.

I suspect that the future of Linux will lie in the server room. Servers are quite different from consumer PCs and the consumer-oriented tablets. Servers are attended by system administrators, and they expect (and want) fine-grained control over devices. Linux meets their needs. Consumers want devices that "just work", so they choose the easy-to-use devices and that creates market pressure for iPads and Surfaces. System administrators want control, and that creates market pressure for Linux.

Friday, August 24, 2012

How I fix old code

Over the years (and multiple projects) I have developed techniques for improving object-oriented code. My techniques work for me (and the code that has been presented to me). Here is what I do:

Start at the bottom: Not the base classes, but the bottom-most classes -- the classes that are used by other parts of the code and that have no dependencies of their own. These classes can stand alone.

Work your way up: After fixing the bottom classes, move up one level. Fix those classes. Repeat. Working up from the bottom is the only way I have found to be effective. One can have an idea of the final result, a vision of the finished product, but only by fixing the problems at the bottom can one achieve any meaningful results.

Identify class dependencies: To start at the bottom, one must know the class dependencies. Not the class hierarchy, but the dependencies between classes. (Which classes use which other classes at run-time.) I use some custom Perl scripts to parse code and create a list of dependencies. The scripts are not perfect, but they give me a good-enough picture. The classes with no dependencies are the bottom classes. Often they are utility classes that perform low-level operations. They are the place to start.

Create unit tests: Tests are your friends! Unit tests for the bottom (stand-alone) classes are generally easy to create and maintain. Tests for higher-level classes are a little trickier, but possible with immutable lower-level classes.
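As an illustration (a toy example, not from any real project), a unit test for a bottom-level class can be as small as a program full of asserts:

    #include <cassert>

    // A hypothetical bottom-level class with no dependencies.
    class Fraction
    {
    public:
        Fraction(int numerator, int denominator)
            : m_numerator(numerator), m_denominator(denominator) {}
        double value() const
        {
            return static_cast<double>(m_numerator) / m_denominator;
        }
    private:
        int m_numerator;
        int m_denominator;
    };

    // The "test suite": construct objects and assert on their behavior.
    int main()
    {
        assert(Fraction(1, 2).value() == 0.5);
        assert(Fraction(3, 4).value() == 0.75);
        return 0;
    }

Run it after every change; if it exits quietly, the class still behaves as expected.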

Make objects immutable: The Java String class (and the C# String class) showed us a new way of programming. I ignored it for a long time (too long, in retrospect). Immutable objects are unchangeable; they do not have the "classic" object-oriented functions for setting properties. Instead, they are fixed to their original values. When you want to change a property, the immutable-object technique dictates that you create a new object instead of modifying the existing one.

I start by making the lowest-level classes immutable, and then work my way up the "chain" of class dependencies.
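Here is a minimal sketch in C++ (an invented Account class; the names are mine, not from any project): the members are set once in the constructor, there are no "set" functions, and "changing" a value means building a new object.

    #include <string>

    // Immutable: all state is supplied at construction and never modified.
    class Account
    {
    public:
        Account(const std::string& owner, int balance_cents)
            : m_owner(owner), m_balance_cents(balance_cents) {}

        const std::string& owner() const { return m_owner; }
        int balance_cents() const { return m_balance_cents; }

        // No set_balance(); a deposit yields a brand-new Account.
        Account deposit(int amount_cents) const
        {
            return Account(m_owner, m_balance_cents + amount_cents);
        }

    private:
        const std::string m_owner;
        const int m_balance_cents;
    };

With the members declared const and no "set" functions, the compiler enforces the rule: once built, the object never changes.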

Make member variables private: Create accessor functions when necessary. I prefer to create "get" accessors only, but sometimes it is necessary to create "set" accessors. I find it easier to track and identify access with functions than with member variables, but that may be an effect of Visual Studio. Once the accessors are in place, I forget about the "get" accessors and look to remove the "set" accessors.

Create new constructors: Constructors are your friends. They take a set of data and build an object. Create the ones that make sense for your application.

Fix existing constructors to be complete: Sometimes people use constructors to partially construct objects, relying on the code to call "set" accessors later. Immutable-object programming has none of that nonsense: when you construct an object, you must provide everything. If you cannot provide everything, then you are not allowed to construct the object! No soup (or object) for you!

When possible, make member functions static: Static functions have no access to member variables, so one must pass in all of the "ingredient" variables. This makes it clear which variables must be defined to call the function. Not all member functions can be static; make the functions called by constructors static when possible. (Really, put the effort into this task.) Calls to static functions can be re-sequenced at will, since they cannot have side effects on the object.

Static functions can also be moved from one class to another at will -- or at least more easily than member functions. That is a good attribute when re-arranging code.
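A small sketch of the idea (again with invented names): the static helper receives every "ingredient" as a parameter, so the call site shows exactly what it needs, and the function cannot have side effects on the object.

    #include <cstddef>
    #include <vector>

    class Invoice
    {
    public:
        explicit Invoice(const std::vector<int>& line_item_cents)
            : m_total_cents(sum(line_item_cents)) {}

        int total_cents() const { return m_total_cents; }

    private:
        // Static: no access to members, no side effects on the object,
        // and movable to another class without dragging an Invoice along.
        static int sum(const std::vector<int>& cents)
        {
            int total = 0;
            for (std::size_t i = 0; i < cents.size(); ++i)
            {
                total += cents[i];
            }
            return total;
        }

        const int m_total_cents;
    };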

Reduce class size: Someone (I don't remember who) claimed that the optimum class size was 70 lines of code. I tend to agree. Bottom classes can easily be expressed in 70 lines. (If not, they are probably composites of multiple elementary classes.) Higher-level classes can often be represented in 70 lines or less, sometimes more. (But never more than 150 lines.)

Reducing class size usually means increasing the number of classes. Your total code size may shrink somewhat (my experience shows a reduction of 40 to 60 percent) but it does not reduce to zero; smaller classes generally mean more classes. I find that a system with more, smaller classes is easier to understand than one with fewer, large classes.

Name your classes well: Naming is one of the great challenges of programming. Pick names carefully, and change names when it makes sense. (If your version control system resists changes to class names, get a new version control system. It is the servant, not you!)

Talk with other developers: Discuss changes with other developers. Good developers can provide useful feedback and ideas. (Poor developers will waste your time, though.)

Discuss code with non-developers: Our goal is to create code that can be read by non-developers who are experts in the subject matter. We want them to read our code, absorb it, and provide feedback. We want them to say "yes, that seems right" (or even better, "oh, there is a problem here with this calculation"). To achieve that level of understanding, we need to strip away all of the programming overhead: temporary variables, memory allocation, and sequence/iteration gunk. With immutable-object programming, meaningful names, and modern constructs (in C++, that means BOOST), we can create high-level routines that are readable by non-programmers.

(Note that we are not asking the non-programmers to write code, merely to read it. That is enough.)

These techniques work for me (and the folks on my projects). Your mileage may vary.

Wednesday, August 22, 2012

Microsoft changes its direction

Microsoft recently announced a new version of its Office suite (Office 2013), and included support for the ODF format. This is big news.

The decision to support ODF does not mean that the open source fanboys have "won".

As I see it, the decision to support ODF means that Microsoft has changed its strategy.

Microsoft became dominant in Windows applications, in part due to the proprietary formats of Microsoft Office and the network effect: everyone wanted Microsoft Office (and nothing else) because everyone that they knew (and with whom they exchanged documents) used Microsoft Office. The proprietary format ensured that one used the true Microsoft Office and not a clone or compatible suite.

Microsoft used that network effect to drive people to Windows (providing a Mac version of Office that was close but not quite the same as the Windows version). Their strategy was to sell licenses for Microsoft Windows, Microsoft Office, Microsoft Active Directory, Microsoft Exchange, Microsoft SQL Server, and other Microsoft products, all interlocking and using proprietary formats for storage.

And that strategy worked for two decades, from 1990 to 2010.

Several lawsuits and injunctions forced Microsoft to open their formats to external players. Once they did, other office suites gained the ability to read and write files for Office.

With Microsoft including the ODF formats in Office, they are no longer relying on proprietary file formats. Which means that they have some other strategy in mind.

That new strategy remains to be seen. I suspect that it will include their Surface tablets and Windows smartphones. I also expect cloud computing (in the form of Windows Azure) to be part of the strategy too.

The model of selling software on shiny plastic discs has come to an end. With that change comes the end of the desktop model of computing, and the dawn of the tablet model of computing.

Sunday, August 19, 2012

Windows 8 is like Y2K, sort of

When an author compares an event to Y2K, the reader is prudent to respond with some degree of skepticism. The Y2K problem was large and affected multiple platforms across all industries. The shift to mobile/cloud computing (if it can even be considered a threat) must be similarly large and wide-spread to merit the comparison.

I will say up front that the mobile/cloud platform is not a threat. If anything, it is an expansion of technical options for systems, a liberalization of solution sets.

Nor does the mobile/cloud platform have a specific implementation date. With Y2K, we had a very hard deadline for changes. (Deadlines varied across systems, with some earlier than others. For example, bank systems that calculated thirty-year mortgages were corrected in 1970.)

But the change from traditional web architectures to mobile/cloud is significant, and the transition from desktop applications to mobile/cloud is greater. The change from desktop to mobile/cloud requires nothing less than a complete re-build of the application: new UI, new data storage, new system architecture.

And it is these desktop applications (which invariably run under Microsoft Windows) that have an impending crisis. These desktop applications run on "classic" Windows, the Windows of Win32 and MFC and even .NET. These desktop applications have user interfaces that require keyboards and mice. These desktop applications assume constant and fast access to network resources.

One may wonder how these desktop applications, while they may be considered "old-fashioned" and "not of the current tech", can be a problem. After all, as long as we have Windows, we can run them, right?

Well, not quite. As long as we have Windows with Win32 and MFC and .NET (and ODBC and COM and ADO) then we can run them. But there is nothing that says Microsoft will continue to include these packages in Windows. In fact, the new WinRT offering does not include them.

Windows 8, on a desktop PC, runs in two modes: Windows 8 mode and "classic" mode. The former runs apps built for the mobile/cloud platform. The latter is much like the old DOS compatibility box, included in Windows to allow us to run old, command-line programs. The "classic" Windows mode is present in Windows 8 as a measure to allow us (the customers and users of Windows) to transition our applications to the new UI.

Microsoft will continue to release new versions of Windows. I am reasonably sure that Microsoft is working on "Windows 9" even with the roll-out of Windows 8 under way. New versions of Windows will come out with new features.

At some point, the "classic Windows compatibility box" will go away. Microsoft may remove it in stages, perhaps making it a plug-in that can be added to the base Windows package. Or perhaps it will be available in only the premium versions of Windows. It is possible that, like the DOS command prompt that yet remains in Windows, the "classic Windows compatibility box" will remain in Windows -- but I doubt it. Microsoft likes the new revenue model of mobile/cloud.

And this is how I see mobile/cloud as a Y2K-like challenge. When the "classic Windows compatibility box" goes away, all of the old-style applications must go away too. You will have to either migrate to the new Windows 8 UI (and the architecture that such a change entails) or you will have to go without.

Web applications are less threatened by mobile/cloud. They run in browsers; the threat to them will be the loss of the browser. That is another topic.

If I were running a company (large or small) I would plan to move to the new world of mobile/cloud. I would start by inventorying all of my current desktop applications and forming plans to move them to mobile/cloud. That process is also another topic.

Comparing mobile/cloud to Y2K is perhaps a bit alarmist. Yet action must be taken, either now or later. My advice: start planning now.

Wednesday, August 15, 2012

Cloud productivity is not always from the cloud

Lots of people proclaim the performance advantages of cloud computing. These folks, I think, are mostly purveyors of cloud computing services. That does not mean that cloud computing has no advantages or offers no improvements in performance. But it also does not mean that all improvements from migrating to the cloud are derived from the cloud itself.

Yes, cloud computing can reduce administration costs, mostly by standardizing the instances of hosts to a simple set of virtualized machines.

And yes, cloud computing can reduce the infrastructure costs of servers, since the cloud provider leverages economies of scale (and virtualized servers).

But a fair amount of the performance improvement of cloud computing comes from the re-architecting of applications. Changing one's applications from monolithic, one-program-does-it-all designs to smaller collaborating apps working with common data stores and message queues has an effect on performance. Shifting from object-oriented programming to the immutable-object programming needed for cloud computing also improves performance.

Keep in mind that these architectural changes can be done with your current infrastructure -- you don't need cloud to make them.
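As a toy sketch of the shape of that change (an in-process std::queue standing in for a real message broker, with names I made up for illustration), one monolithic step becomes two small collaborators that communicate only through a queue:

    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    // "App" 1: accepts orders and posts them to the queue.
    void accept_orders(const std::vector<std::string>& orders,
                       std::queue<std::string>& queue)
    {
        for (const std::string& order : orders)
        {
            queue.push(order);
        }
    }

    // "App" 2: drains the queue and fulfills each order independently.
    void fulfill_orders(std::queue<std::string>& queue)
    {
        while (!queue.empty())
        {
            std::cout << "fulfilling: " << queue.front() << "\n";
            queue.pop();
        }
    }

    int main()
    {
        std::queue<std::string> order_queue;
        accept_orders({"order-1", "order-2"}, order_queue);
        fulfill_orders(order_queue);
        return 0;
    }

The point is the shape of the design, not the plumbing: each collaborator is small, testable on its own, and coupled to the others only through the queue.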

You can re-architect your applications (no small task, I will admit) and use them in your current environment (adding data stores and message queues) and get those same improvements in performance. Not all of the improvements from moving to a cloud infrastructure, but the portion that arises from the collaborative architecture.

And such a move would prepare your applications to move to a cloud infrastructure.

Wednesday, August 8, 2012

$1000 per hour

Let's imagine that you are a manager of a development team. You hire (and fire) members of the team, set goals, review performance, and negotiate deliverables with your fellow managers.

Now let's imagine that the cost of developers is significantly higher than it is today. Instead of paying the $50,000 to $120,000 per year, you must pay $1000 per hour, or $2,000,000 per year. (That's two million dollars per year.) Let's also imagine that you cannot reduce this cost through outsourcing or internships.

What would you do?

Here is what I would do:


  • I would pick the best of my developers and fire the others. A smaller team of top-notch developers is more productive than a large team of mediocre developers.
  • I would provide my developers with tools and procedures to let them be the most productive. I would weigh the cost of development tools against the time that they would save.
  • I would use automated testing as much as possible, to reduce the time developers spend on manual testing. If possible, I would automate all testing.
  • I would provide books, web resources, online training, and conferences to the developers, to give them the best information and techniques on programming.


In other words, I would do everything in my power to make them productive. When their time costs money, saving their time saves me money. Sensible, right?

But the current situation is really no different. Developers cost me money. Saving their time saves me money.

So why aren't you doing what you can to save them time?

Sunday, August 5, 2012

The evolution of the UI

Since the beginning of the personal computing era, we have seen different types of user interfaces. These interfaces were defined by technology. The mobile/cloud age brings us another type of user interface.

The user interfaces were:
  • Text-mode programs
  • Character-based graphic programs
  • True GUI programs
  • Web programs
Text-mode programs were the earliest of programs, run on the earliest of hardware. Sometimes run on printing terminals (Teletypes or DECwriters), they had to present output in linear form -- the hardware operated linearly, one character after another. When we weren't investigating problems with the hardware, or struggling with the software, we dreamed about better displays. (We had seen them in the movies, after all.)

Character-based graphic programs used the capabilities of the "more advanced" hardware such as smart terminals and even the IBM PC. We could draw screens with entry fields -- still in character mode, mind you -- and use different colors to highlight things. The best-known programs from this era would be Wordstar, WordPerfect, Visicalc, and Lotus 1-2-3.

True GUI programs came about with IBM's OS/2, Digital Research's GEM (best known on the Atari ST), and Microsoft's Windows. These were the programs that we wanted! Individual windows that could be moved and resized, fine control of the graphics, and lots of colors! Of course, such programs were only possible with the hardware and software to support them. The GUI programs needed hefty processors and powerful languages for event-driven programming.

The web started life as a humble method of viewing (and linking) documents. It grew quickly, and web programming surpassed the simple task of displaying documents. It went on to give us brochure sites, shopping sites, and eventually e-mail and word processing.

But a funny thing happened on the way to the web. We kept looking back at GUI programs. We wanted web programs to behave like desktop PC programs.

Looking back was unusual. In the transition from text-mode programs to character-based graphics, we looked forward. A few programs, usually compilers and other low-output programs, stayed in text-mode, but everything else moved to character-based graphics.

In the transition from character-based graphics to GUI, we also looked forward. We knew that the GUI was a better form of the interface. No one (well, with the exception of perhaps a few ornery folks) wanted to stay with the character-based UI.

But with the transition from desktop applications and their GUI to the web and its user interface, there was quite a lot of looking back. People invested time and money in building web applications that looked and acted like GUI programs. Microsoft went to great lengths to enable developers to build apps that ran on the web just as they had run on the desktop.

The web UI never came into its own. And it never will.

The mobile/cloud era has arrived. Smartphones, tablets, cloud processing are all available to us. Lots of folks are looking at this new creature. And it seems that lots of people are asking themselves: "How can we build mobile/cloud apps that look and behave like GUI apps?"

I believe that this is the wrong question.

The GUI was a bigger, better incarnation of the character-based UI. Anything the character-based UI could do, the GUI could do -- and prettier. It was a nice, simple progression.

Improvements rarely follow nice simple progressions. Changes in technology are chaotic, with people thinking up all sorts of new ideas in all sorts of places. The web is not a bigger, better PC, and its user interface was not a bigger, better desktop GUI. Mobile/cloud computing is not a bigger, better web, and its interface is not a bigger, better web interface. The interface for mobile/cloud shares many aspects with the web UI, and some aspects with the desktop GUI, but it has its own advantages.

To be successful, identify the differences and leverage them in your organization.

Mobile/cloud needs a compelling application


There has been a lot of talk about cloud computing (or as I call it, mobile/cloud) but perhaps not so much in the way of understanding. While some people understand what mobile/cloud is, they don't understand how to use it. They don't know how to leverage it. And I think that this is a normal part of the adoption of mobile/cloud, or of any new technology.

Let's look back at personal computers, and how they were adopted.

When PCs first appeared, companies did not know what to make of them. Hobbyists and enthusiastic individuals had been tinkering with them for a few years, but companies -- that is, the bureaucratic entity of policies and procedures -- had no specific use for them. An economist might say that there was no demand for them.

Companies used PCs as replacements for typewriters, or as replacements for office word processing systems. They were minor upgrades to existing technologies.

Once business-folk saw Visicalc and Lotus 1-2-3, however, things changed. The spreadsheet enabled people to analyze data and make better decisions. (And without a request to the DP department!) Businesses now viewed PCs as a way to improve productivity. This increased demand, because what business doesn't want to improve productivity?

But it took that first "a-ha" moment, that first insight into the new technology's capabilities. Someone had to invent a compelling application, and then others could think "Oh, that is what they can do!" The compelling application shows off the capabilities of the new technology in terms that the business can understand.

With mobile/cloud technology, we are still in the "what can it do" stage. We have yet to meet the compelling application. Mobile/cloud technology has been profitable for the providers (such as Amazon.com) but not for the users (companies such as banks, pharmaceuticals, and manufacturers). Most 'average' companies (non-technology companies) are still looking at this mobile/cloud thing and asking themselves "how can we leverage it?".

It is a good question to ask. We will keep asking it until someone invents the compelling application.

I don't know the nature of the compelling application for mobile/cloud. It could be a better form of e-mail/calendar software. It might be analysis of internal operations. It could be a new type of customer relationship management system.

I don't know for certain that there will be a compelling application. If there is, then mobile/cloud will "take off", with demand for mobile/cloud apps and conversion of existing apps to mobile/cloud. If there is no compelling application, then mobile/cloud won't necessarily die, but will fade into the background (like virtualization is doing now).

I expect that there will be a compelling application, and that mobile/cloud will be popular. I expect our understanding of mobile/cloud to follow the path of previous new technologies: awareness, understanding, application to business, and finally acceptance as the norm.