Friday, August 31, 2012

Microsoft is serious about WinRT

The month of August taught us one thing: Microsoft is serious about WinRT and the new Win 8 UI.

I suspect that most Windows developers were secretly hoping that the Windows 8 UI (formerly known as "Metro") were a grand joke, a big bluff by Microsoft. But the release of Windows 8, complete with UI-makeover, has shown that Microsoft was not bluffing. Microsoft is serious about this Windows 8 thing.

The new Windows 8 UI is quite a departure from "good old Windows". It is a bigger change than the change from Windows 3 to Windows 95. Windows 8 introduces "tiles" (bigger and better app icons), swipes, taps, mouseless operation, and even keyboardless operation.

The changes in Windows 8 are not limited to the UI. Windows 8, in its "RT" flavor, boasts a new API, a smaller and more focused API that breaks many current programs. (Programs that use the "classic" Windows API are permitted to run under "Windows desktop" mode on full-blown Windows 8, but cannot run under the more limited Windows 8 RT environment.)

Worst of all, Windows 8 (in the new UI) eliminates the "Start" button. This change, I think, surpasses all others in terms of shock value. People will tolerate new APIs and new tiles, but they know and love their Start button.

But Microsoft is serious about these changes, and -- perhaps more shocking than anything Microsoft has done -- I agree with them.

Microsoft has to move into the tablet space. They have to move into mobile/cloud computing. The reason is simple: mobile/cloud is where the growth is.

The Windows platform (the classic Windows desktop platform) has become stagnant. Think about it: When was the last time that you purchased a new Windows application? I'm not talking about upgrades to Microsoft Office or Adobe Acrobat, but a purchase of a new application, one that you had not been using before. If you're like me, the answer is: a long time ago. I have been maintaining a Windows platform and set of applications, but not expanding it.

The Windows platform (the classic desktop platform) has achieved its potential, and has nowhere to grow. The web took away a lot of the growth of Windows applications (why buy or create a Windows-only app when I can buy or create a web app?) and the mobile/cloud world is taking away the rest of Windows desktop potential. (It's also taking away the rest of Mac OSX potential and Linux desktop potential. The web and mobile/cloud are equal-opportunity paradigm shifts.)

Microsoft recognizes this change, and they are adapting. With Windows 8, they have created a path forward for their developers and customers. This path is different from previous Windows upgrades, in that Windows 8 does not guarantee to run all previous applications. (At least the Windows 8 RT path does not -- it has the reduced API that restricts apps to a limited set of operations.)

Windows 8 RT is a big "reset" for the Microsoft development community. It introduces a new API and a new toolset (JavaScript and HTML5). It discards a number of older technologies (a big departure from Microsoft's previous policy of maintaining backwards compatibility). It forces developers to the new tools and API, and knocks lots of experienced developers down to the junior level. In effect, it sets all developers on the same "starting line" and starts a new race.

But the tablet and mobile/cloud worlds are the worlds of growth. Microsoft has to move there. They cannot ignore it, nor can they move there in gentle, easy steps. Apple is there today. Google is there today. Amazon.com is there today. Microsoft must move there today, and must force its developers there today.

I see this move as a good thing for Microsoft. It will cause a lot of change (and a lot of pain) but it keeps them competitive.

Tuesday, August 28, 2012

The deception of C++'s 'continue' and 'break'

Pick up any C++ reference book, visit any C++ web site, and you will see that the 'continue' and 'break' keywords are grouped with the loop constructs. In many ways it makes sense, since you can use these keywords with only those constructs.

But the more I think about 'continue' and 'break', the more I realize that they are not loop constructs. Yes, they are closely associated with 'while' and 'for' and 'case' statements, but they are not really loop constructs.

Instead, 'continue' and 'break' are variations on a different construct: the 'goto' keyword.

The 'continue' and 'break' statements in loops bypass blocks of code. 'continue' transfers control to the end of the loop body, allowing the next iteration to begin. 'break' transfers control past the end of the loop, forcing the loop to end (and allowing the code after the loop to execute). These are not loop operations but 'transfer of control' operations, or 'goto' operations.
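
To make the equivalence concrete, here is a minimal sketch (the loop and the numbers are my own invention, not taken from any particular project) with comments marking where 'continue' and 'break' actually send control:

    #include <cstdio>

    int main()
    {
        for (int i = 0; i < 10; ++i)
        {
            if (i % 2 == 0)
                continue;   // in effect: goto next_iteration;
            if (i > 7)
                break;      // in effect: goto after_loop;

            std::printf("%d\n", i);

            // next_iteration: end of the loop body; the increment and test run next
        }
        // after_loop: the first statement after the loop

        return 0;
    }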

Now, modern programmers have declared that 'goto' operations are evil and must never, ever be used. Therefore, 'continue' and 'break', as 'goto' in disguise, are evil and must never, ever be used.

(The 'break' keyword can be used in 'switch/case' statements, however. In that context, a 'goto' is exactly the construct that we want.)

Back to 'continue' and 'break'.

If 'continue' and 'break' are merely cloaked forms of 'goto', then we should strive to avoid their use. We should seek out the use of 'continue' and 'break' in loops and re-factor the code to remove them.
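
As an illustration (a made-up example, not code from a real system), a search loop that uses 'break' can be re-factored so that the loop condition carries the early exit, or replaced with a standard algorithm:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Before: the early exit is a 'break', a transfer of control.
    bool contains_negative_v1(const std::vector<int>& values)
    {
        bool found = false;
        for (std::size_t i = 0; i < values.size(); ++i)
        {
            if (values[i] < 0)
            {
                found = true;
                break;
            }
        }
        return found;
    }

    // After: the loop condition expresses the early exit; no 'break' needed.
    bool contains_negative_v2(const std::vector<int>& values)
    {
        std::size_t i = 0;
        while (i < values.size() && values[i] >= 0)
            ++i;
        return i < values.size();
    }

    // Or, with a C++11 compiler, let a standard algorithm state the intent directly.
    bool contains_negative_v3(const std::vector<int>& values)
    {
        return std::any_of(values.begin(), values.end(),
                           [](int v) { return v < 0; });
    }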

I will be looking at code in this light, and searching for the 'continue' and 'break' keywords. When working on systems, I will make their removal one of my metrics for the improvement of the code.

Sunday, August 26, 2012

Linux in the post-PC world

The advent of tablets and mobile computing devices has generated much discussion. The post-PC world offers convenience and reliability, and a stirring of the marketplace that could re-arrange the major players.

One topic that I have not seen is the viability of Linux. Can Linux survive in the post-PC world?

The PC world was defined by hardware. The "IBM PC standard" was set in 1981, with the introduction of the IBM PC.

The post-PC world is also defined by devices. It is a world in which the primary (and possibly only) devices we use (directly) are not PCs but tablets and smartphones (and possibly a few other devices).

What does this have to do with Linux?

Linux was -- and is -- a parasite in the PC world. It runs on PCs, and we can run it on PCs for two reasons. First, Linux is written to be compatible with the PC standard. Second, the PC standard is open and we can run anything on it. (That is, we can boot any operating system.)

The tablet world is different. Apple's iPads and Microsoft's Surface tablets are locked down: they run only approved software. An iPad will boot only iOS and a Microsoft Surface tablet will boot only a signed operating system. (It doesn't have to be Windows, but it does have to be signed with a specific key.) The lock-down is not limited to iPads and Surface tablets; Amazon.com Kindles and Barnes and Noble Nooks have the same restrictions.

This lock-down in the tablet world means that we are limited in our choice of operating systems. We cannot boot anything that we want; we can boot only the approved operating systems.

(I know that one can jail-break devices. One can put a "real" Linux on a Kindle or a Nook. iPads can be broken. I suspect that Surface tablets will be broken, too. But it takes extra effort, voids your warranty, and casts doubt on any future problem. (Is the problem caused by jail-breaking?) I suspect few people will jail-break their devices.)

Linux was able to thrive because it was easy to install. In the post-PC world, it will not be easy to install Linux.

I suspect that the future of Linux will lie in the server room. Servers are quite different from consumer PCs and the consumer-oriented tablets. Servers are attended by system administrators, and they expect (and want) fine-grained control over devices. Linux meets their needs. Consumers want devices that "just work", so they choose the easy-to-use devices and that creates market pressure for iPads and Surfaces. System administrators want control, and that creates market pressure for Linux.

Friday, August 24, 2012

How I fix old code

Over the years (and multiple projects) I have developed techniques for improving object-oriented code. My techniques work for me (and the code that has been presented to me). Here is what I do:

Start at the bottom. Not the base classes, but the bottom-most classes: the classes that are used by other parts of the code, and have no dependencies. These classes can stand alone.

Work your way up. After fixing the bottom classes, move up one level. Fix those classes. Repeat. Working up from the bottom is the only way I have found to be effective. One can have an idea of the final result, a vision of the finished product, but only by fixing the problems at the bottom can one achieve any meaningful results.

Identify class dependencies. To start at the bottom, one must know the class dependencies. Not the class hierarchy, but the dependencies between classes. (Which classes use which other classes at run-time.) I use some custom Perl scripts to parse code and create a list of dependencies. The scripts are not perfect but they give me a good-enough picture. The classes with no dependencies are the bottom classes. Often they are utility classes that perform low-level operations. They are the place to start.
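
A small, invented illustration of the distinction: the dependency I care about is "uses at run-time", not "derives from".

    #include <string>

    // A "bottom" class: it uses nothing beyond the standard library.
    class Account
    {
    public:
        explicit Account(const std::string& id) : id_(id) {}
        const std::string& Id() const { return id_; }
    private:
        std::string id_;
    };

    // A higher-level class: it does not inherit from Account,
    // but it depends on Account because it uses one at run-time.
    class Statement
    {
    public:
        explicit Statement(const Account& account) : account_(account) {}
        std::string Header() const { return "Statement for " + account_.Id(); }
    private:
        Account account_;
    };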

Create unit tests. Tests are your friends! Unit tests for the bottom (stand-alone) classes are generally easy to create and maintain. Tests for higher-level classes are a little trickier, but possible with immutable lower-level classes.
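
A sketch of what I mean, using plain asserts rather than any particular test framework (the Money class is a made-up stand-in for a low-level, stand-alone class):

    #include <cassert>

    // A made-up bottom class: an amount of money, held in cents.
    class Money
    {
    public:
        explicit Money(long cents) : cents_(cents) {}
        long Cents() const { return cents_; }
        Money Add(const Money& other) const { return Money(cents_ + other.cents_); }
    private:
        long cents_;
    };

    int main()
    {
        // Bottom classes are easy to test: construct, operate, check.
        Money a(150);
        Money b(250);
        Money c = a.Add(b);

        assert(a.Cents() == 150);   // the original is untouched
        assert(c.Cents() == 400);   // the result carries the sum

        return 0;
    }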

Make objects immutable. The Java String class (and the C# String class) showed us a new way of programming. I ignored it for a long time (too long, in retrospect). Immutable objects are unchangeable, and do not have the "classic" object-oriented functions for setting properties. Instead, they are fixed to their original value. When you want to change a property, the immutable object techniques dictate that instead of modifying an object you create a new object.

I start by making the lowest-level classes immutable, then work my way up the "chain" of class dependencies.
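
A minimal sketch of the pattern, with names of my own invention. The point is that "setting" a property yields a new object rather than modifying the existing one:

    #include <string>

    class Employee
    {
    public:
        Employee(const std::string& name, int grade)
            : name_(name), grade_(grade) {}

        const std::string& Name() const { return name_; }
        int Grade() const { return grade_; }

        // Instead of SetGrade(), build a new object with the changed value.
        Employee WithGrade(int newGrade) const
        {
            return Employee(name_, newGrade);
        }

    private:
        const std::string name_;   // const members make accidental mutation a compile error
        const int grade_;
    };

    // Usage: Employee promoted = original.WithGrade(12);  -- 'original' is unchanged.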

Make member variables private. Create accessor functions when necessary. I prefer to create "get" accessors only, but sometimes it is necessary to create "set" accessors. I find it easier to track and identify access with functions than with member variables, but that may be an effect of Visual Studio. Once the accessors are in place, I forget about the "get" accessors and look to remove the "set" accessors.

Create new constructors. Constructors are your friends. They take a set of data and build an object. Create the ones that make sense for your application.

Fix existing constructors to be complete. Sometimes people use constructors to partially construct objects, relying on the code to call "set" accessors later. Immutable object programming has none of that nonsense: when you construct an object you must provide everything. If you cannot provide everything, then you are not allowed to construct the object! No soup (or object) for you!
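
A before-and-after sketch (again, invented names): the first version allows a half-built object; the second cannot be constructed unless everything is supplied.

    #include <string>

    // Before: partial construction, "completed" later with set calls.
    class ReportV1
    {
    public:
        ReportV1() {}                                   // builds an incomplete object
        void SetTitle(const std::string& t)  { title_ = t; }
        void SetAuthor(const std::string& a) { author_ = a; }
    private:
        std::string title_;
        std::string author_;
    };

    // After: the only way to get a report is to supply everything up front.
    class ReportV2
    {
    public:
        ReportV2(const std::string& title, const std::string& author)
            : title_(title), author_(author) {}
        const std::string& Title() const  { return title_; }
        const std::string& Author() const { return author_; }
    private:
        std::string title_;
        std::string author_;
    };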

When possible, make member functions static. Static functions have no access to member variables, so one must pass in all "ingredient" variables. This makes it clear which variables must be defined to call the function. Not all member functions can be static; make the functions called by constructors static when possible. (Really, put the effort into this task.) Calls to static functions can be re-sequenced at will, since they cannot have side effects on the object.

Static functions can also be moved from one class to another at will -- or at least more easily than member functions. That is a useful property when re-arranging code.
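
A sketch of the constructor-calls-a-static-helper idea (invented class and numbers). Because the helper is static, every "ingredient" must be passed in, and the call cannot quietly depend on half-initialized member state:

    #include <numeric>
    #include <vector>

    class Invoice
    {
    public:
        Invoice(const std::vector<double>& lineAmounts, double taxRate)
            : total_(ComputeTotal(lineAmounts, taxRate))   // static helper, no side effects
        {
        }

        double Total() const { return total_; }

    private:
        // Static: no access to members, so all inputs are explicit parameters.
        static double ComputeTotal(const std::vector<double>& lineAmounts, double taxRate)
        {
            double subtotal = std::accumulate(lineAmounts.begin(), lineAmounts.end(), 0.0);
            return subtotal * (1.0 + taxRate);
        }

        double total_;
    };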

Reduce class size. Someone (I don't remember where) claimed that the optimum class size was 70 lines of code. I tend to agree. Bottom classes can easily be expressed in 70 lines. (If not, they are probably composites of multiple elementary classes.) Higher-level classes can often be expressed in 70 lines or less, sometimes a little more (but never more than 150 lines).

Reducing class size usually means increasing the number of classes. Your code size may shrink somewhat (my experience shows a reduction of 40 to 60 percent) but it does not reduce to zero. Smaller classes often mean more classes. I find that a system with more, smaller classes is easier to understand than one with fewer, large classes.

Name your classes well. Naming is one of the great challenges of programming. Pick names carefully, and change names when it makes sense. (If your version control system resists changes to class names, get a new version control system. It is the servant, not you!)

Talk with other developers. Discuss changes with other developers. Good developers can provide useful feedback and ideas. (Poor developers will waste your time, though.)

Discuss code with non-developers. Our goal is to create code that can be read by non-developers who are experts in the subject matter. We want them to read our code, absorb it, and provide feedback. We want them to say "yes, that seems right" (or even better, "oh, there is a problem here with this calculation"). To achieve that level of understanding, we need to strip away all of the programming overhead: temporary variables, memory allocation, and sequence/iteration gunk. With immutable object programming, meaningful names, and modern constructs (in C++, that means BOOST), we can create high-level routines that are readable by non-programmers.

(Note that we are not asking the non-programmers to write code, merely to read it. That is enough.)
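
A small example of the level of readability I am after (the domain and the names are invented). The aim is that a subject-matter expert can follow the top-level rule without knowing any C++:

    #include <algorithm>
    #include <vector>

    // Hypothetical domain type, shown only so the high-level routine compiles.
    struct Order { double amount; bool approved; };

    bool IsLargeOrder(const Order& order) { return order.amount > 10000.0; }
    bool IsUnapproved(const Order& order) { return !order.approved; }

    // The high-level rule reads almost like the business statement:
    // "an order needs review if it is large and not yet approved."
    bool NeedsReview(const Order& order)
    {
        return IsLargeOrder(order) && IsUnapproved(order);
    }

    bool AnyOrderNeedsReview(const std::vector<Order>& orders)
    {
        return std::any_of(orders.begin(), orders.end(), NeedsReview);
    }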

These techniques work for me (and the folks on my projects). Your mileage may vary.

Wednesday, August 22, 2012

Microsoft changes its direction

Microsoft recently announced a new version of its Office suite (Office 2013), and included support for the ODF format. This is big news.

The decision to support ODF does not mean that the open source fanboys have "won".

As I see it, the decision to support ODF means that Microsoft has changed its strategy.

Microsoft became dominant in Windows applications, in part due to the proprietary formats of Microsoft Office and the network effect: everyone wanted Microsoft Office (and nothing else) because everyone that they knew (and with whom they exchanged documents) used Microsoft Office. The proprietary format ensured that one used the true Microsoft Office and not a clone or compatible suite.

Microsoft used that network effect to drive people to Windows (providing a Mac version of Office that was close but not quite the same as the Windows version). Their strategy was to sell licenses for Microsoft Windows, Microsoft Office, Microsoft Active Directory, Microsoft Exchange, Microsoft SQL Server, and other Microsoft products, all interlocking and using proprietary formats for storage.

And that strategy worked for two decades, from 1990 to 2010.

Several lawsuits and injunctions forced Microsoft to open their formats to external players. Once they did, other office suites gained the ability to read and write files for Office.

With Microsoft including the ODF formats in Office, they are no longer relying on proprietary file formats. Which means that they have some other strategy in mind.

That new strategy remains to be seen. I suspect that it will include their Surface tablets and Windows smartphones. I expect cloud computing (in the form of Windows Azure) to be part of the strategy, too.

The model of selling software on shiny plastic discs has come to an end. With that change comes the end of the desktop model of computing, and the dawn of the tablet model of computing.

Sunday, August 19, 2012

Windows 8 is like Y2K, sort of

When an author compares an event to Y2K, the reader is wise to respond with some degree of skepticism. The Y2K problem was large and affected multiple platforms across all industries. The threat of mobile/cloud computing (if it can even be considered a threat) must be large and widespread to stand comparison with Y2K.

I will say up front that the mobile/cloud platform is not a threat. If anything, it is an expansion of technical options for systems, a liberalization of solution sets.

Nor does the mobile/cloud platform have a specific implementation date. With Y2K, we had a very hard deadline for changes. (Deadlines varied across systems, with some earlier than others. For example, bank systems that calculated thirty-year mortgages were corrected in 1970.)

But the change from traditional web architectures to mobile/cloud is significant, and the transition from desktop applications to mobile/cloud is greater. The change from desktop to mobile/cloud requires nothing less than a complete re-build of the application: new UI, new data storage, new system architecture.

And it is these desktop applications (which invariably run under Microsoft Windows) that have an impending crisis. These desktop applications run on "classic" Windows, the Windows of Win32 and MFC and even .NET. These desktop applications have user interfaces that require keyboards and mice. These desktop applications assume constant and fast access to network resources.

One may wonder how these desktop applications, which may be considered "old-fashioned" and behind the current tech, can be a problem. After all, as long as we have Windows, we can run them, right?

Well, not quite. As long as we have Windows with Win32 and MFC and .NET (and ODBC and COM and ADO) then we can run them. But there is nothing that says Microsoft will continue to include these packages in Windows. In fact, the new WinRT offering does not include them.

Windows 8, on a desktop PC, runs in two modes: Windows 8 mode and "classic" mode. The former runs apps built for the mobile/cloud platform. The latter is much like the old DOS compatibility box, included in Windows to allow us to run old, command-line programs. The "classic" Windows mode is present in Windows 8 as a measure to allow us (the customers and users of Windows) to transition our applications to the new UI.

Microsoft will continue to release new versions of Windows. I am reasonably sure that Microsoft is working on "Windows 9" even with the roll-out of Windows 8 under way. New versions of Windows will come out with new features.

At some point, the "classic Windows compatibility box" will go away. Microsoft may remove it in stages, perhaps making it a plug-in that can be added to the base Windows package. Or perhaps it will be available in only the premium versions of Windows. It is possible that, like the DOS command prompt that yet remains in Windows, the "classic Windows compatibility box" will remain in Windows -- but I doubt it. Microsoft likes the new revenue model of mobile/cloud.

And this is how I see mobile/cloud as a Y2K-like challenge. When the "classic Windows compatibility box" goes away, all of the old-style applications must go away too. You will have to either migrate to the new Windows 8 UI (and the architecture that such a change entails) or you will have to go without.

Web applications are less threatened by mobile/cloud. They run in browsers; the threat to them will be the loss of the browser. That is another topic.

If I were running a company (large or small) I would plan to move to the new world of mobile/cloud. I would start by inventorying all of my current desktop applications and forming plans to move them to mobile/cloud. That process is also another topic.

Comparing mobile/cloud to Y2K is perhaps a bit alarmist. Yet action must be taken, either now or later. My advice: start planning now.

Wednesday, August 15, 2012

Cloud productivity is not always from the cloud

Lots of people proclaim the performance advantages of cloud computing. These folks, I think, are mostly purveyors of cloud computing services. Which does not mean that cloud computing has no advantages or offers no improvements in performance. But it also does not mean that all improvements from migrating to the cloud are derived from the cloud.

Yes, cloud computing can reduce administration costs, mostly by standardizing the instances of hosts to a simple set of virtualized machines.

And yes, cloud computing can reduce the infrastructure costs of servers, since the cloud provider leverages economies of scale (and virtualized servers).

But a fair amount of the performance improvement of cloud computing comes from the re-architecting of applications. Changing one's applications from monolithic, one-program-does-it-all designs to smaller collaborative apps working with common data stores and message queues has an effect on performance. Shifting from object-oriented programming to the immutable-object programming needed for cloud computing also improves performance.

Keep in mind that these architectural changes can be done with your current infrastructure -- you don't need cloud to make them.

You can re-architect your applications (no small task, I will admit) and use them in your current environment (adding data stores and message queues) and get those same improvements in performance. Not all of the improvements from moving to a cloud infrastructure, but the portion that arises from the collaborative architecture.

And such a move would prepare your applications to move to a cloud infrastructure.

Wednesday, August 8, 2012

$1000 per hour

Let's imagine that you are a manager of a development team. You hire (and fire) members of the team, set goals, review performance, and negotiate deliverables with your fellow managers.

Now let's imagine that the cost of developers is significantly higher than it is today. Instead of paying the $50,000 to $120,000 per year, you must pay $1000 per hour, or $2,000,000 per year. (That's two million dollars per year.) Let's also imagine that you cannot reduce this cost through outsourcing or internships.

What would you do?

Here is what I would do:


  • I would pick the best of my developers and fire the others. A smaller team of top-notch developers is more productive than a large team of mediocre developers.
  • I would provide my developers with tools and procedures to let them be the most productive. I would weigh the cost of development tools against the time that they would save.
  • I would use automated testing as much as possible, to reduce the time developers spend on manual testing. If possible, I would automate all testing.
  • I would provide books, web resources, online training, and conferences to the developers, to give them the best information and techniques on programming.


In other words, I would do everything in my power to make them productive. When their time costs money, saving their time saves me money. Sensible, right?

But the same logic applies in the current situation. Developers cost me money. Saving their time saves me money.

So why aren't you doing what you can to save them time?

Sunday, August 5, 2012

The evolution of the UI

Since the beginning of the personal computing era, we have seen different types of user interfaces. These interfaces were defined by technology. The mobile/cloud age brings us another type of user interface.

The user interfaces were:
  • Text-mode programs
  • Character-based graphic programs
  • True GUI programs
  • Web programs
Text-mode programs were the earliest of programs, run on the earliest of hardware. Sometimes run on printing terminals (Teletypes or DECwriters), they had to present output in linear form -- the hardware operated linearly, one character after another. When we weren't investigating problems with hardware, or struggling with software, we dreamed about better displays. (We had seen them in the movies, after all.)

Character-based graphic programs used the capabilities of the "more advanced" hardware such as smart terminals and even the IBM PC. We could draw screens with entry fields -- still in character mode, mind you -- and use different colors to highlight things. The best-known programs from this era would be Wordstar, WordPerfect, Visicalc, and Lotus 1-2-3.

True GUI programs came about with IBM's OS/2, Digital Research's GEM (best known on the Atari ST), and Microsoft's Windows. These were the programs that we wanted! Individual windows that could be moved and resized, fine control of the graphics, and lots of colors! Of course, such programs were only possible with the hardware and software to support them. The GUI programs needed hefty processors and powerful languages for event-driven programming.

The web started in life as a humble method of viewing (and linking) documents. It grew quickly, and web programming surpassed the simple task of documents. It went on to give us applications such as brochures, shopping sites, and eventually e-mail and word processing.

But a funny thing happened on the way to the web. We kept looking back at GUI programs. We wanted web programs to behave like desktop PC programs.

Looking back was unusual. In the transition from text-mode programs to character-based graphics, we looked forward. A few programs, usually compilers and other low-output programs, stayed in text-mode, but everything else moved to character-based graphics.

In the transition from character-based graphics to GUI, we also looked forward. We knew that the GUI was a better form of the interface. No one (well, with the exception of perhaps a few ornery folks) wanted to stay with the character-based UI.

But with the transition from desktop applications and their GUI to the web and its user interface, there was quite a lot of looking back. People invested time and money in building web applications that looked and acted like GUI programs. Microsoft went to great lengths to enable developers to build apps that ran on the web just as they had run on the desktop.

The web UI never came into its own. And it never will.

The mobile/cloud era has arrived. Smartphones, tablets, and cloud processing are all available to us. Lots of folks are looking at this new creature. And it seems that lots of people are asking themselves: "How can we build mobile/cloud apps that look and behave like GUI apps?"

I believe that this is the wrong question.

The GUI was a bigger, better incarnation of the character-based UI. Anything the character-based UI could do, the GUI could do, and prettier. It was a nice, simple progression.

Improvements rarely follow nice simple progressions. Changes in technology are chaotic, with people thinking all sorts of new ideas in all sorts of places. The web is not a bigger, better PC, and its user interface was not a bigger, better desktop GUI. Mobile/cloud computing is not a bigger, better web, and its interface is not a bigger, better web interface. The interface for mobile/cloud shares many aspects with the web UI, and some aspects with the desktop GUI, but it has its own unique advantages.

To be successful, identify the differences and leverage them in your organization.

Mobile/cloud needs a compelling application


There has been a lot of talk about cloud computing (or as I call it, mobile/cloud) but perhaps not so much in the way of understanding. While some people understand what mobile/cloud is, they don't understand how to use it. They don't know how to leverage it. And I think that this confusion is a normal part of the adoption of mobile/cloud, or of any new technology.

Let's look back at personal computers, and how they were adopted.

When PCs first appeared, companies did not know what to make of them. Hobbyists and enthusiastic individuals had been tinkering with them for a few years, but companies -- that is, the bureaucratic entity of policies and procedures -- had no specific use for them. An economist might say that there was no demand for them.

Companies used PCs as replacements for typewriters, or as replacements for office word processing systems. They were minor upgrades to existing technologies.

Once business-folk saw Visicalc and Lotus 1-2-3, however, things changed. The spreadsheet enabled people to analyze data and make better decisions. (And without a request to the DP department!) Businesses now viewed PCs as a way to improve productivity. This increased demand, because what business doesn't want to improve productivity?

But it took that first "a-ha" moment, that first insight into the new technology's capabilities. Someone had to invent a compelling application, and then others could think "Oh, that is what they can do!" The compelling application shows off the capabilities of the new technology in terms that the business can understand.

With mobile/cloud technology, we are still in the "what can it do" stage. We have yet to meet the compelling application. Mobile/cloud technology has been profitable for the providers (such as Amazon.com) but not for the users (companies such as banks, pharmaceuticals, and manufacturers). Most 'average' companies (non-technology companies) are still looking at this mobile/cloud thing and asking themselves "how can we leverage it?".

It is a good question to ask. We will keep asking it until someone invents the compelling application.

I don't know the nature of the compelling application for mobile/cloud. It could be a better form of e-mail/calendar software. It might be analysis of internal operations. It could be a new type of customer relationship management system.

I don't know for certain that there will be a compelling application. If there is, then mobile/cloud will "take off", with demand for mobile/cloud apps and conversion of existing apps to mobile/cloud. If there is no compelling application, then mobile/cloud won't necessarily die, but will fade into the background (like virtualization is doing now).

I expect that there will be a compelling application, and that mobile/cloud will be popular. I expect our understanding of mobile/cloud to follow the path of previous new technologies: awareness, understanding, application to business, and finally acceptance as the norm.