Sunday, January 29, 2012

Productivity

Today, I wrote a small program to parse web pages and create a graph of the relationships between them. The task got me thinking about the capabilities of programming languages.

Different programming languages offer different levels of "power". (By power, I mean the amount of work that can be done by a given amount of code. Powerful languages require less code than weak languages to perform the same task.)

To some extent, power can be masked by the job. Some languages are weak but well-suited to some tasks. A jet airliner is a powerful craft, yet sometimes a hang glider is needed. Let's stick to mainstream projects, which operate in the middle range. We can avoid the hang glider tasks, and we can avoid the tasks that need supersonic stealth flight.

The languages for mainstream projects have, over time, improved. From the Elder Days of FORTRAN and COBOL, through the object revolution of C++, to the network era of Java and C#, later languages (and their IDEs and libraries) have been more capable than their predecessors.

Today I used Ruby and its libraries to parse the HTML of web pages, and GraphViz to render the graph of relationships. I was able to build my "system" in a few hours. Most of my work was "standing on the shoulders of giants", since I could leverage powerful tools. Building the same system in the early 1990s would have required quite a bit more work (Ruby had not yet been invented, although GraphViz was around).
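
The idea is small enough to sketch. Here is a rough Python equivalent of the approach (my actual program used Ruby, and the seed URL below is a placeholder): fetch a page, collect its links, and write a GraphViz DOT file.

    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                for name, value in attrs:
                    if name == 'href' and value:
                        self.links.append(value)

    def links_of(url):
        parser = LinkParser()
        parser.feed(urlopen(url).read().decode('utf-8', errors='replace'))
        return parser.links

    seed = 'http://example.com/'   # placeholder starting page
    edges = [(seed, target) for target in links_of(seed)]

    with open('pages.dot', 'w') as dot:
        dot.write('digraph pages {\n')
        for source, target in edges:
            dot.write('  "%s" -> "%s";\n' % (source, target))
        dot.write('}\n')
    # render with GraphViz: dot -Tpng pages.dot -o pages.png

A real version would follow links recursively and normalize relative URLs, but even this toy shows how much of the heavy lifting the libraries do.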

Languages and tools of today are more powerful than languages and tools of a decade ago. No surprise there. C++ is more powerful than C, Java more powerful than C++, C# slightly more powerful than Java (once you include the IDE and libraries), and Python and Ruby are more powerful than C#. The evolutionary ladder of languages moves us up the scale of power.

But let us think about legacy projects. Legacy projects often stay with their original technology. A project started in C++ usually stays in C++; converting to a different language often means a complete re-write of the entire code base. (And the re-write must frequently be done in toto, converting all code at once.)

Project managers are right to evaluate the cost of converting to a different language. It is a significant effort, and the task can divert resources from enhancements and defect fixes.

But project managers must balance the costs of conversion against the costs of not converting.

A naive approach says that the cost of not converting is zero -- the project continues running and new versions are released.

A more savvy understanding of non-conversion includes the opportunity cost, the cost of lower productivity over time.

The decision of conversion to a new language is similar to the decision to purchase a new car. The old car runs, but needs maintenance and gets poor gas mileage. A new car will cost a lot of money up front, and will have lower maintenance and operating costs (and will probably be more reliable). When do you buy the new car?
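
A back-of-the-envelope calculation makes the balance concrete. The figures below are invented for illustration:

    conversion_cost = 500000.0   # one-time cost of the re-write (invented figure)
    annual_savings = 150000.0    # yearly productivity gain after conversion (invented)
    break_even_years = conversion_cost / annual_savings
    print("conversion pays for itself in %.1f years" % break_even_years)   # about 3.3

If the system will live well past the break-even point, conversion deserves a hard look; if not, keep driving the old car.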

An additional musing:

A large project using older technology can be viewed as inefficient, since it requires more developers to perform the same work than a similar system built with a more powerful language. Therefore, the decision to defer conversion to a new language is a decision to increase the development team (or to forego the opportunity to reduce the development team).

Sunday, January 22, 2012

Best if viewed in...

Some web sites display the phrase "best if viewed in Internet Explorer 6 or higher".

In the past, this phrase would anger me. I could assume that the web site would work only with Internet Explorer, and my browser of choice (either Opera or Firefox) would fail in some way.

Now the phrase amuses me.

I ask myself, in today's world, why would someone put "works best with IE6" on their web site? What are they hoping to accomplish? What message are they sending?

Internet Explorer version 6 is old. Even Microsoft recommends a later version of Internet Explorer.

Today's browser "market" consists of IE, Firefox, Chrome, and Safari at a bare minimum, and Opera and Lynx for the truly browser-aware. Building a web site for only one browser is unthinkable.

So the message is from an earlier age.

I also suspect that the message was put on sites that performed transactions of some sort, sites that were more than brochure-ware. These sites have users who are attempting to perform some task, whether shopping or submitting time cards or recording information. My guess is that the message was a disclaimer, designed first to reduce the number of users with other browsers, and second to provide an easy "out" for the web site's help desk when people using the "wrong" browser complained. (Anyone complaining would be pointed to the notice and told to use IE6. Support call closed, user issue, no problem!)

Today, the notion of turning away customers because they have a different browser is ... unusual. It is a rare company that can decline paying customers.

I read the "best if viewed in" phrase now as an indicator, a measure of a web site's age and maintenance. Only web sites designed and built in the late 1990s (and possibly early 2000s) would have this message. Therefore, a site that still bears this message was built in that era and has had no (major) maintenance. Any maintenance that has been performed (if any) has been specific and limited to the task at hand.

In other words, the site is not living on "internet time". Its owner is not updating the site, modifying it to meet new business conditions or leverage new technologies. The web site is... old.

I use the phrase as an indicator of a company, a measure of how "with it" they are.

Saturday, January 21, 2012

Premature optimization?

It has been said that premature optimization is the root of all evil. (I read this as a loose translation of "optimizing too early can cause you grief", not as "it makes you a bad person".)

We often optimize our programs and systems. We admire system designers who can build systems that work smoothly and with minimal resources -- that is, systems that are optimized.

But what are optimizations? Most of the time, they are not pure optimizations (which use the smallest amount of system resources) but trade-offs (which use one resource in lieu of another). A simple trade-off optimization is caching: using memory (a cheap resource) to avoid database lookups (an expensive operation).
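
A minimal sketch of the caching trade-off, with a dictionary standing in for the expensive database:

    import functools

    DATABASE = {42: 'Alice', 43: 'Bob'}      # stand-in for a real database

    @functools.lru_cache(maxsize=1024)       # memory spent on cached results...
    def fetch_user(user_id):
        # ...saves repeating this (imagine a slow query here)
        return DATABASE.get(user_id)

    fetch_user(42)    # performs the "lookup"
    fetch_user(42)    # answered from memory; no lookup at all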

Optimization, or the selection of a specific set of trade-offs, is a good thing as long as the underlying assumptions hold.

Let us consider a long-standing tool in the developer world: version control systems (VCSs). We have used these tools for forty years, starting with SCCS and moving through various generations (RCS, CVS, PVCS, SourceSafe, Subversion, to name a few).

Many version control systems store revisions to files not as whole files but as 'deltas', the changes from one version to the next. This decision is a trade-off: spending computation (generating the change list) to reduce disk usage. (The list of differences is often smaller than the revised file.)
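
To make the trade-off concrete, here is a minimal sketch of delta storage, using Python's difflib (real VCS delta formats are more sophisticated):

    import difflib

    def make_delta(old, new):
        # record which slices of the old version to copy, plus any new text
        delta = []
        matcher = difflib.SequenceMatcher(None, old, new)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag == 'equal':
                delta.append(('copy', i1, i2))      # reuse old[i1:i2]
            else:
                delta.append(('text', new[j1:j2]))  # store only the new text
        return delta

    def apply_delta(old, delta):
        parts = []
        for op in delta:
            parts.append(old[op[1]:op[2]] if op[0] == 'copy' else op[1])
        return ''.join(parts)

    old = "line one\nline two\nline three\n"
    new = "line one\nline 2\nline three\nline four\n"
    delta = make_delta(old, new)
    assert apply_delta(old, delta) == new   # the delta, not the file, gets stored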

This trade-off relied on several assumptions:

  • The files stored in the VCS would be text files
  • The changes from one version to another would be a small fraction of the file
  • Disk space was expensive (compared to the user's time)

It turns out that, some forty years later, these assumptions do not always hold. We are using version control systems for more than source code, including files that are not text. (Non-text files are handled poorly by the 'delta' calculation logic, and most VCSs simply give up and store the entire file.) User time is expensive (and getting more so), and disk space is cheap (and also getting more so).

The trade-offs made by version control systems are now working against us. We grumble while our systems generate deltas. We care little that the Microsoft Word document files are stored in their entirety.

The latest version control systems ('git' is an example) do away with the notion of deltas. They store the entire file, with various techniques to compress the file and to de-duplicate data. (We still care about disk usage.)
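
A minimal sketch of that style of storage -- whole files, compressed, keyed by hash so duplicates de-duplicate themselves (git's actual object store is more elaborate):

    import hashlib, zlib

    store = {}

    def put(content):
        # key each version by its hash; identical content is stored only once
        key = hashlib.sha1(content).hexdigest()
        if key not in store:
            store[key] = zlib.compress(content)   # whole file, compressed
        return key

    def get(key):
        return zlib.decompress(store[key])

    v1 = b"the entire document, stored whole\n"
    k1 = put(v1)
    k2 = put(v1)                    # a duplicate adds nothing to the store
    assert k1 == k2 and get(k1) == v1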

The notion of storing revisions as deltas was an optimization. It is a notion that we are now moving away from. Was it a premature optimization? Was it a trade-off that we made in error? Is it an example of "the root of all evil"?

I think that the answer is no. At the time, with the technology that we had, using deltas was a good trade-off. It reduced our use of resources, and one can justify the claim of optimization. And most importantly, it worked for a long period of time.

An optimization becomes "bad" when the underlying assumptions fail. At that point, the system is "upside down", or de-optimized. (Some might say "pessimized".) When that happens, we want to re-design the system to use a better technique (and thus reduce our use of resources). The cost of that change is part of the equation, and must be tallied. A long-running optimization with a low cost of change is good; a short-lived optimization (especially one with a high 'fix' cost at the end) is bad.

Optimizations are like leased cars. You can get by for a period of time with lower payments, but in the end you must turn in the car (or purchase it). Knowing the length of the lease and the tail-end costs is important in your decision. Optimizing without knowing the costs, in my mind, is the root of all evil.

Sunday, January 8, 2012

Microsoft discovers the need for an ecosystem?

I believe that we have seen a significant change in Microsoft's philosophy in 2011, one that will lead to a number of changes in the software market.

Allow me to point to two events in the past year:

  • The decision to support HTML5 and JavaScript as first-level development tools. This is a big change from the C#-centric world of .NET.
  • The (somewhat quieter) decision to bundle InstallShield LE with Visual Studio, and drop the Microsoft-built install packager. This is also a big change in the "we supply everything" strategy.

These two changes mark a significant shift in Microsoft's strategy. For decades (since the introduction of Windows), Microsoft has had a strategy of being the sole source for all Windows-based software. Microsoft was determined to be the dominant supplier in every market, from operating system to office applications, from development applications to business applications.

This "provide everything" strategy has been successful. If you look at a random PC running Windows today, I expect that you will find that it runs Microsoft applications, with possibly a few home-grown applications and possible some applications from Adobe.

Here's a history of the big products in the Windows world:


  • Microsoft made Word the best -- or at least the most popular -- word processor, beating WordStar, WordPerfect, and every other competitor.
  • Microsoft made Excel the best -- or the most popular -- spreadsheet, winning the battle with Lotus 1-2-3 and defeating smaller competitors like Borland's Quattro Pro.
  • Microsoft's PowerPoint is the standard presentation software for Windows. There are no competitors to speak of.
  • Microsoft's Access (and later, SQL Server) pushed aside all other database engines: dBase, Paradox, Reflex, R:base, dbVista, and others. A few small competitors (like Faircom) exist in niche markets. IBM's DB2 and Oracle's products still exist, but are less "Windows products" and more "Windows implementations" of multi-platform products.
  • Microsoft made Visual Studio the IDE of choice, driving Borland out of the market.
  • Microsoft purchased the SourceSafe product, integrated it with Visual Studio, and made it the dominant version control system for Windows. (Microsoft recently replaced the old SourceSafe product with Team Foundation Server.) Other players exist, but only as a small portion of the market.
  • Microsoft overpowered Netscape, which re-incarnated itself as the open-source Mozilla project. Browsers are conspicuous in their plurality in Windows; no other major application has this status.
  • Microsoft absorbed network functions into Windows. (Remember Novell Netware?)
  • Microsoft absorbed virus-checking into Windows.
  • Microsoft built media capabilities into Windows, eliminating third-party music players and video players.
  • Microsoft developed Silverlight to compete with Adobe Flash, and has rolled out several successful, capable versions. I suspect it would have displaced Adobe, had it not been for HTML5 trumping both Flash and Silverlight.

I'm not charging Microsoft with abuse of monopoly power, nor with using internal technical information to develop superior products. (Others have raised such issues.) I am looking at the results of Microsoft's actions, regardless of their methods. And the results are these: the Microsoft ecosystem is dying.

Compared to the ecosystem for the Apple iPhone/iPad market, the ecosystem for Microsoft is small. Lots of people, from individual developers to large companies, are developing apps for iOS. In contrast, few folks are building applications for Windows.

Comparing the Windows ecosystem of today against the Windows ecosystem of two decades ago shows the same pattern: fewer developers today.


This is no surprise. The only winning strategy in the commercial market is to become big. One cannot succeed by staying a small company. (Yes, I recognize that there are lots of small companies writing software for Windows. But are they successful? I humbly submit that they have plans to become larger, and are simply waiting for the "right market" or the "right opportunity".)


The problem with the Windows ecosystem is that Microsoft kills any company that becomes too large. (How large is "too large"? That is defined by Microsoft.) How can one even consider Windows as a long-term environment for a product? You either stay small, or become large and get crushed by the Microsoft empire. Meanwhile, there are opportunities in the iOS and Android markets. Developers have noticed the possibilities.

And I think Microsoft has realized this.

For the past few years, Microsoft has been claiming that they have a large ecosystem; the claim is meant to impress (or soothe) corporate buyers. But looking at the products on the market and the attendance at Microsoft conferences and fairs (and looking closely at the demographics, not just the numbers), one can see that Microsoft is not winning the hearts of new developers.

I think that the ecosystem has been changing quietly over the past decade (since the introduction of Linux and the original iMac computers). Developers have been moving to the non-Microsoft platforms in response to the expense, the "buy one Microsoft thing and you need another Microsoft thing" dependencies of products, and the threat of competition from Microsoft.

After a decade of quiet changes, the difference is significant. Microsoft recognizes it, and realizes that they need to change.

I expect that Microsoft will change from a "we supply everything" shop to one focused on items of strategic importance. Those items will be money-makers and important system components. Here's a plan for Microsoft:
  • Keep Windows, but make significant revisions. Windows is necessary for the Microsoft world. The brand is valuable. Lots of customers are committed to it and are not willing to move to other platforms. But large customers want better security and easier administration, and small customers want lower expenses and easier administration.
  • Keep Active Directory. It competes with LDAP and is easier to administer.
  • Keep Exchange for large customers. Microsoft must offer a cloud-based e-mail/calendar solution for smaller customers.
  • Keep parts of Microsoft Office. Word and Excel can compete with the Open Office and Libre Office products. Drop PowerPoint. Keep Visio.
  • Keep Visual Studio and C#. Drop Visual Basic. Increase support for F#.
  • Drop Internet Explorer. (It offers no strategic value, and other browsers do not harm Microsoft's web offerings.)
  • Drop IIS. (It offers no strategic value.)
  • Enhance SQL Server and Access (the front-end GUI) to support data in the cloud as well as local data. Provide data in the cloud so that customers need only Access on their hardware.
  • Develop the Microsoft App Store (or whatever they call it) and allow others to sell applications.
  • Develop an update system that updates all apps, not just the Microsoft-supplied programs.
The decision to drop products like Visual Basic, IE, and IIS may strike some as foolish. It may be my personal bias that dictates the decision for VB, but IE and IIS offer no revenue to Microsoft. The need for IE is long gone; the winning strategy is to have superior web applications, not control of the browser. Microsoft would be better off focusing its efforts on superior web applications.

One risk of dropping these products is that once customers convert to other products they may see value in non-Microsoft products. By pushing customers to non-Microsoft products, Microsoft legitimizes non-Microsoft solutions. That is a risk that Microsoft must counter by providing superior value in the products it retains in its offerings.

Despite the risks, I think that this is a good direction for Microsoft. By letting non-Microsoft products thrive, Microsoft can restore developers' faith in the Microsoft ecosystem and encourage them to consider it for new projects. I see it as the best way for Microsoft to succeed.

Saturday, January 7, 2012

Predictions for 2012

Happy new year!

The turning of the year provides a time to pause, look back, and look ahead. Looking ahead can be fun, since we can make predictions.

Here are my predictions for computing in the coming year:

With the rise of mobile apps, we will see changes in project requirements and in the desires of candidates.

The best talent will work on mobile apps. The best talent will -- as always -- work on the "cool new stuff". The "cool new stuff" will be mobile apps. The C#/.NET and Java applications will be considered "that old stuff". Look for the bright, creative programmers and designers to flock to companies building mobile apps. Companies maintaining legacy applications will have to hire the less enthusiastic workers.

Less funding for desktop applications. Desktop applications will be demoted to "legacy" status. Expect a reduced emphasis on their staffing. These projects will be viewed as less important to the organization, and will see less training, less tolerance for "Fast Company"-style project teams, and lower compensation. Desktop projects will be the standard, routine, bureaucratic (and boring) projects of classic legacy shops. The C# programmers will be sitting next to, eating lunch with, and reminiscing with, the COBOL programmers.

More interest in system architects. Mobile applications are a combination of front-end apps (the iPhone and iPad apps) and back-end systems that store and supply data. Applications like Facebook and Twitter work only because the front-end app can call upon the back-end systems to obtain data (updates submitted by other users). Successful applications will need people who can visualize, describe, and lead the team in building mobile applications.

More interest in generalists. Companies will look to bring on people skilled in multiple areas (coding, testing, and user interfaces). They will be less interested in specialists who know a single area -- with a few exceptions for the "hot new technologies".

Continued fracturing of the tech world. Amazon.com, Apple, and Google will continue to build their walled gardens of devices, apps, and media. Music and books available from Amazon.com will not be usable in the Apple world (although available on the iPod and iPad in the Amazon.com Kindle app). Music and books from Apple will not be available on Amazon.com Kindles and Google devices. Consumers will continue to accept this model. (Although, as with 33 RPM LPs and 45 RPM singles, consumers will eventually want their music and books on multiple devices. But that is a year or two away.)

Cloud computing will be big, popular, and confused. Different cloud suppliers offer different types of cloud services. Amazon.com's EC2 offering is a set of virtual machines that allow one to "build up" from there, installing operating systems and applications. Microsoft's Azure is a set of virtual machines with Windows/.NET, and one may build applications starting at a higher level than Amazon's offering. Salesforce.com offers a cloud platform that lets one build applications at an even higher level. Lots of folks will want cloud computing, and vendors will supply it -- in the form that the vendor offers. When people from different "clouds" meet, they will be confused that the "other guy's cloud" is different from theirs.

Virtualization will fade into the background. It will be useful in large shops, and it will not disappear. It is necessary for cloud computing. But it will not be the big star. Instead, it will be a quiet, necessary technology, joining the ranks of power management, DASD management, telecommunications, and network administration. Companies will need smart, capable people to make it work, but they will be reluctant to pay for them.

Telework will exist, quietly. I expect that the phrase "telework" will be reserved for traditional "everyone works in the office" companies that allow some employees to work in remote locations. For them, the default will be "work in the office" and the exception will be "telework". In contrast, small companies (especially start-ups) will leverage faster networks, chat and videoconferencing, mobile devices, and social networks. Their standard mode of operation will be "work from wherever", but they won't think of themselves as offering "telework". From their point of view, it will simply be "how we do business", and they won't need a word to distinguish it. (They may, however, create a word to describe folks who insist on working in company-supplied space every day. Look for new companies to call these people "in-house employees" or "residents".)

Understanding the sea change of the iPad. The single-app interface works for people consuming information. The old-fashioned multi-windowed desktop interface works for people composing and creating information. This difference leads to a very different approach to the design of applications. This year, people will come to understand the value of the "swipe" interface and the strengths of the "keyboard" interface.

Voice recognition will be the hot new tech. With the success of "Siri" (and Android's voice recognizer "Majel"), expect interest in voice recognition technology and apps designed for voice.

Content delivery becomes important. Content distributors (Amazon.com, Google, and Apple) become more important, as they provide exclusive content within their walled gardens. The old model of a large market in which anyone can write and sell software will change to a market controlled by the delivery channels. The model becomes one similar to the movie industry (a few studios producing and releasing almost all movies) and the record industry (a few record labels producing and releasing almost all music) and the book industry (a few publishing houses... you get the idea).

Content creation becomes more professional. With content delivery controlled by the few major players, the business model becomes less "anyone can put on a show" and more "who do you know". Consumers and companies will have higher expectations of content and of the abilities of those who prepare it.

Amateur producers will still exist, but with less perceived value. Content that is deemed "professional" (that is, for sale on the market) will be developed by professional teams. Other content (such as day-to-day internal memos and letters) will be composed by amateur content creators; the typical office worker equipped with a word processor, a spreadsheet, and e-mail will be viewed as less important, since that work provides no revenue.

Microsoft must supply tools for professional creators. Microsoft's business has been to supply tools to amateur content creators. Their roots of BASIC, DOS, Windows, Office, and Visual Basic let anyone (with or without specific degrees or certifications) create content for the market. With the rise of the "professional content creator", expect Microsoft to supply tools labeled (and priced) for professionals.

Interest in new programming languages. Expect a transition from the object-oriented languages (C++, Java, C#) to a new breed of languages that introduce ideas from functional programming. Languages such as Scala, Lua, Python, and Ruby will gain in popularity. C# will have a long life -- but not the C# we know today. Microsoft has added functional programming capabilities to the .NET platform, and modified the C# language to use them. C# will continue to change as Microsoft adapts to the market.

The new year brings lots of changes and a bit of uncertainty, and that's how it should be.

Wednesday, January 4, 2012

Vox populi

Apple's "Siri" has shown that voice recognition is not only possible but also practical (and fun). Voice-driven apps are naturals for the smartphone and tablet world, where keyboards and mice are foreigners. A voice-driven app solves the problem of composition on a tablet: most smartphone and tablet apps are better for consuming data, while desktop apps are better for composing it.

The new world of voice-driven apps will bring a number of changes.

Voice-driven apps require a new design. One cannot simply replace a keyboard and mouse with voice-driven commands; the user experience is too different.

The (relatively) commonplace task of designing a GUI for a program becomes problematic with voice-directed apps. Do we use commands such as "place a button below the list box"? This may lead to a renaissance of SHRDLU and the old "Adventure" and "Zelda" programs, with their commands of "go north" and "take everything but the snake".

The testing of apps will change, and require new tools and techniques. How does one test a voice-driven app? Do you have people speak the commands, or do you have pre-recorded commands and play them back to the app? How can you build a suite of automated tests for voice-driven apps?

Our notions of the workplace will change. For the past four decades, we have built workplaces from cubicles. Cubicles are sufficient for people to type on keyboards, but are poor environments for speaking to computers. With voice-driven apps, and everyone speaking at their computer, the noise in the typical office increases. Voice-recognition software may be able to filter out background noise and voices; humans may have a harder time of it. Do we change our workplaces to individual offices?

What about the folks working at home, or working in co-working locations? We'll need quiet places to perform our work. At home, that may mean a separate room with a door. Co-working sites may change to suites of hard-walled offices.

The introvert/extrovert gap may be significant with voice-driven apps. Introverts emphasize written text, and they spend a lot of time composing their words. Extroverts speak readily, with less weight on their ideas. The extroverts share their ideas early and look to the group to help shape the final concepts; introverts think up front and have already decided by the time they hand you the document. I expect that extroverts will be more comfortable (or perhaps less uncomfortable) with voice-driven apps.

Do we combine voice-driven with gesture-driven? Microsoft's Kinect has shown itself to be capable and reliable. Perhaps we will have a combination of voice and gesture to control computers and to create content. I can easily see the layout of a GUI being developed by a programmer with voice-driven and gesture-driven commands. "Place a new button at the bottom of the screen", says the developer, and the IDE will create a new button. "Change the text to 'Cancel'" says the developer, "Now move the button to the right" and the programmer gestures with his hands, pushing an imaginary button to the right until it is in the desired position.
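
As a toy illustration of such commands (every name here is invented; a real voice-driven IDE would need far more robust language handling):

    import re

    widgets = []    # the GUI under construction

    def handle(command):
        command = command.lower().strip()
        m = re.match(r"place a new (\w+) at the (\w+) of the screen", command)
        if m:
            widgets.append({'type': m.group(1), 'position': m.group(2), 'text': ''})
            return "created a %s at the %s" % (m.group(1), m.group(2))
        m = re.match(r"change the text to '(.+)'", command)
        if m and widgets:
            widgets[-1]['text'] = m.group(1)   # applies to the most recent widget
            return "text set to '%s'" % m.group(1)
        return "command not understood"

    print(handle("Place a new button at the bottom of the screen"))
    print(handle("Change the text to 'Cancel'"))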

Voice recognition is here. I see lots of changes, for developers, testers, and users.