Tuesday, March 15, 2011

Microsoft should recognize "good enough"

The good folks in the tech support group came by today and upgraded my Microsoft Office from version 2007 to version 2010. The experience left me wondering why Microsoft bothered to introduce a new version of MS Office.

I understand the reasons for version 2007. It was a big change. It introduced the "ribbon", a new way of presenting the GUI menus to users. It also introduced the MS OOXML file formats, which made reading (and writing) files for MS Office much easier. Customers, third parties, and I suspect Microsoft, all benefitted.

With version 2010, I don't see the big changes. There *are* changes. Small changes. The non-intuitive "Office button" has been changed to a tab for the ribbon, which makes it closer to the old "File" menu that was the standard for GUI applications. The "skin" has changed its color scheme from silvery-blue to silvery-gray. There is a new menu item called "Team", which apparently lets one use Microsoft's Team Foundation Server to store and share documents. (But no connection to Microsoft SharePoint or Microsoft Live -- at least none that I can see.)

So I have to wonder: why release a new version? What's in it for Microsoft? More importantly, how do their customers benefit?

And I am not sure that we even need a new version of MS Office.

I suspect that we (in the app industry) have gotten pretty good at word processing, spreadsheets, and note-taking. Improvements beyond fancy skins will be hard to come by. We can do only so much with fonts, justification, and formulas. (In some ways, Intuit has the same problem with Quicken and QuickBooks. Each year they introduce new versions that are advertised as easier to use. Yet accounting has been around for centuries and we don't really need new, imaginative approaches.)

Looking at the situation from another angle: Word processing and spreadsheets, for the most part, are "gen 2" computer applications, and we are moving to "gen 3" apps.

"Gen 1" applications were the centralized, business-oriented accounting and inventory apps, where users submitted well-defined transactions to well-defined central databases and received well-defined reports. "Gen 2" applications are the early PC applications with users controlling their own unstructured data and using it for their own (possibly unstructured) analysis. ("Islands of information", in the 1990s term.) Word processing and spreadsheets fit into this category.

The "gen 3" applications of shared data ("web 2.0", social media) have quite different motivators and usages. LiveJournal, Facebook, and Twitter fall into this group. The earlier applications will not fit into this group, no matter how hard you (or even Google) push.

We're done with serious development of the "gen 2" applications. We've made them "good enough". It's time to move on to new ideas.

Microsoft should recognize that MS Office is good enough. Their history has been one of shipping software when it was deemed "good enough". Yet now that it is a large income stream, they seem determined to maintain it. They need to move on.

Because the rest of us have.

Sunday, March 13, 2011

The rate of change

In the good old days, technology changed. (Yeah, it still changes.) But it changed at a slow rate. When computers were large, hulking beasts that filled entire rooms, changes occurred over years and were small. You might get a new FORTRAN or COBOL compiler -- but not a new language -- every half-decade or so. (FORTRAN 66 was followed by FORTRAN 77, for example.)

Today, computers get faster and more powerful every six months. (Consider the Apple iPad, with the second version released less than a year after the first.) Microsoft has released compilers for C# in 2001, 2003, 2005, 2008, and 2010 -- about one every two years. And not only do we get new compilers, but we also get new languages. In 1995, Java was the new thing. In 2001, it was C#. Now we have Ruby, Python, Erlang, Haskell, Scala, Lua, and a bunch more. Not all of these will be the "next big thing", but one (or possibly more) will be.

Organizations can absorb change at a certain rate, and no faster. Each organization has its own rate. Some companies are faster than others. Larger companies take more time, since they have more people involved in decisions and more legacy applications. Small companies with fewer people and fewer "software assets" can adopt new technologies more quickly. Start-ups with a small number of employees and a few lines of code can move the fastest.

We're in a situation where technology changes faster than most companies can absorb the change. In most (big-ish) companies, managers don't really work with technology but make decisions about it. They make decisions based on their experience, which they got when they were managers-to-be and still working with technology. So managers in today's organizations think that tech works like a C++ compiler (or maybe a Java JVM), and senior managers think that tech works like a COBOL compiler and IMS database. (I imagine that MBA graduates who have no direct experience with tech believe that tech works as a series of commoditized black boxes that are replaceable at the proper cost.)

This is a big deal. If managers cannot value technology and make good judgements, then the decision-making process within companies becomes political, with different groups pushing for their positions and advocating certain directions. Solutions are selected for perceived benefits and the results can be vastly different from the desired outcome. Mistakes can be very expensive for a company, and possibly fatal to the project or the company.

So what is a company to do? One could hire managers who have deep technical knowledge and keep abreast of changes, but such managers are hard to find and hard to keep. One could create a separate team of technologists to set a technical direction for the company, but this can devolve into a special interest group within the company and create additional politics.

I think the best thing a company can do is set a general direction to keep the company technically capable, to set an expectation of all employees to be technically aware, and to reward those teams that demonstrate the ability to manage technology changes. Instead of dictating a specific solution, look for and encourage specific behaviors.

Saturday, March 12, 2011

Stupid wizard tricks

Sometimes well-intentioned designs are less than effective.

For example, the Lenovo "select a PC" wizard on their web site. I am in the market for a laptop PC, and visited the Lenovo web site to view their wares. (I have had good experiences with IBM Thinkpads, so Lenovo has an advantage in the selection process.)

After viewing several web pages and being overwhelmed by the choices of PCs, I chose to use the Lenovo web "wizard" (my term, not theirs) to select a laptop. I had been looking at a netbook PC, and I knew that I did *not* want a netbook PC. I want a full-blown laptop PC. But their product line is larger than I can hold in my head at one time, so I ran their web wizard to ask me questions and pick "the right PC for me".

Their web wizard is an awesome construct. It asks lots of questions, and then asks a second round of "balance factor X against factor Y" questions, where X and Y are attributes like screen size and battery life.

Finally you get to the recommendation. This is, by scientific analysis of your answers to the multitude of questions, the best PC for you. And for me, the web wizard selected... the exact netbook that I had been looking at before, the one that I knew I did not want!

So here I am with the exact wrong answer. The web wizard, with its tiny brain, has decided that I need a netbook. Yet my gut tells me that I do not want it. What to do?

There is no "make adjustments" option. My only option is to start the entire web wizard (with its multitude of questions) from the beginning. I choose not to go through that again -- now it is an ordeal, not an assistant.

The end result: I did *not* purchase (and to this date have not purchased) a laptop (or any other computer) from Lenovo. This is a :FAIL all around.

The moral for software developers: Use wizards -- that is, one-way selection processes of guided questions -- with care, and when you do, keep them short. The larger picture is to allow the user a degree of control, and the ability to make adjustments. Had the wizard displayed possible answers along the way, and let me narrow the set as I select attributes, I probably would be the owner (a happy owner) of a Lenovo laptop PC today.
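The alternative described above -- show the matching products as the user narrows attributes, rather than hiding everything behind a one-way question sequence -- can be sketched in a few lines. The catalog, product names, and attributes here are hypothetical, invented purely for illustration:

```python
# Hypothetical catalog; a real site would pull this from a database.
laptops = [
    {"name": "NetBook S10",  "screen": 10, "battery_hours": 8},
    {"name": "ThinkPad T410", "screen": 14, "battery_hours": 5},
    {"name": "ThinkPad W510", "screen": 15, "battery_hours": 3},
]

def narrow(products, **criteria):
    """Filter the visible set by one attribute at a time. The user sees
    the remaining matches after every choice, and can drop any single
    criterion without starting over."""
    return [p for p in products
            if all(p[attr] >= minimum for attr, minimum in criteria.items())]

# After each answer, display the remaining candidates instead of
# waiting until the end to reveal a single "best" pick.
step1 = narrow(laptops, screen=13)      # drops the netbook immediately
step2 = narrow(step1, battery_hours=4)  # narrows further
print([p["name"] for p in step2])       # → ['ThinkPad T410']
```

The key design point is that each call starts from the previous visible set, so the user always sees the consequences of a choice -- and can back one choice out without re-answering everything.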

Thursday, March 10, 2011

Did amazon.com invent cloud computing?

I was the recipient of an interesting idea at the local CloudCamp un-conference.

Amazon.com offers their EC2 virtual servers. At first, it was offered with no guarantees: yeah, you can run things on EC2 servers, but they could crash at any time. It was cheap, but not reliable. What was a developer to do?

The price for amazon.com's EC2 service was too low to ignore, so developers did what they always do: they built software around the problems. For cheap servers that could crash at any time, that meant building software that was tolerant of servers "disappearing" at any moment.

That's a big part of cloud computing. With "the cloud", a server can drop off-line at any time. Your application must continue. The initial level of reliability of EC2 was low, and forced developers to think about portable processes, processes that could hop from server to server and continue the work. This idea (along with others) made cloud computing possible.

Prior to the idea of portable processes (and crashing servers), applications were built on the model of "the server is reliable". After EC2, software was built with the idea of "the server is not guaranteed". It's quite a change.
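The "server is not guaranteed" mindset shows up even in small utility code. Here is a minimal sketch of failover across replicas -- the server names, the `fetch` stand-in, and the data structures are all hypothetical, not the EC2 API:

```python
class ServerGone(Exception):
    """Raised when a server has dropped off-line mid-request."""

def fetch(server, key):
    # Stand-in for a network call; a real one would use HTTP or sockets.
    if server["alive"]:
        return f"{key}@{server['name']}"
    raise ServerGone(server["name"])

def fetch_with_failover(servers, key):
    """Try each replica in turn; the work 'hops' to a surviving server."""
    for server in servers:
        try:
            return fetch(server, key)
        except ServerGone:
            continue  # that server vanished; move on to the next
    raise RuntimeError("all replicas are gone")

replicas = [
    {"name": "ec2-a", "alive": False},  # crashed "at any time", as above
    {"name": "ec2-b", "alive": True},
]
print(fetch_with_failover(replicas, "user-42"))  # → user-42@ec2-b
```

The application keeps running when a server disappears, which is exactly the discipline cheap, unreliable servers forced on developers.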

So we may have amazon.com to thank for the cloud.

Wednesday, March 9, 2011

Hybrid for functional languages

Are there hybrid functional languages? Languages that correspond to the hybrid object-oriented programming languages of C++, Object Pascal, and Visual Basic? A quick survey of the web (if there is such a thing as a quick survey) yields a page (from stackoverflow.com) that lists the following languages: Scala, Clojure, F#, Ruby, OCaml, Common LISP, Nemerle, Smalltalk, and O'Haskell. A few other web pages list these languages as hybrids.

So yes, there are hybrid functional languages. And this is important.

The jump from the "classic" languages of C++, C#, and Java to pure functional languages (Haskell, Erlang) is a large one. We were able to move from procedural languages (C, Pascal, and BASIC) to object-oriented languages (Java, C#) by using hybrid languages as an intermediate step. (A rather expensive intermediate step, as we look back at the mountains of hard-to-maintain C++ code we have created, but it was a step.)

We humans, for the most part and in most situations, tend to do better with small steps. Small changes in our programming languages are more acceptable to us than large changes. (Hence the migration from FORTRAN II to FORTRAN IV to FORTRAN 66 and then to Fortran 77, and not a migration from FORTRAN II to Pascal.)

The hybrid object-oriented languages became popular because they were useful.

I expect that hybrid functional languages will become popular for the same reason. We will want to move to functional languages, to reduce programming time and improve reliability. The jump to a pure functional language is large, often requiring not only a complete re-write of the application but a re-thinking of our approach to it. Hybrid functional languages will allow a more gentle transition to functional programming.
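Python is not on the stackoverflow.com list above, but it makes a convenient illustration of what a hybrid buys you: imperative and functional styles can coexist in one program, so a team can migrate one function at a time instead of rewriting everything:

```python
from functools import reduce

prices = [19.99, 5.00, 3.50]

# Imperative style: mutable state and an explicit loop.
def total_imperative(items):
    total = 0.0
    for item in items:
        total += item
    return total

# Functional style: no mutation, just an expression over the data.
def total_functional(items):
    return reduce(lambda acc, item: acc + item, items, 0.0)

# A hybrid language lets both versions coexist in the same codebase,
# which is the gentle transition path the pure languages don't offer.
assert total_imperative(prices) == total_functional(prices)
```

This is the same role the hybrid object-oriented languages played: C++ let you write plain C while you learned classes, and a hybrid functional language lets you write loops while you learn folds.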

Exactly *which* hybrid functional languages become popular is quite hard to predict. Microsoft will push F#, or possibly extend C# with functional qualities. The Java crowd will like Scala. The Ruby enthusiasts will like... Ruby. I'm not sure what the Apple camp will adopt.

Apple, with its devotion to C, C++, and Objective-C is in an interesting position. There is no clear path to functional programming on the Apple platform. This may present a problem for Apple. (Or maybe not, as they may choose to support a functional or hybrid functional language in the future.)

Sunday, March 6, 2011

One level down

A while back, I was build master for a large project. The project consisted of twenty or so Visual C++ projects ("solutions", in Microsoft's terms) and five C#/.NET projects.

As build master, I had to maintain the build scripts and the system that ran them. The build system itself was a complicated application: A Java program with dozens of classes, XML files for the scripts, and an interface that ran on a web page. Maintaining the build system was expensive, and we chose to re-write the system. The resulting system was a simpler collection of batch files. The batch files looked something like this:


CD project-directory-1
DEVENV project-1.sln /build Release
CD ..
CD project-directory-2
DEVENV project-2.sln /build Release
CD ..
... repeat for all twenty-five projects

The one feature we needed in the system was for it to stop on an error. That is, if a Visual C++ solution failed to compile, we wanted the build system to stop and report the failure, not continue on and attempt to build the rest of the projects.

Batch files in Windows are not good at stopping. In fact, they are very good at continuing on. You can force a batch file to stop. Here's our first attempt:


CD project-directory-1
DEVENV project-1.sln /build Release
IF %ERRORLEVEL% NEQ 0 EXIT /B 1
CD ..
CD project-directory-2
DEVENV project-2.sln /build Release
IF %ERRORLEVEL% NEQ 0 EXIT /B 1
CD ..
... repeat for all twenty-five projects

This solution is pretty ugly, since it mixes in the control of the execution with the tasks of the execution. (Not to mention the repetitiveness of the 'IF/EXIT' command.) The problem was pervasive: we wanted our scripts to stop after a failure in any part of the process, not just compiling projects. Thus we needed 'IF/EXIT' lines sprinkled in the early phases of the job when we were getting files from version control and in the later part of the job when we were bundling files into an install package.

After a bit of thought and several discussions, we implemented a different solution. We wrote our own command processor, one that would feed commands to CMD.EXE one at a time, and check the results of each command. When a command failed, our command processor would stop and report the error.

The result was a much simpler script. We took out the 'IF/EXIT' lines, and the script once again focussed on the task of building our projects.

With our new command processor in place, we added logic to capture the output of the called programs. We captured the output of the compilers, the source control utilities, and the install packager. This allowed for an even simpler and more focussed script, since we removed the '>log.txt' and '2>errlog.txt' clauses on each line.
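A command processor of that kind can be sketched briefly. This is not the project's actual code (which fed commands to CMD.EXE); the function names, file format, and details here are illustrative assumptions:

```python
import subprocess

def run_script(script_path, log_path="build-log.txt"):
    """Feed commands to the shell one at a time and stop at the first
    failure -- the behavior the batch files could not give us cleanly.
    Output capture lives here too, so the script itself needs no
    '>log.txt' or '2>errlog.txt' clauses."""
    with open(script_path) as script, open(log_path, "w") as log:
        for line in script:
            command = line.strip()
            if not command or command.startswith("REM"):
                continue  # skip blank lines and batch-style comments
            result = subprocess.run(command, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT, text=True)
            log.write(result.stdout)
            if result.returncode != 0:
                print(f"FAILED (exit {result.returncode}): {command}")
                return result.returncode  # stop; skip remaining commands
    return 0

# Usage: run_script("build-commands.txt") returns 0 on success,
# or the failing command's exit code.
```

The script file stays a flat list of commands -- all the stop-on-error and logging logic sits one level down, in the processor.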

Looking back, I realize that our solution was to move the logic for error detection down one level. It took the problem out of the script space and into the space of the command processing.

Sometimes, pushing a problem to a different level is the right thing to do.

Saturday, March 5, 2011

Governing the speed of business

Governance of IT projects ensures that they bring value to the organization, aligning priorities and coordinating resources. It does this by enforcing a set of standard processes for each project. The standards often specify the technology to be used, the project evaluation (including cost/benefit analysis), and the reporting of project progress.

A side effect of IT governance is the slow pace at which projects can move. The governance process includes meetings and conference calls between various groups, and buy-in from the involved teams. All stakeholders are included, from users to testers and support teams. Meetings between all of these groups require time, and frequently the governance organization schedules meetings on a regular basis (say, once a month) to review project progress and discuss new projects. Projects must live on "governance time", and fit within the schedule of the governance organization. But the benefit is that the project meets the needs of the organization.

So the IT governance process slows projects to ensure quality and coordination.

Interestingly, early steam engines (well, some steam engines) had governors. They were used to limit the speed of the engine. They slowed the engine to ensure that the engine provided the proper power. (Or, one could say "to meet the needs of the organization".)

The problem with (traditional) IT governance is that it slows the project. Any project. Every project.

Traditional IT governance is a bureaucratic process. It obtains quality, coordination, and consistency at the cost of time.

It also constrains solutions to projects of a certain size, or within a certain range of sizes.

IT projects come in all sizes, from the two-person, four-hour "let's try this" to the hundred-person, multi-year global development and implementation projects for thousands of users. But IT governance often uses a standard project template for projects, specifying the general steps for a project, the reporting requirements, and the technologies involved. Any project must go through the governance process, and like Play-Doh being pushed through a mold, the project is shaped into a form selected by the governance group. Small projects are increased to include users and deployment teams. Large projects are divided into smaller ones, to allow for deliverables within the standard timeframe.

In other words, the governance process "normalizes" all projects. Small projects are made standard-size, and large projects are made standard-size too.

This normalization makes sense from the view of the governance committee (who wants to see a consistent set of projects and reports) but not from the view of the business. Let's look at a hypothetical example:

Senior Manager (to project leader): "Customers keep calling and asking for an iPhone app. Can we get something built before the end of the quarter?"

Project Leader: "Possibly. We have a few folks who have been experimenting with iPhone apps. Let me look at schedules and see what I can arrange."

:: one week later ::

PL (to SM): "I can move people from the "Huron" and "Erie" projects, and they could build the iPhone app and deliver it in two weeks. But that means delaying "Huron" and "Erie", and the PMO refused to adjust the schedule for those projects. And even if they did, the standards committee needs two months to review the new tools for iPhone apps before allowing us to use them. And then the security group rejected our proposal, since they have not tested the iPhone in our environment. They want six months (and funding) to build a lab and hire four analysts and a manager for iPhone apps. Oh, and the PMO called and wants to discuss the iPhone app at their monthly project review meetings, and they can fit us in to the schedule in three months."

An idea brought "under control" by the governance process. Standardized and approved.

But perhaps not what the organization needs.