Monday, June 27, 2011

Run your business like an open source project

Businesses must make decisions about the different activities that they perform. They often classify activities as "drivers" and "supporting tasks" (or as "profit centers" and "cost centers"). Profit centers get funding; they are essential to the business and usually have a measurable return on investment (ROI). Cost centers get funding, grudgingly. They are considered a necessary evil and by definition they cannot provide ROI.

If you want to improve your ability to classify drivers and supporting tasks, look at open source projects. Not the software, but the projects that build the software. These projects often run with little or no funding, yet they succeed in delivering usable, quality software. They must be doing something right.

Any project that delivers usable, quality software is doing several things right. From setting objectives to managing resources, from marketing to coordinating activities, successful projects are... well, successful.

Open source projects are very good at identifying the drivers in their project and focusing on them. They are also good at separating out the supporting tasks, and either outsourcing them or automating them. Through outsourcing and automation, a project can drive the cost of supporting tasks to zero. Here, "outsourcing" does not mean "hire someone else" but "use external resources", such as freely available web services like Google Docs.

Open source projects use metrics to help them make decisions. They automate testing, which allows them to run tests often (and inexpensively). The results of the tests inform them of the quality of their product.
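
As a minimal sketch of that kind of automation (the Add function and the tiny harness here are hypothetical; real projects use a test framework such as NUnit or JUnit), even a bare program can serve as a test that runs on every build at essentially no cost:

    // A hypothetical, minimal automated test in C#. The principle is
    // the same as with a full framework: the test is cheap to run, so
    // it runs often, and its result is a metric of product quality.
    using System;

    static class AddTests
    {
        static int Add(int a, int b) { return a + b; }

        static void Main()
        {
            bool pass = (Add(2, 3) == 5) && (Add(-1, 1) == 0);
            Console.WriteLine(pass ? "PASS" : "FAIL");
        }
    }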

Open source projects don't think that they need special versions of everything. If a plain version control system is usable and lets them achieve their goals, they use it. If a plain compiler gets the job done, they use it. If allowing people to work around the world on schedules of their own choosing gets the job done, the project allows contributors to work around the world on their own schedules.

Very few open source projects -- successful open source projects -- can afford egos and custom stationery.

So are they all that different from a business?

Friday, June 24, 2011

The other disappearing papers

The internet age has brought about the demise of several newspapers, and threatens the entire industry. Many folks have blogged about this and even newspaper reporters have written stories about it.

But there is another set of papers that is threatened by the internet, a set of papers about which no one (to my knowledge) has worried, blogged, or reported.

That set of papers is the rental apartment advertising books.

You know these books. They live in flimsy (but apparently indestructibly flimsy) boxes on street corners in downtown neighborhoods. They list page after page of beautiful apartment buildings, each nicer than the last.

At some point, the internet is going to eat these advertising books. As with newspapers, they are expensive to produce and distribute. Their demise is inevitable; it will be a simple business decision to ax them when they produce less revenue than cost.

Their disappearance will not cause hue and cry. There will be no period of mourning, no ceremony for their passing. They are, unlike newspapers, artifacts of pure advertising. They have no "content", in the internet/web sense.

These little booklets of advertisements are enablers of commerce, nothing more.

And when they are gone, it means that we have built other mechanisms to enable commerce, and people have accepted those other mechanisms. Those mechanisms might be web pages, or e-mail lists, or spiffy iPhone apps, or something else. But they will exist, for commerce will exist.

The demise of the paper advertisement will signal the acceptance of the on-line advertisement.

Monday, June 20, 2011

The parts are worth more than the whole

Some number of years ago, I would attend computer fairs. There was a particularly nice one near Trenton, NJ. The fair had a large flea market with people selling old, used hardware (and some software). Just about anything would be available, from PC parts to old eight-inch floppy disk drives to telephone equipment to minicomputer line printers... lots of things.

One of the most frustrating things was looking at old hardware that had been built for a specific system configuration. The device (whatever it was) would have various connectors; some for data and some for power. These connectors were not standard but custom, used only by that device (and the thing to which it attached).

Such custom connectors may have made sense -- perhaps there was no standard, or the standard did not meet the needs of the system. Or perhaps the manufacturer used a non-standard connector to lock customers into its hardware. Whatever the reason, the non-standard connectors affected the components when the system was decommissioned: they were unusable.

In contrast, the IBM PC used standard parts (the IBM PC set a new standard, but one the industry adopted), and those parts were still usable when a system was decommissioned. When one upgraded from an IBM PC to an IBM PC-AT, one could swap in the old video and accessory cards. One could keep the display monitor, and the printer. The parts had value after the system lost its value.

Software systems have similar behavior. Software systems are built with components and layers (or so the pretty architecture diagrams lead us to believe) but these "components" often have the equivalent of the custom connectors of the old one-system-only hardware. Such custom components can be used within the system, but when the system is decommissioned, the components cannot be used elsewhere. Thus, when the system is decommissioned, the value of the components drops to zero. A component that can be used by one and only one system has no value outside of that system.

A good system architect will see not only the system but the components, and ensure that the components (as well as the system) have value. That means that the components must use standard APIs and be able to stand alone, so other systems can use them. A re-usable component must accept data in a reasonably neutral format, not one that is specific to a system or one that uses obscure object types. A re-usable component must run with a minimal number of dependencies, not the entire system. A re-usable component must be tested independently of the one system, to ensure that it can be run independently of the one system.
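
To make the idea concrete, here is a minimal C# sketch; the names (SalesTaxCalculator and so on) are hypothetical, not from any particular system. The component accepts plain, neutral types rather than a system-specific "Order" object, and it depends on nothing but the standard library:

    // A hypothetical re-usable component. It accepts neutral types
    // (decimal), not this system's Order or Invoice classes, and it
    // depends only on the standard library.
    public class SalesTaxCalculator
    {
        private readonly decimal rate;

        public SalesTaxCalculator(decimal rate)
        {
            this.rate = rate;
        }

        // Plain data in, plain data out; any system can call this.
        public decimal TaxOn(decimal amount)
        {
            return amount * rate;
        }
    }

Because the component stands alone, it can be tested alone: new SalesTaxCalculator(0.06m).TaxOn(100m) should return 6, with no need to start up the rest of the system.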

Building re-usable components helps retain the value of software, and also the value of people. Re-usable components can be used in other systems. (By definition.) Using components in other systems leverages the knowledge of the development team across multiple systems. In the long term, this can reduce your staffing costs.

Sunday, June 19, 2011

From the bottom up

Fixing software is hard. Experienced managers of development projects have memories (often painful memories) of projects to redesign a system. Redesign efforts are hard to justify, difficult to plan, and risk breaking code that works. All too often, a redesign project ends late, over budget, and with code that is just as hard to maintain. The result is the trifecta of project failures.

A big part of the failure is the approach to redesign. Projects to fix code often use one of the following approaches:

- Re-write the entire program (or system)
- Improve code along functional components (the "room by room" approach)
- Improve code as you modify it ("fix it as we make other changes")

Re-writing the system is expensive. Often too expensive to justify. ("You're going to spend how many dollars and how many months? And the result will be the same as what we have now?" ask the senior managers. And there is no good answer.) Plus, you have the risk of missing important functionality.

Changing the system in parts, akin to replacing tires on a car, is appealing but never seems to work. Components have too many connections to other components, and to truly fix a component one must fix the other components... and you're back to re-writing the entire system. (Except now you don't have the money or the budget for it.)

Fixing code as you make other changes (as you are implementing new functionality) doesn't work. It also has the "dependency effect" of the component approach, where you cannot fix code in one module without fixing code in a bunch of other modules.

Managers are poor at planning redesign projects, because they manage software like they manage the organization: from the top down. While top-down works for people, it does not work for code.

People organizations are, compared to software, fairly simple. Even a large company such as Wal-mart with its one million employees is small, compared to software projects of many million lines of code. Lines of code are not the same as people, but the measures are reasonable for comparison. Any line of code may "talk" with any other line of code; that is not true for most organizations.

People organizations are, compared to software, fairly smart. People are sentient beings and can act on their own. A corporate manager can issue directives to his reports and expect them to carry out orders. Software, on the other hand, cannot re-organize itself. Except for a few specialized AI systems, software must be guided.

I have worked on a number of redesign projects. In each case, the only method that worked -- really worked -- was the "bottom up" approach.

In the bottom up approach, you start with the smallest components (functions, classes, whatever makes up your system), redesign them to make them "clean", and then move upwards. And to do this, you must know about your code.

You start with a survey of the entire code. You need this survey to build a tree of the dependencies between functions and classes. Use the tree to identify the code at the bottom of the tree -- code that stands alone and does not depend on other code (except for standard libraries). That is the code to redesign first. Once you fix the bottom code, move up to the next layer of code -- code that depends only on the newly-cleaned code. Repeat this process until you reach the top.
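
As a minimal sketch of that ordering (in C#, and assuming you have already extracted a map from each module to the modules it depends on -- the survey itself would come from a parser or a build tool, and the names here are hypothetical):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class RedesignOrder
    {
        // Returns modules in the order to redesign them: leaves first.
        // Every module must appear as a key in deps, even if its
        // dependency list is empty.
        public static List<string> BottomUp(Dictionary<string, List<string>> deps)
        {
            var ordered = new List<string>();
            var done = new HashSet<string>();

            while (done.Count < deps.Count)
            {
                // A module is ready when everything it depends on
                // has already been cleaned.
                var ready = deps.Keys
                    .Where(m => !done.Contains(m) && deps[m].All(done.Contains))
                    .ToList();

                if (ready.Count == 0)
                    throw new InvalidOperationException(
                        "Cyclic dependencies; break the cycle first.");

                ordered.AddRange(ready);
                foreach (var m in ready) done.Add(m);
            }
            return ordered;
        }
    }

Given a module C that depends on A and B, with A and B standing alone, this yields A and B first, then C -- exactly the order in which to clean them.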

Why the "bottom up" method for software?

Software is all about dependencies. Fixing software is more about changing dependencies than it is fixing code. Redesigning software changes the dependencies; improving code in the middle of a system will affect code above and below. Changes can affect more areas of the code, like ripples spreading across the surface of a lake. Starting at the bottom lets us change the code and worry about the ripples in only one direction.

Why not start at the top (with ripples in only one direction)? Changes at the top run the risk of leading us in the wrong direction. We think that we know how the software should be organized. But this impression is wrong. If we have learned anything from the past sixty years of software development, it is that we do not know, up front, how to organize software. If we start at the top, we will most likely pick the wrong design.

On the other hand, starting at the bottom lets us redesign the smallest components of the system, and leverage our knowledge of their current uses within the system. That extra knowledge lets us build better components. Once the smallest components are "clean", we can move to the next level of components and redesign them using knowledge of their use and clean subcomponents.

If anything, starting from the bottom reduces risk. By working on small components, a team can improve the code and make definite gains on the redesign of the system. The smaller objectives of bottom-up redesign are easier to accomplish and easier to measure. Once made, the changes are a permanent part of the system, easing future development efforts.

Software is not people; it does not act like people and it cannot be managed like people. Software is its own thing, and must be managed properly. Redesign projects from the top down are large, expensive, and risky. Projects from the bottom up have a better chance of success.

Wednesday, June 15, 2011

Modern times for websites

I visited a website recently; it recommended using Internet Explorer version 6.0 or higher. I found the idea nostalgic and amusing.

The website was clearly designed several years ago, when there were good reasons to require specific browsers.

But today, such a statement makes one look foolish. Firefox, Safari, Chrome, and Opera all perform well. Internet Explorer's share of the market has dropped, and one must design a website for the bunch, not the one.

There is no good reason to design a website for a single browser, nor demand IE6 as a minimum. Even Microsoft is requesting that people move on from IE6!

Websites are dynamic, and even websites with static content must change. They must grow with the standards or die.

Saturday, June 11, 2011

Better programming with immutable objects

Recent buzz in the development community has centered on functional programming languages. These languages have a different approach to the organization of code. They are not object-oriented languages, nor are they procedural. Functional programming languages include Haskell, Erlang, and F#.

Working with old-fashioned object-oriented languages (C++ and C#), I suffer from language envy. I can see the shiny new languages but use the old, dented languages for my daily work. (Of course, I recognize that at one time C# and even C++ were the new, shiny languages. I also recognize that at some point in the future, Haskell and F# will be considered the old, dented languages. But for now, the grass is greener over the fence and in the yard of the new languages.)

I may be suffering from envy, but I don't sit and curse the darkness. Instead, I try to use the techniques of the new languages in my current toolset. I did this when programming in C and gazing longingly at C++; I wrote code in a style that I called "thing-oriented" programming. My thing-oriented programming let me identify key objects in the code (data structures, not objects in the OO sense) and collect and organize functions by the objects they manipulated. It was a step towards object-oriented programming. (Eventually I did learn C++ and work on C++ projects.)

With the introduction of C# and .NET, I used the same idea. For this transition, I wrote class libraries in C++ that mimicked the class libraries in .NET. (Not the entire .NET collection, just the basic classes to get the work done.) The work was a bit difficult, as we were not allowed to use STL. But like "thing-oriented programming", it was a step towards .NET and made the transition easier.

So here I am again, and this time the target is functional programming languages. I want to transition to functional programming from object-oriented programming, but don't have the tools. (The day job uses "classic" OO languages, and I struggle to find time outside of the office.)

My transition technique is to use "immutable object programming": I write object-oriented code in my current languages, but every object receives all of its data at construction and is never modified afterwards.

This has some interesting effects:

1) To construct an object, all data must be present. I cannot instantiate an object and later give it more information. Once an object is instantiated, it is complete and final. (I can construct other objects from built objects, so I can construct a "sales price" object and later use it to construct a "discounted sales price" object. But I cannot wedge a discount into the "sales price".) A minimal sketch of this appears after the list.

2) I find myself thinking about the sequence of operations and the dependencies between objects.

3) My programs are designed in three phases: collect data (and construct objects), construct other objects from the collected data, and produce output. This is a pattern that goes back to the beginnings of the programming age. It may be a natural fit for the "immutable object programming" style, or it may be driven by the kinds of calculations these systems perform.

4) My thinking is more rigorous. Not necessarily up front, but the end result (the final code) is more disciplined. I write some code, and sometimes take non-immutable shortcuts (or even OO shortcuts). These shortcuts are obviously wrong, but they yield the correct output. Once I have the correct output (and tests to check that output), I revise the code and remove the shortcuts.

5) My code is easy to debug. When the output is incorrect, I know that an object has incorrect data. (Or possibly the output routine is incorrect.) If the object has incorrect data, I know that it was constructed improperly. (There is no other method that modifies the object.) Finding the problem is a simple matter of identifying the incorrect object and then identifying the construction of that object.

6) My code is easy to change. I have made a number of changes to the code, and all have been simple and obvious. (And therefore, they worked.) These were not trivial changes, either; some were large and affected multiple modules.
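
Here is the promised sketch, in C#; the SalesPrice class and its members are hypothetical, made up for illustration:

    // A hypothetical immutable class. The field is readonly: set once
    // in the constructor, never changed afterwards.
    public class SalesPrice
    {
        private readonly decimal amount;

        public SalesPrice(decimal amount)
        {
            this.amount = amount;
        }

        public decimal Amount
        {
            get { return this.amount; }
        }

        // No setters. Applying a discount yields a new object;
        // the original SalesPrice is never modified.
        public SalesPrice Discounted(decimal fraction)
        {
            return new SalesPrice(this.amount * (1 - fraction));
        }
    }

Given var price = new SalesPrice(100m), calling price.Discounted(0.20m) returns a new object holding 80, while price itself still holds 100. There is no way to wedge the discount into the original.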

I will admit that I do not understand functional programming languages. I have created some small programs in Haskell, but not enough that I "grok" the concept. My "immutable object programming" trick may be a step towards functional programming or it may be a step away from functional programming. It feels right, and I like the results.

Wednesday, June 8, 2011

The tweet is more powerful than the sword

People are surprised that companies respond quickly to complaints and questions posted to them on Twitter.

I find the rapid response predictable, once you think about it.

E-mails are, for the most part, one-to-one messages. A complaint e-mail is especially so. I'm not going to CC: friends on a complaint e-mail to my bank, or my insurance company, because that simply loads their inboxes with clutter. An e-mail is a private conversation between you and the company. The company can delay its response, or ignore the message completely, at practically no cost.

Tweets, on the other hand, are one-to-many messages. They are not a letter to a friend (or business) but are published to many people. They are viewed by one's followers, and some people have hundreds of thousands of followers.

If Twitter were that simple, then companies could evaluate the importance of the Tweeter and answer only those tweets from users with large followings.

But it's more complex. All tweets are part of the TwitterStream, and anyone can look at any message (just about). I recently tweeted about my bank and used a hashtag to identify them. The bank was not following me (and is still not following me, from what I can tell) but they found the message anyway, and responded. Tweets are public postings, messages published to potentially the entire world. Tweets with hashtags are advertisements, not approved by the marketing department.

With messages going out to the globe, a company has no choice but to respond, and respond quickly. If they don't, the next tweet might just be "hey, #companyX ignored my tweet. is anyone home?", which translates to more negative advertising.

I think every company, organization, and government office should be watching the TwitterStream for messages about themselves. You want to know what people are saying about you.

Thursday, June 2, 2011

Windows 8 shows Microsoft's big problem

We have seen the future, and for Windows, the future looks a lot like Windows Phone 7. Microsoft has let folks see glimpses of the not-yet-released version of Windows. Some have raved and others have yawned.

A long time ago, Windows was created with three goals in mind: provide a graphical interface for PCs, counter the threat from the new Macintosh computer, and hide the command line interface from users.

Windows 8 continues those objectives. It provides a nice-looking interface, it counters the recent advances made by Apple, and once again it hides complexity from the user.

But Windows 8 is not Microsoft using the same strategy as Apple. For the iPods and iPhones, Apple created a new operating system, a new user interface, and a new way of computing. At least at the application level, it made a clean break from the Mac OS X line. Microsoft, in contrast, is betting on backwards compatibility, and Windows 8, with its shiny interface, can run older Windows programs.

Backwards compatibility is a blessing and a curse for Microsoft. It allows their customers to move gradually to the new operating system, keeping their existing applications. Yet it means that Microsoft must support a lot of complicated (and sometimes just plain wrong) APIs. (For example, the Windows Registry will be present in Windows 8, despite almost universal hatred of the thing. But too many programs depend on it, and Microsoft is stuck with it.)

Microsoft has trained customers to expect backwards compatibility.

Apple has trained its customers to expect new platforms. The history of Apple products is littered with abandoned platforms: Apple II DOS, the original Mac OS, the later Mac OS versions 6 through 9, the Motorola processors, and the PowerPC chips. When Apple introduced the new iOS operating system for the iPod and iPhone, no one blinked.

Microsoft could never get away with anything like that. The release of Windows Vista with its limited support for devices brought howls.

Microsoft's fate, like it or not, is tied to Windows. Which means that Microsoft must drag Windows into any market that it chooses to enter. The Xbox runs Windows, the Zune runs (OK, ran) Windows, the Media Center runs Windows... everything Microsoft does runs Windows.

Or to put it another way, Microsoft does everything with Windows. And the consequences of that are... Windows must be extended, stretched, folded, spindled, and mutilated into any new product.

Microsoft's growth is limited by its ability to expand Windows. And that is Microsoft's big problem.