Monday, July 12, 2010

Going rogue

The May/June issue of CODE magazine (the name used to stand for "COmponent DEveloper") breaks a precedent -- it covers non-Microsoft technology. This is a big deal for the magazine.

CODE magazine started in 2000. It covered Microsoft and .NET technologies (also COM, DCOM, and the like), and for its first ten years it stayed neatly within the Microsoft world. A review of its articles shows only one side trip outside the Microsoft sphere: Crystal Reports. (Crystal Reports is bound tightly to Windows and .NET, so I consider it well within the Microsoft environment.)

Things changed in 2009, with articles on Twitter and open source. I did not catch the articles; they could have been Microsoft-centered.

This issue (the May/June issue) has two articles on Ruby: one on Ruby on Rails and the other about Ruby, rake, and other Ruby tech. These articles are not specific to Microsoft and Windows and .NET; they apply to anyone using Ruby and Rails.

This is a big shift. CODE magazine has gone from a Microsoft cheerleader to a pragmatist, writing articles that it thinks will attract readers.

I've noticed one other change. I'm not sure when it happened, but Microsoft has stopped advertising in CODE magazine. I know of no relationship between EPS (the publisher behind CODE) and Microsoft, other than the arm's-length advertiser/publisher relationship. Yet perhaps there was pressure (or perceived pressure) to keep content in the Microsoft world, while Microsoft was paying some of the bills. Of course, when Microsoft stopped advertising, the pressure (real or perceived) would dissipate.

Is the appearance of tech-neutral articles driven by the absence of Microsoft advertising? I don't know. But I am glad to see them. The folks at CODE do a decent job of writing. There are too few resources for good technical information, and we should encourage the folks who do good work. CODE magazine is on that list.


Thursday, July 8, 2010

The Tyranny of Dates

The biggest problem I have with project planning software -- especially Microsoft Project -- is the date fields.

Microsoft Project (and other products) allows you to define a task, giving it a name and other attributes. You can specify predecessor and dependent tasks, you can specify a duration, and you can assign people.

Project "helpfully" fills in the start date and end date for the task. For any task, there will be defined start and end dates. In this there is no question, no choice, no alternative.

My problem is that I don't work that way. Here's how I put together a project plan:

First I agree with the team on the objective. Then we list the major tasks -- the ones that we can list. We recognize that we will miss a few, depending on the project.

After agreeing on the tasks, we put them in sequence. We identify the dependencies.

Then we look at the effort for each one, estimating the duration of each task.

Finally we put dates in, which gives us our first draft of the project plan.

Sometimes this process is fast, and a single meeting can suffice. Sometimes this process takes weeks.

But until we get to the last phase, I don't want to see dates. Yet Microsoft Project insists on them. If we don't specify a date, it puts in some arbitrary value like January 1, 1980.

Microsoft Project has the notion of dates, but not null dates. It's as if the project manager configured the tool with an 'ADD COLUMN START_DATE NOT NULL, END_DATE NOT NULL' script.

Worse, Project insists on precise dates. It accepts only a specific date (and time, which can be deduced by scheduling some tasks for less than a full 8-hour day). This is not how we handle our planning.

We need a date field that accepts the value "no date specified". We need a duration field that accepts the value "10 days, with a 20 percent chance of over or under by 2 days".

Using an arbitrary precise date or duration is not good enough. In fact, I consider it dangerous. Why not just plug in the value of 10 days? Because Project renders it as 10 days, and I have no way to indicate that it is a fuzzy value. I also want to look at the best case and worst case for all of the tasks, something that requires regeneration of the project schedule.
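The two fields described above are easy to sketch, even though Project offers nothing like them. Here is a minimal Python sketch (the names FuzzyDuration, Task, and schedule_range are my own illustration, not anything Project provides, and I simplify the "20 percent chance" wording to a plain plus-or-minus band) of a date field that admits "no date specified", a duration with an uncertainty band, and the best-case/worst-case roll-up for a simple sequential chain of tasks:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class FuzzyDuration:
    """A duration estimate with an uncertainty band, e.g. '10 days, +/- 2'."""
    likely_days: int
    plus_minus_days: int = 0

    @property
    def best_days(self) -> int:
        return self.likely_days - self.plus_minus_days

    @property
    def worst_days(self) -> int:
        return self.likely_days + self.plus_minus_days

@dataclass
class Task:
    name: str
    duration: Optional[FuzzyDuration] = None  # None means "not yet estimated"
    start: Optional[date] = None              # None means "no date specified"

def schedule_range(tasks):
    """Best-case and worst-case total days for a sequential chain of tasks,
    skipping tasks that have not yet been estimated."""
    estimated = [t.duration for t in tasks if t.duration is not None]
    best = sum(d.best_days for d in estimated)
    worst = sum(d.worst_days for d in estimated)
    return best, worst

tasks = [
    Task("Agree on objective"),            # no estimate, no date -- and no complaint
    Task("Design", FuzzyDuration(10, 2)),  # 10 days, +/- 2
    Task("Build", FuzzyDuration(20, 5)),   # 20 days, +/- 5
]
print(schedule_range(tasks))  # (23, 37)
```

Regenerating the best-case and worst-case schedule is then a single function call, rather than a manual reworking of the plan.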

But the most dangerous thing about this behavior is that we have to bend to the software. We have to change our process to conform to software designed by someone else, probably someone who does not run projects.

I want software and computers to work for us, not the other way around.


Monday, July 5, 2010

Types of requirements

I am amused by the teams that divide their requirements into "functional" and "non-functional" groups.

Invariably it is in shops that used to have only functional requirements, and now have a second category.

The first group of requirements contains those specifications that are derived from business operations. They include things such as inventory levels, pricing functions, and billing terms. The natural name for these specifications is "functional requirements" because, in the minds of the business analysts, there is a direct mapping from the software back to the business world.

The second group of requirements refers to aspects of the system that are not tied to business operations. These can include things such as platforms, upgrades and updates, response times, system availability, and GUI design.

Frequently a development team starts with a single list of requirements, and then starts creating subgroups within the list. The non-functional requirements are usually grouped together, since they don't fit in with any business initiatives. Sometimes the non-functional requirements have no official sponsors, or no sponsors from within the business.

The action of dividing requirements is a "lumping and splitting" problem. We humans associate like things and group them. We also split out unlike things. It's part of our psychology, and we should leverage it with software development and the management of requirements.

But the simple division of "functional" and "non-functional" is wrong. Even the term "non-functional" is a clue that one has given little thought to the problem.

The term "non-functional" is an artificial one, akin to "manual transmission". (Prior to the introduction of the automatic transmission, a manual transmission was simply a transmission. No adjective was needed.) The term "non-functional" is needed to differentiate some requirements from the group of functional requirements.

The term "non-functional" is a negative term. It tells us what the requirement is not, but it does not state what the requirement *is*.

If you are using the term "non-functional requirements", then you are giving insufficient attention to those requirements. In addition, you implicitly make them second-class citizens in the requirements world.

Here is a list of terms that can replace "non-functional requirements":

- Performance requirements
- Maintainability requirements
- Availability requirements
- Upgrade requirements
- User interface requirements
- Business continuity requirements

The list is incomplete and perhaps not exactly what you need. Take the ones that make sense, and add others as you need.

Many folks will avoid using the specific terms and keep to "non-functional". The term is in place, everyone on the team agrees with it and understands it, and there is no clear benefit to these new terms. They would say that it makes no business sense to change.

And they will be right... until the list of non-functional requirements grows... and someone wants to split it.


Sunday, July 4, 2010

The question of value

Does source code have value? (And if it does, can we assign a monetary value to it?)

I'm talking about source code, not executable code.

Source code is not sold in stores or traded in a market. Without the market to guide us, we must devise other means to measure the value.

A simple measure is the total cost of construction. Sum the costs of development: programmer salaries, testing, workstations, and compilers. While a simple measure, it is wrong. Our history of projects shows us that many projects, some funded with millions of dollars, result in useless systems. The value of the result is not the value of the inputs.

The open source folks believe that source code has no value. Or rather, that while there is value in source code, there is more value in sharing the source code than in keeping it "under wraps".

I think that the value of source code can be related to the value of the executable. Yet that may not help. It's hard to assign a value to the executable. Is Microsoft Word really worth the sticker price? (And what about Microsoft Internet Explorer? Since it is included in Windows, the price is effectively zero. And the price for Firefox is clearly zero.)

An economist might say that the value of the software is its capabilities. In this sense, the value of Microsoft Word is not intrinsic to the code but in the number of .DOC files in your collection.

This leads to an interesting effect: When Word was the only word processor that could read .DOC files, it had a monopoly on the market, and a certain value. But when OpenOffice was released, with its abilities to read and write .DOC files, Word lost its monopoly and some of its value. (One no longer needed Word.) When Google Docs was released (and it can read and write .DOC files), the value of Word dropped again.

That analysis does not help us get closer to a monetary value, but it does show that the value changes. And if the value of the executable changes in response to external events, then I assert that the value of source code changes in response to external events.

Perhaps this idea is not so strange. Other things in life have values that respond to external events. Houses, cars, even tomatoes. Why not source code?

Such an idea may be received uncomfortably by some managers and accountants. The folks who assign value based on the input effort (and US tax law and accounting practices encourage this thinking) are poorly equipped to evaluate changes to the value based on external events.

Yet I think we must think of the value of source code, for the very simple reason that we must know when to abandon it.


Tuesday, June 29, 2010

Rugged individualism

What will hardware and software look like in the future? I don't know for certain, but I can make a few guesses. Let's look at the past:

The common wisdom for the evolution of hardware runs this way: from the early tabulators we built large computers (which were later named "mainframes"), then we learned to make minicomputers, then in 1976 we introduced the pre-PC microcomputers, followed by the IBM PC, workstations, servers, and finally the iPhone group. It's not completely accurate, but it's pretty good.

Common wisdom for the evolution of software runs along similar lines: from the early days of direct programming in machine language, we moved to monitors (operating systems) and the early compilers (FORTRAN and COBOL), batch processing, time-share systems and interactive processing (UNIX), virtual machines, a reversion to direct programming with the early microcomputers followed by "real" operating systems for PCs (Windows NT), client-server systems, web applications, and a re-invention of virtual computers, leading to today's modern world of smartphone apps and cloud computing.

Aside from the obvious trend of smaller yet more powerful hardware, we can see that advances in computing -- especially in software -- have favored the user. Starting with microcomputers in 1976, advances in technology have provided advantage to individual users. Such trends are clear in the iPhone and kin.

Advances in hardware have favored the individual, and so has software. Not only are iPhones made for individuals, but the software is tailored to individuals. iPhones do not plug in to enterprise-level control systems (such as Microsoft Exchange servers). They remain controlled by the user, who selects the software (apps) to install and run.

The BlackBerry devices do plug in to the corporate systems, and while lots of people have them, the vast majority of people don't pay for them -- their employer does. BlackBerry phones are not the choice of individuals, they are the choice of IT managers.

I expect the trend toward the individual to continue. I expect that new cloud-based apps will be designed for individuals (Facebook, anyone?) and not for corporations. The corporate applications (general ledger, billing, personnel, and marketing) all fit in the mainframe world (with a possible web front-end). It is the individuals who will drive new tech.

Areas to look for improvements: pervasive computing (in which small computers are everywhere), automatic authentication (eliminating the need for passwords), and collaboration. The last is an interesting one, since it applies not to the individual but to a group. Yet it won't be the formal, structured group of the corporation; it will be a group of like-minded individuals with specific common goals.


Saturday, June 26, 2010

Books have the advantage in the new world order

Books, magazines, newspapers, musicians, and movie-makers must all learn to live in the new world order of the internet. All of the key players must abandon the old ways and learn the new. Book publishers may be in the best position of the crowd, due to the public domain.

Book publishers, magazine publishers, newspaper publishers, recording labels, and film studios all face the same problems of the internet: it is easy to copy a digital good. The news web sites and blogs are awash in articles about the downward trend in revenues, the upward trend in piracy, and the imminent collapse of their business models. And for a number of specific publishers (I'm using the larger definition that includes all media) the demise is indeed imminent. But not all; some will figure out the rules of the brave new world. (Just which ones, though, is a matter that must wait until we arrive in the new world.)

Of the media, book publishers have an advantage: the public domain. Or rather, a collection of works that is in the public domain. There is a large number of books that are available for free -- that is, without the encumbrance of copyright. These works were published some time ago and have fallen "out of copyright". Books such as "Pride and Prejudice" and "Alice in Wonderland".

Such a collection is useful to book publishers and the makers of e-book readers. E-reader makers can use this collection as an enticement; in fact, the Kobo reader comes with one hundred such classics.

Book publishers can use the collection too, indirectly. They can let the e-reader makers include this collection and train the customers in the use of e-book formats. Rather than buying a single copy of a book and passing it on to friends, a customer can become accustomed to downloading books to their personal device. From there it is a small matter of paying a modest fee (and the modesty and reasonableness of the fee is important) for a current book. This gives the publishers a business model.

Other media lack this collection of free works. The music industry has been quite good at keeping all of its works in copyright; consequently it has no free collection to use for training customers. Everything must be purchased: every song or collection requires a transaction. By being grabby, the recording industry and the movie industry have gained in the short term but are losing in the long term.

The book industry is making gains with the e-readers such as the Kindle and the Nook. These gains are due in part to the free collection of materials. Good materials. If the music and film industries want to make similar gains with customers, they may have to consider a similar strategy. Sadly, I think that they will be unable to do so. They will cling tightly to every work in their collection, demanding payment for every use, every viewing, and every excerpt. And they may pay the price for making customers pay the price.


Monday, June 21, 2010

Caveat emptor -- and measurements

We focus on the development of systems (programs, applications, or apps, depending on the environment) but we must not forget the buyer, the client, or the person who requests the system. They too have responsibilities in the development of the solution.

One overlooked responsibility is the specification for the durability of the system, or its expected life span. A program designed as a quick fix to an immediate problem can be a wonderful thing, but it is not necessarily designed for a long life span. Platforms and development environments change, and programs can be designed for a narrow or general need.

With ownership comes responsibility. A system owner must understand the durability of their systems. We often hear of a temporary fix living for years. Sometimes this is acceptable, yet many times it entails risks. Quick, temporary solutions are just that: hastily assembled solutions to specific problems. There is no guarantee that they will work for all future data.

Yet owners can think that a quick fix today can be used for a long period of time. This is a mistake. It is equivalent to fixing a flat tire with "fix-flat goo" instead of replacing the tire. The goo does repair the flat tire but it does not provide you with a new tire. If you want a new tire, you have to pay for it.

Similarly, if you want a durable software solution, you have to pay for it. If you are willing to accept a temporary solution, you must acknowledge that the solution is temporary and you will have to replace it in the future. You cannot pay for a temporary solution and receive a permanent one. Yet system owners rarely accept this responsibility.

Part of the problem is that we have no way to measure the durability of a solution. A system owner cannot tell the difference between a temporary fix and a permanent solution, other than the project duration and cost. Unlike a physical good, software is unobservable and unmeasurable. With no ability to rate a solution, all solutions look the same.

All solutions are not the same. We need a way to measure solutions and identify their quality.