Wednesday, June 26, 2013

The Red Queen's Race requires awareness

Does your software development project use mainstream technology? (Let's assume that you care about your technology.) Some project managers want to stay in the mainstream, others want to stay ahead of the crowd, and some want leading edge tech.

Starting a project at a specific position in technology is easy. Keeping that position, on the other hand, is not so easy.

Over time, languages and compilers and libraries change. There are new versions with enhanced features and bug fixes.


Source code, once written, is a "stake in the ground". It is fixed in place, tied to a language, possibly a compiler, and probably several libraries. Keeping up with those changes requires effort. Just how much effort will vary from language to language. The C++ language has been fairly stable over its thirty-year life; Visual Basic changed dramatically in the 1990s.

Thus we have a variant of the Red Queen's Race, in which one must run just to stay in place. (In the proper race, described in "Through the Looking-Glass", one must run as fast as one can. I've reduced the mandate for this discussion.) A software development project must, over time, devote some effort towards "running in place", that is, keeping up with the toolset.

This effort may be small (installing a new version of the compiler) or large (a new compiler and changes to a majority of the source modules). Sometimes the effort is very large: converting a project from Perl to Python is a major effort.

Failing to move to the current tools means that you slowly drift back, and possibly fall out of your desired position. A project that starts in the leading edge drifts to the mainstream, and a project in the mainstream becomes a laggard.

The Red Queen's Race for software requires not just changes to technology (updates to compilers and such) but also an awareness of technology. In the day-to-day activities of a software project, it is easy to focus inwards, looking at new requirements and defect reports. Maintaining one's position within tech requires looking outward, at updates and new technologies and techniques. You must be aware of updates to your toolset. You must be aware of new tools for testing and collaboration. You must be aware of other groups and their technologies.

When running in a herd, it's good to look at the herd, at least once in a while.


Monday, June 24, 2013

Ever-increasing maintenance costs

We like to think that a program, once written, tested, and deployed, is "finished". We like to think that it is a completed work, something we can leave in place and forget about.

But programs are not static elements. The maintenance costs of a given program tend to increase over time.

Now, programs often change over time. Once version 1.0 has been released, we often start working on version 1.1 (or 1.2, or 2.0, depending on one's ideas towards version numbers). Those changes usually increase the size and complexity of the code, which means an increase in maintenance expense. More code costs more to maintain.

Yet even with no changes, maintenance costs increase. At first glance, this seems wrong. Why would cost increase when the code remains unchanged?

Compilers and languages change over time. New versions of compilers implement new versions of languages. Sometimes, these changes cause existing code to break.

A subtler change occurs when new features are added to a language. The C++ language was modified to include the STL. Microsoft modified C# to include the LINQ functions. Sun and Oracle have enhanced Java.

While enhancements do not (usually) break existing code, they do change the core knowledge used by programmers. Today, it is a poor C++ programmer who does not use the STL, and a poor C# programmer who avoids LINQ. The enhancements become part of the "working set of knowledge".

Yet those old programs, the ones we think are "finished", may be old enough to pre-date the enhancements. When maintaining them, programmers have to think back to "the old days" and remember the old-fashioned methods for accomplishing certain tasks. A programmer may be able to use the new language features to add functionality to a program, but he must be able to read the old-style code.

When the idioms of old-style code are no longer popular, then fewer programmers understand them. Thus, to successfully maintain an older program, you as project manager must find a programmer who does understand the old-style code, or you must allow a less-experienced programmer time to learn the old idioms. That's the increase to your maintenance costs: more money for a scarce resource, or more time for an available resource.

No program stays "modern", at least not without constant upkeep. We build programs on top of compilers, languages, libraries, operating systems, databases, and communication protocols. All of these "platforms" change, regardless of our business.

The idea that a program is "finished" is a false one. Programs live in an ocean of change, and they must evolve and adapt in that ocean.

Saturday, June 22, 2013

Functional programming has no loud-mouth advocate

I'm reading "The Best of Booch", a collection of essays written by Grady Booch in the 1990s.

In the 1990s, the dominant programming method was structured programming. Object-oriented programming was new and available in a few shops, but the primary programming style was structured. (Structured programming was the "better" form of programming introduced in the 1970s.)

We had learned how to use structured programming, how to write programs in it, and how to debug programs in it. We (as an industry) had programming languages: C, Visual Basic, and even structured methods for COBOL. Our compilers and IDEs supported those languages.

We had accepted structured programming, adopted it, and integrated it into our processes.

We had also learned that structured programming was not perfect. Our programs were hard to maintain. Our programs contained bugs. Our programs were difficult to analyze.

Object-oriented programming was a way forward, a way to organize our programs and reduce defects. Booch was an advocate, out in front of the crowd.

Now, these essays were also a way for Booch to hawk his business, which was consulting and training in UML and system design. Booch was one of the early proponents of object-oriented programming, and one of the participants in the "notation wars" prior to UML. The agreement on UML was a recognition that there was more money to be made in supplying a uniform notation for system design than in fighting for one's own notation.

But despite the advertising motivation, the articles contain a strong set of content. One can feel the passion for object-oriented programming in Booch. These are legitimate articles on a new technology first, and convenient advertising for his business a distant second.

Fast-forward to the year 2013. We have accepted object-oriented programming as mainstream technology. People (and projects) have been using it for years -- no, decades. We have learned how to use object-oriented programming, how to write programs in it, and how to debug programs in it. We (as an industry) have programming languages: C++, Java, Python, and Ruby. Our compilers and IDEs support these languages.

We have accepted object-oriented programming, adopted it, and integrated it into our processes.

We have also learned that object-oriented programming is not perfect. Our programs are hard to maintain. Our programs contain bugs. Our programs are difficult to analyze.

In short, we have the same problems with object-oriented programs that we had with structured programs.

Now, functional programming is a way forward. It is a way to organize our programs and reduce defects. (I know; I have used it.)

But where are the proponents? Where is the Grady Booch of functional programming?

I have seen a few articles on functional programming. I have read a few books on the topic. Most articles and books have been very specific to a new language, usually Scala or Haskell. None have had the broader vision of Booch. None have described the basic benefits of functional programming, the changes to our teams and processes, or the implications for system architecture and design.

At least, none that I have read. Perhaps I have missed them, in my journeys in technical writings.

Perhaps there is no equivalent to Grady Booch for functional programming. Or perhaps one does not exist yet. Perhaps the time is too early, and the technology must develop a bit more. (I tend to think not, as functional programming languages are ready for use.)

Perhaps it is a matter of passion and business need. Booch wrote his articles because he felt strongly about the technology, and because he had a business case for them.

Perhaps it is a matter of distribution. The software business in 2013 is very different from the software business in 1997; the big change is the success of open source.

Open source projects emerge into the technology mainstream through channels other than books and magazine articles. They are transmitted from person to person via e-mail, USB drives, and GitHub. Eventually, magazine articles are written, and then books, but only after the technology is established and "running". The Linux, Apache, and Perl projects followed this path.

So maybe we don't need an obvious advocate for a new technology. Perhaps the next programming revolution will be quieter, with technology seeping slowly into organizations and not being forced. That might be a good thing, with smaller shocks to existing projects and longer times to learn and adopt a new programming language.

Thursday, June 13, 2013

Ed Snowden's Disclosure is Small Compared to Booz-Allen-Hamilton's

The NSA, Booz-Allen-Hamilton, the US Justice Department, various politicians, the media, and developers have been stirred up by the activities of Edward Snowden, who disclosed activities of the NSA. While we've all been watching the fun, there is a small detail that has been overlooked, one that may have significant consequences.

Booz-Allen-Hamilton terminated the employment of Snowden:

Booz Allen can confirm that Edward Snowden, 29, was an employee of our firm for less than 3 months, assigned to a team in Hawaii. Snowden, who had a salary at the rate of $122,000, was terminated June 10, 2013 for violations of the firm’s code of ethics and firm policy. News reports that this individual has claimed to have leaked classified information are shocking, and if accurate, this action represents a grave violation of the code of conduct and core values of our firm. We will work closely with our clients and authorities in their investigation of this matter.

This of course is not surprising.

The termination of his employment is not what interests me. What interests me is what people have been ignoring:

Snowden, who had a salary at the rate of $122,000

That's a high salary for a 29-year-old with no college degree.

I'm sure that a lot of employees of Booz-Allen-Hamilton are thinking about their salary, and perhaps finding it somewhat less than Snowden's.

Now, Snowden was employed in Hawaii, and Hawaii has a high cost of living. Someone in North Carolina could live just as comfortably on a lower salary, and I'm sure that Booz-Allen-Hamilton adjusts its compensation for geographic area.

Booz-Allen-Hamilton has said nothing about Snowden's specific duties or the skills required for his position, and there may be some obscure talent needed.

But even adjusting for those factors, that salary is going to get people thinking. It's going to get developers thinking. And they will be thinking: "Hey, perhaps I can earn that kind of salary."

Snowden, in an interview with The Guardian, claimed that his compensation was $200,000. Booz-Allen-Hamilton states that his salary was $122,000. The two claims are not necessarily in conflict; Booz-Allen-Hamilton may have paid a bonus on top of the salary. (I have no insight into Booz-Allen-Hamilton's compensation packages, and I am guessing here.)

Bonus or not, by stating his annual salary, Booz-Allen-Hamilton will have started a lot of people thinking. My guess is that developers will start asking for compensation to match Booz-Allen-Hamilton's statement.

Compensation information for large companies is, as a rule, held as confidential. By exposing this salary point, Booz-Allen-Hamilton may have created a problem across the entire US IT industry.

Monday, June 10, 2013

The next ERP is a private App Store

The BYOD initiatives in corporations have been treating mobile devices (smart phones and tablets) as small PCs (typically small Windows PCs). They must be managed because corporate data is confidential and must remain within the corporation. A valid reason, but the wrong conclusion.

Instead of thinking of smart phones and tablets as small PCs, we should think of them as smart phones and tablets. That is, we should think of them as devices that run apps, and corporations should provide apps to access data and perform business functions. The apps should authenticate the user, retrieve and store data (only on servers, not on the device), and govern the use and access to data.

In such a system, a corporation needs a way to build and distribute its custom apps. Common functions like e-mail and calendars can be handled with generic apps. Functions that are specific to the business must be developed by the business and somehow distributed to users.

One could put them on the public stores (Apple's iTunes, Google's Play, and Microsoft's App Store) or one could put them on a private store. The latter may have more appeal to large organizations.

Using a store (public or private) and the existing update infrastructure simplifies the task of software distribution. New employees can go to the store and download what they need. Updates are pushed to current users.

(Using authentication -- and not apps -- to control access to data lets anyone have any app. You do not have to limit apps to employees, or even subgroups such as executives. Think of it like Microsoft Excel -- anyone can buy MS Excel but only those people who can read the corporate spreadsheets can see their contents.)

Building a private store has some advantages. It is a single point for all apps, including apps across platforms. The Apple, Google, and Microsoft stores are limited to their platforms. A private corporate app store is a "one stop" source. Also, one is not beholden to the whims of the store managers -- all have removed apps for unknown reasons, at one time or another.

Building an app store is not a company's main purpose. (Unless the company is selling apps to all comers.) Look for the big consultancies to offer app store services and frameworks. IBM, HP, Dell, and even folks like Accenture and Booz-Allen-Hamilton may offer them. I also suspect that there will be an open-source app store framework.

I expect that app stores will be limited to large enterprises. Small Mom-and-Pop shops don't need an app store to control and measure app usage. The Fortune 500 will use them, and the next tier may, but below that the public stores may be sufficient. The market for app store frameworks and implementations may be narrow, like the ERP market.

Come to think of it, the ERP vendors may be the first to offer app store frameworks and support.

Friday, May 31, 2013

The rise of the simple UI

User interfaces are about to become simpler.

This change is driven by the rise of mobile devices. The UI for mobile apps must be simpler. A cell phone has a small screen and (when needed) a virtual keyboard. The user interacts through the touchscreen, not a keyboard and mouse. Tablets, while larger and often accompanied by a real (small-form) keyboard, also interact through the touchscreen.

For years, PC applications have accumulated features and complexity. Consider the Microsoft Word and Microsoft Excel applications. Each version has introduced new features. The 2007 versions introduced the "ribbon menu", an adjustment to the UI to accommodate the growing number of features.

Mobile devices force us to simplify the user interface. Indirectly, they force us to simplify applications. In the desktop world, the application with the most features was (generally) considered the best. In the mobile world, that calculation changes. Instead of selecting an application on the raw number of features, we are selecting applications on simplicity and ease of use.

The trend is ironic, as the early versions of Microsoft Windows were advertised as easy to use (a common adjective was "intuitive"). Yet while "intuitive" and "easy", Windows was never designed to be simple; configuration and administration were always complex. That complexity remained even with networks and Active Directory -- the complexity was centralized but not eliminated.

Apps on mobile don't have to be simple, but simple apps are the better sellers. Simple apps fit better on the small screens. Simple apps fit better into the mobile/cloud processing model. Even games demonstrate this trend (compare "Angry Birds" against PC games like "Doom" or even "Minesweeper").

The move to simple apps on mobile devices will flow back to web applications and PC applications. The trend of adding features will reverse. This will affect the development of applications and the use of technology in offices. Job requisitions will list user interface (UI) and user experience (UX) skills. Office workflows will become more granular. Large, enterprise systems (like ERP) will mutate into collections of apps and collections of services. This will allow mobile apps, web apps, and PC apps to access the corporate data and perform work.

Sellers of PC applications will have to simplify their current offerings. It is a change that will affect the user interface and the internal organization of their application. Such a change is non-trivial and requires some hard decisions. Some features may be dropped, others may be deferred to a future version. Every feature must be considered and placed in either the mobile client or the cloud back-end, and such decisions must account for many aspects of mobile/cloud design (network accessibility, storage, availability of data on multiple devices, among others).

Monday, May 27, 2013

Airships and software

Airships (that is, dirigibles, blimps, and balloons) and software have more in common than one might think. Yet we think of them as two very different things, and we even think about thinking about them in different ways.

Both airships and software must be engineered, and the designs must account for various trade-offs. For airships, one must consider the weight of the materials, the shape, and the size. A larger airship weighs more, yet has more buoyancy and can carry more cargo. Yet a larger airship is affected more by wind and is less maneuverable. Lighter materials tend to be less durable than heavy ones; the trade-off is long-term cost against short-term performance.

The design of software has trade-offs: some designs are cheaper to construct in the short term yet more expensive to maintain in the long term. Some programming languages allow for the better construction of a system -- and others even require it. Comparing C++ to Java, one can see that Java encourages better designs up front, while C++ merely allows for them.

I have observed a number of shops and a number of projects. Most (if not all) have given little thought to the programming language. The typical project picks a programming language based on the current knowledge of the people present on the team -- or possibly the latest "fad" language.

Selecting the programming language is important. More important than current knowledge or the current fad. I admit that learning a new language has a cost. Yet picking a language based on nothing more than the team's current knowledge seems a poor way to run a project.

The effect does not stop at languages. We (as an industry) tend to use what we know for many aspects of projects: programming languages, database, front-end design, and even project management. If we (as an organization) have been using the waterfall process, we tend to keep using it. If we (as an organization) have been using an SQL database, we tend to keep using it.

Using "what we know" makes some sense. It is a reasonable course of action -- in some situations. But there comes a time when "the way we have always done it" does not work. There comes a time when new technologies are more cost-effective. Yet sticking with "what we know" means we have no experience with "the new stuff".

If we have no experience with the new technologies, how do we know when they are cost-effective?

We have (as an industry) been using relative measures of technologies. We don't know the absolute cost of the technologies for software. (We do know the absolute cost of technologies for airships -- an airship needs so many yards of material and weighs so much. It carries so much. It has so much resistance and so much wind shear.)

Actually, the problem is worse than relative measures. We have no measures at all (for many projects) and we rely on our gut feelings. A project manager picks the language and database and project process based on his feelings. There are no metrics!

I'm not sure why we treat software projects so differently from other engineering projects. I strongly believe that it cannot continue. The cost of picking the wrong language, the wrong database, the wrong management process will increase over time.

We had better start measuring things. The sooner we do, the sooner we can learn the right things to measure.