Sunday, October 23, 2011

Functional programming pays off (part 2)

We continue to gain from our use of functional programming techniques.

Using just the "immutable object" technique, we've improved our code and made our programming lives easier. Immutable objects have given us two benefits this week.

The first benefit: less code. We revised our test framework to use immutable objects. Rather than instantiating a test object (which exercises the true object under test) and asking it to run the tests, we now instantiate the test object and it runs the tests immediately. We then simply ask it for the results. Our new code is simpler than before, and contains fewer lines of code.
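The pattern described above can be sketched in a few lines. This is a minimal illustration, not our actual framework; the names (Adder, AdderTest) and the result format are hypothetical, since the post does not show code.

```python
class Adder:
    """The (hypothetical) object under test."""
    def add(self, a, b):
        return a + b

class AdderTest:
    """Runs its checks at construction time; results are read-only afterward."""
    def __init__(self):
        adder = Adder()
        results = []
        results.append(("adds positives", adder.add(2, 3) == 5))
        results.append(("adds negatives", adder.add(-2, -3) == -5))
        self._results = tuple(results)   # immutable snapshot of the outcomes

    @property
    def results(self):
        return self._results

test = AdderTest()   # construction runs the tests; no separate "run" step
failures = [name for name, passed in test.results if not passed]
print(failures)      # → []
```

Because the test object never changes after construction, there is no "run the tests, then ask if they ran" protocol to get wrong: you build it, then you read it.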

The second benefit: we can extract classes from one program and add them to another -- and do it easily. This is a big win. Often (too often), extracting a class from one program is difficult, because of dependencies and side effects. The one class requires other classes, not just direct dependencies but classes "to the side" and "above" in order to function. In the end, one must import most of the original system!

With immutable objects, we have eliminated side effects. Our code has no "side" or "above" dependencies, and has fewer direct dependencies. Thus, it is much easier for us to move a class from one program into another.

We took advantage of both of these effects this week, re-organizing our code. We were productive because our code used immutable objects.

Wednesday, October 19, 2011

Engineering vs. craft

Some folks consider the development of software to be a craft; others claim that it is engineering.

As much as I would like for software development to be engineering, I consider it a craft.

Engineering is a craft that must work within measurable constraints, and must optimize some measurable attributes. For example, bridges must support a specific, measurable load, and minimize the materials used in construction (again, measurable quantities).

We do not do this for software.

We manage not software but software development. That is, we measure the cost and time of the development effort, but we do not measure the software itself. (The one exception is measuring the quality of the software, but that is a difficult measurement and we usually measure the number and severity of defects, which is a negative measure.)

If we are to engineer software, then we must measure the software. (We can -- and should -- measure the development effort. Those are necessary measurements. But they are not, by themselves, sufficient for engineering.)

What can we measure in software? Here are some suggestions:

 - Lines of code
 - Number of classes
 - Number of methods
 - Average size of classes
 - Complexity (cyclomatic, McCabe, or whatever metric you like)
 - "Boolean complexity" (the number of boolean constants used within code that are not part of initialization)
 - The fraction of classes that are immutable

Some might find the notion of measuring lines of code abhorrent. I will argue that it is not the metric that is evil, but its use to rank and rate programmers. The misuse of metrics is all too easy and can lead to poor code. (You get what you measure and reward.)

Why do we not measure these things? (Or any other aspect of code?)

Probably because there is no way to connect these metrics to project cost. In the end, project cost is what matters. Without a translation from lines of code (or any other metric) to cost, the metrics are meaningless. The code may be one class of ten thousand lines, or one hundred classes of one hundred lines each; without a conversion factor, the cost of each design is the same. (And the cost of each design is effectively zero, since we cannot convert design decisions into costs.)

Our current capabilities do not allow us to assign cost to design, or code size, or code complexity. The only costs we can measure are the development costs: number of programmers, time for testing, and number of defects.

One day in the future we will be able to convert complexity to cost. When we do, we will move from craft to engineering.

Tuesday, October 11, 2011

SOA is not DOA

SOA (service oriented architecture) is not dead. It is alive and well.

Mobile apps use it. iPhone apps that get data from a server (e-mail or Twitter, for example) use web services -- a service oriented architecture.

SOA was the big thing back in 2006. So why do we not hear about it today?

I suspect the silence has nothing to do with SOA's technical merits.

I suspect that no one talks about SOA because no one makes money from it.

Object oriented programming was an opportunity to make money. Programmers had to learn new techniques and new languages; tool vendors had to provide new compilers, debuggers, and IDEs.

Java was a new programming language. Programmers had to learn it. Vendors provided new compilers and IDEs.

UML was big, for a while. Vendors provided tools; architects, programmers, and analysts learned it.

The "retraining load" for SOA is smaller, limited mostly to the architects of systems. (And there are far fewer architects than programmers or analysts.) SOA has no direct affect on programmers.

With no large-scale training programs for SOA (and no large-scale training budgets for SOA), vendors had no incentive to advertise it. They were better off hawking new versions of compilers.

Thus, SOA quietly faded into the background.

But it's not dead.

Mobile apps use SOA to get work done. iPhones and Android phones talk to servers, using web services. This design is SOA. We may not call it that, but that's what it is.
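The client side of that design is simple enough to sketch. The endpoint URL and the response shape below are hypothetical; real services (Twitter, mail) define their own contracts.

```python
import json
from urllib.request import Request

# The app knows only the service contract: a URL and a JSON payload.
request = Request(
    "https://api.example.com/v1/messages?user=alice",   # hypothetical endpoint
    headers={"Accept": "application/json"},
)

# Suppose the server replied with this JSON body:
raw_response = '{"messages": [{"from": "bob", "text": "hello"}]}'
data = json.loads(raw_response)

for message in data["messages"]:
    print(message["from"], message["text"])    # → bob hello
```

The phone app holds no business logic and no database; it asks a service for data and renders the answer. That separation of client from service is the architecture, whatever we choose to call it.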

When the hype of SOA vanished, lots of companies dropped interest in SOA. Now, to move their applications to the mobile world, they will have to learn SOA.

So don't count SOA among the dead.

On the other hand, don't count on it for your profits. You need it, but it is infrastructure, like electricity and running water. I know of few companies that count on those utilities as a competitive advantage.

Monday, October 10, 2011

Talent is not a commodity

Some companies treat their staff as a commodity. You know the symptoms: rigid job titles, detailed (although often inaccurate) job descriptions, and a bureaucracy for hiring people. The underlying idea is that people, like any other commodity, can be hired for specific tasks at specific times. The management-speak for this idea is "just in time provisioning of staff".

Unfortunately for the managers, talented individuals are not stocked on shelves. They must be found and recruited. While companies (hiring companies and staffing companies) have built an infrastructure of resumes and keyword searches, the selection of candidates is lengthy and unpredictable. Hiring a good programmer is different from ordering a box of paper.

The "talent is a commodity" mindset leads to the "exact match" mindset. The "exact match" mindset leads hiring managers (and Human Resource managers) to the conclusion that the only person for the job is the "right fit" with the "complete set of skills". It is an approach that avoids mistakes, turning away candidates for the smallest of reasons. ("We listed eight skills for this position, and you have only seven. Sorry, you're not the person for us!")

Biasing your hiring decisions against mistakes means that you lose out on opportunities. It also means that you delay bringing a person on board. You can wait until you find a person with the exact right skills. Depending on the package (and its popularity), it may take some time before you find the person.

It might take you six months -- or longer -- to find an exact match. And you may never find an exact match. Instead, with a deadline looming, you compromise on a candidate that has skills that are "close enough".

I once had a recruiter from half-way across the country call me, because my resume listed the package GraphViz. GraphViz generates and manipulates network graphs, and while used by lots of people, it is rarely listed on resumes. Therefore, recruiters cannot find people with the exact match to the desired skills -- the keyword match fails.

Of course, when you bring this person on board, you are under a tight schedule. You need the person to perform immediately. They do their best, but even that may be insufficient to learn the technologies and your current system. (Not to mention the corporate culture.) The approach has a high risk of mistakes (low quality deliverable), slow performance (again, a low quality deliverable), cost overruns from overtime (high expenses), and possibly a late delivery.

Let's consider an alternative sequence.

Instead of looking for an exact match, you find a bright programmer who has the ability to learn the specialized skill. Pay that person for a week (or three) to learn the package. Then have them start on integrating the package into your system.

You should be able to find someone in a few weeks, much less than the six months or more for the exact match. (If you cannot find a bright programmer in a week or two, you have other problems.)

Compromising on specific skills (while keeping excellence in general skills) provides some advantages.

You start earlier, which means you can identify problems earlier.

Your costs may be slightly higher, since you're paying for more time. On the other hand, you may be able to find a person at a lower rate. And even at the higher rate, a few months over a long term of employment is not that significant.

You invest in the person (by paying him to learn something new), and the person will recognize that. (You're hiring a clever person, remember?)

You can consider talent as an "off-the-shelf" commodity, something that can be hired on demand. For commonly used skills, this is a workable model. But for obscure skills, or a long list of skills, the model works poorly. Good managers know how and when to compromise on small objectives to meet the larger goals.


Saturday, October 8, 2011

What Microsoft's past can tell us about Windows 8

Microsoft Windows 8 changes a lot of assumptions about Windows. It especially affects developers. The familiar Windows API has been deprecated, and Microsoft now offers WinRT (the "Windows Runtime").

What will it be like? What will it offer?

I have a guess.

It is only a guess, so I could be right or wrong. I have seen none of Microsoft's announcements or documentation for Windows 8, so I may already be wrong.

Microsoft is good at building better versions of competitors' products.

Let's look at Microsoft products and see how they compare to the competition.

MS-DOS was a bigger, better CP/M.

Windows was a better (although perhaps not bigger) version of IBM's OS/2 Presentation Manager.

Windows 3.1 included a better version of Novell's Netware.

Word was a bigger version of Wordstar and Wordperfect.

Excel was a bigger, better version of Lotus 1-2-3.

Visual Studio was a bigger, better version of Borland's TurboPascal IDE.

C# was a better version of Java.

Microsoft is not so much an innovator as it is an "improver", one who refines an idea.


It might just be that Windows 8 will be not an Innovative New Thing, but instead a Bigger Better Version of Some Existing Thing -- and not a bigger, better version of Windows 7, but a bigger, better version of someone else's operating system.

That operating system may just be Unix, or Linux, or NetBSD.

Microsoft can't simply take the code to Linux and "improve" it into WinRT; doing so would violate the Linux license.

But Microsoft has an agreement with Novell (yes, the same Novell that saw its NetWare product killed by Windows 3.1), and Novell has the copyright to Unix. That may give Microsoft a way to use Unix code.

It just may be that Microsoft's WinRT will be very Unix-like, with a kernel and a separate graphics layer, modules and drivers, and an efficient set of system calls. WinRT may be nothing more than a bigger, better version of Unix.

And that may be a good thing.

Tuesday, October 4, 2011

What have you done for you lately?

The cynical question that one asks of another is "What have you done for me lately?".

A better question to ask of oneself is: "What have I done for me lately?".

We should each be learning new things: new technologies, new languages, new business skills... something.

Companies provide employees with performance reviews (or assessments, or evaluations, or some such thing). One item (often given a low weighting factor) is "training". (Personally, I think it should be considered "education"... but that is another issue.)

I like to give myself an assessment each year, and look at education. I expect to learn something new each year.

I start each year with a list of cool stuff that sounds interesting. The items could be new programming languages, different technologies, or interpersonal skills. I refer to that list during the year; sometimes I add or change things. (I don't hold myself to the original list -- technology changes too quickly.)

Employers and companies all too often take little action to help their employees improve. That doesn't mean that you get a free pass -- it means that you must be proactive. Don't wait for someone to tell you to learn a new skill; by then it will be too late. Look around, pick some skills, and start learning.

What are you doing for you?

Sunday, October 2, 2011

The end of the PC age?

Are we approaching the end of the PC age? It seems odd to see the end of the age, as I was there at the beginning. The idea that a technology should have a shorter lifespan than a human leads one to various contemplations.

But perhaps the idea is not so strange. Other technologies have come and gone: videotape recorders, hand-held calculators, Firewire, and the space shuttle come to mind. (And by "gone", I mean "used in limited quantities, if at all". The space shuttles are gone; VCRs and calculators are still in use but considered curiosities.)

Personal computers are still around, of course. People use them in the office and at home. They are entrenched in the office, and I think that they will remain present there for at least a decade. Home use, in contrast, will decline quickly, with personal computers replaced by game consoles, cell phones, and tablets. Computing will remain in the office and in the home; only its form will change.

But here's the thing: People do not think of cell phones and tablets as personal computers.

Cell phones and tablets are cool computing devices, but they are not "personal computers". Even Macbooks and iMac computers are not "personal computers". The term "PC" was strongly associated with IBM (with "clone" for other brands) and Microsoft DOS (and later, Windows).

People have come to associate the term "personal computer" with a desktop or laptop computer of a certain size and weight, of any brand, running Microsoft Windows. Computing devices in other forms, or running other operating systems, are not "personal computers"; they are something else: a MacBook, a cell phone, an iPad... something. But not a PC.

Microsoft's Windows 8 offers a very different experience from the "classic Windows". I believe that this difference is enough to break the idea of a "personal computer". That is, a tablet running Windows 8 will be considered a "tablet" and not a "PC". New desktop computers with touchscreens will be considered computers, but probably not "PCs". Only the older computers with keyboards and mice (and no touchscreen) will be considered "personal computers".

Microsoft has the opportunity to brand these new touchscreen computers. I suggest that they take advantage of this opportunity. I recognize that their track record with product names has been poor ("Zune", "Kin", and the ever-awful "Bob") but they must do something.

The term "personal computer" is becoming a reference to a legacy device, to our father's computing equipment. Personal computers were once the Cool New Thing, but no more.