Sunday, December 19, 2010

Are you ready for another step?

The history of programming languages has introduced concepts over time. Most of these concepts add rigor to our programming, and most of these changes were accompanied by complaints and much gnashing of teeth from the "old school" programmers. The big changes were high-level languages, structured programming, and object-oriented programming.

The step to high level languages was perhaps the most traumatic, since it was the first. It saw the deprecation of assembly language and fine control of the program in exchange for the ability to write programs with little concern for memory layout and register assignment.

The step to structured programming was also controversial. The programming police took away our GOTO statement, that workhorse of flow control, and limited us to sequential statements, if/then/else blocks, and while loops.

When we moved from structured programming to object-oriented programming, we had to learn a whole new "paradigm" (and how to spell and pronounce the word "paradigm"). We learned to organize our code into classes, to encapsulate our data, to build a class hierarchy, and to polymorphize our programs.

Funny thing about each of these steps: they built on the previous steps. Structured programming assumed the existence of high level languages. There was no (noticeable) movement for structured assembly language. Object-oriented programming assumed the tenets of structured programming. There was no GOTO in object-oriented programming languages, except for the C++ "goto" keyword which was offered up on the altar of backwards compatibility and only then with restrictions.

And now we are about to move from object-oriented programming to functional programming. Once again, the "old school" programmers will gnash their teeth and complain. (Of course, it will be a different bunch than the teeth-gnashers of the golden age of assembly language. The modern teeth-gnashers will be those who advocated for object-oriented programming two decades ago.) And once again we will move to the new thing and accept the new paradigm of programming.

Yet the shops that attempt to move up to the step of functional programming will face a challenge.

Here's the problem:

Many shops, and I suspect most shops, use only a small fraction of object-oriented programming concepts in their code. They have not learned object-oriented programming.

Big shops (and medium-size shops, and small shops) have adopted object-oriented languages (C++, Java, C#) but not adopted the mindset of object-oriented programming. Much of the code is procedural code, sliced into classes and methods. It is structured code, but not really object-oriented code.

Most programmers are writing not clean, object-oriented code, but FORTRAN. The old saying "I can write FORTRAN in any language" is true because the object-oriented languages allowed for the procedural constructs. (Mind you, the code is FORTRAN-77 and not FORTRAN IV or FORTRAN II, but FORTRAN and procedural it is.)

This model breaks when we move to functional languages. I suspect that one can write object-oriented code (and not pure functional code) in functional languages, but one cannot write procedural code in them, just as one cannot write unstructured code in object-oriented languages. The syntax does not allow for it.

The shops that move to functional languages will find that their programmers have a very hard transition. They have been writing procedural code, and that technique will no longer work.

What is an IT shop to do? My recommendations:

First, develop better skills at object-oriented programming. This requires two levels of learning: one for individuals, and one for the organization. Individuals must learn to use the full range of object-oriented programming techniques. Organizations must learn to encourage object-oriented programming and must actively discourage the older, structured programming techniques.

Second, start developing skills in functional programming. If using a "plain" object-oriented programming language such as C++, build discipline in the techniques of functional programming. My favorite is what I call "constructor-oriented" programming, in which you use immutable objects: all member variables are set in the constructor, and no method is allowed to change them. This exercise gives you experience with some of the notions in functional programming.
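
A minimal sketch of the idea in C++ might look like this. (The class and its members are invented for illustration; the point is the shape of the code, not the domain.) Every member is set in the constructor and declared const, and anything that would otherwise modify the object returns a new instance instead:

#include <string>

// A hypothetical immutable class: all state is fixed in the constructor.
class Account
{
public:
    Account(const std::string &owner, double balance)
        : m_owner(owner), m_balance(balance)
    {
    }

    // "Mutations" build and return a new object; this one never changes.
    Account Deposit(double amount) const
    {
        return Account(m_owner, m_balance + amount);
    }

    const std::string &Owner() const { return m_owner; }
    double Balance() const { return m_balance; }

private:
    const std::string m_owner;   // const members cannot be reassigned later
    const double m_balance;
};

Working in this style forces you to think in values and transformations rather than state changes, which is exactly the mindset that functional languages demand.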

The transition to functional languages will occur, just as the transition to object-oriented languages occurred. The question is, do you want your shop to move to functional programming on your schedule, or on someone else's schedule? For if you make no plans and take no action, it will occur as the external world dictates.

Friday, December 17, 2010

Think like a programmer

From time to time, it behooves me to think about how I think.

This past week, I've been working on a legacy application. It consists of 20,000 lines of C++ code, written and modified by a number of people of varying talent. The code works -- mostly -- and it certainly compiles and runs. But the code is poorly organized, uses unclear names for functions and variables, and relies on the C preprocessor. Identifying the source of problems is a challenge. As I work on the code, I look at the coding style and the architecture. Both make me wince. I think to myself: "Programmers -- real programmers -- don't think this way."

It's an arrogant position, and I'm not proud of the arrogance. The code was put together by people, some of whom are not professional programmers, doing their best. But the thought crosses my mind.

Cut to my other activity this week: reading about the Haskell programming language and the concepts behind functional programming. These concepts are as different from object-oriented programming as object-oriented programming was from procedural programming. In functional programming, the design is much more rigorous. The central organizational concepts are recursion and sets. The programs are hard to read -- for one who is accustomed to object-oriented code or the procedure-laden code that exists in many shops. Yet despite the strangeness, functional programming has elegance and a feeling of durability. As I look at programs written in Haskell, I think to myself: "Programmers -- real programmers -- don't think this way."

This is not the arrogant position I take with poorly structured code, but one of humility and frustration. I am disappointed with the programmers and the state of the craft. I covet the attributes of functional programming. I want programs to be elegant, readable, and reliable. I want programmers to have a high degree of rigor in their thinking.

In short, I want programmers to think not like programmers, but like mathematicians.

Monday, December 13, 2010

The importance of being significant

In engineering computations, we have the notion of "significant figures". This notion tells us how many digits of a number are accurate or "significant", and which digits should be ignored. This sounds worse than it really is; let me provide an example.

If I tell you that I have 100 dollars in my pocket, you will assume that I have *about* 100 dollars. I may have exactly 100 dollars, or I may have 95 or 102 or maybe even 120. My answer provides information to a nice round number, which is convenient for our conversation. (If I actually have $190, or anything more than $150, I should say "about 200 dollars", since that is the closer round number.) The phrase "100 dollars" is precise to the first digit (the '1' in '100') but not down to the last zero.

On the other hand, if I tell you that I have 100 dollars and 12 cents, then you can assume that I indeed have $100.12 and not something like $120 or $95. By specifying the 12 cents, I have provided an answer with more significant figures: five in the latter case, one in the former.

The number of significant figures is, well, significant. Or at least important. It's a factor in calculations that must be included for reliable results. There are rules for performing arithmetic with numbers, and significant figures tell us when we must stop adding digits of precision.

For example, the hypothetical town of Springfield has a population of 15,000. That number has two significant figures. If one person moves into Springfield, is the population now 15,001? The arithmetic we learned in elementary school says that it is, but that math assumes that the 15,000 population figure is precise to all places (five significant figures). In the real world, town populations are estimates (mostly because they change, but slowly enough that the estimate is still usable). The 15,000 figure is precise to two figures; it has limited precision.

When performing calculations with estimates or other numbers with limited precision, the rule is: you cannot increase precision. You have to keep to the original level of precision, or lose precision. (You cannot make numbers more precise than the original measurements, because that is creating fictional information.)

With a town estimate of 15,000 (two "sig-figs"), adding a person to the town yields an estimate of... 15,000. It's as if I told you that I had $100 in my pocket, and then I found a quarter and tucked it into my pocket. How much do I now have in my pocket? It's not $100.25, because that would increase the number of significant figures from one to five, and you cannot increase precision. We have to stick with one digit of precision, so I *still* have to report $100 in my pocket, despite my windfall.
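
To make the adjustment concrete, here is a small sketch in C++. (No language or library that I know of provides this; the function and its names are my own illustration, and it only rounds a result back to a given number of significant figures -- it does not track precision through an entire calculation.)

#include <cmath>
#include <iostream>

// Round a value to the given number of significant figures.
double round_to_sig_figs(double value, int sig_figs)
{
    if (value == 0.0)
        return 0.0;
    double magnitude = std::floor(std::log10(std::fabs(value)));
    double factor = std::pow(10.0, magnitude - (sig_figs - 1));
    return std::floor(value / factor + 0.5) * factor;
}

int main()
{
    double population = 15000.0;       // an estimate, good to two significant figures
    double naive = population + 1.0;   // grade-school arithmetic says 15001
    std::cout << round_to_sig_figs(naive, 2) << std::endl;   // prints 15000
    return 0;
}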

In the engineering world, respecting the precision of the initial estimates is important for accurate estimates later in the calculations.

I haven't seen this concept carried over to the computer programming world. In programming languages, we have the ability to read and write integers and floating point numbers (and other data types). With integers, we often have the ability to specify the number of character positions for the number; for floating point, we can specify the number of digits and the number of decimal places. But the number of decimal places is not the same as the number of significant figures.

In my experience, I have seen no programming language or class library address this concept. (Perhaps someone has, if so please send info.) Knuth covers the concept in great detail in "The Art of Computer Programming" and explains how precision can be lost during computations. (If you want a scary read, go read that section of his work.)

There may be several reasons for our avoidance of significant figures:

It takes effort to compute. Using significant figures in calculations requires that we drag around additional information and perform additional adjustments on the raw results. This is a problem of computational power.

It requires additional I/O. There is more effort to specify the significant figures on input (and, to a lesser extent, on output). This is an argument of language specification, numeric representation, and input/output capacity.

It reduces the image of authority associated with the computer. In Western culture, the computer holds a place of authority over information. Studies have shown that people believe data on computer printouts more readily than data on hand-written documents. This is an issue of psychology.

Some domains don't need it. The banking industry, for example, uses numbers that are precise to a fixed decimal place. When you ask a bank for your balance, it responds with a number precise to the penny, not "about $100". This is an issue of the domain.

My thinking is that all of these arguments made sense in their day, but should be re-examined. We have the computational power and the parsing capabilities for accepting, tracking, and using significant figure information. While banking may be immune to significant figures (and perhaps that is only the accounting side of banking), many other domains need to track the precision of their data.

As for the psychological argument, there is no amount of technology, hardware, or language features that will change our thinking. It is up to us to think about our thinking and change it for the better.

Sunday, December 12, 2010

Simple or complex

Computers have been complex since the beginning. Computer users have a love/hate relationship with complexity.

We can add new features by adding a new layer onto an existing system, or expanding an existing layer within a system. Modifying an existing system can be difficult; adding a new layer can be fast and cheap. For example, the original Microsoft Windows was a DOS program that ran on PCs. Morphing DOS into Windows would have been a large effort (not just for development but also for sales and support to users who at the time were not convinced of the whole GUI idea) and a separate layer was the more effective path for Microsoft.

But adding layers is not without cost. The layers may not always mesh, with portions of lower layers bleeding through the upper layers. Each layer adds complexity to the system. Add enough layers, and the complexity starts to affect performance and the ability to make other changes.

The opposite of "complex" is "simple"; the opposite of "complexify" (if I may coin the word) is "simplify". But the two actions do not have equivalent effort. Where adding complexity is fast and cheap, simplifying a system is hard. One can add new features to a system; if users don't want them, they can ignore them. One has a harder time removing features from a system; if users want them they cannot ignore that the features are gone.

Complexity is not limited to PCs. Consumer goods, such as radios and televisions, were at one point complex devices. Radios had tubes that had to be replaced. TVs had tubes also, and lots of knobs for things like "horizontal control", "color", "hue", and "fine tuning". But those exposed elements of radio and TV internals were not benefits and not part of the desired product; they were artifacts of utility. We needed them to make the device work and give us our programming. They disappeared as soon as technically possible.

Automobiles had their share of complexity, with things like a "choke" and a pedal for the transmission. Some features have been removed, and others have been added. Cars are gaining complexity in the form of bluetooth interfaces to cell phones and built-in GPS systems.

Software is not immune to the effect of layers and complexity. Microsoft Windows was one example, but most systems expand through added layers. The trick to managing software is to manage not just the growth of features but to manage the reduction in complexity. Microsoft eventually merged Windows and DOS and Windows became an operating system in its own right. Microsoft continues to revise Windows, but they do it in both directions: they add features and expand capabilities with new layers, but they also remove complexity and the equivalent of the "fine tuning" knob.

Google's Cr-48 laptop is a step in simplifying the PC. It removes lots of knobs (no local programs, and even no Caps Lock key) and pushes work onto the internet. I predict that this will be a big seller, with simplicity being the sales driver.

Friday, December 10, 2010

Goodbye, Capslock!

The newly-announced Google Chrome OS has proposed the elimination of the venerable Caps Lock key. Many folks are outraged, but I am content to see the Caps Lock key ride off into the sunset.

To truly understand the Caps Lock key, one needs to know the history of computer hardware, programming languages, and the typewriter. The notion of Caps Lock started with typewriters, which allowed their users to shift between lower case and upper case letters with a "shift" key. (Most typewriters had two shift keys, one on either side of the main keyboard.) Depressing the key moved the key assembly up and changed the impact area of each key from the lower case letter to the corresponding upper case letter. The "Shift Lock" key engaged a mechanism that kept the key assembly in the upper position, allowing the typist to easily type a series of upper case letters.

The early data terminals duplicated this capability, but with "logical" shift keys that changed the keystrokes from lower case to upper case. Very early data entry devices, such as IBM keypunch machines from the 1950s, had only upper case and therefore needed no shift or shift lock keys. Very early IBM systems such as the IBM 1401 used a character set that had only upper case letters. Later systems (although still early in the computer age) allowed for upper and lower case.

For computers, the distinction between upper and lower case was important. Early (pre-1970, for the most part) systems worked only in upper case. Data terminals that allowed lower case input were a nuisance, since the lower case letters were ignored or rejected by the programs. Programming languages such as COBOL and FORTRAN (and even BASIC) expected the programs to be entered in upper case. For these systems, the Caps Lock key was a boon, since it let one type large quantities of text in upper case.

The mavericks that changed the world were Unix and the C programming language. The pair allowed (and even encouraged) the use of lower case letters. Soon, compilers for Pascal and FORTRAN were allowing upper and lower case letters, and doing the right thing with them.

By the time the IBM PC came along, the computer world was ready to accept upper and lower case. Yet there was enough inertia to keep the Caps Lock key, and the IBM PC kept it. Not only did it keep it, but it added the Num Lock and Scroll Lock keys.

Yet the Caps Lock key has outlived its usefulness. I don't use it; I haven't used it since I wrote COBOL programs back in the late 1980s. The varied tasks of my work expect upper and lower case letters, and long strings of upper case are not used. Tools such as Microsoft Word, Microsoft Excel, and Microsoft Visual Studio for C++ or C# do not need blocks of upper case letters. The modern languages of C++, Java, and C# are case-sensitive, and using them requires upper and lower case.

I say, let go of Caps Lock. We can do what we need with the rest of the keyboard. (Actually, I think we can let go of the Num Lock and Scroll Lock keys, too.)

Thursday, December 9, 2010

WebMatrix rides again -- but does it matter?

Microsoft has re-introduced a new version of its "WebMatrix" package for entry-level web developers. Not merely an incremental upgrade, the new WebMatrix is a totally new product with the old branding.

WebMatrix is its own thing, living in the space below Visual Studio (even the "Express" version). Yet it still uses .NET and C#. It has a neat little syntax, with things like:

value = @Request["username"];

It can use the auto-typing constructs of the modern C# language, and you can declare variables with "var" and let the compiler figure out the type. Microsoft has taken the unusual step of supporting an alien technology (PHP) in the package. They've worked hard on this product.

The real question is: does this new incarnation of WebMatrix have a future? It's a neat tool for web development. Some might say that it is what ASP.NET should have been. If ASP was "version 1", and ASP.NET was "version 2", then this is the famous "version 3" with which Microsoft succeeds.

But the moment for this kind of web development tool has come and gone. WebMatrix is aimed at hobbyists, people working with a lighter tool than even the free Express version of Visual Studio. I don't know how many people are in that situation.

The WebMatrix product fills in a box on Microsoft's "we have solutions for all markets" diagram, but I have to wonder how many people live in that particular corner. WebMatrix is for hobbyists, and hobbyists have little money. The money is clearly in the enterprise packages, and enterprises won't be using WebMatrix. If Microsoft wants to grow an "ecosystem" of developers, their tools for hobbyists will have to offer something more than web apps. Hobbyists have moved on to mobile apps, mainly the iPhone and Android. The idea of WebMatrix strikes me as too little, too late.


Sunday, December 5, 2010

We don't need no stinkin phone numbers

Why do we need phone numbers? If you ask the average person, they will tell you that you need a phone number to call someone. I'm not sure that we need phone numbers.

A phone number is an interesting thing. What's more interesting is what it is not: it's not an attribute or property of a phone. That is, there is nothing in your phone (I'm thinking old-style land-line phone here) that knows its phone number. 

And that makes sense. I can take my land-line phone, unplug it from my house, carry it to your house, plug it in, and use it. When I do, calls made are charged to your phone number (your account) and not mine. The fact that I'm using the handset from my house doesn't matter.

One can view a phone number as an address, a target for phone calls. This is a pretty good description of a phone number. The phone jacks in your house are bound to a "pair number" at the central office, and a phone number is associated with that pair number. This level of indirection allows the phone company to move phone numbers around. For example, when my sister moved from one house to another in the same town, she kept her phone number. The phone company simply associated her phone number with the pair number associated with her new home.

Cell phones have a similar arrangement. There is no pair of copper wires running to a cell phone, obviously, but each phone has a unique identifier and the phone company associates your phone number with that identifier. Once associated, the phone company can route calls to your phone based on the identifier.

Another view of a phone number is that it is an instruction to a switching system. Specifically, an instruction to the public switched telephone network (PSTN) to connect you to a particular phone. This is a slightly more useful concept, since it binds the phone number to the switching system and not the device.

Binding the phone number to the switching system means that it is an artifact of the switching system. If we replace the switching system, we can replace the phone number. This is precisely what Skype does, and I suspect competing systems (Vonage, Google Phone) do the same thing.

With Skype, you don't need phone numbers. Instead, you use the Skype ID. I'm on Skype, and I can talk with other Skype users. To connect to another Skype user, I need only their Skype ID -- no phone number is necessary. The Skype switching system can connect us with just the ID.

Skype does let me call people who are not on Skype -- people on the PSTN. To call them, I *do* need a phone number. But this is not Skype's requirement; it is a requirement of the PSTN. (If you want to talk to anyone through the PSTN, you must provide a phone number.) When Skype takes my call and routes it to the PSTN, Skype must provide the destination phone number.

Just as phone numbers are instructions to the PSTN, and Skype IDs are instructions to the Skype switching system, URLs are instructions to the web system. (The switching system for the web is perhaps a bit more complex than that of the PSTN, as the web has multiple layers of addresses, but the concept is the same.)

We may soon see the decline of the phone number. It works for the PSTN, but not for other systems. Other systems (the web, Skype, Google Phone) have no need of phone numbers, other than to connect to the PSTN. If we abandon the PSTN for other systems, we will also abandon phone numbers.


Monday, November 29, 2010

The return of frugality

We may soon see a return to frugality with computing resources.

Ages ago, computers were expensive, and affordable by only the very wealthy (governments and large companies). The owners would dole out computing power in small amounts and charge for each use. They used the notion of "CPU time", which was the amount of time actually spent by the CPU processing your task.

The computing model of the day was timesharing, the allocation of a fraction of a computer to each user, and the accounting of usage by each user. The key aspects measured were CPU time, connect time, and disk usage.

The PC broke the timesharing model. Instead of one computer shared by a number of people, the PC let each person have their own computer. The computers were small and low-powered (laughingly so by today's standards) but enough for individual needs. With the PC, the timesharing mindset was discarded, and along with it went the attention to efficiency.

A PC is a very different creature from a timesharing system. The purchase is much simpler, the installation is much simpler, and the administration is (well, was) non-existent. Instead of purchasing CPU power by the minute, you purchased the PC in one lump sum.

This change was significant. The PC model had no billing for CPU time; the monthly bill disappeared. That made PC CPU time "free". And since CPU time was free, the need for tight, efficient code became non-existent. (Another factor in this calculus was the availability of faster processors. Instead of writing better code, you could buy a new faster PC for less than the cost of the programming time.)

The cloud computing model is different from the PC model, and returns to the model of timesharing. Cloud computing is timesharing, although with virtual PCs on large servers.

With the shift to cloud computing, I think we will see a return to some of the timesharing concepts. Specifically, I think we will see the concept of billable CPU time. With the return of the monthly bill, I expect to see a renaissance of efficiency. Managers will want to reduce the monthly bill, and they will ask for efficient programs. Development teams will have to deliver.

With pressure to deliver efficient programs, development teams will look for solutions and the market will deliver them. I expect that the tool-makers will offer solutions that provide better optimization and cloud-friendly code. Class libraries will advertise efficiency on various platforms. Offshore development shops will cite certification in cloud development methods and efficiency standards. Eventually, the big consultant houses will get into the act, with efficiency-certified processes and teams.

I suspect that few folks will refer to the works of the earlier computing ages. Our predecessors had to deal with computing constraints much more severe than the cloud environments of the early twenty-first century, yet we will (probably) ignore their work and re-invent their techniques.

Sunday, November 28, 2010

Measuring the load

If I worked for a large consulting house, and offered you a new process for your software development projects, would you buy it?

Not without asking a few other questions, of course. Besides the price (and I'm amazed that companies will pay good money for a process-in-a-box) you'll ask about the phases used and the management controls and reports. You'll want to be assured that the process will let you manage (read that as "control") your projects. You'll want to know that other companies are using the process. You'll want to know that you're not taking a risk.

But here's what you probably won't ask: How much of a burden is it to your development team?

It's an important question. Every process requires that people participate in the data entry and reporting for the project: project definition, requirements, task assignment, task status updates, defects, defect corrections, defect verifications, build and test schedules, ... the list goes on. A process is not a thing as much as a way of coordinating the team's efforts.

So how much time does the process require? If I offered you a process that required four hours a day from each developer, seven hours a day from requirements analysts, and nine hours a day from architects, would you buy my process? Certainly not! Involvement in the process takes people away from their real work, and putting such a heavy "load" on your teams diverts too much effort away from the business goal.

Managers commit two sins: failure to ask the question and failure to measure the load. They don't ask the salesmen about the workload on their teams. Instead, they worry about licensing costs and compatibility with existing processes. (Both are important, and neither should be ignored, but that view is too limited. One must worry about the effect on the team.) Managers should research the true cost of the process, like any other purchase.

After implementing the process, managers should measure the load the process imposes on their teams. That is, they should not take the salesman's word for it, assuming the salesman was able to supply an estimate. An effective manager will measure the "load" several times, since the load may change as people become familiar with the process. The load may also change during the development cycle, with variations at different phases.

Not measuring is bad, but assuming that the process imposes no load is worse. Any process will have some load on your staff. Using a process and thinking that it is "free" is self-delusion and will cause distortions in your project schedule. Even with a light load of one hour per day, the assumption of zero load introduces an error of 12.5 percent: you think you have eight hours available from each person, when in fact you have only seven. That missing hour will eventually catch up with you.

It's said that the first step of management is measurement. If true, then the zeroth step of management is acknowledgment. You must acknowledge that a process has a cost, a non-zero cost, and that it can affect your team. Only after that can you start to measure the cost, and only then can you start to manage it.

Wednesday, November 24, 2010

The line doesn't move; the people in the line do

An engineer friend once commented, while we were waiting in line somewhere (it may have been for fast food), that the line doesn't move; it's the people in the line who move.

The difference between the line and the people in line is a subtle one, yet obvious once it is brought to your attention. At 10:00, there is a line from the cash register out onto the floor. An hour later, the line will still be there, from the cash register out onto the floor. The line (a collection of people) has not moved -- it's still in the same store, starting at the same location. The fact that the people in the line have changed does not mean the line has moved. (If the line were to move, it would start in a different place or extend to another part of the shop.)

There is a similar subtle difference with bits and bit-built entities like songs and books and on-line magazines. We think that buying a song means buying bits, when in fact we're buying not a bunch of bits but a bunch of bits in a sequence. It's not the bits that make the song, it's the sequence of bits.

A sequence is not a tangible thing.

For a long time, sequences of bits were embedded in atoms. Books were sequences of characters (bits) embedded in ink on paper. Copying the sequence of bits meant collecting your own atoms to hold the copy of bits. In the Middle Ages, manuscripts were copied by hand and creating the new set of atoms was an expensive operation. The printing press and movable type made the copy operation less expensive, but there was still a significant cost for the atom "substrate".

In the twenty-first century, the cost of the atom substrate has dropped to a figure quite close to zero. The cost of the copy operation has also dropped to almost zero. The physical barriers to copies are gone. All that is left (to save the recording industry) is tradition, social norms, and law (and law enforcement). And while tradition and social norms may prevent folks born prior to 1980 from making copies, they don't seem to be holding back the later generations.

The RIAA, record labels, and other copyright holders want it both ways. They want bit sequences to be cheap to copy (for them) but hard to copy (for everyone else). They want bit sequences that are easily copied and distributed to consumers. Once the bits arrive in our in-box, or on our iPod, they want the bits to magically transform into non-movable, non-copyable bits. They want bit sequences that are easy to move until they want them to be fixed in place. That's not how bits work.

In the end, physics wins. Wile E. Coyote can run off the cliff into the air and hang there, defying gravity. But when he looks down, gravity kicks in.

Tuesday, November 23, 2010

Getting from here to there

Cloud computing is much like the early days of microcomputers. Not the early days of PCs (1981 to 1984) but the early days of microcomputers (1976 to 1980).

In the pre-PC era, there was no one vendor of hardware and software, and there was no one standard format for exchangeable media (also known as "floppy disks"). The de facto standard for an operating system was CP/M, but Apple had DOS and Radio Shack had TRS-DOS, and the UCSD p-System was lurking in corners.

Even with CP/M as a standard across the multitude of hardware platforms (Imsai, Sol, North Star, Heathkit, etc.) the floppy disk formats varied. These floppy disks were true floppies, in either 8 inch or 5.25 inch forms, with differences in track density, track count, and even the number of sides. There were single-sided disks and double-sided disks. There were 48-tpi (tracks per inch) disks and 96-tpi disks. Tracks were recorded in single density, double density, quad density, and extended density.

Moving data from one computer to another was an art, not a science, and most definitely not easy. It was all too common to have data on one system and desire it on another computer. Truly, these early computers were islands of automation.

Yet the desire to move data won out. We used null modem cables, bulletin board systems, and specially-written software to read "foreign" disks. (The internet existed at the time, but not for the likes of us hobbyists.)

Over time, we replaced the null modem cables and bulletin board systems with real networks. Today, we think nothing of moving data. Indeed, the cell phone business is the business of moving data!

The situation of cloud computing is similar. Clouds can hold data and applications, but we're not in a position to move data from one cloud to another. Well, not easily. One can dump the MySQL database to a text file, FTP it to a new site, and then import it into a new MySQL database; this is the modern-day equivalent of the null modem cables of yore.

Data exchange (for PCs) grew over a period of twenty years, from the early microcomputers, to the first IBM PC, to the releases of Netware, Windows for Workgroups, IBM OS/2, Windows NT, and eventually Windows Server. The big advances came when large players arrived on the scene: first IBM with an open hardware platform that allowed for network cards, and later Novell and Microsoft with closed software platforms that established standards (or used existing ones).

I expect that data exchange for cloud apps will follow a similar track. Unfortunately, I also expect that it will take a similar period of time.

Sunday, November 21, 2010

Just how smart is your process?

Does your organization have a process? Specifically, does it have a process for the development of software?

The American mindset is one of process over skill. Define a good process and you don't need talented (that is, expensive) people. Instead of creative people, you can staff your teams with non-creative (that is, low wage) employees, and still get the same results. Or so the thinking goes.

The trend goes back to the scientific management movement of the early twentieth century.

For some tasks, the de-skilling of the workforce may make sense. Jobs that consist of repeated, well-defined steps, jobs with no unexpected factors, jobs that require no thought or creativity, can be handled by a process.

The creation of software is generally unrepeatable, has poorly-defined steps, has unexpected factors and events, and requires a great deal of thought. Yet many organizations (especially large organizations) attempt to define processes to make software development repeatable and predictable.

These organizations confuse software development with the project of software development. While the act of software development is unpredictable, a project for software development can be fit into a process. The project management tasks (status reports, personnel assignment, skills assessment, cost calculations, etc.) can be made routine. You most likely want them routine and standardized, to allow for meaningful comparison of one project to another.

Yet the core aspect of software development remains creative, and you cannot create a process for creative acts. (Well, you can create a process, and inflict it upon your people, but you will have little success with it.) Programming is an art more than science, and by definition an art is something outside of the realm of repeated processes.

Some organizations define a process that uses very specific requirements or design documents, removing all ambiguity and leaving the programming to low-skilled individuals. While this method appears to solve the "programming is an art" problem, it merely shifts the creative aspect to another group of individuals. This group (usually the "architects", "chief engineers", "tech leads", or "analysts") are doing the actual programming. (Perhaps not programming in FORTRAN or C#, but programming in English.) Shifting the creative work away from the coders introduces several problems, including the risk of poor run-time performance and the risk of specifying impossible solutions. Coders, the folks who wrangle the compiler, have the advantage of knowing that their solutions will either work or not work -- the computer tells them so. Architects and analysts who "program in English" have no such accurate and absolute feedback.

Successful management of software development consists not of reducing every task to a well-defined, repeatable set of steps, but of dividing tasks into the "repeatable" and "creative" groups, and managing both groups. For the repeatable tasks, use tools and techniques to automate the tasks and make them as friction-free as possible. For the creative tasks, provide well-defined goals and allow your teams to work on imaginative solutions.

Thursday, November 18, 2010

The new new thing

The history of personal computers (or hobbyist computers, or microcomputers) has a few events that define new technologies which grant non-professionals (otherwise known as amateurs) the power to develop new, cutting edge applications. Such events are followed by a plethora of poorly-written, hard-to-maintain, and mediocre quality applications.

Previous enabling tools were Microsoft BASIC, dBase II, and Microsoft Visual Basic. Each of these packages "made programming easy". Consequently, lots of people created applications and unleashed them upon the world.

Microsoft BASIC is on the list due to its ease-of-use and its pervasiveness. In 1979, every computer for sale included a version of Microsoft BASIC (with the possible exception of the TRS-80 model II and the Heathkit H-8, H-11, and H-89 computers). Microsoft BASIC made lots of applications possible, and made it possible for just about anyone to create an application. And they did -- and many of those applications were poorly written and impossible to maintain.

dBase II from Ashton-Tate allowed the average Joe to create database applications, something possible in Microsoft BASIC only with lots of study and practice. dBase II used high-level commands to manipulate data, and lots of people wrote dBase II apps. The apps were poorly written and hard to maintain.

Microsoft's Visual Basic surpassed the earlier "MBASIC" and dBase II in popularity. It let anyone write apps for Windows. It was much easier than Microsoft's other Windows development tool, Visual C++. Microsoft scored a double win here, as apps in both Visual Basic and Visual C++ were poorly written and hard to maintain.

Languages and development environments since then have been designed for professional programmers and used by professional programmers. The "average Joe" does not pick up Perl and create apps.

The tide has shifted again, and now there is a new development tool, a new "new thing", that lets average people (that is, non-programmers) develop applications. It's called "salesforce.com".

salesforce.com is a cloud-based application platform that can build data-intensive applications. The name is somewhat deceptive, as the applications are not limited to sales. They can be anything, although the model leads one to the traditional view of a database with master/child relationships and transaction updates to a master file. I would not use it to create a word processor, a compiler, or a social network.

The important aspects are ease-of-use and availability. salesforce.com has both, with a simple, GUI-based development environment (and a web-based one at that!) and free access for individuals to experiment. The company even offers the "App Exchange", a place to sell (or give away) apps for the salesforce.com platform.

Be prepared for a lot of salesforce.com applications, many written by amateurs and poorly designed.

Wednesday, November 17, 2010

Just like a toaster

Computers have gotten easy to use, to the point that the little hand-held computers that we call phones (or tablets) are not considered computers but simply smart appliances.

In the pre-PC days, lots of computers were sold as kits. The Altair was the first popular, practical kit computer for individuals. And while other pre-PC microcomputers such as the TRS-80 and Commodore PET were sold fully assembled, they needed some cable-plugging and lots of learning.

IBM PCs with DOS required cable-plugging too, although IBM made the cabling easier by using unique, asymmetric plugs for each type of cable. It was impossible to plug the wrong things together, and impossible to plug the right cable into the right jack but in the wrong orientation. Yet IBM PC DOS required lots of learning.

Microsoft Windows made things easier -- eventually. The early versions of Windows required a lot of set-up and configuration. Early Windows was a program that ran on top of DOS, so to run Windows you had to configure DOS and then install and configure Windows.

The Apple Macintosh was the one computer that made things easy. And today's PC with Windows pre-installed and configured with automatic updates is very easy to use. But let's ignore those computers for now. I want to focus on the "it's hard to set up and use a computer" concept.

When computers were difficult to use, only the people who wanted to use computers would use computers. Like-minded geeks would join together in user groups and share their hard-earned knowledge. User group members would respect each other for their accomplishments: installing an operating system, attaching peripheral devices, or building a computer.

In today's world, computers are easy to use and lots of people support them. One can walk into any number of consumer stores (including Wal-Mart) and buy a PC, take it home, and do interesting things with it.

Not only can you buy PCs, but the businesses that one deals with know how to support PCs. When calling a company for technical support, the company (whether it is the local internet provider, a bank, or a movie distributor) has a customer support department that understands computers and knows how to get them working.

Computers have changed from the hard-to-use, only-for-geek devices to plain consumer appliances. They are almost the equivalent of toasters.

If they are running Windows.

You can buy PCs with Windows in just about any store. You can buy Macintosh computers in a few places -- but not as many as the places that sell Windows PCs. And you can buy PCs with Linux in a very few places, if you know where to look.

Businesses have customer support departments that know how to fix Windows PCs. And a few can support Apple Macintosh PCs. And a very few will support Linux PCs.

Linux, for all of its popularity, is still a do-it-yourself operating system. As an enterprise, you can purchase Linux support services, but as an individual you are expected (by our society) to use Windows (or maybe a Mac).

Linux geeks, for the most part, buy PCs and install Linux. They don't look for PCs with Linux installed. They will buy a PC without an operating system, or they will buy a PC with Windows and then install Linux on it (possibly saving Windows, possibly not). This behavior skews the market research, since marketers count sales and the Linux PCs are not selling well.

Linux geeks also respect each other for their accomplishments: installing Linux, adding peripheral devices, and re-compiling the kernel. They have to respect each other, because they need each other. Linux users cannot count on outside entities for support like Windows users can.

Some Linux distros have made installation and upgrades very easy. These distros lower the barriers of entry for individuals and expand the potential population of Linux users. It's very easy to install Ubuntu Linux or SuSE Linux.

The difference between an easy-to-use Linux and Windows is now not in the installation of the operating system, nor in the software that is supported. The difference is in the external support. Windows users have lots of options (not always effective options, but lots); Linux users must be rugged individuals with the right social networks. Getting Linux fully accepted into the support structure will take a lot of work -- possibly more work than getting the install to work on different hardware.

Sunday, November 14, 2010

Java's necessary future

Now that Oracle has purchased Sun, we have a large cloud of uncertainty for the future of Java. Will Oracle keep Java, or will it kill it off? Several key Java folks have left Oracle, pursuing other projects, and Oracle has a reputation of dropping technologies that have no direct effect on the bottom line. (Although one has to wonder why Oracle, a database company, chose to purchase Sun, a hardware company that happened to own Java and MySQL. Purchasing Sun to get MySQL seems to be an expensive solution, one that is not in Oracle's usual pattern.)

Java is an interesting technology. It proved that virtual processors were feasible. (Java was not the first; the UCSD p-System was a notable predecessor. But Java was actually practical, whereas the earlier attempts were not.) But Java has aged, and needs not just a face-lift but a re-thinking of its role in the Oracle stack.

Here's my list of improvements for "Java 2.0":

- Revamp the virtual processor. The original JVM was custom-built for the Java language. Java 2.0 needs to embrace other languages, including COBOL, FORTRAN, LISP, Python, and Ruby.

- Expand the virtual processor to support functional languages, including the new up-and-coming languages of Haskell and Erlang. This will help LISP, Python, and Ruby, too.

- Make the JRE more friendly to virtualization environments like Oracle VM, VMWare, Parallels, Xen, and even Microsoft's Virtual PC and Virtual Server.

- Contribute to the Eclipse IDE, and make it a legitimate player in the Oracle universe.

Java was the ground-breaker for virtual processor technologies. Like other ground-breakers such as FORTRAN, COBOL, and LISP, I think it will be around for a long time. Oracle can use this asset or discard it; the choice is theirs.

Thursday, November 11, 2010

Improve code with logging

I recently used a self-made logging class to improve my (and others') code. The improvements to code were a pleasant side-effect of the logging; I had wanted more information from the program, information that was not visible in the debugger, and wrote the logging class to capture and present that information. During my use of the logging class, I found the poorly structured parts of the code.

A logging class is a simple thing. My class has four key methods (Enter(), Exit(), Log(), and ToString() ) and a few auxiliary methods. Each method writes information to a text file. (The text file being specified by one of the auxiliary methods.) Enter() is used to capture the entry into a function; Exit() captures the return from the function; Log() adds an arbitrary message to the log file, including variable values; and ToString() converts our variables and structures to plain text. Combined, these methods let us capture the data we need.
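
A minimal sketch of such a class follows. (The method names match the description above; the file handling and formatting details are assumptions, and a real version would add timestamps, flushing, thread safety, and so on.)

#include <fstream>
#include <sstream>
#include <string>

class Logger
{
public:
    // Auxiliary method: name the text file that receives the log.
    static void SetFile(const std::string &filename)
    {
        Stream().open(filename.c_str(), std::ios::out | std::ios::app);
    }

    static void Enter(const std::string &function)
    {
        Stream() << "** Enter: " << function << "\n";
    }

    static void Exit(const std::string &function)
    {
        Stream() << "** Exit: " << function << "\n";
    }

    static void Log(const std::string &label, const std::string &text)
    {
        Stream() << label << text << "\n";
    }

    // Convert a single value to text.
    template <typename T>
    static std::string ToString(const T &value)
    {
        std::ostringstream buffer;
        buffer << value;
        return buffer.str();
    }

    // An overload for arrays of doubles held in a double*.
    static std::string ToString(const double *values, int count)
    {
        std::ostringstream buffer;
        buffer << "[";
        for (int i = 0; i < count; ++i)
            buffer << (i ? " " : "") << values[i];
        buffer << "]";
        return buffer.str();
    }

private:
    static std::ofstream &Stream()
    {
        static std::ofstream stream;
        return stream;
    }
};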

I use the class to capture information about the flow of a program. Some of this information is available in the debugger but some is not. We're using Microsoft's Visual Studio, a very capable IDE, but some run-time information is not available. The problem is due, in part, to our program and the data structures we use. The most common is an array of doubles, allocated by 'new' and stored in a double*. The debugger can see the first value but none of the rest. (Oh, it can if we ask for x[n], where 'n' is a specific number, but there is no way to see the whole array, and repeating the request for an array of 100 values is tiresome.)

Log files provide a different view of the run-time than the debugger. The debugger can show you values at a point in time, but you must run the program and stop at the correct point in time. And once there, you can see only the values at that point in time. The log file shows the desired values and messages in sequence, and it can extract the 100-plus values of an array into a readable form. A typical log file would be:

** Enter: foo()
i == 0
my_vars = [10 20 22 240 255 11 0]
** Enter: bar()
r_abs == 22
** Exit: bar()
** Exit: foo()

The log file contains the text that I specify, and nothing beyond it.

Log files give me a larger view than the debugger. The debugger shows values at a single point in time; the log file shows me the values over the life of the program. I can see trends much more easily with the log files.

But enough of the direct benefits of log files. Beyond showing me the run-time values of my data, they help me build better code.

Log files help me with code design by identifying the code that is poorly structured. I inject the logging methods into my code, instrumenting it. The function

double Foobar::square(double value)
{
    return (value * value);
}

Becomes

double Foobar::square(double value)
{
    Logger::Enter("Foobar::square(double)");
    Logger::Log("value: ", Logger::ToString(value));
    Logger::Exit("Foobar::square(double)");
    return (value * value);
}

A bit verbose, and perhaps a little messy, but it gets the job done. The log file will contain lines for every invocation of Foobar::square().

Note that each instrumented function has a pair of calls: one to Enter() and one to Exit(). It's useful to know when each function starts and ends.

For the simple function above, one Enter() and one Exit() are enough. But for more complex functions, multiple Exit() calls are needed. For example:

double Foobar::square_root(double value)
{
    if (value < 0.0)
        return 0.0;
    if (value == 0.0)
        return 0.0;
    return (pow(value, 0.5));
}

The instrumented version of this function must include not one but three calls to Exit() -- one for each return statement.

double Foobar::square_root(double value)
{
    Logger::Enter("Foobar::square_root(double)");
    Logger::Log("value: ", Logger::ToString(value));
    if (value < 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    if (value == 0.0)
    {
        Logger::Exit("Foobar::square_root(double)");
        return 0.0;
    }
    Logger::Exit("Foobar::square_root(double)");
    return (pow(value, 0.5));
}

Notice all of the extra work needed to capture the multiple exits of this function. This extra work is a symptom of poorly designed code.

In the days of structured programming, the notion of simplified subroutines was put forward. It stated that each subroutine ("function" or "method" in today's lingo) should have only one entry point and only one exit point. This rule seems to have been dropped.

Or at least the "only one exit point" portion of the rule has been dropped. Modern-day languages allow for only one entry point into a method, but they allow for multiple exit points, and this lets us write poor code. A better (uninstrumented) version of the square root method is:

double Foobar::square_root(double value)
{
    double result = 0.0;

    if (is_rootable(value))
    {
        result = pow(value, 0.5);
    }

    return result;
}

bool Foobar::is_rootable(double value)
{
    return (value > 0.0);
}

This code is longer but more readable. Instrumenting it is less work, too.

One can visually examine the code for the "extra return" problem, but instrumenting the code with my logging class made the problems immediately visible.


Sunday, November 7, 2010

Where Apple is falling behind

Apple is popular right now. It has a well-received line of products, from MacBooks to iPhones to iPads. It has easy-to-use software, from OSX to iTunes and GarageBand. It has beaten Microsoft and Google in the markets that it chooses.

Yet in one aspect, Apple is falling behind.

Apple is using real processor technology, not the virtual processors that Microsoft and Google use. By "virtual processors", I mean the virtual layers that separate the application code from the physical processor. Microsoft has .NET with its virtual processor, its IL code, and its CLR. Google uses Java and Python, and those languages also have the virtual processing layer.

Most popular languages today have a virtual processing layer. Java uses its Java Virtual Machine (JVM). Perl, Python, and Ruby use their virtual processors.

But Apple uses Objective-C, which compiles to the physical processor. In this, Apple is alone.

Compiling to physical processor has the advantage of performance. The virtual processors of the JVM and .NET (and Perl, and Python...) impose a performance cost. A lot of work has been done to reduce that cost, but the cost is still there. Microsoft's use of .NET for its Windows Mobile offerings means higher demands for processor power and higher draw from the battery. An equivalent Apple product can run with less power.

Compiling to a virtual processor also has advantages. The virtual environments can be opened to debuggers and performance monitors, something not possible with a physical processor. Therefore, writing a debugger or a performance monitor in a virtual processor environment is easier and less costly.

The languages which use virtual processors all have the ability for class introspection (or reflection, as some put it). I don't know enough about Objective-C to know if this is possible, but I do know that C and C++ don't have reflection. Reflection makes it easier to create unit tests and perform some type checking on code, which reduces the long-term cost of the application.

The other benefit of virtual processors is freedom from the physical processor, or rather from the processor line. Programs written to the virtual processor can run anywhere, with the virtual processor layer. This is how Java can run anywhere: the byte-code is the same, only the virtual processor changes from physical processor to physical processor.

The advantages of performance are no longer enough to justify a physical processor. Virtual processors have advantages that help developers and reduce the development costs.

Is it possible that Apple is working on a virtual processor of its own? The technology is well within their abilities.

I suspect that Apple would build their own, rather than use any of the existing virtual processors. The two biggies, .NET and Java, are owned by companies not friendly to Apple. The engines for Perl, Python, and Ruby are nice but perhaps not suitable to the Apple set of applications. An existing engine is not in Apple's best interests. They need their own.

Apple doesn't need the virtual processor engine immediately, but they will need it soon -- perhaps within two years. But there is more to consider.

Apple has pushed the Objective-C, C, and C++ languages for its development platform. For the iPhone, iPod, and iPad, it has all but banished other languages and technologies. But C, C++, and Objective-C are poorly suited for virtual processors. Apple will need a new language.

Given Apple's desire for reliable (that is, crash-free) applications, the functional languages may appeal to them. Look for a move from Objective-C to either Haskell or Erlang, or possibly a new language developed at Apple.

It will be a big shift for Apple, and their developers, but in the long run beneficial.

Friday, November 5, 2010

Kinect, open source, and licenses

So Microsoft has introduced "Kinect", a competitor to the Wii's motion-control system.

Beyond winning an award for a stupid product name, Microsoft has also become rather protective of the Kinect. Microsoft wants to keep Kinect for itself, and prevent anyone else from writing code for it. (Anyone without a license, apparently.)

The Adafruit company is offering a bounty for an open source Kinect driver. Such a driver could allow Kinect to work with systems other than the XBOX. Horrors! People actually using a Microsoft product for something they choose!

Microsoft's response includes the following: “With Kinect, Microsoft built in numerous hardware and software safeguards designed to reduce the chances of product tampering. Microsoft will continue to make advances in these types of safeguards and work closely with law enforcement and product safety groups to keep Kinect tamper-resistant.”

The interesting part of that statement is "law enforcement". Microsoft seems to be confusing breach of contract with criminal law.

In contract law, you can file a suit against someone for breach of contract. You win or lose, depending on the contract and the case you present (and the case that the defense presents). But here is the thing: a win does not send the defendant to prison, nor do you get punitive damages. Punitive damages are generally not available in contract law -- the contract specifies the penalties for breach.

Unless Microsoft is counting on the DMCA and its prohibition on bypassing copy-protection devices. They may win a criminal case on those grounds.

But Microsoft is missing the bigger point. They think that they must control Kinect and allow it to run only on the XBOX, therefore increasing sales for the XBOX and Windows. But it's a selfish strategy, and one that will limit the growth of Kinect in the future.

Microsoft must decide the more important of two goals: sales of Kinect software or control of Kinect software.

Wednesday, November 3, 2010

Apple's Brave New World

Apple is bringing the goodness of the App Store to Macintosh computers and OSX. The popular application manager from the iPhone/iPod/iPad will be part of OSX "Lion".

Apple has set a few rules for applications in the App Store, including "doesn't crash" and "doesn't duplicate existing apps".

I like and dislike this move.

First the dislikes: The App Store is a toll booth for Apple, a chokepoint on Mac applications that ensures Apple gets a cut of every sale. It also gives Apple the ability to suppress any app. (For any reason, despite Apple's propaganda.) It is a lot of power concentrated in one entity.

Now for the likes: It raises the bar for software quality, and probably reduces the price of apps. Where PC applications (and Mac applications) typically cost from $100 to $500, the lightweight apps in the App Store go for significantly less. I expect the same for Mac apps.

To shift to metaphors:

The initial days of personal computing (1977-1981) were a primitive era, requiring people to be self-sufficient, equivalent to living on the open savannah, or in ancient Britain. How they built Stonehenge (Visicalc) we will never really know.

The days of the IBM PC (DOS and Windows) were roughly equivalent to the Egyptian old kingdom and the empire, with some centralized direction and some impressive monuments (WordPerfect, Lotus 1-2-3, Microsoft Office) built with a lot of manual labor.

The brave new era of "app stores" (either Apple or Microsoft) will possibly be like the Roman Empire, with better technology but more central control and bureaucracy. Computers will finally be "safe enough" and "simple enough" for "plain users".

The new era brings benefits, but also signals the end of the old era. The days of complete independence are disappearing. Computers will be appliances that are controlled in part by the vendor. Applications will shrink in size and complexity (probably a good thing) and work reliably within the environment (also a good thing).

It's a brave new world, and developers would be wise to learn to live in it.

Tuesday, November 2, 2010

Pondering my next PC

Necessity is the mother of invention.

This week, my trusty IBM ThinkPad of ten years developed a severe case of lock-up. This is most likely a problem with the CPU card. (In a laptop, just about everything is on the CPU card, so that's where the problems lie.)

The loss of the laptop is disappointing. It has been a good friend through a number of projects. It was reliable. It had a very nice screen. I liked the keyboard.

Its passing leads me to think of a replacement. And this leads to several ideas.

First idea: Do I need to replace it? I'm not sure that I do. I've collected a number of other PCs in the past ten years, including an Apple MacBook which has better wireless connectivity. I may be able to live without a replacement.

Second idea: Replace it with a tablet. The Apple iPad comes to mind, although I am not happy with the screen (I dislike the high-gloss finish) and I would prefer a tablet that can play my Ogg Vorbis-encoded music.

Third idea: Replace it with a smart phone. Probably an Android phone, as I have the same dislikes of the iPhone as the iPad.

In brief, I am not considering a desktop PC, and considering but not committed to a laptop. This is a big change from a few years ago, when a desktop was considered "the usual" and a laptop was considered "nice to have".

Conversations with others (all tech-minded folks) show that most folks are thinking along similar lines. The techies are leaving desktop PCs and laptops. The future is in mobile devices: smart phones and tablets.


Sunday, October 31, 2010

Better code through visualization

Visualization (which is different from virtualization) renders complex data into a simpler and easier-to-understand form. We see it all the time with charts and graphs for business and economic activity, demographic trends, and other types of analysis. The pretty charts and graphs (and animated displays, for some analyses) summarize the data and make the trends or distributions clear.

The human brain is geared for visual input. We devote a significant chunk of the brain to the work of processing images.

We're now ready to use visualization (and by doing so, leveraging the brain's powerful capabilities) for software.

I'm doing this, in a very limited (and manual) way, by analyzing source code and creating object diagrams, maps of the different classes and how they relate. These maps are different from the traditional class hierarchy diagrams, in that they show references from methods. (The classic class diagrams show only references in member lists. I pick through the code and find "local" objects and show those references.)

The result is a class diagram that is more comprehensive, and also a bit messier. My process creates maps of classes with arrows showing the dependencies, and even simple (non-trivial) programs have a fair number of classes and a bunch more arrows.

The diagrams are useful. It is easy to spot classes that are not referenced, and contain dead code. I can also spot poorly-designed classes; they usually exist in a set with a "loop" of references (class A refers to class B, class B refers to class C, and class C refers to class A). The diagram makes such cyclic references obvious. It also makes a proper solution (when applied) obvious.
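
The loop-spotting itself can be automated. Here is a small Java sketch of the idea, using an invented dependency map and a depth-first search; my own scripts differ in the details, so treat this as an illustration of the technique rather than the actual tool.

    import java.util.*;

    public class DependencyCycles {
        // Hypothetical dependency map: class name -> classes it references.
        static Map<String, List<String>> deps = new HashMap<>();

        public static void main(String[] args) {
            deps.put("A", Arrays.asList("B"));
            deps.put("B", Arrays.asList("C"));
            deps.put("C", Arrays.asList("A"));      // A -> B -> C -> A: a reference loop
            deps.put("D", Collections.emptyList()); // never referenced: dead-code candidate

            for (String start : deps.keySet()) {
                List<String> cycle = findCycle(start, new ArrayDeque<>());
                if (cycle != null) {
                    // The reported path may include a lead-in before the loop itself.
                    System.out.println("Cycle found: " + cycle);
                    break;
                }
            }
        }

        // Depth-first search that returns the path of a cycle, or null if none.
        static List<String> findCycle(String node, Deque<String> path) {
            if (path.contains(node)) {
                List<String> cycle = new ArrayList<>(path);
                cycle.add(node);
                return cycle;
            }
            path.addLast(node);
            for (String next : deps.getOrDefault(node, Collections.emptyList())) {
                List<String> cycle = findCycle(next, path);
                if (cycle != null) return cycle;
            }
            path.removeLast();
            return null;
        }
    }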

I'm adding the technique and the programs that make it possible to my toolbox of utilities. Visual analysis of programs helps me write better programs, and helps other members of the team understand our programs.

I'm not alone in this idea.

The ACM (the Association for Computing Machinery) ran an article on "code maps" in the August 2010 issue of their Communications magazine. (The name "Communications" refers to information shared with ACM members and does not denote the telecommunications aspect of computing.) The study team found that code maps help team members stay "oriented" within the code.

IBM has their "Many Eyes" project, which can visualize many kinds of data (though not source code); I'm sure that they are close to visualization of code.

The IEEE (Institute of Electrical and Electronics Engineers) has their "VisWeek", an amalgam of conferences for visualization, including the ACM SoftVis conference.

It's an idea whose time has come.



Wednesday, October 27, 2010

How to teach programming?

What approach should we use for newbie programmers? What languages should we recommend and use to let novice programmers learn the craft?

In the nineteen-eighties, with the early microcomputers (Radio Shack TRS-80, Apple II, and Commodore 64) and early PCs (IBM PC and IBM XT), the natural choices were BASIC and Pascal. Both languages were designed as teaching languages, and interpreters and compilers were readily available (books, too).

Over time, fashions in programming languages changed, and the industry shifted from BASIC and Pascal to C, and then to C++, and then to Java, and now to either C# or Python.

I consider C# (and its .NET framework) a competent language, but not a teaching language. It is too complex, too encompassing, too high of a climb for a simple program. I have the same complaint of Java.

Part of my reluctance about C# and Java is their object-orientedness. While object-oriented programming is currently accepted as the norm, I am not convinced that a person should learn O-O techniques from the start. Teaching someone object-oriented programming takes some amount of time, which delays their effectiveness. Also, it may bond the student to object-oriented programming and prevent a move to another programming form in the future.

Object-oriented programming is popular now, but I foresee a day when it is deprecated, looked upon as archaic. (Just as plain old procedural programming is viewed today.)

If we teach new programmers the techniques of object-oriented programming, will they be able to move to something like functional programming? Or will they accept object-oriented programming as The Way Things Ought To Be and resist different programming paradigms? One advantage of BASIC (Dartmouth BASIC, not Visual Basic with its sophisticated constructs) was that we students knew that things could be better. We saw the limitations of the language, and yearned for a better world. When object-oriented programming came along, we jumped at the chance to visit a better world.

If I had fourteen weeks (the length of a typical college semester), I would structure the "CS 101 Intro to Programming" course with a mix of languages. I would use BASIC and Pascal to teach the basic concepts of programming (statements, assignments, variables, arrays, loops, decisions, interpreters, and compilers). I would have a second class, "CS 102 Intermediate Programming", with Java and Ruby for object-oriented programming concepts.

For the serious students, I would have a "CS 121/122 Advanced Programming" pair of classes with assembly language and C, and advanced classes of "CS 215" with LISP and "CS 325/326 Functional Programming" with Haskell.

That's a lot of hours, possibly more than the typical undergraduate wants (or needs) and most likely more than the deans want to allocate to programming.

So the one thing I would do, regardless of the time allocated to programming classes and the number of languages, is design classes to let students learn in pairs. Just as Agile Programming uses pair development to build quality programs, I would use paired learning to build deeper knowledge of the material.


Monday, October 25, 2010

CRM for help desks

We're all familiar with CRM systems. (Or perhaps not. They were the "big thing" several years ago, but the infatuation seems to have passed. For those with questions: CRM stands for "Customer Relationship Management" and was the idea that capturing information about interactions with customers would give you knowledge that could lead to sales.)

We're also all familiar with help desks. (Or perhaps not. They are the banks of usually underpaid, underinformed workers on the other end of the call for support.)

A call to a help desk can be a frustrating experience, especially for technically-savvy folks.

Help desks are typically structured with multiple levels. Incoming calls are directed to the "first level" desk with technicians that have prepared scripts for the most common problems. Only after executing the scripts and finding that the problem is not resolved is your call "escalated" to the "second level" help desk, which usually has a second set of technicians with a different set of scripts and prepared solutions. Sometimes there is a third level of technicians who have experience and can work independently (that is, without prepared scripts) to resolve a problem.

This structure is frustrating for techno-geeks, for two reasons. First, the techno-geek has already tried the solutions that the first level help desk will recommend. Some first level help desks insist that the user try the proffered solutions, even though the user has done them. (This is a blind following of the prepared script.)

Second, many help desks have scripts that assume you are running Microsoft Windows. Help desk technicians ask you to click on the "start" menu, when you don't have one. Some help desks go as far as to deny support to anyone with operating systems other than Windows. See the XKCD comic here. Techno-geeks often pretend to click on the non-existent Windows constructs and provide the help desk with fake information from the non-existent information dialogs (usually from memory) to get their call escalated to a more advanced technician.

The design of help desks (multiple levels, prepared scripts for first levels) is easy to comprehend and on the surface looks efficient. The first level tries the "simple" and "standard" solutions which solve the problem most times. Only after dealing with the (cheap to operate) first level and not resolving the problem do you work with the (expensive to operate) higher levels.

The help desk experience is similar to a video game. You start at the first level, and only after proving your worthiness do you advance to the next level.

Which brings us back to CRM systems. While not particularly good at improving sales, they might be good at reducing the cost of help desks.

Here's why: The CRM system can identify the tech-savvy customers and route them to the advanced levels directly, avoiding the first level scripts. This reduces the load on the first level and also reduces the frustration imposed on the customers. Any competent help desk manager should be willing to jump on a solution that reduces the load on the help desk. (Help desks are measured by calls per hour and calls resolved per total calls, and escalated calls fall in the "not resolved" bucket.)
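
A minimal sketch of that routing rule, with entirely invented customer fields and thresholds (this is an illustration of the idea, not any particular CRM product):

    public class CallRouter {
        // Hypothetical customer profile pulled from the CRM system.
        static class CustomerProfile {
            boolean registeredDeveloper;   // e.g. holds an SDK license
            int priorCallsEscalated;       // history of calls that went past level one
            String operatingSystem;        // "Windows", "Linux", "Mac OS X", ...
        }

        // Decide which help-desk level should take the call first.
        static int startingLevel(CustomerProfile c) {
            if (c.registeredDeveloper || c.priorCallsEscalated >= 3) {
                return 2;  // skip the scripted first level for tech-savvy callers
            }
            if (!"Windows".equals(c.operatingSystem)) {
                return 2;  // the first-level scripts assume Windows anyway
            }
            return 1;      // default: start with the first-level scripts
        }

        public static void main(String[] args) {
            CustomerProfile caller = new CustomerProfile();
            caller.registeredDeveloper = false;
            caller.priorCallsEscalated = 4;
            caller.operatingSystem = "Linux";
            System.out.println("Route call to level " + startingLevel(caller));
        }
    }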

CRM can also give you better insight into customer problems and calling patterns. The typical help desk metrics will report the problems by frequency and often list the top ten (or maybe twenty). With CRM, you can correlate problems with your customer base and identify the problems by customer type. It's nice to know that printing is the most frequent problem, but it's nicer to know that end-of-day operations is the most frequent problem among your top customers. I know which I would consider the more important!

Saturday, October 23, 2010

Microsoft abandons intuitive for complicated

Microsoft Office 2007 introduced the "ribbon", a change to the GUI that replaced the traditional menu bar with a dynamic, deep-feature, sliding set of options. This is a bigger change than first appears. It is a break with the original concept of the graphical user interface as originally touted by Microsoft.

When Microsoft introduced Windows, it made a point of advertising the "ease of use" and "intuitive" nature of the graphical interface. The GUI was superior to the old DOS programs (some of which were command-line and some of which used block-character menus and windows). The Windows GUI was superior because it offered a consistent set of commands and was "intuitive", which most people took to mean as "could be used without training", as if humans had some pre-wired knowledge of the Windows menu bar.

Non-intuitive programs were those that were hard to use. The non-intuitive programs were the command-line DOS programs, the block-character graphic DOS programs which had their own command sets, and especially Unix programs. While one could be productive with these programs, productivity required deep knowledge of the programs that was gained only over time and with practice.

Windows programs, in contrast, were usable from day one. Anyone could sit down with Notepad and change the font and underline text. Anyone could use the calculator. The Windows GUI was clearly superior in that it allowed anyone to use it. (For managers and decision makers, read "anyone" as "a less talented and less costly workforce".)

Microsoft was not alone in their infatuation with GUI. IBM tried it with the OS/2 Presentation Manager, yet failed. Apple bought into the GUI concept, and succeeded. But it was Microsoft that advertised the loudest. Microsoft was a big advocate of GUI, and it became the dominant method of interacting with the operating system. Software was installed with the GUI. Configuration options were set with the GUI. Security was set up with the GUI. All of Microsoft's tools for developers were designed for the GUI. All of Microsoft's training materials were designed around the GUI. Microsoft all but abandoned the command line. (Later, they quietly created command-line utilities for system administration, because they were necessary for efficient administration of multi-workstation environments.)

Not all programs were able to limit themselves to the simple menu in Notepad. Microsoft Office products (Word, Excel, Powerpoint, and such) required complex menus and configuration dialogs. Each new release brought larger and longer menus. Using these programs was hard, and a cottage industry for the training of users grew.

The latest version of Microsoft Office replaced the long, hard-to-navigate menus with the confusing, hard-to-search ribbon. One person recently told me that it was more effective and more efficient, once you learned how to use it.

Does anyone see the irony in that statement? Microsoft built the GUI (or stole it from Apple and IBM) to avoid the long lead times for effective use. They wanted people to use Windows immediately, so they told us that it was easy to use. Now, they need a complex menu that takes a long time to learn. 

Another person told me that Microsoft benefits from the ribbon, since once people learn it, they will be reluctant to switch to a different product. In other words, the ribbon is a lock-in device.

It's not surprising that Microsoft needs a complex menu for their Office products. The concepts in the data (words, cells, or slides) are complex concepts, much more sophisticated than the simple text in Notepad. You cannot make complex concepts simple by slapping a menu on top.

But here is the question: if I have to learn a large, complex menu (the Office ribbon), why should I learn Windows programs? Why not learn whatever tool I want? Instead of Microsoft Word I can learn TeX (or LaTeX) and get better control over my output. Instead of Microsoft Access I can learn MySQL.

By introducing the ribbon, Microsoft admitted that the concept of an intuitive program is a false one, or at least limited to trivial functionality. Complex programs are not intuitive; efficient use requires investment and time.


Wednesday, October 20, 2010

Waiting for C++

I was there at the beginning, when C++ was the shiny new thing. It was bigger than C, and more complex, and it required a bit more learning, and it required a new way of thinking. Despite the bigness and the expense and the re-learning time, it was attractive. It was more than the shiny new thing -- it was the cool new thing.

Even when Microsoft introduced Visual Basic, C++ (and later, Visual C++) was the cool thing. It may not have been new, but it was cooler than Visual Basic.

The one weakness in Visual C++ (and in Visual Basic) was the tools for testing, especially tools for testing GUI programs. The testing programs were always add-ons to the basic product. Not just in the marketing or licensing sense, but in terms of technology. GUI testing was always clunky and fragile, using the minimal hooks into the application under test. It was hard to attach test programs to the real programs (the programs under test), and changes to the dialogs would break the tests.

When Java came along, the testing tools were better. They could take advantage of things that were not available in C++ programs. Consequently, the testing tools for Java were better than the testing tools for C++. (Better by a lot.)

The C#/.NET environment offered the same reflection and introspection of classes, and testing tools were better than tools for C++.
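
As a rough illustration (the window and field names below are made up), this is the kind of thing a reflection-based GUI test tool can do: reach into a running application, find a control it was never explicitly handed, drive it, and read the result -- all without test hooks compiled into the application. A Java/Swing sketch:

    import javax.swing.JButton;
    import javax.swing.JLabel;
    import java.lang.reflect.Field;

    public class GuiProbe {
        // A stand-in for an application window with private GUI state.
        static class AppWindow {
            private final JButton saveButton = new JButton("Save");
            private final JLabel status = new JLabel("idle");
            AppWindow() {
                saveButton.addActionListener(e -> status.setText("saved"));
            }
        }

        public static void main(String[] args) throws Exception {
            AppWindow app = new AppWindow();

            // Reflection: find the button by field name and drive it as a user would.
            Field buttonField = AppWindow.class.getDeclaredField("saveButton");
            buttonField.setAccessible(true);
            JButton button = (JButton) buttonField.get(app);
            button.doClick();

            // Reflection again: read the resulting state to verify the behavior.
            Field statusField = AppWindow.class.getDeclaredField("status");
            statusField.setAccessible(true);
            JLabel status = (JLabel) statusField.get(app);
            System.out.println("Status after click: " + status.getText());
        }
    }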

I kept waiting for corresponding tools on the C++ side.

This week it hit me: the new tools for C++ will never arrive. The new languages, with their virtual machines and support for reflection, allow for the nifty GUI testing tools, and C++ doesn't. And it never will. It just won't happen. The bright minds in our industry are focussed on C# and Java (or Python or Ruby) and the payoff for C++ does not justify the investment. There is insufficient support in the C++ language and standard libraries for comprehensive testing, and the effort for creating new libraries that do support GUI tests is great.

GUI testing for C++ applications is as good as it will ever get. The bright young things are working on other platforms.

Which means that C++ is no longer the cool new thing.


Monday, October 18, 2010

Abstractions

Advances in programming (and computers) come not from connecting things, but from separating them.

In the early days of computing (the 1950s and 1960s), hardware, processors, and software were tied together. The early processors had instructions that assumed the presence of tape drives. Assemblers knew about specific hardware devices. EBCDIC was based on the rows available on punch cards.

Abstractions allow for the decoupling of system components. Not detachment, since we need components connected to exchange information, but decoupled so that a component can be replaced by another.

The COBOL and FORTRAN languages offered a degree of separation from the processor. While FORTRAN I was closely tied to IBM hardware (being little more than a high-powered macro assembler), later versions of FORTRAN delivered machine-independent code.

The C language showed that a language could be truly portable across multiple hardware platforms, by abstracting the programming constructs to a common set.

Unix abstracted the file system. It isolated the details of files and directories and provided a common interface to them, regardless of the device. Everything in Unix (and by inheritance, Linux) is a file or a directory. Or if you prefer, everything is a stream.
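
Java's stream classes borrow the same idea, so a Java sketch serves as an analogy: code written against the abstract stream neither knows nor cares whether the bytes come from a file, a pipe, or the keyboard.

    import java.io.*;

    public class StreamCount {
        // Works for any input source: the code sees only the abstraction.
        static long countBytes(InputStream in) throws IOException {
            long total = 0;
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                total += read;
            }
            return total;
        }

        public static void main(String[] args) throws IOException {
            // A file if one is named, otherwise standard input (a pipe or the keyboard).
            InputStream in = (args.length > 0) ? new FileInputStream(args[0]) : System.in;
            System.out.println(countBytes(in) + " bytes");
        }
    }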

Microsoft Windows abstracted the graphics and printing (and later, networking) for PC applications.

The Java JVM and the .NET CLR decouple execution from the processor. Java offers "write once, run anywhere" and has done a good job of delivering on that promise. Microsoft focusses more on "write in any language and run on .NET" which has served them well.

Virtualization separates processes from real hardware. Today's virtualization environments provide the same processor as the underlying real hardware -- Intel Pentium on Intel Pentium, for example. I expect that future virtualization environments will offer cross-processor emulation, the equivalent of a 6800 on a Z-80, or an Alpha chip on an Intel Pentium. Once that happens, I expect a new, completely virtual processor to emerge. (In a sense, it already has, in the form of the Java JVM and the .NET IL.)
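
To show what a "completely virtual processor" means, here is a toy stack machine of my own invention -- a few made-up opcodes and an interpreter. The "program" is just an array of numbers, and it runs unchanged on any host that has the interpreter, which is the essence of the JVM and CLR approach (minus about a million details).

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class ToyVM {
        // Invented opcodes for a tiny stack machine.
        static final int HALT = 0, PUSH = 1, ADD = 2, MUL = 3, PRINT = 4;

        static void run(int[] code) {
            Deque<Integer> stack = new ArrayDeque<>();
            int pc = 0;  // program counter
            while (true) {
                switch (code[pc++]) {
                    case PUSH:  stack.push(code[pc++]); break;
                    case ADD:   stack.push(stack.pop() + stack.pop()); break;
                    case MUL:   stack.push(stack.pop() * stack.pop()); break;
                    case PRINT: System.out.println(stack.peek()); break;
                    case HALT:  return;
                }
            }
        }

        public static void main(String[] args) {
            // (2 + 3) * 4 -- the "program" is just data, independent of the host CPU.
            int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT };
            run(program);
        }
    }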

Cloud computing offers another form of abstraction, separating processes from underlying hardware.

Each of these abstractions allowed people to become more productive. By decoupling system components, abstraction lets us focus on a smaller space of the problem and lets us ignore other parts of the problem. If the abstracted bits are well-known and not part of the core problem, abstraction helps by eliminating work. (If the abstraction isolates part of the core problem, then it doesn't help because we will still be worrying about the abstracted bits.)


Sunday, October 17, 2010

Small as the new big

I attended the CPOSC (the one-day open source conference in Harrisburg, PA) this weekend. It was a rejuvenating experience, with excellent technical sessions on various technologies.

Open source conferences come in various sizes. The big open source conference is OSCON, with three thousand attendees and running for five days. It is the grande dame of open source conferences. But lots of other conferences are smaller, either in number of attendees or in days, and usually both.

The open source community has lots of small conferences. The most plentiful of these are the BarCamps, small conferences organized at the local level. They are "unconferences", where the attendees hold the sessions, not a group of selected speakers.

Beyond BarCamp, several cities have small conferences on open source. Portland OR has Open Source Bridge (about 500 attendees), Salt Lake City has Utah Open Source Conference, Columbus has Linux Fest, Chicago has a conference at the University of Illinois, and the metropolis of Fairlee, VT hosts a one-day conference. The CPOSC conference in Harrisburg has a whopping 150 attendees, due to the size of their auditorium and fire codes.

I've found that small conferences can be just as informative and just as energetic as large conferences. The venues may be smaller, the participants are usually from the region and not the country, yet the conference speakers are just as passionate and informed as the speakers at the large conferences.

Local conferences are often volunteer-run, with low overhead and a general air of reasonableness. They have no rock stars, no prima donnas. Small conferences can't afford them, and the audience doesn't idolize them. It makes for a healthy and common-sense atmosphere.

I expect the big conferences to continue. They have a place in the ecosystem. I also expect the small conferences to continue, and to thrive. They serve the community and therefore have a place.


Monday, October 11, 2010

Farming for IT

Baseball teams have farm teams, where they can train players who are not quite ready for the major leagues.

IT doesn't have farm teams, but IT shops can establish connections with recruiters and staffing companies, and recruiters and staffing companies can establish connections with available talent. It's less formal than the baseball farm team, yet it can be quite effective.

If you're running an IT shop, you want a farm team -- unless you can grow your own talent. Some shops can, but they are few. And you want a channel that can give you the right talent when you need it.

Unlike baseball teams, IT shops see cycles of growth and reduction. The business cycle affects both, but baseball teams have a constant roster size. IT shops have a varying roster size, and they must acquire talent when business improves. If they have no farm team, they must hire talent from the "spot market", taking what talent is available.

Savvy IT shops know that talent -- true, capable talent -- is scarce, and they must work hard to find it. Less savvy shops consider their staff to be a commodity like paper or electricity. Those shops are quick to downsize and quick to upsize. They consider IT staff to be easily replaceable, and place high emphasis on reducing cost.

Yet reducing staff and increasing staff are not symmetrical operations. New hires (appropriate new hires) are much harder to find than layoffs. The effort to select the correct candidate is large; even companies that treat IT workers as commodities will insist on interviews before hiring someone.

During business downturns, savvy and non-savvy IT shops can lay off workers with equal ease. During business upturns, savvy IT shops have the easier task. They have kept relationships with recruiters and the recruiters know their needs. Recruiters can match candidates to their needs. The non-savvy IT shops are the Johnny-Come-Latelys who have no working relationship. Recruiters do what they can, but the savvy shops will get the best matches, leaving the less-talented folks.

If you want talent, build your farm team.

If you want commodity programmers, then don't build a farm team. But then don't bother interviewing either. (Do you interview boxes of printer paper? Or the person behind the fast food counter?) And don't complain about the dearth of talent. After all, if programmers are commodities, then they are all alike, and you should accept what you can find.


Sunday, October 10, 2010

Risk or no risk?

How to pick the programming languages of the future? One indication is the number of books about programming languages.

One blog lists a number of freely-available books on programming languages. What's interesting is the languages listed: LISP, Ruby, Javascript, Haskell, Erlang, Python, Smalltalk, and under the heading of "miscellaneous": assembly, Perl, C, R, Prolog, Objective-C, and Scala.

Also interesting is the list of languages that are not mentioned: COBOL, FORTRAN, BASIC, Visual Basic, C#, and Java. These are the "safe" languages; projects at large companies use them because they have low risk. They have lots of support from well-established vendors, lots of existing code, and lots of programmers available for hire.

The languages listed in the "free books" list are the risky languages. For large companies (and even for small companies) these languages entail a higher risk. Support is not so plentiful. The code base is smaller. There are fewer programmers available, and when you find one it is harder to determine his skills (especially if you don't have a programmer on your staff versed in the risky language).

The illusion of safe is just that -- an illusion. A safe language will give you comfort, and perhaps a more predictable project. (As much as software projects can be comforting and predictable.)

But business advantage does not come from safe, conservative strategies. If it did, innovations would not occur. Computer companies would be producing large mainframe machines, and automobile manufacturers would be producing horse-drawn carriages.

The brave new world of cloud computing needs new programming languages. Just as PC applications used BASIC and later Visual Basic and C++ (and not COBOL and FORTRAN), cloud applications will use new languages like Erlang and Haskell.

In finance, follow the money. In knowledge, follow the free.


Monday, October 4, 2010

A Moore's Law for Programming Languages?

We're familiar with "Moore's Law", the rate of advance in hardware that allows simultaneous increases in performance and reductions in cost. (Or, more specifically, the doubling of the number of transistors on a chip roughly every eighteen months to two years.)

Is there a similar effect for programming languages? Are programming languages getting better (more powerful and cheaper to run) and if so at what rate?

It's a tricky question, since programming languages rest on top of hardware, and as the performance of hardware improves, the performance of programming languages gets a free ride. But we can filter out the effect of faster hardware and look at language design.

Even so, rating the power of a programming language is difficult. What is the value of the concept of an array? The concept of structured (GOTO-less) programming? Object-oriented programming?

The advancements made by languages can be deceptive. The LISP language, considered the most advanced language by luminaries of the field, was created in the late 1950s! LISP has features that modern languages such as Ruby and C# are just beginning to incorporate. If Ruby and C# are modern, what is LISP?

The major languages (assembly, FORTRAN, COBOL, BASIC, Pascal, C, C++, and Perl, in my arbitrary collection), have a flat improvement curve. Improvements are linear and not exponential (or even geometric). There is no Moore's Law scaling of improvements.

If not programming languages, perhaps IDEs. Yet here also, progress has been less than exponential. From the initial IDE of Turbo Pascal (compiler and editor combined), through Microsoft's integration of the CodeView debugger, to Microsoft's SQL Intellisense and stored procedure debugger, improvements have -- in my opinion -- been linear and not worthy of the designation of "Moore's Law".

IDEs are not a consistent measure, since languages like Perl and Ruby have bucked the trend by avoiding (for the most part) IDEs entirely and using nothing more than "print" statements for debugging.

If hardware advances at an exponential rate and programming languages advance at a linear rate, then we have quite a difference in progress.

A side effect of this difference will be the rising price paid for good programming talent. It's easy to make a smaller and cheaper computer, but not as easy to make a smaller and cheaper application.


Sunday, October 3, 2010

Winning the stern chase

A "stern chase" is a race between two competitors, where one is following the other. The leader's goal is to stay ahead; the laggard must overtake the leader. Both positions have challenges: the leader is ahead but has less information (he can't see the follower), the laggard must work harder but has the advantage of seeing the leader. The lagard knows when he is gaining (or losing) the leader, and can see the actions taken by the leader.

In software, a stern chase occurs when replacing a system. Here's the typical pattern:

A company builds a system, or hires an outside firm to build the system. The new system has a nice, shiny design and clean code (possibly). Over time, different people make changes to the system, and the design degrades. The nice, shiny system is no longer shiny, it's rusty. People dislike the system -- users don't like it, nor do developers. Users find it hard to use, and developers find it hard to modify.

At some point, someone decides to Build A New System. The old system is buggy, hard to use, hard to maintain, but exists and gets the job done. It is the leader. The new system appears shiny but has no code (and possibly no requirements other than "do what the old system does"). It is the laggard.

The goal of the development team is to build a new system that catches the old system. Thus the stern chase begins. The old system is out in front, running every day, and getting updates and new features. The new system is behind, getting features added but always lacking some feature of the old system.

Frequently, the outcome of the software stern chase is a second system that has a design that is (eventually if not initially) compromised, code that is (eventually if not initially) modified multiple times and hard to read, and a user base and development team which are (eventually if not initially) asking for a re-write of the system. (A re-write of the new system, not the old one!) The stern chase fails.

But more and more, stern chases are succeeding. Not only succeeding in the delivery of the new system, but also succeeding in that the replacement system is easier to use and easier to maintain.

A stern chase in software can succeed when the following conditions exist:

The chase team knows the old system and the problem domain, and has input into the design of the new system. The team can contribute valuable insights into the design for the new system. The new system can take the working parts of the old system, yet avoid the problems of the old system. This lets the development team aim at a clear target.

The chase team uses better technology than the old system. Newer technology (such as C# over C++) lets the chase team move faster.

The chase team uses a process that is disciplined. The biggest advantage comes from automated tests. Automated tests let the chase team re-design their application as they move forward and learn more about the problem. Without automated tests, the team is afraid to make changes and try new designs. It's also afraid to clean up the code that has gotten hard to maintain. (Even on a new project, code can be hard to maintain.)
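
As a small example of what those automated tests look like in practice (JUnit 4 assumed, and the invoice class is hypothetical), a few tests pinned to the old system's observable behavior are enough to let the team re-design the internals freely:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class InvoiceCalculatorTest {

        // Hypothetical class being built in the new system. Its internals can be
        // re-designed at will; only the behavior checked below must hold.
        static class InvoiceCalculator {
            // Returns the total in cents, rounding tax half-up as the old system did.
            long totalWithTax(long subtotalCents, double taxRate) {
                return subtotalCents + Math.round(subtotalCents * taxRate);
            }
        }

        @Test
        public void totalMatchesLegacyRounding() {
            assertEquals(10675, new InvoiceCalculator().totalWithTax(10000, 0.0675));
        }

        @Test
        public void zeroSubtotalMeansZeroTotal() {
            assertEquals(0, new InvoiceCalculator().totalWithTax(0, 0.0675));
        }
    }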

With these conditions, the chase team moves faster than the team maintaining the old system. In doing so, they surpass the old system and win the race.


Friday, October 1, 2010

NoSQL means no SQL

There's been a bit of debate about NoSQL databases. The Association for Computing Machinery has a discussion of non-interest in NoSQL.

In sum, the folks who are not interested in NoSQL databases want to remain with SQL for two reasons: their business uses OLTP and SQL offers ACID (atomic, consistent, isolated, and durable) transactions, and SQL is a standard.

I have some thoughts on this notion of non-interest.

No one is asking people to abandon SQL databases. The NoSQL enthusiasts recognize that OLTP needs the ACID properties. Extremists may advocate the abandonment of SQL, but I disagree with that approach. If SQL is working for your current applications, keep using it.

But limiting your organization to SQL with the reasoning: "transactions are what we do, and that's all that we do" is a dangerous path. It prevents you from exploring new business opportunities. The logic is akin to sticking with telegrams and avoiding the voice telephone. (Telegrams are written records and therefore can be stored, they can be confirmed and therefore audited, and they are the standard... sound familiar?)

SQL as a standard offers little rationale for its use. The folks I talk to are committed to their database package (DB2, SQL Server, Oracle, Postgres, MySQL, etc.). They don't move applications from one to another. (Some folks have moved from MySQL to Postgres, but the numbers are few.) Selecting SQL as a standard has little value if you stick with a single database engine.

For me, the benefit of NoSQL databases is... no SQL! I find the SQL language hard to learn and harder to use. My experience with SQL databases has been consistently frustrating, even with open source packages like MySQL. My experience with proprietary, non-SQL databases, on the other hand, has been successful. The performance is better and the code is easier to write. (Easier, but still not easy.) NoSQL databases appeal to me simply as a way to get away from SQL.

My preferences aside, NoSQL databases are worthy of investigation. They have different strengths and allow for the construction of new types of applications. Insisting on SQL for every database application is just as extremist as insisting on the complete removal of SQL. Both can contribute to your application suite. Use them both to your advantage.


Tuesday, September 28, 2010

Measuring chaos

Some IT shops are more chaotic than others. (A statement that can be made about just about every discipline, not just software.)

But how to compare? What measurement do we have for chaos, other than anecdotes and comments made by the inmates?

Here's a metric: The frequency of priority changes.

How frequently does your team (if you're a manager) or your workload (if you're a contributor) change its "number one" task? Do you move gradually from one project to another, on a planned schedule? Or do you change from one "hot" project to the next "hotter" project, at the whim of external forces? (In other words, are you in fire-fighting mode? Or are you starting preventative fires with a plan that lets you control the unexpected ones?)

I call it the "PCF" - the "Priority Change Frequency". The more often you change priorities, the more chaotic your shop.
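
Measuring it requires nothing more than a log of the dates on which the number-one task changed. A minimal Java sketch, with an invented log, that computes PCF as priority changes per week:

    import java.time.LocalDate;
    import java.time.temporal.ChronoUnit;
    import java.util.Arrays;
    import java.util.List;

    public class PriorityChangeFrequency {
        public static void main(String[] args) {
            // Invented log: the dates on which the team's number-one task changed.
            List<LocalDate> priorityChanges = Arrays.asList(
                    LocalDate.of(2010, 9, 1),
                    LocalDate.of(2010, 9, 3),
                    LocalDate.of(2010, 9, 17),
                    LocalDate.of(2010, 9, 24));

            LocalDate periodStart = LocalDate.of(2010, 9, 1);
            LocalDate periodEnd = LocalDate.of(2010, 9, 30);
            double weeks = ChronoUnit.DAYS.between(periodStart, periodEnd) / 7.0;

            // PCF: priority changes per week over the measured period.
            double pcf = priorityChanges.size() / weeks;
            System.out.printf("PCF = %.2f changes per week%n", pcf);
        }
    }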

Organizations with competent managers have lower PCF values. Their managers plan for new assignments and prepare their teams. Organizations with less-than-competent managers have higher PCF values, since they will often be surprised and ill-prepared for the work.

But comparing teams is tricky. Some teams will have higher PCF values, due to the nature of their work. Help desks and support teams must respond to user requests and problems. Operations teams must "keep the joint running" and respond to equipment failures. In these types of organizations, a problem can quickly become the number one priority. Changing priorities is the job.

Other teams should have lower PCFs. A programming team (provided that they are not also the support team) has the luxury of planning their work. Even teams that use Agile methods can plan their work -- their priorities may shift from one cycle to another, but not during the cycle.

Don't be seduced by the illusion of "we're different". Some teams think that they are different, and that a high PCF is not an indication of a problem. Unless your work is defined by external sources that change frequently (as with help desks), you have no excuse. If you insist on being different, then the difference is in the manager's skills.