Thursday, April 29, 2010

Egos

I was going to write a post about unified communication and the parallels between the current state of the art and the early, multiple-phone-company era. But current events have led me to a different topic.

The ... discussion (that's a polite word) ... about Apple products and Adobe Flash has been going on for several weeks. Apple's iPhone 4.0 release included a developer agreement that bars Flash and a lot of other technologies from the platform. After the initial release, there was a large outcry from the hacker community, and Apple was not on the winning side.

The furor had passed, and one would think that Apple (and Steve Jobs) would have the acumen to simply carry on and offer their products to the world (Flash-less and all).

But no. Steve Jobs posted a blog entry about Flash on the Apple platform. In doing so, he has stirred the blogosphere.

Perhaps he thinks that no publicity is bad publicity. Perhaps he thinks that the additional blogging will help Apple in their quest for market share.

Steve Jobs is wrong.

The responses to his latest post have been against him. Adobe's response post is restrained, professional, and respectful. The first thing that Jobs has accomplished is that he has made Adobe look good, even trustworthy.

Unlike the first round, which saw anti- and pro-Apple posts, this second round has been uniformly against Apple. The Apple supporters have moved on, or perhaps become bored. If Jobs wants to tweak his opposition, he has succeeded. His second accomplishment has been to invigorate his detractors, with no gain from his supporters.

Apple cannot afford to lose the hacker community. Microsoft has lost it, and that loss is hurting them. Adobe may have problematic software, but hackers can fix that with the right support from Adobe. Apple has crap-tastic software too (think of iTunes for Windows), but without the hackers they have no chance of surviving.

Steve Jobs, take your ego, put it in a box, and store it in your basement. (Or attic, or self-storage.) Or live knowing that the Golden Age of Apple has just come to an end.


Sunday, April 18, 2010

The Wild West gives way to City Hall

One measure of civilization is the number of things that one does not have to worry about. In the real world, we can measure civilization by access to potable water, safe and plentiful food, and physical security.

In the days of the wild west, individuals staked out homesteads and had to be self-sufficient. They had to get their own water, raise (or trade for) their food, build their own shelter, and defend their property. In some areas of the western US, people still live this way. Yet most people live in the city, where city hall provides a police force and runs a water distribution system, and major companies run stores that provide groceries.

Just as civilization advanced in the real world, civilization advances in the cyber world. Microcomputers (and PCs) have been, for the most part, equivalent to the wild west. A computer owner was responsible for the acquisition, operation, and maintenance of the computer. That included the initial purchase of the hardware (usually with an operating system bundled with the hardware), purchase of additional software (word processing, spreadsheets, etc.), and the maintenance of the hardware and configuration (virus-scanning software, floppy disk cleaning, and dusting).

Microsoft did not raise the level of civilization. They made it easier for people to administer systems (although some would debate that claim), but their efforts were equivalent to supplying guns to homesteaders. They provided tools but not civilization.

Apple has raised the level of civilization. The model for iPhones, iPods, and iPads is different from the "wild west" and rugged independence of PC owners. In the Apple model, it is not the owner who must worry about the operating system, security updates, or virus scan software. The owner leaves that to Apple, and simply uses their device.

I'm using the metaphor of city hall; some might view Apple more as Big Brother. Orwell's society may be a better fit for Apple's environment: City Hall is a democratic institution with replaceable leaders while Apple is more akin to a benevolent dictator.

The move from wild west to city hall is not without costs. One's options are limited: the Apple-run App Store has a finite selection of applications. One cannot create an app for one's own iPod; one must get approval from Apple and release the app through iTunes.

Yet the days of writing one's own application are long over. For most people, a computer or iPhone is a device to accomplish a goal, not a toy for tinkering. Those days were over shortly after IBM introduced the PC, and certainly ended by the time Windows became dominant. The effort to create an application rose beyond an individual's capabilities.

Or perhaps the effort was always higher than a typical individual's capabilities. Prior to the IBM PC, microcomputers were used primarily by hobbyists, folks who had the time and inclination to learn the programming of microcomputers. Once PCs became a commodity, the hobbyist became a minority in the sea of users.

Future historians will label the years from 1976 to 2008 as the "wild west" years, or perhaps use some other moniker, to identify a period of rugged individualism and self-reliance. Some will look back with nostalgia (just as we look upon the wild west with nostalgia) but they will forget the harsh conditions that accompanied the complete freedom (just as we have forgotten the difficult conditions of the true 19th century western expansion).

I'm glad to have lived during the golden age of PCs. I'm also happy to see the arrival of civilization (although I would prefer more City Hall and less Big Brother).


Tuesday, April 13, 2010

The decencies of software development

Software development projects, like other societies, have a class structure. In part, the class distinctions are marked by money. But with software development, the distinctions can also be marked by discipline. (Which is perhaps not too far from society in general.)

Members of the high end have lots of nice goodies. Members at the low end have the basics but not the luxuries.

Between luxuries and necessities are the "decencies", those things that make one decent, or acceptable to other members of society. The middle class has decencies and some luxuries.

Software development has progressed over the years, and the luxuries of yore are now considered necessities.

The obvious necessity is the computer. Computers have changed from the large, room-filling beasts to the small, pocket-sized pets of today's iPhone. The change began with the early microcomputers of the 1970s: the Altair, the Apple II, the TRS-80, and their cousins. It was complete with the introduction of the IBM PC XT in 1983. Once these computers were available, serious development could occur on PCs, and serious developers needed PCs. Access to a time-sharing system was no longer acceptable, was no longer "decent".

In the 1980s, the IDE became a necessity. Turbo Pascal and the UCSD p-System introduced the idea of the integrated development environment, and it quickly went from luxury to necessity.

In the 1990s, version control moved from luxury to decency, and then to necessity. Building software without version control is unthinkable today.

In the past decade, we've seen the rise of automated testing. Some may consider it a luxury, but I think of it as clearly a decency and probably a necessity.
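
To show what I mean by automated testing -- and why I no longer consider it a luxury -- here is a minimal sketch in C++. The function under test (add) and the checks are my own invention; the point is simply a program that verifies its own results without a human watching.

    #include <assert.h>
    #include <stdio.h>

    // The code under test -- a stand-in for real application logic.
    static int add(int a, int b)
    {
        return a + b;
    }

    // A tiny self-checking test: it runs unattended and reports
    // success or failure on its own.
    int main(void)
    {
        assert(add(2, 2) == 4);
        assert(add(-1, 1) == 0);
        printf("All tests passed.\n");
        return 0;
    }

A real project would use a test framework and many more cases, but even this much catches regressions every time the code is built.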

So what's the next big thing for software development? What's the next thing to move from luxury to decency and then to necessity?

I'm not sure, but I have some ideas.

First, I'll observe that the items I have listed are all tools. They are things. They are not techniques such as code reviews or computer science concepts such as formal methods.

Second, they are used by developers. They are not practices used by managers, such as project management offices or life-cycle management systems.

Which is not to say that code reviews or project management offices are bad things. I think that they can be good things -- or bad things, if executed poorly -- but they won't be widely adopted by developers. Especially by developers on small projects, or developers on "teams of one", who have adopted the tools of IDEs, version control, and automated testing.

I also think that the new big thing will not be a new language or programming technique. I considered object-oriented programming for the list, along with event-driven programming and GUI programs, but left them off. They all help with windows-type programming ("windows" with a lower-case 'w') and that is what drove their acceptance. (Of the three, object-oriented programming helps the development team the most, with its ability to group and organize code.)

The next big thing won't be a new programming platform. It won't be C#/.NET or Java or Python or Ruby. At least not any of them specifically.

Here are some ideas for the next thing that will be adopted by development teams as a decency, something that is required for a mainstream development project:

- Virtualization. The ability to run other operating systems, or start and stop instances of a machine.
- Platform-independence: Tools that produce programs that run on multiple platforms, such as Windows, Linux, and iPhone.
- Service-oriented architecture: Probably with a new name, as "SOA" seems to have a stigma. Organizing functionality behind services (specifically web services) will let development teams move faster in the long run.

In ten years, we'll look back and wonder how we got along without the next decency. In twenty years, freshman developers will look at us in awe, or possibly horror, as we recount the days of development on leased computers with separate compilers, no version control, manual tests, and tightly coupled system designs.

Yay! for the future!


Friday, April 9, 2010

Apple surpasses Microsoft

Apple and Microsoft have been battling for programmers for a long time. Probably since 1977 with the release of the Apple II (or the "Apple ][" as it was then written), and most definitely since the release of recent MacBooks and iPhones.

With the recently released iPhone/iPod/iPad software license, and its mandate to use the C, C++, or Objective-C languages (and nothing else), Apple has put itself in the lead for antagonizing developers.

In short, section 3.3.1 of the agreement limits development of Apple apps to these languages and excludes all other technologies. Those other technologies include Flash, Mono, .NET, Java, Perl, Python, Ruby, Scheme, and just about anything you care to mention.

The response from the programmer community has been uniform: Apple can go take a long walk off a short pier. (Although multiple sites use somewhat stronger language.)

The programming community has been irked by various restrictions in the past, as well as the capricious process for approving (or rejecting) iPhone apps. The response to this latest maneuver from Apple has been swift and unanimous.

Apple is acting like a monopolist. The strategy may work for a short time, perhaps three or four years. I'm sure that there are lots of developers who will jump through the hoops to write iPhone apps.

But the smart ones, the creative ones, will go elsewhere. They will leave for other platforms not because they want to, or because they feel offended, or because they think that there is some higher form of justice. They will leave because they cannot wear the straitjacket that Apple now requires for iPhone app development. The best and brightest will develop for Droid, or Linux, or perhaps even Microsoft Windows Phone.

This cannot end well for Apple.


Tuesday, April 6, 2010

Second thoughts on the Apple iPad

My initial reaction to the iPad was that it was just a larger iPod. After some reflection, I think I may have been wrong. And not because Apple iPads are selling briskly, but because I have had time to think of the possibilities offered by the iPad.

The iPad, and devices like it, make available new uses of computing power. Just as PCs (and "microcomputers", before the release of the IBM PC) made new uses of computing power available, so does the iPad.

When the early microcomputers arrived (the Apple II, the Radio Shack TRS-80, et al.) they evoked one of three responses from people. From IT professionals ("DP professionals" in those days), the reaction was "it's a toy and not really a computer". From those who would later become computer geeks, the response was "this is too cool". And from the general public, the (wise) response was "what does it do?".

Proto-geeks took to PCs immediately. IT professionals took to them only after business folks demanded support. Of course, business folks didn't really know what to do with them until VisiCalc and Lotus 1-2-3 arrived. So it was the spreadsheet that made the compelling case for business users, and indirectly to IT professionals. The general public kept a skeptical (and wise) view.

Smartphones, PDAs, and now the iPad and suchlike (shall we call them "slate PCs"?) need no compelling application. Or, perhaps more accurately, have no single compelling application. Individuals see the value in smartphones and PDAs. Businesses see the utility of cell phones and Blackberry phones, but see nothing beyond a mobile phone and mobile e-mail client.

I posit that businesses may see the value in slate PCs. I say "may" because the case is not compelling -- yet.

Businesses in the 1970s and 1980s understood computers. Their understanding was simple: a computer is a big, expensive box that does magical things with data. It is attended by high priests and its uses are specific.

The arrival of the PC disrupted that convenient arrangement. PCs were not large boxes, nor did they reside in special rooms, nor did they require high priests. They were inexpensive, they could be in anyone's office, and the user could use them with a modicum of training.

Corporations, working with Microsoft, took the better part of a decade to fully integrate PCs into their IT infrastructure. When they were done, PCs had become cheap boxes attached to a large, expensive network managed by the high priests of IT support. Access to PCs is now governed by the Microsoft Active Directory server, updates are pushed by Microsoft or a local server, and applications must conform to corporate policies. The result is that the centrally located and centrally administered mainframe has become a centrally administered network with limited power outside of server rooms (and committee meetings).

The iPad and slate PCs bring another round of disruption. They are small enough to carry conveniently. They are networked but not through the corporate network. They are powerful enough to perform meaningful work. And they are easy to use, much easier than PCs ever were. They are as convenient as a smartphone and large enough to display documents and graphics at a reasonable size.

What's needed for slate PCs in the workplace is the compelling application. I'm not sure what it will be, but my guess is that it will be a collaborative application. And by "collaborative", I don't mean the lame attempts of Microsoft Outlook or Sharepoint.

Here's an example of a multi-user application, one that is perhaps not collaborative but is usable by a group of people: sheet music.

Picture an orchestra. In our current world, the musicians place sheet music before them and perform. In the future, they can place a slate PC in front of them and have it display the music for them. One immediate advantage: pages turn at the correct time, and musicians do not have to lean forward to turn the page.

Not impressive, you say? Perhaps not. Let's go a little further.

Network the slate PCs together. With the network, they can communicate. (This lets them turn pages on cue.) Not only can they turn pages, they can highlight the current musical notes at the right time, by having the notes change color or become bold. The musician will always have the correct place, and won't "get lost".

Not for the professionals, you say? Probably not. I don't see the professional players changing anytime soon. Classical music has enough stodge to prevent such an adoption.

But high school orchestras could use it.

Smart phones are too small for such an application. Slate PCs offer a screen of acceptable size.

I don't know how long it will take corporate America to integrate slate PCs. They've done a poor job of integrating smartphones (except for the Blackberry) so I expect that slate PCs will have a fair wait. In that time, we'll see the innovative applications that make a technology great.


Sunday, April 4, 2010

One language?

I was working last week with a co-worker who was not that familiar with C++. She was attempting to make (and understand) changes to the code. She's a smart person, but not a programmer, and I realized that part of her difficulty with C++ is that it is not a single language.

She was working on code that had been written by a third person, and she was making some minor changes. The code was not the best C++ code, nor was it the worst, but it was complex enough to confuse.

As I looked at the code, I realized that I was looking at not one but three different languages. The code was "standard" C++ code, with some assignments, a printf() call, and a macro. While all part of normal C++, these three elements of the code are written in different languages.

The assignment is a normal C++ statement. No problems there.

The printf() call (actually, it was a call to sprintf(), but I consider the printf() and sprintf() calls as part of one family) is C++, yet the format specifier is a different language. The format (in this case "%10.3lf") is its own little language, quite distinct from C++.

The macro (a multi-line expansion that declared a pointer, called malloc(), and then executed a for() loop to initialize members to zero) is also a language of its own. Macros look similar to C++, yet there are differences: the space required after the name, the lack of braces, and the backslashes at the end of lines that indicate continuation.
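
To make the point concrete, here is a small reconstruction (from memory, with invented names) of the kind of code we were reading. All three languages appear: plain C++ statements, the printf() format mini-language, and the preprocessor.

    #include <stdio.h>
    #include <stdlib.h>

    // Language three: the preprocessor. Note the backslashes that mark
    // line continuations and the absence of braces around the "body".
    #define MAKE_ZEROED_ARRAY(ptr, count)                           \
        double *ptr = (double *) malloc((count) * sizeof(double));  \
        for (int i = 0; i < (count); ++i)                           \
            ptr[i] = 0.0;

    int main()
    {
        // Language one: ordinary C++ -- a plain assignment.
        double total = 123.456;

        // Language two: the format specifier inside the string.
        // "%10.3lf" is not C++; it is an instruction to the formatter.
        char buffer[32];
        sprintf(buffer, "%10.3lf", total);

        // The macro expands into a declaration, a malloc() call,
        // and a for() loop -- just as described above.
        MAKE_ZEROED_ARRAY(values, 10);

        printf("[%s]\n", buffer);
        free(values);
        return 0;
    }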

The task of understanding C++ is made harder by the use of multiple languages. I eventually explained the code, and the person making changes understood it, but the cognitive load was higher due to the three different languages. (And the fact that this person did not recognize that there were three different languages with their own scopes did not help. Mind you, I did not recognize that there were three languages at the time, either.)

Templates in C++ could be considered a fourth language, and a friend of mine thinks that templates are two additional languages to C++.
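
For those keeping count, here is a trivial template of my own (not the code from work) to show what that fourth language looks like. The angle brackets, the typename keyword, and the instantiation rules form a grammar of their own, separate from ordinary C++.

    // The template mini-language: angle brackets, 'typename', and
    // compile-time instantiation rules.
    template <typename T>
    T largest(const T &a, const T &b)
    {
        return (a > b) ? a : b;
    }

    // Used like ordinary C++, but the compiler generates a distinct
    // function for each type: largest<int>, largest<double>, and so on.
    int    bigger_int    = largest(3, 7);
    double bigger_double = largest(2.5, 1.5);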

Egads! *Five* different languages for a single program?

But wait, don't forget the ability to include assembly language! That brings the total up to six!
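
Here, too, a sketch -- and a hedged one, since inline assembly is compiler-specific. The example below uses the GCC-style asm keyword; Microsoft's compiler spells it __asm with different rules, which rather proves the point about extra languages.

    // Inline assembly: yet another grammar inside the same source file.
    // (GCC-style syntax shown; other compilers differ.)
    void do_nothing_briefly()
    {
        asm("nop");   // a single no-operation instruction
    }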

Yes, it would be possible to write a C++ program that used six different languages.

This problem is not unique to C++. Other common languages have such "extensions", although not to the extreme of C++. Perl, for example, uses regular expressions. These are their own language, so one could easily claim that a Perl program with regular expressions is really a program in two languages.

Beyond regular expressions, SQL is its own language, so a C# program could be written in three languages: C#, regular expressions (if you used them), and SQL.

Come to think of it, you can use SQL in C++, so our possible total for languages in a C++ program is up to seven. Add in embedded HTML or XML, and you've reached eight.
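
As an illustration of SQL living inside C++, here is a sketch that uses the SQLite C API. The database file, table, and columns are invented; the point is that the string handed to sqlite3_exec() is written in an entirely different language from the code around it.

    #include <sqlite3.h>
    #include <stdio.h>

    // C++ supplies the control flow; the string below is pure SQL.
    int list_large_balances()
    {
        sqlite3 *db = NULL;
        if (sqlite3_open("example.db", &db) != SQLITE_OK)
            return -1;

        const char *sql =
            "SELECT name, balance "
            "FROM customers "
            "WHERE balance > 100.0 "
            "ORDER BY name;";

        char *error_message = NULL;
        int result = sqlite3_exec(db, sql, NULL, NULL, &error_message);
        if (result != SQLITE_OK)
        {
            printf("SQL error: %s\n", error_message);
            sqlite3_free(error_message);
        }

        sqlite3_close(db);
        return result;
    }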

This is too much. Any program that uses more languages than Snow White had dwarves is extreme. Eight is more than I want, and more than a reasonable person can keep straight. The cost of shifting from one language context to another is greater than zero, and must be considered in the maintenance of a program.

I don't think that we will ever get back to "one language in one program". Regular expressions and SQL are too useful. But we can keep other languages out. C# and Java have removed the pre-processor and thereby removed the macro language. They've also removed assembly language. These were two changes that I liked, and now I know why. But they've added templates (or "generics", as they call them), which I see as a drawback.

Well, a little progress is better than none.

Saturday, April 3, 2010

The See-saw of Complexity

We've been using computers for more than fifty years. With over half a century of experience, what can we see?

Well, certainly different languages have risen and fallen in popularity. Popularity in terms of praise and fandom as well as popularity in terms of sheer (if grudging) use. Modern languages such as C# and Ruby have a large fan base, and therefore a social popularity. COBOL has few fans, yet many organizations use it for many applications. COBOL retains a degree of utilitarian popularity.

In addition to popularity, languages have changed over time. Most have become more complex. This makes sense: as language designers add features, they strive to keep backwards compatibility. A clear example is C and C++. The initial C (K&R C) was a small, simple language. The revision of ANSI C (which mandated prototypes and introduced the 'void' type) was somewhat more complex. If we consider C++ to be a revision of C (and Stroustrup thought to call it "C with classes"), then we see an additional step up in complexity. The trend continues with revisions to C++, which added templates, the STL, and now functors in the latest version.

If we allow the C/C++ family to continue with Java and C#, we see a decrease in complexity. At the expense of compatibility, simpler languages succeeded the complex C++/STL language.

Let's look at another "track" of languages: the track that includes the Unix shell and scripting languages. I lump all of the Unix and Linux shells together as "shells", for simplicity. I want to include other languages on this track: awk, Perl, Python, and Ruby. These are the scripting languages. (I'm omitting VBscript and JavaScript as they are used -- mostly -- in specific domains.)

On the scripting language track, we see an increase in complexity, rising from the shells, peaking at Perl, and then declining with Python and Ruby. (Indeed, one of Ruby's selling points is that it is a simple, elegant language.) Some compatibility was kept, but not as strictly as with C and C++.

A third track, consisting of FORTRAN, COBOL, Algol, BASIC, and Pascal, is possible. This track is less cohesive than the previous two, as each language went through its own growth of complexity. Yet the trend is: FORTRAN and COBOL at the beginning with a degree of complexity, Algol later with more complexity, BASIC as a return to simplicity (although growing into a more complex Microsoft Basic), and another return to simplicity in Pascal (which in turn grew in complexity in the form of Turbo Pascal and eventually Delphi). My description is not quite accurate, as some of these events overlapped. But let's allow some elasticity in time to give us the effect of moving between complex and simple.

Each track reveals a pattern. It seems that we, as humans, move from simple to complex and then back to simple. We don't start with simple and then grow to complexity and keep going. We oscillate around a certain level of complexity. (Seven plus or minus two, anyone?)

Perhaps this is not surprising. As we add features to a language (hacking them in, to maintain compatibility) we eventually see larger patterns. Someone codifies the patterns into a new language, but since it is a new language, it doesn't do everything that the old language does. But not for long, as the new language begins its trip up the complexity curve, adding features. There must be a point when a new simpler language (one that uses the new patterns) is more effective than the old language. At that point, developers start using the new language.

Interestingly, operating systems do not follow this trend. CP/M was a simple operating system, replaced by MS-DOS with more complexity. Successive versions of MS-DOS added complexity. OS/2 and Windows upped the complexity again. Later versions of Windows added complexity. For PC operating systems, and I think operating systems for all levels of hardware, we have never seen a move to simplicity.

What we have seen is a shift from one platform to another. From mainframes to minicomputers, from minicomputers to PCs, and from PCs to servers. Each shift saw a reduction in complexity, later followed by an increase.

One current shift is from PCs to smartphones. The initial smartphones (and pocket PCs) had very simple operating systems (consider the Palm Pilot). Even today, the iPhone OS and the Microsoft Windows Phone versions are smaller versions of their desktop counterparts. Yet each new version ups the complexity.

A second current shift is to the cloud. Here again, we see that "cloud operating systems" are considered simple and perhaps not capable of business operations. Yet each new version ups the complexity.

None of this is to say that one must abandon their current software or hardware and jump onto the smartphone platform for all applications. Or the cloud. Minicomputers did not kill off mainframes, but expanded the space of possible computing solutions. PCs did the same.

Smartphones and cloud computing will expand the space of computing solutions. Existing applications will continue, and new applications will emerge, just as the previous new platforms made e-mail, desktop publishing, and spreadsheets possible.

Languages will continue to emerge and grow. Some will decline, and a few will fall into oblivion. Yet the popular languages of today will remain for many years, just as FORTRAN and COBOL remain.