Monday, January 31, 2011

The keyboard bows out

Does Apple have it in for the keyboard? I think it might. All of its recent successes have been devices without keyboards. (Physical keyboards, that is.) And I think that this may be a good thing for programmers and especially for programming languages.

I can see several possibilities for the elimination of the physical keyboard. Apple could use today's virtual keyboards of the iPod and iPad. Or they might use a small Kinect-like device to capture gestures and interpret them as keystrokes. Apple may include USB ports and Bluetooth for wired and wireless keyboards, as separately purchased accessories. Physical keyboards might be welcomed by a subset of users: secretaries, authors, programmers, and old-timers who refuse to give up the old ways.

The first step in the elimination of the physical keyboard is a transition to a similar representation in virtual form. Apple already does this. But it won't be long before people start flexing this new virtual keyboard. Look for Dvorak configurations and then custom configurations that change not only the key sequence but also the key positions. I expect people to experiment with different layouts, moving keys out of straight rows and into arcs and clumps. And why not? When the "keys" are just bits on a screen, let users move them to convenient positions.

Virtual keyboards will change more than just the layout. They will change the way we program. The hardware we use governs our ideas of programming languages.

The last big change to programming languages was caused by the transition from keypunch machines to terminals. The invention and use of Teletypes, DECwriters, and VT-52s made new programming languages possible. (I omit the venerable IBM 3270 terminal here, since in my opinion it did *not* help users perceive code as text. The IBM hardware and supporting software operated in modes that mimicked punch cards or displayed text in field-structured form. It was the unstructured Teletype that allowed the creation of new languages.)

I'm convinced that the "new hardware" of terminals with upper and lower case letters and full-screen addressability allowed us to create the full-screen editor and also languages that viewed programs as flowing text and not as 80-column chunks. Languages such as Algol, C, and Pascal were possible once we started thinking of code as "statements" and not as "lines". The continuation character and reserved columns in FORTRAN were a result of punch cards, not language design.

Virtual keyboards will be the first step towards a new technology set, just as Teletypes were the first step into the world of text terminals. This new world will have new ideas about programming languages and code. Probably graphical, possibly geometrical. I don't know the nature of the new languages, but they will be as different from today's C# and Ruby as those languages are from FORTRAN and COBOL.

Sunday, January 30, 2011

The virtue of the bad idea

Scott Adams mentions the notion of "the bad version". (Search for the article on the Wall Street Journal web site. Look for "How to Tax the Rich".) The article's focus is on taxes, but the idea of the bad version is worth examining.

For example, if your character is stuck on an island, the bad version of his escape might involve monkeys crafting a helicopter out of palm fronds and coconuts. That story idea is obviously bad, but it might stimulate you to think in terms of other engineering solutions, or other monkey-related solutions.

After a bit of thought, I realized that this method is exactly how most software is developed, and it is the core value of agile development. Start with an idea and implement it. The idea isn't perfect, nor is the implementation. But the program does not stop there. (Well, I guess it can. You could decide that it was good enough and go home.)

Most folks (programmers, managers, testers, and even salespeople and users) want improvements to programs. Each has ideas for improvements.

The basis for improvement is the current version of the program. For people who do not know the entire program (and for large programs it is easy to know only a portion of its functions), the requirements for "version N+1" are often "do everything that version N does, plus these additional items". For shops using agile methods, this is easy since they have tests for all functions in version N. Shops using waterfall methods typically have a harder time, as they do not have a comprehensive set of documented requirements.

The big idea is to improve on your ideas. Start with something -- anything -- good or bad. Admit that the idea is not perfect. Get people thinking about improvements. Support them and create a process to implement the improvements.

Use the power of the bad version to generate ideas. Use the power of ideas for improvements. Use the power of multiple releases for better software.

Thursday, January 27, 2011

GUIs considered harmful

It's tempting to create a solution that uses a GUI program. Most people are comfortable with GUI programs and uncomfortable with command-line programs, so they prefer the GUI solution.

The problem with GUI programs (most GUI programs) is that they are not automatable. You cannot drive them to perform a task. GUI programs cannot be (easily) plugged into other programs. The GUI is the end of the line, designed to be used by a human and nothing else.

The ability to use a program within a larger system is a good thing. We're not omniscient. When designing a program (and thereby automating certain tasks), we choose to leave some tasks to the human in front of the GUI.

For example, the task of deleting files (in Windows) can be performed in the Windows Explorer program, a GUI program. I can select a set of files, or a directory name, and instruct Windows Explorer to delete those files. For normal files, Windows Explorer performs the task with no other action from me. But if any of the files are write-locked (that is, flagged as read-only), Windows Explorer will ask me about each file before deleting it.

The folks at Microsoft added a button to the "are you sure about this read-only file" prompt to tell Windows Explorer that I am sure, and that I am also sure about any other files marked as write-locked. But this is after-the-fact behavior, and it only comes into play once the first write-locked file has been encountered. (There is no way to tell Windows Explorer in advance that I want this behavior; if any of the files are write-locked, Windows Explorer will trip over the first one.)

The Windows command-line command for deleting files (DEL), on the other hand, has options to remove write-locked files, and you specify them in advance. With one action, you can tell the command-line shell that you are serious about deleting files and that it should not ask any silly questions.

Command-line programs can often be used within larger systems. In Windows (and MS-DOS), one can use batch files to execute a series of programs. Unix and Linux have similar (although more powerful) concepts with their shell programs. And all of the major operating systems support redirection to accept input from a file instead of a keyboard.
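
The same "decide up front" idea carries over to code that a larger system can call. Here is a minimal C# sketch (the FileCleaner class and its names are my own invention, not part of any Windows tool), using .NET's System.IO to delete a set of files, clearing the read-only flag so that nothing stops to ask questions:

using System.IO;

class FileCleaner
{
    // Delete every file matching 'pattern' in 'directory'.
    // Clearing the read-only flag first means no per-file prompts:
    // the "yes, really delete them" decision is made up front.
    public static void DeleteAll(string directory, string pattern)
    {
        foreach (string path in Directory.GetFiles(directory, pattern))
        {
            File.SetAttributes(path, FileAttributes.Normal);
            File.Delete(path);
        }
    }
}

A batch file, a scheduled task, or another program can call FileCleaner.DeleteAll() and chain it with other steps -- exactly the kind of composition that a GUI-only program rules out.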

Not all GUI programs are evil. Microsoft has done a rather good job of building usable GUI programs. Their programs are often designed with two layers, a GUI layer and a lower layer that can be used by other programs. Sometimes this lower layer is a command line, sometimes it is a COM interface. But the work has to be done; the design has to allow for "bimodal" use.

It's all too easy to design a GUI program for the moment, implement it, and lock out any future increases in automated capabilities. A GUI program prevents others from automating tasks and often freezes the developer at a specific level of automation. The developer of a GUI program is in a comfortable position: some (perhaps most) of the work is automated, and additional automation is an expensive proposition (changing or re-writing the GUI program). Once in this position, the developer ceases to think about improvements to the program -- the program is "good enough" and the developer convinces himself that the remaining manual work isn't that onerous. This is not a good place to be.

I don't advocate the elimination of GUI programs. They can be effective. But they can also lead to a stifled mental state.

Tuesday, January 25, 2011

Lessons of programming languages

Programming languages are more than methods of instructing a computer to perform instructions. They are techniques for us humans to organize our thoughts. The syntax and structural capabilities of programming languages tell us more about ourselves and our mental discipline than about the programs of a specific era.

Here is a run-down of the concepts that programming languages have given us:

From COBOL we learned that high-level languages are possible and practical. (There were many high-level languages, and many have been forgotten. COBOL remains and its lessons remain.)

FORTRAN taught us that programming languages can be portable, allowing programs to run on more than one hardware architecture. (COBOL taught us this too, to some extent.)

LISP is almost as old as COBOL and FORTRAN, but we didn't learn any lessons from it. (Not that it has nothing to teach us. Indeed, there are lessons. We didn't learn them.)

From Algol we almost learned that structured programming is possible. This was an important lesson, and most languages today use it. But it took a while for the lesson to sink in.

The lesson from PL/1 is that a language cannot be all things to all people and succeed.

BASIC taught us that there is (or was) a hobbyist market, and that individuals, including high-school students, can write programs.

We learned from C that portable languages -- truly portable languages -- are hard. Despite the efforts of the creators (and later, ANSI committees), C was (and is) haunted by big-endian/little-endian disputes and ambiguities of the size of an integer.

From Pascal, we learned (again) that structured programming is possible. This time, the lesson stuck. I think that every language created after Pascal adopted the concepts of goto-less programming.

The lesson from Ada is that a language cannot be all things to all people and succeed. And that we didn't really learn the lesson of PL/1. A side lesson of Ada is that the US Department of Defense was no longer large enough to force a new language onto the industry.

The C++ language taught us that we can use object-oriented programming (and that we are capable of designing and writing object-oriented programs). It also taught us that complex languages can be accepted by mainstream developers, at least for a while.

Visual Basic taught us that a vendor-owned language is subject to the whims of the vendor. Microsoft issued six versions of Visual Basic in its lifespan (not counting the current VB.NET release), and many new versions changed the syntax of previous versions, creating maintenance headaches for people who committed to Visual Basic.

Java taught us multiple lessons. It taught us that object-oriented programming does not have to be incredibly hard (as in C++) but can be merely hard. It also taught us that virtual processors were viable technology. Microsoft learned from this lesson and built the .NET platform, using concepts from Java and the much-earlier (and failed) UCSD p-System.

In some ways, Java can be seen as an admission that truly portable languages are not possible (or are at least very difficult). Java achieves its "run everywhere" capability by using the trick of a virtual machine tailored for each hardware platform, and has had challenges with uniform GUI operations.

The lesson from Python and Ruby is that dynamic languages can allow for nimble development. Coupled with agile practices, we learned that static typing and up-front design are not necessary for effective development of software.

Looking forward, I think that we will continue to learn lessons from programming languages. The next lesson may come from functional languages, which may teach us that programs can be reliable and high quality. But that lesson remains to be seen.

Sunday, January 23, 2011

The disappearing setup program

The setup program had its genesis in the old set-up instructions of early IBM PC applications. Software was distributed on a floppy disk (a true floppy disk, the kind you could bend and flex), and these early applications had no setup program. Instead, you created a directory on your hard disk, copied all of the files to the new directory, and ran the program.

Then a step was added: add the directory to your PATH variable.

Then the process changed again: instead of copying the files, you unzipped them.

At this point, developers were creating batch files to perform these steps, as a convenience for their customers (and as a way to reduce support calls).

With the introduction of Windows, installation sequences became more complex (especially so with the introduction of the Windows Registry) and Microsoft set the bar with a setup program. Vendors such as InstallShield and Wise provided setup-builders that allowed people to specify files and create fully-functional and professional-looking setup programs.

Microsoft raised the bar again with its MSI packaging. This moved the install engine into Windows yet retained flexibility for the application developers.

While all of this was happening for Windows, similar things were happening for Macintosh applications and Linux applications.

Yet the setup program's rise in visibility has ended. Now the setup program has all but disappeared (except on Windows).

Most Linux distributions use package managers. These package managers simplify the installation of software to a level equivalent to an automat. The user picks from a list of available packages and clicks on the "install" button. (You can remove software this way, too.) The package manager resolves dependencies, adding required packages automatically. In the Linux and open source world, most packages are free, so installing them incurs no additional cost.

Apple has for a long time had a setup operation that was simply a "drag and drop" from the distribution media to your local hard disk. For iOS, it leverages iTunes to let you buy (some apps are free) and install apps, and dragging is no longer needed.

For Linux and iOS, the setup program has become invisible. It's there, but you the user (or system administrator) don't see it. The setup program has been equipped with an automatic transmission, so the clutch pedal is no longer needed. (There are a few programs that need to be configured after installation. These tend to be infrastructure programs, things like the Apache web server, or Samba, or an NTP client. Yet many of those configurations tend to be limited to simple changes in configuration files.)

Microsoft has made improvements over time. The concept of a single setup engine and installable packages is a good one. But they have been surpassed by Linux and Apple, with package managers that make installation of applications invisible.


Thursday, January 20, 2011

Fortran in any language

One of the witty remarks about programming goes: I can write Fortran in any language!

The comment is usually made during a discussion of programming languages, usually new languages. It is generally used to indicate the ability to limit the use of a new language to the features of an older language, such as using a new C++ compiler but writing only the constructs of C. Since C++ is a superset of C, all C programs work. One can avoid the hard work of learning C++ and write the comfortable syntax of C while claiming to write C++ code. (One can make the claim, but savvy geeks will quickly see through the fraud.)

While witty, the comment is not quite true. And I think that it is a bit unfair that we keep picking on Fortran.

The idea, perhaps, is that Fortran is so basic, so rudimentary, so primitive, that every language has its features (plus a whole lot more that makes it a different language). Thus, BASIC is Fortran with better looping and input-output, and Pascal is Fortran with pointers and better structure, and one can write "primitive" Fortran-like programs in either BASIC or Pascal. The comment "I can write Fortran in any language" is a condensation of "I can write programs in any language that use a limited subset of that language which is very close to Fortran".

I think that this verbal abuse of Fortran is a bit undeserved. Fortran may be many things, and it may even be primitive, but it is not the parent object of modern programming languages, with descendants that have everything Fortran had plus extra bits. Fortran had a lot that other languages abandoned.

For example, FORTRAN (in the early versions of the language: FORTRAN II and FORTRAN IV) had no variable declarations. Pascal and C (and Java and C#) require all variables to be declared in advance. Python and Ruby have moved away from this, returning to the original FORTRAN style, and Perl, oddly, does not require declarations unless you use the 'strict' module, which changes Perl to require them.

FORTRAN also had the GOTO statement, which was kept in C and even in C++ (in a reduced form). Pascal discouraged the GOTO, and it is not to be found in Java or other modern languages.

One interesting (and mind-bending) construct in FORTRAN was the arithmetic IF. Instead of the logical if ("IF condition THEN action ELSE action"), FORTRAN used an arithmetic-expression construct: "IF (expression) LABEL1, LABEL2, LABEL3". Execution was routed to one of the labels based on the value of the expression: negative values to the first label, zero values to the second, and positive values to the third. Only with the introduction of FORTRAN 77 did we see the IF...THEN...ELSE construct.

FORTRAN has changed over the years. It started with a pretty good idea of a high-level language, and morphed as we figured out what we really wanted in a language. It's not C# or Ruby, and it won't be. But it has changed more than any other language (with the possible exception of Visual Basic).

Its resilience is a lesson to us all.

Monday, January 17, 2011

How to select talent

COMSYS connects people to positions. COMSYS has a website.

Registration should be easy. It should be a no-brainer. The concept of registration at a web site has been around for slightly less time than the Web itself. It is "old hat". Granted, some web sites need more information than others, based on the services that they provide. And some web sites need more security than others. But everyone does it.

The first problem with COMSYS' site is the "explanation" web page. This page informs the registrant of the steps needed to register, and includes instructions such as "first click on this link, and then fill out this information, and then click on this other link to fill out more information". The need for such an explanation page indicates a failure of web design.

If you need to explain to people the steps needed to register on your web site, you have a very poorly designed web site.

COMSYS' problems extend beyond the need for an explanation page.

The basic profile section, which lists a phone number and *two* e-mail addresses, has some major UI gaffes. I'll ignore the lack of support for a variable number of e-mail addresses and phone numbers, as in the GMail contacts UI. Variable fields are fairly new, and only the cutting-edge companies have them.

But COMSYS' site has major flubs. A phone number is edited, by the site, to contain only digits. Their edits remove parentheses and dashes -- those symbols that make the phone number readable to humans. And for the e-mail address fields, if you enter only one, the web site copies the e-mail address to the second field.

One field is listed as "Industry Experience". It's a text field, and you don't know quite what to put in it, until you realize that it is only two characters wide. (The text box is much wider than two characters.) Apparently it is for "years of experience".

The skills pages allow you to select various skills and self-rate your level for each (a nice feature). You can even mark skills as "primary". But only after selecting a bunch of skills as primary, and then clicking on 'submit', do you learn that you can select at most three as primary. The shock of a hidden rule for data entry overwhelms the amazement at a limit of three primary skills.

Major UI gaffes on your web site make you look like an amateur.

The task of connecting contractors with positions is a complex one. COMSYS chooses to collect information on the skills of each individual. The UI they have is not particularly onerous, but it's not friendly either. They have a series of pages that list sets of skills, and the registrant checks the skills that they have. The problem with this approach is the omitted skills ("Ruby on Rails" was listed, but not "Ruby").

Using a defined list of skills (or not allowing registrants to define their own skills) gives you an incomplete view of the person.

But these are nit-picks. I see bigger problems:

Problem one: COMSYS has built a web site to (apparently) let their back end (identifying candidates for positions) work efficiently. They have done this at the expense of the front end.

Problem two: COMSYS has built a web site that conveys the attitude "we are in charge and you are the product". Registrants must follow COMSYS' rules on the web site, listing only those skills that COMSYS deems important. Registrants have no ability to add unique skills to the list.

Problem three: Registrants cannot use the COMSYS site to extend their personal branding. People want to buy in to a job site, to define themselves and show off their talents. (I'm not saying that COMSYS should allow people to re-design the COMSYS web site and use custom graphics, but I am suggesting that people want to list their own skills and provide custom descriptions.)

The bottom line is that this web site offends, and COMSYS will probably lose talent because of it. After using this web site, I am pretty sure I understand the corporate philosophy of COMSYS, and I don't want to work there. Of course, they would place me not at COMSYS but at a client site, so the internal workings of COMSYS are not an issue.

Or are they? I suspect that COMSYS deals with corporations that think like themselves, just as individuals associate with like-minded people. If that is true (and I recognize that the reasoning starts with an interpretation of a web site and follows some tenuous logic) then I don't really want to work with COMSYS' friends, either.

Sunday, January 16, 2011

The Brave New World of Windows 8

Recent rumblings have mentioned a new version of Windows, currently known as "Windows 8". If it seems that Microsoft is releasing new versions of Windows quickly, perhaps it is because ... Microsoft is releasing new versions of Windows quickly. The sedate pace of Windows 2000 and Windows XP has been changed to a faster tempo with Windows Vista and Windows 7.

This may or may not be a good thing for Microsoft.

Certainly the revenue stream is a good thing. And a new version of Windows allows for a "reset" of the hardware requirements, allowing Microsoft to build on a set of more powerful computers. With more computing power, Microsoft ought to be able to offer a better computing experience.

The competing OSX and Linux have had quick releases: OSX "Leopard", "Snow Leopard", and now "Lion"; Ubuntu's six-month pulse of releases 9.04, 9.10, 10.04, and 10.10. Microsoft may feel some need to "keep up with the Joneses".

This may or may not be a good thing for Microsoft customers.

A new version of Windows places a load on customers. Upgrading existing systems, or replacing hardware, are tasks that are time consuming (and dollar consuming). The change from the Windows XP interface to Windows Vista's "Aero" was a big jump and probably cost more in terms of retraining time than in license upgrade dollars.

And the big question that customers are still asking is "what is the benefit?" Most customers (especially corporate customers) were happy with the Windows XP GUI. It was a nice, known interface, perhaps with some arcane bits, but everything had been mapped. Support groups knew how to make it work.

To succeed with Windows 8, Microsoft needs to demonstrate the value in a new version. Not just the new features (such as Vista's pretty-but-gratuitous GUI) but benefits. How does a new Windows help business?

I suspect that a big feature in Windows 8 will be the Microsoft app store. (Or "market", or whatever they call it.) I suspect that Microsoft will shift software distribution to their app store, and move away from the model of CD distribution.

Such a move will have large effects on the Windows "ecosystem". Companies that sell Windows software will have to move to the new app store model. This move won't be easy, and it won't be cheap. Yet it shouldn't be a surprise: you can see the signs with the tools available for Windows Phone 7.

Forward-looking companies will prepare for this new world in advance.

Thursday, January 13, 2011

Loopy thinking

Let's think about loops. The venerable father of loops (for most programming languages) is the old FORTRAN "DO" loop:

DO 200 I=1,10
C do stuff
200 CONTINUE

which performs the section "do stuff" ten times, changing the value of I from one to ten on each iteration.

BASIC uses a similar notation:

10 FOR I = 1 TO 10
20 REM do stuff
30 NEXT I

In C, the code is:

for (int i = 0; i < 10; i++)
{
/* do stuff */
}

(The loop in C runs from zero to nine, not one through ten, since C programmers start counting at zero.)

All of these expressions of a loop force the programmer to think in terms of the loop and the incremented variable. The variable exists for the loop, and its value changes with each iteration. Most programmers think that this is the natural way to think of loops, because this is what they have been taught.

But it can be different.

In C#, the code can look like C, or it can look like this:

foreach (int i in Enumerable.Range(1, 10))
{
// do stuff
}

This style has advantages. The variable is not calculated each time through the loop, but instead holds a member of a set of values (that set just happens to be calculated for the loop). This expression of a loop shifts the mental model from "initialize a variable, compare, do the loop, and oh yes increment the value" to "use the next value in the sequence". I find the latter much easier to understand; it is a lighter cognitive load and lets me think more about the rest of the problem.

Yet the C# implementation is a bit clumsy. In other languages, you can easily start and end with any limits:

DO 200 I=10,20
10 FOR I = 10 TO 20
for (int i = 10; i < 20; i++)

But in C#, the construct is:

foreach (int i in Enumerable.Range(10, 20 - 10 + 1))

The "20 - 10 + 1" bit is necessary because the arguments to C#'s Enumerable.Range specify the starting number and a count, not the ending number. Oh, I could write:

foreach (int i in Enumerable.Range(10, 11))

but when I scan that line I think of the set (10, 11) when it really is (10..20).

So I must award partial marks to C#. It tries to do the right thing by the programmer, but the implementation is wordy and clunky.
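
One way to get the readability back is to hide the arithmetic in a small helper of your own. This is a sketch of a hypothetical extension method, not anything supplied by the .NET framework:

using System.Collections.Generic;
using System.Linq;

static class RangeHelper
{
    // An inclusive range: 10.To(20) yields 10, 11, ... 20,
    // hiding the (start, count) arithmetic of Enumerable.Range.
    public static IEnumerable<int> To(this int start, int end)
    {
        return Enumerable.Range(start, end - start + 1);
    }
}

With that helper in scope, the loop can be written as foreach (int i in 10.To(20)), which reads much closer to the FORTRAN and BASIC forms.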

I much prefer Ruby's concise syntax:

(1..10).each do |i|
# stuff
end

Tuesday, January 11, 2011

Are you mainstream?

If you're not a technology leader (and if you are, you know it), then you probably want to be mainstream. Being mainstream means that your tools and techniques are used and known by a large number of people, and you will have an easy time of finding talent for your shop. Being out of the mainstream means that your tools and techniques are used by a small number of people (or possibly no people) and you will have a difficult time finding people to work in your shop.

But how do you know if you are mainstream? (Again, if you are a technology leader then you are not mainstream, but you set the technical direction and people want to come work with you.)

To answer this question, you need to know two pieces of information: where you are, and where the mainstream is. Easy to say, hard to measure. There is no simple test, no easy evaluation, no "Cosmo quiz" for mainstream IT shops. But there are a few things you can do.

For programming languages, consult the Tiobe Index. It lists the popularity of programming languages for the past two years. You can look at the trends and also see where your language of choice lies.

For other techniques, read magazines and talk with recruiters. Printed magazines are hard to come by these days, and many are little more than advertising for a particular vendor. Recruiters are not without biases, but the good ones will give you an honest assessment of the market (and also try to sell you people).

You can also ask your staff to draft recruitment ads, listing the talents needed for the team. Be careful with this one, as it is easy for your team to think that you are considering outsourcing or replacing them.

An honest evaluation requires looking at your shop from an outsider's perspective, and looking at the world from a perspective outside your shop. Only then can you decide how close (or far) the mainstream is.

Sunday, January 9, 2011

Swimming in the mainstream

Are you mainstream? That is, is your project team working with mainstream technologies and techniques? And why should you care?

Let's address the last question first: do you care about being mainstream?

The classic division of technology adoption uses three groups: early adopters, mainstream, and laggards. Some divisions use more categories, but the approach assumes a linear progression. Pick a technology, line up the users (and future users), and everyone falls on the line somewhere. Leaders are in the front, laggards in the rear.

I don't quite buy the linearity of technology adoption. I think it can be a bit more complex. In addition to the classifications of "leader", "mainstream", and "laggard", I think there are two others: "non-adopter" and "oddball".

The non-adopters are obvious: they don't use the technology, and probably never will. In the language wars, some shops adopted Java and some adopted C#/.NET. Some shops adopted both. (And a few adopted neither. They are clearly non-adopters.) The pure Java shops that never use C# are, for the linear line of C#, non-adopters. The same holds true for C# shops that never use Java. Or the Java/C# hybrid shop that never adopts COBOL.

The other category is 'oddball'. These are the shops that use a rare technology, or more often a rare technique. They may have a custom-made defect tracking system, or unusual steps in their build/test/deploy process.

You may or may not care about being mainstream. If you are a leader, a company that defines new technology, you don't want to be mainstream. You want to be out in front. Your people will be constantly learning the new technologies.

If you're not a leader, then you probably want to be mainstream. Being mainstream means that you can find new talent fairly easily (and quickly, and cheaply). Being mainstream means that you do things the way other companies do things, and your problems are the same as other companies'; discussions of those problems will be on supplier web sites.

Being a laggard means that in some ways, things are easier. You're following the rest of the industry, so all of the problems you encounter have probably been solved. Solutions are on the supplier web sites. You may, however, find it hard to find talent. The bright folks will have moved on to newer technologies, and the folks who admit to knowing the technology will be the older, senior (more expensive) people.

The position of oddball is like a laggard, but worse. You have used some unusual tech, or developed your own tech, or created your own processes, and no one outside of your organization understands them. Finding talent means finding people who can adapt to your processes. (These folks are really expensive.)

So you care about being mainstream. It costs you money to be otherwise. We now have the question: are you mainstream, and how can you tell?

But that will have to wait for another post.

Saturday, January 8, 2011

Keyboards and buggy whips

Various folks have railed against Apple for the UI of some recent programs on Mac OSX. These changes violate the conventions of Mac OSX and bring the programs in line with the iPhone/iPod/iPad UI. While the changes are irritating, I think that they make sense for Apple.

Think about it: the iMac is nothing more than a large, thick, non-portable, touchscreen-less iPad -- equipped with a keyboard and a DVD drive. Why should it be denied the ease of the iOS interface?

I think that Apple's vision is to bring the goodness of the iPad to larger computers. In this vision, the iMac mutates into an iOS device, better suited to the needs of consumers than the old Mac OSX version. The new iMac will sport a large screen for watching movies and playing games. It won't be portable, but then it doesn't need to be. (For portable computing, Apple has the iPad and iPod.)

Just about everyone wins in this arrangement. Consumers get computers that are easy to use -- even easier than OSX iMac computers and certainly easier than Windows or Linux PCs. Apple gets revenue. The only ones left out are creators (that is, programmers, authors, and composers of creative works).

The consumer world of iOS, with its mouseless, keyboardless interaction, works for the consumption of computing services. And it works very well.

But for creatives, those people building the content and programs that make the magic, more is needed than screen swipes and taps. The virtual keyboard is not suitable for high-volume input. Creatives will insist on old-style keyboards -- for a while.

The short term will see a cry for the "classic" computer interface of keyboards and mice. The long term has a different picture. I see a new form of programming and high-volume data entry, a form that uses no keyboard. It might be voice recognition, it might be iOS swipes and taps, or it might be something else. I expect Apple to introduce this new technology and these new techniques. (And it may take them a few tries.)

Once the technology is established and proven to be effective, I expect Microsoft to jump on board and implement similar tech for Windows, and then the open source folks to implement equivalent tech for Linux. Both implementations will make modest improvements.

And when the dust settles, keyboards and mice will become things of the past, suitable only for museums and the dark corners of a hobbyist's attic.

Wednesday, January 5, 2011

By any other name

Names are important. Specifically, names of variables.

Names of variables convey meaning to the people who read the code.

The variable name 'i' ('I' in FORTRAN) is special. It carries no meaning. (Similar variable names are 'j', 'k', 'i2', 'ii', and other single-letter names.) For some loops, a meaningless name is appropriate, because the loop mechanism is less important than the body of the loop.

Consider the following code:

for (int i = 0; i < sizeof(values); i++)
{
sum += values[i];
}

In this code, the important bits are the array 'values' and the scalar 'sum'. The objective is to calculate the sum of the values in the array. The looping mechanism is not important.

The following code is equivalent:

for (int index = 0; index < sizeof(values); index++)
{
sum += values[index];
}

I find this code a little harder to read, since the name 'index' carries some context. Yet the name 'index' is also fairly weak. Let's look at another example:

for (int loop_index = 0; loop_index < sizeof(values); loop_index++)
{
sum += values[loop_index];
}

In this sample, the name 'loop_index' draws our attention. We think about the loop, and therefore spend less effort on the action of the loop.

Sometimes variables change their meaning.

int i = 0;
while (i < sizeof(values) && values[i]->some_function())
{
i++;
}
if (i < sizeof(values))
{
values[i]->other_function();
}

In the 'while' loop, the variable 'i' is a dynamic index into the array, moving from one item to another. In the later 'if' statement, the variable is still an index, but now it points to an array member that holds some special property that deserves our interest. Perhaps not so important here. But consider the following code:

int i = 0;
while (i < sizeof(values) && values[i]->some_function())
{
i++;
}
// lots of code
if (i < sizeof(values))
{
values[i]->other_function();
}

The 'lots of code' block separates the initialization of 'i' and the use of 'i'. If you're not paying attention, you may miss the use of 'i'. Or you may forget the significance of 'i'. This case calls for a more descriptive name, one that conveys the reason for interest.

In the early days of programming, individual variables were tied to memory, so more variables meant more memory, and the limitations of the day pushed us to minimize variables. Today, we can afford multiple variables. I say, use multiple variables, assigning the value from one variable to another, and gain meaning from appropriate names.
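
To make that concrete, here is the earlier example recast in C# (the 'values' array and its methods are the same hypothetical ones as above), with the interesting value copied into a variable whose name explains why we care about it:

int i = 0;
while (i < values.Length && values[i].SomeFunction())
{
    i++;
}
int firstFailingIndex = i;   // the name carries the meaning forward
// lots of code
if (firstFailingIndex < values.Length)
{
    values[firstFailingIndex].OtherFunction();
}

The extra variable costs essentially nothing, and a reader who reaches the 'if' statement after the "lots of code" block knows immediately which index it refers to.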

Sunday, January 2, 2011

Predictions for 2011

Happy New Year!

The turning of the year provides a time to pause, look back, and look ahead. Looking ahead can be the most fun, since we can make predictions.

Here are my predictions for computing in the coming year:

Tech that is no longer new

Virtualization will drop from the radar. Virtualization for servers is no longer exciting -- some might say that it is "old hat". Virtualization for the desktop is "not quite fully baked". I expect modest interest in virtualization, driven by promises of cost reductions, but no major announcements.

Social networks in the enterprise are also "not quite fully baked", but here the problem is with the enterprise and its ability to use them. Enterprises are built on command-and-control models and don't expect individuals to comment on each other's projects. When enterprises shift to results-oriented models, enterprise social networks will take off. But this is a change in psychology, not technology.

Multiple Ecosystems

The technologies associated with programming are a large set, and not a single bunch. Programmers seem to enjoy "language wars" (C# or C++? Java or Visual Basic? Python or Ruby?) and the heated debates continue in 2011. But beyond languages, the core technologies are bunched: Microsoft has its "stack" of Windows, .NET, C#, and SQL Server; Oracle with its purchase of Sun has Solaris, JVM, Java, Oracle DB, and MySQL; and so forth.

We'll continue to see the fracturing of the development world. The big camps are Microsoft, Oracle, Apple, Google, and open source. Each has its own technology set, the tools cross camps poorly, and I expect the different ecosystems to continue to diverge. Since each technology set is too large for a single person to learn, individuals must pick a camp as their primary skill area and forgo the other camps. Look for experts in one environment (or possibly two) but not all.

Companies will find that they are consolidating their systems into one of the big five ecosystems. They will build new systems in the new technology, and convert their old systems into the new technology. Microsoft shops will convert Java systems to C#, Oracle shops will convert Visual Basic apps to Java, and everyone will want to convert their old C++ systems to something else. (Interestingly, C++ was the one technology that spanned all camps, and it is being abandoned or at least deprecated by employers and practitioners.)

Microsoft will keep .NET and C#, and continue to "netify" its offerings. Learning from Apple, it will shift away from web applications in a browser to internet applications that run locally and connect through the network. Look for more "native apps" and fewer "web apps".

Apple will continue to thrive in the consumer space, with new versions of iPads, iPods, and iPhones. The big hole in their tech stack is the development platform, which compiles to the bare processor and not to a virtual machine. Microsoft uses .NET, Oracle uses JVM, and the open source favorites Perl, Python, and Ruby also use interpreters and virtual machines.

Virtual processors provide three advantages: 1) superior development tools, 2) improved security, and 3) independence from physical processors. Apple needs this independence; look for a new development platform for all of their devices (iPhone, iPad, iPod, and Mac). This new platform will require a relearning of development techniques, and may possibly use a new programming language.

Google has always lived in the net and does not need to "netify" its offerings. Unlike Microsoft, it will stay in the web world (that is, inside a browser). I expect modest improvements to things such as Google Search, Google Docs, and Google Chrome, and major improvements to Google cloud services such as the Google App Engine.

The open source "rebel alliance" will continue to be a gadfly with lots of followers but little commercial clout. Linux will be useful in the data center but it will not take over the desktop. Perl will continue its slow decline; Python, Ruby, and PHP will gain. Open source products such as Open Office may get a boost from the current difficult economic times.

Staffing

Companies will have a difficult time finding the right people. They will find lots of the "not quite right" people. When they find "the right person", that right person may want to work from home. Companies will have four options:

1) Adjust policies and allow employees to work from home or alternate locations. This will require revisions to management practices, since one must evaluate people on delivered work and not on appearance and arrival time.

2) Keep the traditional management policies and practices and accept "not quite right" folks for the job.

3) Expand the role of off-site contractors. Companies that use off-site contractors but insist that employees show up to the office every day and attend Weekly Status Meetings of Doom will be in a difficult situation: How to justify the "work in the office every day" policy when contractors are not in the office?

4) Defer hiring.

How companies deal with staffing in an up market, after so many years of down markets, will be interesting and possibly entertaining.

New tech

Cloud computing will receive modest interest from established shops, but it will take a while longer for those shops to figure out how to use it. More interest will come from startups. The benefits of cloud computing, much like the PC revolution of the early 1980s, will be in new applications, not in improving existing applications.

We will see an interest in functional programming languages. I dislike the term "functional" since all programming languages let you define functions and are functional in the sense that they perform, but the language geeks have their reason for the term and we're stuck with it. The bottom line: Languages such as Haskell, Erlang, and even Microsoft's F# will tick up on the radar, modestly. The lead geeks are looking into these languages, just as they looked into C++ in the mid 1980s.

The cloud suppliers will be interested in functional programming. Functional languages are a better fit in the cloud, where processes can be shuffled from one processor to another. C# and Java can be used for cloud applications, but such efforts require a lot more discipline and careful planning.

Just as C++ was a big jump up from C and Pascal, functional languages are a big jump up from C++, Java, and C#. Programming in a functional language (Haskell, Erlang, or F#) requires a lot of up-front analysis and thought.

The transition from C to C++ was driven by Windows and its event-driven model. The transition from object-oriented to functional programming will be driven by the cloud and its new model. The shift to functional languages will take time, possibly decades. Complicating the transition will be the poor state of object-oriented programming. Functional programming assumes good-to-excellent knowledge of object-oriented programming, and a lot of shops use object-oriented languages but not rigorously. These shops will have to improve their skills in object-oriented programming before attempting the move to functional programming.

These are my predictions for the coming year. I've left out quite a few technologies, including:

Ruby on Rails
Silverlight
Windows Phone 7
NoSQL databases
Perl 6
Microsoft's platforms (WPF, WWF, WCF, and whatever they have introduced since I started writing this post)
Google's "Go" language
Android phones
Salesforce.com's cloud platform

There is a lot of technology out there! Too much to cover in a single post. I've picked those items that I think will be the big shakers. Let's see how well I do! We can check in twelve months.