Saturday, April 30, 2011

Speaking in tongues

In 1971, Gerald Weinberg wrote "The Psychology of Computer Programming", in which he observed that programs are not only written but also read (by the author and by other programmers). He also observed that programming languages are written but not spoken. No one ever "says something in Fortran".

That observation was true at the time, yet I think we now have programming languages that can -- almost -- be considered spoken languages.

The early languages FORTRAN and COBOL were meant to be written and read (silently). FORTRAN had the terse syntax that persists in most languages today; COBOL had a wordy syntax with the explicit goal of readability. But no one intended for people to read COBOL out loud. Its wordiness was for the benefit of programming managers and interchangeable programmers.

FORTRAN offered a counted loop with the code:

DO 10 I = 1, 10

BASIC improved upon that syntax with:

FOR I = 1 TO 10

Notice that in BASIC, the label denoting the end of the iterating block has been removed. (BASIC uses a separate statement, NEXT, to mark the end of the block.)

C and C++ use the "for" loop:

for (int i = 0; i < 10; i++)

But all of these are in the realm of iterating a known number of times. BASIC, C, and C++ also have "while" loops, to iterate while a condition is true. But "for" loops and "while" loops still require a fair amount of thinking about the looping. A different notion of iterating is "for each of these things", and multiple languages have such constructs.

C++/STL introduced the concept of iterators to the language, and allowed for more concise (and consistent) notation for iterating over a set of objects. The STL provided more than a set of convenient classes -- it provided a set of common idioms for frequently-used constructs. These common idioms allow a person to translate the code

// objects is assumed to be a std::vector<Object>
for (std::vector<Object>::const_iterator iter(objects.begin()); iter != objects.end(); ++iter)
{
    iter->do_something();
}

into "for each member of the set of objects, do something". Yet this approach of common idioms still requires familiarity with the language and the work of translating the C++ idiom into English.
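
For what it's worth, the STL's own for_each algorithm gets a little closer to the spoken form, though it still needs a separately named function (or a functor). A small sketch, with Object and do_something() as stand-ins for whatever work the loop performs:

#include <algorithm>
#include <iostream>
#include <vector>

// A stand-in class; any type with a suitable member function would do.
class Object
{
public:
    void do_something() const { std::cout << "doing something\n"; }
};

// The operation applied to each element.
void do_it(const Object& obj)
{
    obj.do_something();
}

int main()
{
    std::vector<Object> objects(3);

    // Reads roughly as "for each object in objects, do it".
    std::for_each(objects.begin(), objects.end(), do_it);

    return 0;
}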

Microsoft has not ignored this trend. Visual Basic has "For Each" and C# has the "foreach" keyword for iteration, and a recent update to the .NET Framework introduced the Enumerable.Range(start, count) method for generating a sequence to iterate over. The latter is a bit clumsy, but serviceable.

The Ruby language uses the method "each()" to iterate over a collection; a line like objects.each { |obj| obj.do_something } reads almost directly as "for each object, do something". This is far more readable than the C++/STL version: it requires no translation of idiom.

These constructs allow us to read the program -- out loud -- more easily and with less translation.

Speaking a program -- perhaps the proper verb is "orating" -- is important. Our brain's speech centers process information differently than the visual centers do. When we orate, we think differently than when we read to ourselves. This different thought process helps us identify errors in our logic. (I suspect that this effect is at work when we explain a failing program to another programmer and, during the explanation, come to understand the failure ourselves. I think it is also at work in the pair-programming practice of Extreme Programming and Agile Development, helping programmers identify errors early.)

I don't expect programming languages to be used for grand poetry (or even bad poetry) any time soon. But the concept of orating a program may be within reach.

Wednesday, April 27, 2011

Drucker's Moron

In the 1960s, Peter Drucker wrote a series of essays on management and technology. In one essay, he made an observation that holds today:
We are beginning to realize that the computer makes no decisions; it only carries out orders. It's a total moron, and therein lies its strength. It forces us to think, to set the criteria. The stupider the tool, the brighter the master has to be... (the emphasis is mine)
The problems we have with computers, even to this day, have this concept at their core.

Our programming languages have improved in their abilities to structure operations, our data representations have become self-describing, and our operating systems have gained fancy GUI interfaces. Yet the fundamental problem is this: the computer is only as good as its instructions, and the quality of those instructions depends on the people writing the programs, their understanding of the technology, and their understanding of the business needs. People who don't understand the technology will produce programs that operate inconsistently and poorly. People who don't understand the business need (whether it be a general ledger, a database, a word processor, or a web search engine) will produce a program that at best solves the wrong need.

The quality of our people defines the quality of the solution. Good programmers (and GUI designers, and database architects) will be able to create good solutions. A good solution is not guaranteed, only made possible. On the other hand, poor programmers (or GUI designers, or database architects) will yield a poor program -- there is no possibility of a good result.

Sunday, April 24, 2011

Single platform is so 90s

Ah, for those days of yesteryear when things were simple. The software market was easy: build your application to run under Windows and ignore everyone else.

Life is complicated these days. There are three major PC platforms: Windows, OSX, and Linux. Targeting an application to a single platform locks you out of the others. And while Windows enjoys a large share of desktops, one should not ignore a platform simply because its market share is small. All it takes to ruin your single-platform day is for a large potential customer to ask "Does it run on X?" where X is not your platform. Telling a customer "no" is not a winning strategy.

But life is even more complicated than three desktop platforms. Apple's success with iPhones and iPads, RIM's success with BlackBerry devices, and the plethora of Android phones (and, soon, tablets) have created still more platforms for individuals. These new devices have not worked their way into the office (except for the BlackBerry), but they will soon be there. Androids and iPhones will arrive shortly after a CEO wants to read e-mail and declines the IT-proffered BlackBerry.

The single-platform strategy is no longer viable. To be viewed as a serious contender, providers will have to support multiple platforms. They will have to support the desktop (Windows, OSX, and Linux), the web (Internet Explorer, Firefox, Safari, and Chrome), smartphones (iPhone, Android, and BlackBerry), and tablets (iPad and Android). Their services will have to make sense on these different platforms; stuffing your app into a browser window and claiming that it works on all platforms won't do it. Your customers won't buy it, and your competitors that do support the platforms will win the business.

Wednesday, April 20, 2011

Web presentations for everyone

And now for a little bit of news about nifty new software.

The software is "Big Blue Button", a web conferencing package.

What is the Big Blue Button?

Big Blue Button (http://code.google.com/p/bigbluebutton/) is a collection of software that lets one host presentations on the web, much like WebEx or GoToMeeting. The presenter can provide audio and video, and even share his desktop. More than that, attendees can ask questions (either in text message or audio) and can chat through IM to the presenter or each other.

Neat feature: real-time translation of instant messages. Big Blue Button uses the Google Translation API and lets people chat across language boundaries. If I set my language to "English" and you set yours to "Spanish", I can type a message in English and you see the translated version. (You can also see my original English version.)

The one feature that is needed: recording. Other products in this arena let one record the meeting for playback later. The folks at Linux-ETC recognize the need and are working on it.

Is Big Blue Button ready for everyone?

The presentation I attended had a few problems, one significant enough to crash the software on the presenter's PC. The software is perhaps not quite ready for prime time. Large, respectable, and stodgy corporations will probably choose the safer option of WebEx. But smaller teams (and start-ups) may want to look at Big Blue Button.

How will Big Blue Button change things?

Big Blue Button reduces the cost of on-line presentations, and provides another method for coordinating remote teams. It makes it easier to share knowledge, and the startup investment is small. Therefore, Big Blue Button will make it easier for companies to out-source projects. If you are running your off-shore projects with e-mail, voice-mail, and audio conference calls, you may want to look at Big Blue Button's capabilities.

Developers working at home may want to neaten their office-in-the-home. A disorganized office sounds just as good as a well-organized office, but video changes the game. People get professional head shots for LinkedIn and Facebook to maintain their brand. Two-way web audio/visual connections will create the need for professional-level studios (or something that looks like a professional studio). One will need a proper background, a good microphone and webcam, lighting that is flattering, and a way to block external noise like street traffic. I expect that we will see a small industry of "video consultants" to set up office studios and home studios.

Sunday, April 17, 2011

The 3% solution

Banks, at least in the old days of the 1980s, did not want all of their customers to pay their credit card bills. They wanted a default rate of something greater than zero. Banks wanted about three percent of their credit card customers to default -- to make no payments.

Their logic was as follows: The bank could enact strict credit checks and lend money to only those customers who would pay them back. But doing so would reduce the number of customers with credit cards, and thereby reduce the profits from credit cards. If banks loosened the rules and allowed more customers to get credit cards, some would default -- but only a small number of customers. Many of the "new" customers would pay, even though their credit rating was not so great. More customers would repay loans than would default, and the bank would earn more profits overall. Allowing for some failure increased the total profits.
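
To make the arithmetic concrete (with invented numbers): suppose a card brings the bank $200 a year in interest and fees, and a default costs $1,000. With strict credit checks the bank issues 1,000 cards and suffers no defaults, earning $200,000. With looser checks it issues 1,500 cards and three percent (45 customers) default: the 1,455 paying customers bring in $291,000, the defaults cost $45,000, and the bank nets $246,000 -- more than under the strict policy, even after absorbing the losses.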

In software development, we strive to reduce the number of defects in software. In the management of software development projects, we strive to control the cost, the delivery schedule, the features delivered, and the quality. These are all good objectives, but sometimes we can focus too much on the process and lose sight of the end goal.

Organizations enact policies to control projects. Some organizations have informal policies; others have stringent policies. Some projects (such as control systems for atomic energy plants) need very specific policies. But not every project requires such minute specification and control.

Yet some organizations push for (and carry out) very specific procedures, in an attempt to eliminate all variations from the project plan. This is equivalent to a bank limiting credit cards and loans to only those customers with the best credit scores. The process can work, but it reduces your effectiveness. For the bank, it shuts out a large number of customers (some of whom will default). For the software development effort, it limits the project to those features that are guaranteed to succeed. In both arenas, the organization achieves less than it could.

As I see it, the best strategy is to allow for failure -- a small number of failures -- with a process to recover and correct after a failure. Attempting to implement twenty features and succeeding with sixteen (failing with four) is better than attempting (and succeeding with) a more conservative feature set of ten.

If your goal is to avoid failure, then the best strategy is to change nothing. If your goal is to advance, then you must accept some risk of failure. The important thing is not to avoid failure but to recover from failure.

Saturday, April 16, 2011

Web 3.0 will make celebrities of us all

The wonderful thing about the web (and the terrible thing about the web) is that it changes. The original web was simple, with static documents that linked to each other.

The next version of the web gave us transactions. It was a big step up: We could buy stuff! Books, CDs, and eventually batteries, household items, groceries, and even cars. But this version of the web did not let us share with others, except for e-mail.

With social media came "Web 2.0" and the ability to share information with others. From reviews of books and music to on-line journals and blogs, from friend sites (Friendster, MySpace, and Facebook) to micro-blogging sites (Twitter and Identi.ca), we built circles of friends and shared our experiences and opinions. But the sharing of information was not very granular. Reviews were available for anyone to see. Journals and blogs were visible to the world, unless we locked things to friends and family.

Twitter changed the game by letting people follow us (and unfollow us) without our say-so, but it stayed within the Web 2.0 realm. We were still sharing data with everyone.

I think that "Web 3.0" will be different. It won't happen in the browser, but instead will happen on cell phones. The next generation of apps to connect us will be cell phone apps, not web browser apps. (Although there may be a web version of the cell phone app.)

And Web 3.0 will be defined not only by the primary platform (a cell phone and not the web browser) but also by the degree of control we have on sharing information. With Web 3.0, we will have a big say in who gets to see information, and there will be different levels of openness. For example, there is already a grocery list app that spouses can share. The list is shared between spouses (or significant others) and either person can add or remove items from the list -- but the list is visible to only the parties involved. This makes sense -- who really needs to see my grocery list? Yet it is useful to share it with my spouse.

Web 3.0 will have multiple sets of "friends", from spouses to family to close friends to distant friends and then to business associates. We will be aware of our different "circles of friends" and apps will be aware of them too. When we make information available, we will select the desired level of sharing.

With multiple circles of family and friends will come various politics. Will people be offended when they are placed in an outer circle and not in an inner circle? Will people lobby for more intimacy? Will they cry out when they are "demoted" to an outer circle? We will soon learn the answers to these questions.

The magazine "People" needs the movie industry to create material to fill its pages. The movie industry needs "People" as a venue for news, pseudo-news, and gossip about the actors in the industry. It's a tidy little symbiosis. With Web 3.0, we will each have our own outlet for news. Perhaps not "People" magazine (or a magazine of any name) but web pages, public blogs, and friend-locked blogs. We will each be a celebrity, to some degree, with our own publicity and innermost group of friends -- and also the drama that comes with celebrityhood.

Sunday, April 10, 2011

If a C++ standard falls in a forest

The next C++ standard (C++0x) is out, or just about out. Should we rejoice? Should we wait in anticipation for the release of new compilers?

The new standard has a lot of good things in it. Things that I want... or would want, if I were developing in C++. (And to some extent, I am still developing in C++, depending on the client.)
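
To name a few: type inference with "auto", a range-based "for" loop, lambda expressions, and brace initialization all made it into the standard. A small sketch of what that code looks like, assuming a compiler that implements these features:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values = { 3, 1, 4, 1, 5, 9 };   // brace initialization

    // Type inference: the compiler deduces the iterator type.
    auto largest = std::max_element(values.begin(), values.end());
    std::cout << "largest: " << *largest << "\n";

    // Range-based for loop: reads as "for each value in values".
    for (int value : values)
        std::cout << value << " ";
    std::cout << "\n";

    // Lambda expression: an unnamed function defined at the point of use.
    std::for_each(values.begin(), values.end(),
        [](int value) { std::cout << value * value << " "; });
    std::cout << "\n";

    return 0;
}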

But I'm not sure that the future is so bright for C++ developers. The standard has been all but settled and published. The next steps would be for compiler implementers to release new versions of their products and then for developers to incorporate the new features into their code. That's what happened with the last release of the C++ standard.

The C++ environment has changed. After the previous standards update, the major C++ compiler vendors were Microsoft, Borland, IBM, and Intel. Microsoft and Borland were competing fiercely for the Windows developer.

Today, in the Windows market, the only compiler vendor of consequence is Microsoft. Borland is gone, IBM has moved to Java, and Intel was never a big player. I see no one who can put competitive pressure on Microsoft.

And I doubt that Microsoft has much to gain from a new version of its C++ compiler. They are pushing their customers to C#; investing in an updated C++ compiler would send a mixed message to developers. I think Microsoft may leave the new C++ standard unimplemented.

The other big player in the C++ compiler arena is GNU, with the 'gcc' collection of compilers. But I don't see GNU as a major competitor to Microsoft; certainly not big enough to push Microsoft into implementing the new standard.

There is still a market for C++ -- just smaller than it used to be. Two decades ago, every serious piece of software for Windows was developed in C++. That market has been fragmented into separate camps for C#, Java, Objective-C, Python, and PHP. C++ is useful, but not universal.

We still need C++. While business apps and smartphone apps can be written in the new languages, the new languages themselves are, I suspect, written in C++. The only language compiler that may be written in something other than C++ might be Microsoft's C# compiler (and of course the Python interpreter for the PyPy project).

But C++ may become more of a specialized language, a tool used by only a small subset of the development community. As such, the per-unit cost of C++ compilers may increase, further encouraging C++ users to move to other languages.

Wednesday, April 6, 2011

Faster than a hand-held calculator?

A friend of mine recently gave me an old HP 35 calculator. For a geek like me, a true HP calculator is a kingly gift. I cleaned off some grime and replaced the internal batteries (a bit tricky, but possible) and now have a working calculator.

While using it, I was impressed with the speed of its response. Even for the sophisticated operations, the answers are displayed immediately. (As in "before my finger is off the button".)

Thinking about the experience, I was impressed with this speed because I have been using various computers, and all have been sluggish. The computers are a varied lot and include a collection of operating systems. The hardware ranges from ten years old to just purchased, and the operating systems include Windows, OSX, and Linux.

All of the PCs, despite their youth or modern operating systems, are slow to respond. Launching a program on any of them (even Windows Explorer) sees a delay. (The Apple MacBook is particularly gratuitous with special effects of windows swooping up and down, but Windows 7 is not without its swoopiness.)

Yet the calculator, designed and built in the early 1970s, is faster than the computers of today. The HP 35 has a custom Mostek chip running at no better than 1MHz and probably slower than that, with perhaps 32 bytes of read/write memory and less than 1K of ROM.

Yet the calculator is much faster than the PC.

Comparing a hand-held calculator against a personal computer is awkward at best. The calculator is a specific-use device; the personal computer is a general-use device, capable of much more than arithmetic calculations. And if I run the calculator program on the PC, its response is just as fast as the calculator. (It's the launching of the program that takes time.)

I recognize that comparing the launching of programs against the operation of a calculator is unfair, or even meaningless -- much like comparing the taste of orange juice against the handling of a mini-van.

But there is a psychological effect here. I was impressed with the HP 35. I don't remember being impressed by a PC program in a long time. (The last PC software that impressed me was the setup program for the Apple Airport, and that was at least four years ago.)

Is it possible to create impressive software? Such software would have to be easy to use, elegant in its interface, and fast. (And correct.)

When building software, we tend to focus on the "correct" part of the requirements, and leave the "wow" factors out. The approach is utilitarian, and possibly efficient, but little more than that.


Sunday, April 3, 2011

Y2K is not dead

The problem with Y2K was not that programs were going to calculate dates incorrectly. The miscalculation of a date is a program defect, a "bug", and all sorts of programs have all sorts of defects.

The problem with Y2K was that a large number of programs would have a defect (the same defect, or similar defects, as it happened) at the same time. It was the large number of programs affected at the same time that made Y2K the crisis that it was. (That, and our willingness to ignore the problem -- we had known about it since the 1960s -- until we had to address it.)

There are other problems of a similar nature to Y2K.

The year 3000: The most obvious is the "Y3K" problem. We solved the year 2000 problem with a series of patches and temporary fixes; only a few programs were fixed with a long-term solution. After January 1, 2000, we all looked back, breathed a sigh of relief, and told ourselves that we would not let that problem happen again. Yet as early as March of that year we dropped the "20" from our years and went back to two-digit years in our documents and data. These documents, programs, and datasets will cause problems in the year 3000.
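
For anyone who has forgotten what the underlying defect looks like, here is a minimal sketch (the field names and values are invented): with two-digit years, a date after the century boundary compares as earlier than a date before it, and durations computed across the boundary come out negative.

#include <iostream>

int main()
{
    // Two-digit years, as stored in many records before (and after) Y2K.
    int year_issued = 99;   // meant to be 1999
    int year_expires = 4;   // meant to be 2004

    // The comparison comes out backwards:
    if (year_expires < year_issued)
        std::cout << "the card appears to expire before it was issued\n";

    // The duration comes out negative:
    std::cout << "apparent validity: " << (year_expires - year_issued) << " years\n";

    return 0;
}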

For those of you who are not worried about "Y3K" problems because the date is so far ahead, let me remind you that that thinking is exactly what got us into the Y2K mess.

The year 10000: A more distant and more subtle problem. The programs that were fixed "properly" for Y2K will work in 3000, and in 4000, but may fail in 10000.

The astute reader has realized that the "Y10K" problem is the same as Y2K, but at a larger time scale. The very astute reader has surmised there is no permanent solution for storage of dates, at least not with our current calendar and notations.

The Microsoft COM problem: Not related to a specific date (at least not yet), this problem is similar to Y2K in that it will affect a large number of systems at the same time, and it is a known problem.

At some point, Microsoft will stop supporting COM. I don't know when that time is, and I don't know that Microsoft has made such an announcement, but I am fairly certain that they will stop supporting COM -- if for no other reason than to get rid of the Windows registry, which COM uses.

COM was a solution to the problem of multiple DLLs, and it worked (clumsily) for a long time. It still works, but the design of .NET makes COM unnecessary. A "modern" Windows application will use .NET, not COM. Well, it might use COM and Microsoft's "Interop" techniques to talk to COM components, but only because those components have not been ported to .NET.
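
For readers who have not touched COM in a while, here is a minimal sketch of what that dependence looks like from C++ (the ShellLink component is used only as a familiar example): the component is located through the Windows registry by its class ID, which is exactly the machinery that would disappear along with COM.

// Link with ole32.lib and uuid.lib.
#include <windows.h>
#include <shlobj.h>   // CLSID_ShellLink

int main()
{
    // Initialize COM for this thread.
    CoInitialize(NULL);

    // Ask COM to create a component by its class ID; the class ID is looked
    // up in the registry to find the DLL that implements the component.
    IUnknown* unknown = NULL;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IUnknown, (void**)&unknown);

    if (SUCCEEDED(hr))
    {
        // ... query for the interfaces the application needs and do the work ...
        unknown->Release();
    }

    CoUninitialize();
    return 0;
}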

Yet a lot of systems use COM. Adobe products use COM to connect to objects in their scripting language. An unknown number of in-house systems use COM to connect components into systems. Even Microsoft uses COM for some of its products. COM is not dead -- although one can make the argument that it should be.

I'm sure that Microsoft is working to migrate all of its products and applications from COM into .NET. Microsoft has a large product line and a large code base, and such a migration may take years and possibly decades.

But know this: at some point, COM will stop. (It will cease to be. It will depart. It will meet its maker. It will be a dead parrot.) When COM stops, all of the systems that rely on COM will stop, too. Microsoft will have converted their applications to use .NET. Will your systems be ready?

My purpose is not to panic people. I believe that COM will be supported for a long time. You have time to identify your systems, plan for their conversion, and implement the changes long before Microsoft drops support for COM. My purpose is to alert you to this change and get you to make plans.

Eventually, you will have to change your applications. You can change them on your schedule, or on Microsoft's schedule. Which do you prefer?

Saturday, April 2, 2011

The Opposite of Batch

We of the PC revolution like to think that the opposite of the big, hulking mainframe is the nimble and friendly personal computer of our trade. Mainframes are expensive, hard to program, hard to use, and come with bureaucracies and rules. Personal computers, in contrast, are affordable, easy to program, easy to use, and one is free to do what one will with them -- the only rules are the ones we impose upon ourselves.

Yet mainframes and personal computers have one element in common: Both have fixed resources. The processor, the memory, the disk for storage... all are of fixed capacity. (Changeable only by taking the entire system off-line and adding or removing components.)

Mainframes are used by many people, and the resources must be allocated to the different users. The batch systems used by large mainframes are a means of allocating resources efficiently (at least from the perspective of the hardware). Users must wait their turn, submit their request, and wait for the results. The notion of batch is necessary because there are more requests than computing resources.

Personal computers provide interactive programs by providing more computing resources than a single person requires. The user can start jobs (programs) whenever he wants, because there is always spare capacity. The processor is fast enough, the memory is large enough, and the disk is also large enough.

But personal computers still offer fixed capacity. We rarely notice it, since we rarely bump up against the limits. (Although when we do, we often become irritated. Also, our personal systems perform poorly when they require more resources than available. Try to boot a Windows PC with a completely full hard disk.)

The true opposite of batch is not interactive, but flexible resources -- resources that can change as we need them. Such a design is provided by cloud computing. With cloud computing, we can increase or decrease the number of processors, the number of web server instances, the memory, our data store -- all of our resources -- without taking the system off-line. Our computing platform becomes elastic, expanding or contracting to meet our needs, rather than our needs adjusting to fit the fixed-size platform. Perhaps a better name for cloud computing would have been "balloon computing", since our resources can grow or shrink like a balloon.

This inversion of shape -- the system conforms to our needs, not our needs to the system -- is the revolutionary change offered by cloud computing. It will allow us to think of computing in different ways, to design new types of systems. We will have less thought of hardware constraints and more thought for problem design and business constraints. Cloud computing will free us from the drudgery of system design for hardware -- and let us pick up the drudgery of system design for business logic.

With cloud computing, IT becomes a better partner for the business. IT can enable faster business processes, more efficient supply chains, and better market predictions.