Tuesday, March 27, 2012

Programming language as amplifier, or not

Studies have shown that different programmers perform at different levels. Not all programmers are alike, not even programmers with the same job title. The difference between a really good programmer and a really poor programmer has been measured to be a factor of twenty-five!

What I have not seen is a study of programming languages and their role in these differences.

I believe some programming languages to be equalizers and other programming languages to be amplifiers. Some programming languages can make programmers better, or at least allow them to be more productive. Other programming languages limit them, bunching programmers together.


I noticed the difference in programming languages when I shifted from C to C++. The C++ language was more than a "better C" or even "C with classes". It allowed one to use more sophisticated constructs, to develop programs that were more complex. As some folks said, "with C you can shoot yourself in the foot; with C++ you now have a machine gun".


C++ is a powerful language, and good programmers can use it to good effect. Poor programmers, on the other hand, frequently end up with messy programs that are difficult to understand and maintain (often with defects, too).

C++ is an amplifying language: good programmers are better, poor programmers are worse.

But that does not hold for all languages.

FORTRAN and COBOL are equalizing languages. (That is, the early versions of these languages were equalizers.) They reduce the difference between good and poor programmers. The structure of the languages constrains both types of programmers and the code is pretty much the same, regardless of the programmer's skill. (Later versions of FORTRAN moved it closer to an amplifying language.)

Some other programming languages:

Assembly language is an amplifier. The "trick" to good assembly language programming is understanding the processor and the instruction set; a good programmer can be extremely effective, while a poor programmer has a very difficult time.

Pascal is an equalizer. It has enough "guardrails" in place to prevent a poor programmer from making a mess. Yet those same guardrails prevent a good programmer from truly excelling.

Perl is an amplifier. Python is an amplifier. Ruby is an amplifier.

Java and C# are equalizers, although they are shifting towards the amplifier end of the spectrum. Oracle changes Java and Microsoft changes C#, and the changes add features (such as lambdas) which become the machine guns for shooting yourself in the foot.

Viewed in the light of "amplifier" or "equalizer", one can assess programming languages for risk. An amplifying language can allow programmers to do wonderful things, but it also allows them to create a horrible mess. When using an amplifying language, you have to take steps to ensure the former and prevent the latter. An equalizing language, on the other hand, limits the possibility of mess (while also limiting the possibility of something wonderful).

But if you don't care about something wonderful, if you want to deliver a known quantity on a known schedule (and the quantity and schedule are reasonable), then an equalizing language is better for you. It allows you to hire not-so-great programmers and know that they will deliver something of reasonable quality.

If my reasoning holds, then we can expect small shops (especially start-ups) to use the amplifying languages. They have to, since they need above-average results. In contrast, large conservative shops will use equalizing languages. They will (most likely) be unwilling to hire top talent and will opt for mediocre (but available) personnel. They will also (most likely) be unwilling to educate and develop the staff they do hire.

Capabilities of the language are one factor among many. The C++ programming language became popular not because it was an equalizer (it's not) nor because it was an amplifier -- it became popular because it was the way to develop applications for Windows and Microsoft supplied good tools for it. That is no longer the case. The primary language for Windows applications is now C#. The primary language for iOS applications is Objective-C. The primary language for Android applications is Java. Yet programs for all of these platforms can be developed in other languages.

With today's multi-language market, expect companies to select the tool that suits their needs. Companies that need top-level performance will pick the amplifying languages. Companies that need certainty and want to avoid risk will pick the equalizing languages.

Friday, March 23, 2012

The default solution

For decades, mainframes were the default solution to computing problems. When you needed something done, you did it on a mainframe, unless you had a compelling reason for a different platform.

For decades, IBM called the shots in the computer industry. The popularity of IBM hardware gave IBM the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of personal computers (ironically helped by the IBM PC). IBM ceded control of software to Microsoft, first with DOS and later with Windows.

For decades, PCs were the default solution to computing problems. When you needed something done, you did it on a PC, unless you had a compelling reason for a different platform.

For decades, Microsoft called the shots. The popularity of Windows and Office gave Microsoft the ability to strongly influence (some might say dictate) hardware and software standards. That power diminished with the rise of hand-held computers (specifically iPods and iPhones). Microsoft ceded the market to Apple, after several failed attempts at moving Windows to hand-sized devices.

Now, smartphones and tablets are the default solution to computing problems. When you need something done, you do it on a smartphone or tablet, unless you have a compelling reason for a different platform.

The popular platforms are the default solutions, and the company with the dominant platform can set the standards and the direction of the technology. Notice that it is the popular platform that defines the default solution, not the most cost-effective or the most reliable. The default solution is defined by the market, specifically what customers are buying. It is not a democracy, but neither is it an inherited rank. A company has a leadership role because the market gives that company the role.

And the market can take away that role.

The change in the market from mainframe to PC was an expansion, not a revolution.

The events that unseated IBM were not market revolutions, in which one competitor replaced another. IBM, the mainframe manufacturer, was not ousted by another mainframe manufacturer. IBM defended itself against competitors, but failed to expand to new markets.

The PC revolution expanded the market. (It may have killed dedicated word processing systems, but overall it expanded the market.) The new market of word processing software, spreadsheets, and even primitive databases was something that IBM did not pursue with mainframes. It is possible that IBM was unable to pursue that market, as the PCs were small, inexpensive, and purchased by people who did not have a squadron of lawyers to review purchase and support contracts.

The market expanded but mainframes stayed constant, and that allowed PCs to become the default solution.

We have a similar situation with PCs and tablets.

The smartphone revolution (along with tablets) is expanding the market. The new market of location-aware apps, easy-to-install apps, and touchscreen interfaces is a market that Microsoft is only now beginning to pursue with Windows 8 and the Metro UI, and this effort is by no means guaranteed to succeed. (Many long-time supporters of Microsoft are grumbling at Windows 8.)

The market is expanding and PCs are mostly staying constant. That allows smartphones to become the default solution.

But PCs are not simply sitting still. PCs, and more specifically, PC operating systems, are adapting the ideas of the smartphone market. Microsoft's Windows 8 is the most prominent example of this effect, with its new GUI and the new Microsoft Windows App Store. Apple's "Lion" release of OS X brings it closer to smartphone operating systems. Some Linux distributions are morphing their user interfaces to something closer to smartphones and are simplifying their package managers.

In the end, I think PCs will have a limited role. Data centers have never been fond of the tower-style units, preferring rack-mounted servers and now preferring virtual PCs running on mainframes, of all things! Home users will find smartphones and tablets less expensive, easier to use, and good enough to get the job done. Corporate users are the last bastion of PCs, and even they are looking at smartphones and tablets in the "Bring Your Own Device" movement.

PCs won't die out. Some tasks are handled better on PCs than on tablets. (Just as some tasks are handled better on mainframes than on PCs, even today.) Some people will keep them because they are "tried and true" solutions; others will be unwilling to move to different platforms. Hobbyists will keep them out of nostalgia.

But they won't be the default solution.

Sunday, March 18, 2012

Windows 8 means a faster treadmill

The release of Windows 8 marks a change in Microsoft's approach to backwards compatibility. Microsoft has shifted its position from "compatible with just about everything" to "things change a lot and your old things may not work".

Windows 8 and its Metro interface re-define the programming of applications. On x86 processors, legacy applications can run in "Windows 7 mode". On ARM processors, legacy applications... cannot run. And while Windows 8 offers Windows 7 mode, Microsoft has made no promise of such a feature in future releases.

With the shift to WinRT and Metro, Microsoft has started a countdown clock for the lives of all Windows applications in existence.

In the past, Microsoft maintained compatibility for just about every application. Even vintage DOS applications would run under Windows XP (and probably still run under Windows 7). That compatibility came at no small expense, not only in development and testing costs, but also in opportunity costs. (New development was constrained by the design decisions of previous releases.)

Users, developers, and support teams are on a treadmill, with new technologies and releases arriving faster than before. The good old days of decade-long technology planning have been replaced with a range of two or three years.

People can get upset about the faster pace of the treadmill, but they have nowhere to go.

Apple has "revved" its platform a number of times, changing the processor, the operating system, the user interface, and the device form factor. The folks behind Linux are making similar changes.

If Microsoft believes that it can be more profitable in a new market, or if it believes that the current market is not profitable, then I believe that it will move to the new market. Its customers' problems with the lack of backward compatibility are not Microsoft's problem.

Interestingly, corporations long ago lobbied for shorter depreciation schedules for computing equipment. They successfully got the depreciation for equipment reduced to ... three years. Now Apple and Microsoft seem to be agreeing, indicating that equipment really is obsolete after three years. (Except that they include software in the definition of "equipment".)



I'm not sure that this faster pace is a good thing. I'm also not sure that I like it. But I do know this: it's happening. The question is not how to stop it, or how to avoid it, but how to cope with it. How do we live in a world where technology changes (dramatically) every three or maybe two years?


Wednesday, March 14, 2012

Why apps are not web pages

Apps are not web pages. They are constructed differently, they perform differently (although some web pages made for smart phones are quite close to apps in behavior), and I believe that we perceive them differently.

Apps (especially when used on cell phones) are more intimate than web pages. Web pages live in a browser, which in turn lives in a PC or laptop. Apps live in our phones. Apps are closer to us.

Perhaps this is because we hold the cell phone in our hand (tablets too) but PCs we leave on the desk. Laptops are not as intimate as tablets, since we rarely hold laptops -- I put mine on a convenient desk or shelf, or maybe the floor.

The intimacy affects our expectations of apps. When I use an app, I expect two things: content tailored for me, and frequent changes to that content. I don't expect that of web pages. (Some pages are tailored, if I log in to the web site -- but not all web sites.)

Facebook is a good example: it shows me "feeds" from my friends. This information is tailored for me (it's from the people I pick as friends) and the information changes daily (more than daily, actually).

But it's not just Facebook.

Yahoo Mail: Information that people have sent to me, or e-mail from lists to which I have subscribed.

Maps: Tailored for me, since it shows the local area. I can shift to a different area, if I choose.

Twitter: Close to Facebook in that it is tweets from people I choose to follow, with changes every hour.

The New York Times: Not tailored for me (the news is the same for everyone) but it changes daily and I pick the sections to display.

These web sites are "naturals" for conversion to apps. (And, in fact, they have been converted to apps.)

But some web sites will never "fly" as apps. These are the web sites that are generic (not personalized) and static (infrequent changes). These are sites that one visits rarely, and expects no personalized content. Sites such as "brochureware" for hotels and resorts, shopping sites, and even Wikipedia. (Wikipedia has an excellent web site for cell phones, but I don't see a need for an app.)

If I am right, then the great shift from the web to apps will leave a large number of web sites behind. Even if their owners convert a web site into an app, few people will download the app and fewer will use it.

If the personalizable web sites are "raptured" into apps, then what happens to the "left behind"? I see a need for the static, generic web sites -- smaller than the need for an app, but a need nonetheless -- and that need must be met. Will we keep the browser in our cell phones and tablets? Or will we build a new form to distribute static and non-personalized content?

Sunday, March 11, 2012

The post-spreadsheet world

The tablet/smartphone revolution changes the rules for our use of computers. We can now (easily) take them with us, they provide simple user interfaces on small displays, and data is stored on servers. This new model works poorly with spreadsheets, which want large displays, have complex user interfaces, and consider the data as their own. Spreadsheets were not designed for collaboration.

Google has done impressive work with their on-line documents and spreadsheets. I have yet to see Microsoft's on-line offerings, so I will not comment on them. But I can make some predictions.

The tablet and smartphone revolution moves us into a new realm of processing. This new model of processing builds apps from small, connected services and shares data. I think that the collision of tablets and spreadsheets will give us new tools.

Spreadsheets, at their core, are scriptable data processors. They store their data in a two-dimensional format (or three-dimensional, if you consider multiple sheets to be a dimension). The scripts can be simple formulas, or they can be programs (in Microsoft Office they are written in VBA; in OpenOffice, in OpenOffice Basic). The ability to apply simple scripts (formulas) is what gives spreadsheets their power.

I expect that in the new world of tablets we will develop small, connectable, scriptable data processors. These processors will work with small sets of data, presenting it to users with smaller screens and also letting users change the data. They will also let users create and run (and share) scripts. And most importantly, they will connect to other data processors -- probably through web services. People will build not spreadsheets but their own custom apps, plugging together these data processors.

Add version control, identity management, and access controls (based on identity), and you will be able to build enterprise-class apps.

We may keep spreadsheets, although I expect them to change. Once mission-critical data is in the cloud, we will extend spreadsheets to pull that data and merge it into a two-dimensional grid. Enthusiastic folks may build real-time updates, bi-directional updates, round-tripping, and collaboration for multiple spreadsheet users. The spreadsheet will become a client of the data processors in the cloud.

In this scenario, Alice may be working on some figures on her tablet as she commutes to the office (she rides in a carpool) while Bob reviews those same figures in the office in his spreadsheet. No one has the master spreadsheet, no one has to worry about getting the latest version.

Saturday, March 10, 2012

Don't like Windows 8? Wait for version 3

Lots of folks are unhappy with Windows 8. The complaints are easy to find: just search the web for Windows 8 and skip any pages on microsoft.com.

My take is that Windows 8, and Metro in particular, is the first version of a new product. Microsoft has a history of releasing products that are not quite right, then releasing follow-up versions that improve the product. Eventually, Microsoft releases a version that is popular. I call this the "version 3 effect", after the experience with Windows. Microsoft released Windows several times before it became popular in version 3.0, and really popular with version 3.1.

Resistance to early versions of Windows was due in part to hardware (the processors and PCs of the day were not quite ready for multitasking) and due in part to our unfamiliarity with the new creature known as Windows. (The jump from PC-DOS to Windows was a large one, and it took most people some time to adjust their mental model of PCs.)

I recall a similar resistance to .NET, with people unsure of the new thing and longing for the familiarity of the old MFC/Win32 world. (Some of the confusion was caused by Microsoft's marketing, which tagged almost every product with the ".NET" label.) Yet today we have no confusion about .NET, and few developers want to return to Win32 or MFC.

The change from Windows 7 to Windows 8 is a large one. I view it as being as large as the MFC-to-.NET change and the DOS-to-Windows change. Microsoft has redefined the Windows API and the terms of GUI design. Instead of Win32 or even the classic .NET API, Microsoft is providing WinRT. Instead of the traditional "windows and controls" design, Microsoft is providing Metro.

These changes are large, and more importantly, they invalidate a lot of hard-won knowledge of the Microsoft environment. Developers must learn the new APIs, and much of their current knowledge is about to become useless. This, I think, is driving the anger in the development community. I expect similar anger in related communities: tech support, sales, Windows-as-component (think of the point-of-sale and kiosk systems that include Windows), and anyone who uses Windows. Microsoft has changed the rules, and people have to learn the new set.

I think that Microsoft is doing the right thing. The wrong thing would be to keep Windows as Windows, to not move towards the model of tablet computing. That path would allow Apple, Google, and Linux to surpass Windows and make Microsoft irrelevant.

Metro isn't classic Windows. It also isn't perfect, or even demonstrably better (yet). I expect Microsoft to learn from their experience and improve their product, as they have in the past. If Windows 8 is "version 1", then look for one or two service packs to improve the product -- those will be "version 2". The "version 3" product, the one that gets it right, will be Windows 9.

Wednesday, March 7, 2012

The C preprocessor is a thing of beauty

Converting programs from one language to another can be easy or difficult, depending on the languages. More specifically, the ease of conversion depends on the commonality of language features.

For example, consider FORTRAN IV and C. Converting a FORTRAN program to C is easy; converting a C program to FORTRAN IV is hard. The FORTRAN constructs of subroutines and functions are easily handled in C, as are the data types of integer, real, and character. But C has constructs that are unavailable in FORTRAN: pointers, dynamic memory, and recursion. The FORTRAN language elements of IF/THEN and DO are available in C, but the C elements of 'while' and 'longjmp' have no FORTRAN equivalents. (We have to shoe-horn those non-FORTRAN aspects of C into FORTRAN.)
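
To make the contrast concrete, here is a small C sketch (my own illustration, not drawn from any particular conversion project) that leans on exactly the features a FORTRAN IV translation would have to shoe-horn: recursion, pointers, dynamic memory, and a 'while' loop.

    #include <stdio.h>
    #include <stdlib.h>

    /* Recursion: a FORTRAN IV subroutine cannot call itself. */
    long factorial(long n)
    {
        return (n <= 1) ? 1 : n * factorial(n - 1);
    }

    int main(void)
    {
        /* Pointers and dynamic memory: storage sized at run time,
           where FORTRAN IV requires arrays of fixed, declared size. */
        int count = 5;
        long *results = malloc(count * sizeof(long));
        if (results == NULL)
            return 1;

        /* A 'while' loop: FORTRAN IV offers only DO loops and IF/GOTO. */
        int i = 0;
        while (i < count) {
            results[i] = factorial(i + 1);
            printf("%d! = %ld\n", i + 1, results[i]);
            i++;
        }

        free(results);
        return 0;
    }

A FORTRAN IV version would need an explicit stack in place of the recursion, a fixed-size array in place of the malloc call, and IF/GOTO in place of the 'while' loop.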

When converting a program from one language to another, the difficulties of conversion are the language-specific features of the 'origin' language -- the things that the 'destination' language cannot handle.

In the past, I have considered the C preprocessor to be a hideous thing, an orc-like programming language that has no manners. But I have been wrong.

The C preprocessor (which is the same as the C++ preprocessor) has some very interesting constructs. It is a language with concepts found only in advanced programming languages. It has access to the caller's scope, something that exists in the lambdas of LISP. It can use a parameter as a substitution token, or it can substitute the text value of the token name into the program ("stringification").
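
Here is a minimal sketch of those two constructs (the macro names are my own, purely for illustration). The # operator turns an argument's text into a string literal, and because a macro expands at its point of use, it can read and modify a variable that exists only in the caller's scope.

    #include <stdio.h>

    /* Stringification: #expr substitutes the literal text of the argument
       as a string, while (expr) substitutes it as code. */
    #define SHOW(expr) printf(#expr " = %d\n", (expr))

    /* Scope access: a macro expands at its point of use, so it can read
       and modify 'total' even though 'total' is local to the caller. */
    #define ADD_TO_TOTAL(x) (total += (x))

    int main(void)
    {
        int total = 0;

        ADD_TO_TOTAL(7);     /* expands to (total += (7)) in this scope */
        ADD_TO_TOTAL(35);

        SHOW(total);         /* prints: total = 42 */
        SHOW(total * 2);     /* prints: total * 2 = 84 */
        return 0;
    }

Neither trick maps cleanly onto an ordinary function call in most other languages.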

These features of the C preprocessor make conversion to other languages difficult. We think of the C preprocessor as an ugly beast not because it is ugly, but because we cannot easily convert its code to another language. The preprocessor language is beautiful -- simple, elegant, and powerful. We hate it because we cannot think of it in the terms of languages that we know and love.

It may be some time before mainstream languages have the features of the C preprocessor. (I'm considering LISP outside of the mainstream, at least for now.) Until such a time, the advanced language of the preprocessor will pose conversion challenges to programmers.

Monday, March 5, 2012

Bigger than you think

The brave new world of tablets will change business. It will change not just business but the way that we do business.

Managers like to think that they make decisions that drive technology. And they do, to an extent. They buy technologies that make their businesses more efficient and more capable.

But the arrow points both ways. Not only does business drive technology, but technology drives business.

PCs changed the way we do business: in the "Mad Men" world of the 1960s, businesses were run with typewriters, filing cabinets, and secretaries; now we run businesses with desktop PCs, servers, and IT support teams.

Managers may like to think that they were in charge of that transition, but much of it was forced upon business managers. Typewriter manufacturers went out of business, filing cabinets became "old-fashioned", and secretaries became luxuries reserved for the uppermost managers. Businesses had to adopt PCs because other businesses were using them, and because previous technologies were expensive. Businesses had little choice in the matter.

The internet and the world wide web changed the way we do business: In the pre-web era companies communicated with customers by letter, phone, possibly e-mail, and in person. The web era allowed companies to communicate with web pages. Customers select their purchases without assistance from salespeople (or telephone order operators). Businesses had to adopt the web because other businesses adopted the web and customers expected it. Businesses had little choice in the matter.

Tablet computing and cloud computing will become big, for individuals and businesses. Businesses will adopt them, because they have little choice in the matter.

Individuals will stop buying desktops and start buying tablets (if they have not already done so). Most PC applications are poorly suited for the individual and home user: no one really needs an office suite with word processing and spreadsheets, and certainly not presentation software. Individuals want Facebook, Twitter, and yes, Angry Birds. They will want smartphone apps and tablet apps for banking and shopping. Businesses that provide these apps will thrive; businesses that don't will see sales suffer.

Businesses will stop buying desktops and start buying tablets because PC manufacturers will stop selling PCs. Or they will allow workers to bring tablets from home, reducing their outlays to zero. Changing to tablets will cause confusion and change for businesses, but it will not drive businesses out of existence.

The shift from desktop PCs to tablets will cause businesses and governments to change their organization. Commercial and government entities have organized their information around spreadsheets, documents, and presentations. But these organizations do not have to organize themselves that way. In the pre-PC era, they organized themselves around weekly and monthly reports from the mainframe. In the post-PC era, they will organize themselves around cloud-based services and tablet-based apps that present data in real-time.

Companies long ago outsourced non-core tasks such as payroll, the generation of electricity, and the production of paper. With tablets and cloud, they can focus on their core processes and continue to outsource non-core tasks. IT support is a likely candidate for outsourcing, as is Human Resource administration.

New companies will form to offer new forms of corporate support. (IT support and Human Resource administration are two areas that come to mind.) These new companies will offer services to other companies, much like web services offer services to other web applications.

Outsourcing is not the only change. Just as PCs allowed large companies to eliminate jobs (such as secretaries), tablets and cloud will allow companies to eliminate more jobs. I suspect that companies will keep the jobs that require creativity and outsource the jobs that consist of rote tasks. The remaining jobs will be closely tied to the performance of the company's core processes.

The brave new world of tablet computing will be different from the current world. I think it will be a better world, with clearer focus on core competencies and "value added". Some folks may find the transition uncomfortable; others will thrive.