Programming has seen a number of ugly things, and we programmers (and more specifically, language designers) have improved them.
The GOTO statement
The most famous ugly thing in programming is probably the GOTO statement. First called out by Edsger Dijkstra in his 1968 letter to the Communications of the ACM ("Go To Statement Considered Harmful"), it is the poster child of poor programming practices. The GOTO statement was a direct analog of the assembly language "jump" instruction (often given the mnemonic 'JMP', though it varied from processor to processor), and it allowed for difficult-to-read programs. We improved programming languages with structured programming: 'if/then/else' statements, 'while' loops, and iteration over collections. (The 'goto' statement was omitted from Java, but remains in C, C++, and C#.)
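To see the gain, consider a simple summing loop written both ways. This is a minimal sketch in old-style FORTRAN (the labels and variable names are my own); first with GOTO:

C     SUM THE INTEGERS 1 THROUGH 10, USING GOTO
      ISUM = 0
      I = 1
   10 IF (I .GT. 10) GO TO 20
      ISUM = ISUM + I
      I = I + 1
      GO TO 10
   20 CONTINUE

and then with the structured DO loop:

C     THE SAME SUM, AS A STRUCTURED LOOP
      ISUM = 0
      DO 30 I = 1, 10
      ISUM = ISUM + I
   30 CONTINUE

The two compute the same result, but the second announces its shape to the reader; the first must be traced by hand.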
Global variables
Global variables were another form of ugliness, allowing any part of a program to read (or, worse, modify) a variable. One could never tell what value a global variable would contain. They were mandatory in COBOL; present in FORTRAN, C, and C++; removed in Java and C#.
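FORTRAN's COMMON blocks illustrate the hazard. In this minimal sketch (the routine and variable names are my own), every routine that declares the named COMMON block shares the very same TOTAL:

C     BOTH ROUTINES SHARE ONE 'TOTAL' THROUGH THE NAMED COMMON BLOCK
      SUBROUTINE SETUP
      COMMON /STATE/ TOTAL
      TOTAL = 0.0
      END
      SUBROUTINE MUNGE
      COMMON /STATE/ TOTAL
      TOTAL = TOTAL - 999.0
      END

A call to MUNGE silently changes the value that SETUP (or any other routine) will see next, and nothing at the call site gives a hint.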
Arithmetic IF
FORTRAN has the honor of originating the 'arithmetic IF', a three-destination branch on a value. (It was not limited to FORTRAN; the construct showed up later in FOCAL.) One of three GOTO-style jumps would be taken, based on the sign of an expression: negative, zero, or positive. This nasty beast was the result of the IBM 704 instruction set, which could perform such a three-way branch in a single instruction. Efficient for the processor, but not so much for the programmer.
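In FORTRAN, the arithmetic IF looked like this (a sketch; the labels and names are my own). Control jumps to the first label when the expression is negative, the second when zero, the third when positive:

C     THREE-WAY BRANCH ON THE SIGN OF (ALPHA - BETA)
      IF (ALPHA - BETA) 10, 20, 30
   10 WRITE (6,*) 'NEGATIVE'
      GO TO 40
   20 WRITE (6,*) 'ZERO'
      GO TO 40
   30 WRITE (6,*) 'POSITIVE'
   40 CONTINUE

Nothing in the statement itself says which label means what; the reader simply has to know the rule.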
Pointers
Pointers were available in Pascal and were the life-blood of C. In Pascal (and in C) they allowed the construction of numerous data structures, many of which were impossible in earlier languages. Yet pointers were also a form of "GOTO in data", and they led to lots of headaches. They were eventually replaced by references: pointers bound to a known, valid entity.
Memory management
The pointers in C (and, to a lesser extent, Pascal) demanded memory management. One could allocate memory for anything, but one also had to track that memory and release it when one was finished with it. The later languages of Visual Basic, Perl, Java, C#, Python, and Ruby all replaced manual memory management with garbage collection.
Early garbage collection algorithms were unpredictable and often caused performance problems. Later algorithms (and faster processors) made garbage collection practical.
Column-dependent coding
FORTRAN (and to some extent COBOL) treated column position as significant to the compiler. FORTRAN was locked into a restrictive format that reserved specific columns for statement labels and statement text (and for marking the continuation of a statement onto a successive 'source card'); a sketch of the layout appears below. COBOL's view of indentation was more advanced: while optional, indentation was a popular convention, and it saw life again in Python's use of indentation to define blocks.
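For the curious: the fixed FORTRAN layout reserved columns 1 through 5 for the statement label, column 6 for the continuation marker, and columns 7 through 72 for the statement itself (columns 73 through 80 were ignored, and often held card sequence numbers). A sketch, with names of my own choosing and '1' as the continuation marker:

C     COLS 1-5: LABEL  COL 6: CONTINUATION  COLS 7-72: STATEMENT
  100 TOTAL = ALPHA + BETA + GAMMA
     1        + DELTA

The compiler cared about none of the spacing within the statement, but it cared very much about those first six columns.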
Short variable names
BASIC initially limited variable names to a single letter and an optional digit. (No 'R2D2' for you!) Original FORTRAN limited variable names to six characters. Early PC compilers for BASIC, Pascal, and C had similar restrictions. Modern compilers and interpreters allow variable names longer than I care to type (and I type some long names!).
The overall trend
Looking back, we can see that lots of ugly programming constructs were made for efficiency (arithmetic IF) or due to limitations of memory (short variable names) or processing power (GOTO). Advances in hardware allowed for work to be shifted from programmers to compilers, interpreters, and run-time systems. But here's the thing: advances in programming languages and techniques are much slower than advances in hardware.
Today's computers are more powerful than those of the 1960s by several orders of magnitude. Yet while we have replaced GOTO with structured programming and direct memory management with garbage collection, the change in software has been much smaller than the change in hardware.
I tend to think that this effect is caused by the locality of software. We programmers are close to our programs; the hardware is remote, sitting on the far side of the compiler. We can, should we choose, replace the hardware with faster (compatible) equipment, or switch to a new target processor by changing the back end of the compiler. In contrast, the programming language constructs are close to us, living inside our heads. We think in our programming languages and are loath to give them up.
Moreover, we programmers often learn to overcome the ugly aspects of programming languages and sometimes develop techniques to leverage them. We become attached to these tricks, and we are quite reluctant to let them go.
If we want to advance the art, we will have to give up the old (ugly) constructs and adopt the new techniques. It is not easy; I myself have had to give up BASIC for C, C for C++, C++ for Java and C#, and C# for Ruby. I have given up unstructured programming for structured programming, structured (procedural) programming for object-oriented programming, and object-oriented programming for immutable-object programming. Each transition has been difficult; each required me to un-learn the old ways. Yet I find that the new languages and techniques are better and allow me to be more effective.
Monday, February 27, 2012
Saturday, February 25, 2012
Upper case and spaces
Early written language (that is, natural language) used only upper case letters and no space characters. The space was a medieval invention. In the ancient world, the phrase "texts from the ancient world" appeared as TEXTSFROMTHEANCIENTWORLD. Our concepts of "upper case" and "lower case" come from Renaissance times, with the invention of movable type and the two cases that held the pieces of type (capitals in the upper case, small letters in the lower).
The original art of writing was to simply record the phonemes, allowing the reader to re-create the spoken sounds. Text was for re-enacting a speech, not storing information. The notion of "reading to one's self" had yet to emerge. A reader was little more than a modern-day tape recorder in playback mode, generating sounds for the audience (the true readers of the text).
It is interesting that the development of our programming languages parallels the development of written natural language.
Early programming languages used only upper case letters in their character sets. This was true for FORTRAN, BASIC, COBOL, assembly language, and other languages. As IBM card punches and readers used a limited character set with only uppercase letters, we can see that language design followed the available equipment.
Some early programming languages did not care about space characters. FORTRAN was space-agnostic; the parsing of the language was independent of SPACE characters. One could write
      THETA = 10.0
      DO 9 I = 10,20
      ALPHA = BETA * I
      SUM = ALPHA + SUM
    9 CONTINUE
or one could write
      THETA=10.0
      DO9I=10,20
      ALPHA=BETA*I
      SUM=ALPHA+SUM
    9 CONTINUE
the FORTRAN compiler would treat them identically. (FORTRAN did care about columns 1 through 6: the first five were reserved for statement labels, and column 6 was reserved for the continuation indicator; the statement itself lived in columns 7 through 72.)
Such space-agnostic parsing was not unique to FORTRAN. Early versions of the BASIC language were space-agnostic. (Such was not the case for COBOL or assembly language, both of which relied upon SPACE characters to separate tokens.)
In time, we humans learned that written (natural language) text could be read silently, and that space characters improved the readability of the text. Similarly, we human programmers learned that space characters improved the readability of programs (as did blank lines and indentation) and that programs should be read, and should be written to be read. (See The Psychology of Computer Programming by Gerald Weinberg.) While it took several centuries to invent the concept of a space to separate words in natural language, we adopted the concept of whitespace in programming languages in less than a decade.
If we consider spaces and indentation the marks of advanced program-writing, then the language which uses spaces and indentation the most is the most advanced programming language. The languages that use spacing and indentation most, as I understand it, are Python (with its indentation for marking code blocks) and assembly language (with its heavy reliance on spacing and indentation for parsing). All other languages fall after those two.
Friday, February 17, 2012
No sleeping giant for cloud computing
I suspect that a lot of people are waiting for a "grand convergence" of the different forms of cloud computing, or a "market leader" to define a standard that everyone (or just about everyone) can use. Such an event occurred in the PC market, when IBM released their model 5150 PC into a frothy market of multiple architectures and standards.
While today's cloud environment is similar to the 1980 microcomputer market, with multiple vendors and varying "takes" on cloud computing (infrastructure as a service, platform as a service, software as a service), I'm not expecting a single approach to dominate the market.
The thing that we are missing is the "sleeping giant": the one large, well-respected company that has not yet entered the market. With microcomputers, that sleeping giant was IBM. Hundreds of small companies (and some not-so-small companies) were selling into the market. Apple, interestingly, was one of them. The products varied, from fully assembled, ready-to-run systems (Radio Shack) to kits (Heathkit) to build-it-yourself designs. No one company was dominant, or even close.
But IBM was out there, and everyone knew it. IBM waited and picked a fortuitous time to enter the market. Their reputation (and product) defined a center for the market, a common ground, a mainstream.
With today's cloud market, the major players are involved. IBM is in, mostly for corporate and large customers. Microsoft is in, with Azure. Amazon.com is in, Google is in, and Apple is in. These companies have developed (and continue to develop) their offerings. There are no folks waiting on the sidelines for the opportune moment. (At least, no large, well-respected companies.)
Thus, I see no equivalent to the IBM PC for the cloud market. I see no one product or service offering that will take the market by storm.
Instead, I see the market remaining fragmented. I suspect that a few players may change (Oracle may buy Rackspace, for example) and some smaller players may gain ground. But no sleeping giant to "set the world straight".
Sunday, February 12, 2012
Redmond may be the new Detroit
Microsoft is haunted by its reputation.
In the 1960s and 1970s, Detroit earned a reputation for a certain level of quality in its products. Cars were designed, built, and sold -- lots of cars, since U.S. automakers had no significant competition -- with a level of quality that was ... less than it could have been. Cars were designed with "planned obsolescence", built to last for a few years or a few thousand miles. Exhaust systems would rust, vinyl roofs would shred, and gas mileage was low. But with no competition, Detroit had a quasi-monopoly on the market. (There was some competition from automakers in Germany, France, and Japan, but it was small.)
In the late 1970s, the price of oil (and therefore gasoline) rose, and people became unhappy with their cars. Japanese automakers designed and built cars that were efficient. Those cars were also reliable.
Japan quickly acquired the reputation of building reliable, efficient cars. Detroit acquired the reputation for "gas guzzlers" and shoddy workmanship.
Detroit learned a hard lesson, changed its ways, and built better cars. They were more efficient, safer, and more reliable. Detroit improved the quality of its product and now produces cars that are of equal quality to Japanese cars.
Yet the reputation lingers.
Within companies, the sales and marketing folks are very conscious of "the brand" and do things to maintain the company's "image". They run advertising campaigns and conduct customer surveys. Reputation is the customer's version of brand management: it's not managed, it simply happens. Whatever customers think of your brand becomes your reputation. And Detroit earned a reputation for unreliable and inefficient products.
Which brings us to Microsoft.
Microsoft is the Detroit of the software world. It has a long reputation of building buggy, bloated applications. And until recently, it was the only game in town -- if you were using computers, you were using Windows. (A few rebels used Apple or Linux, but they were on the fringe, much like the folks who drove foreign cars in the 1960s.)
Just as the Japanese automakers moved into the American market, Apple moved into the PC market. Apple now has a significant share of PC sales and software sales. (Linux remains fringe, except in the server arena.)
Detroit changed its ways, and Microsoft is making changes to Windows. The Windows 8 line, with its "Metro" GUI and the Windows App Store, is a big departure from the old Windows. The old Windows provided a platform (Windows), office software (Word, Excel, Exchange, Outlook), and development tools (Visual Studio), but left the market open to all others. Anyone could write and sell applications for Windows (and many people did). Windows PCs were open in the sense that the owner (or administrator) could install applications from any source: Microsoft, third-party providers, or even in-house developers.
The brave new world of Windows 8 changes that. Apps (at least, apps for the "Metro" side of Windows 8) must come through the Windows App Store, much as iPad apps must come through iTunes. This change will give Microsoft better control over the quality of apps, allowing it to filter out poorly-written apps and malware. It improves the quality of apps in the Windows environment.
With these changes, Microsoft may be able to achieve a level of quality that equals (or perhaps even surpasses) that of iPad apps.
Which brings us to reputations.
Will improved quality be enough for Microsoft? Or has the market assigned a reputation to Microsoft? The reputation of buggy software may limit Microsoft's growth.
For Microsoft to succeed in the market, they must provide software of higher quality than Apple's, and they must build a reputation for reliable software.
I believe that they can do it. But I believe that it will take a lot of control over the products that are released in the Windows world. A level of control that is on par with Steve Jobs' obsession with quality. And I am not sure that the existing market of Windows application providers is ready for that.