Sunday, December 27, 2015

For the future of Java, look to Google

What is the future of Java? It is a popular language, perhaps a bit long in the tooth, yet still capable. It struggled under Sun. Now it is the property of Oracle.

Oracle is an interesting company, with a number of challenges. Its biggest challenge is the new database technologies that provide alternatives to SQL. Oracle built its fortune on the classic, ACID-based, SQL database, competing with IBM and Microsoft.

Now facing competition not only from other companies but also from new technologies, Oracle must perform. How will it use Java?

For the future of Java, I suggest that we look to Google and Android. Java is part of Android -- or at least the Java bytecode. Android apps are written in Java on standard PCs, compiled into Java bytecode, and then delivered to Android devices. The Android devices (phones, tablets, what-have-you) use not the standard JVM interpreter but a custom-made one named "Dalvik".

Oracle and Google have their differences. Oracle sued Google, successfully, for using the Java APIs. (A decision with which I disagree, but that is immaterial.)

Google now faces a decision: stay with Java or move to something else. Staying with Java will most likely mean paying Oracle a licensing fee. (Given Oracle's business practices, probably an exorbitant licensing fee.)

Moving to a different platform is equally expensive. Google will have to select a new language and make tools for developers. They will also have to assist developers with existing applications, allowing them to migrate to the new platform.

Exactly which platform Google picks isn't critical. Possibly Python; Google supports it in their App Engine. Another candidate is Google Go, which Google also supports in App Engine. (The latter would be a little more complicated, as Go compiles to executables and not bytecode, and I'm not sure that all Android devices have the same processor.)

Google's decision affects more than just Google and Android. It affects the entire market for Java. The two big segments for Java are server applications and Android applications. (Java as a teaching language is probably the third.) If Google were to move Android to another language, a full third of the Java market would disappear.

If you have a large investment in Java applications (or are considering building new Java applications), you may want to keep an eye on Google and Android.

Saturday, December 26, 2015

Servers need RAM-based SSDs, not flash-based SSDs

How to store data? How to store data on servers? In the "good old days" (prior to 2012), servers used spinning platters of metal (disk drives) to store data. The introduction of SSDs complicated the issue.

SSDs (solid state disks) eliminate the spinning metal platters and use semiconductors to store data. Today one can purchase an SSD that plugs into a PC or server and acts just like a hard disk.

SSDs provide faster read and write times, lower power consumption, and greater reliability (mostly -- more on that later). Their cost has been higher than hard disks, but that has changed. Prices for SSDs are now in the same range as "classic" hard drives.

But life with SSDs is not perfect. The solid-state disks use flash RAM, which allows data to be stored after the power is off (something we want in our storage), but flash-based RAM is not that durable. The heavy workload that servers put on their storage systems causes failures in SSDs.

I'm sure that many folks are working on methods to improve the reliability of flash-based RAM. But for servers, I think there may be another path: classic (that is, non-flash) RAM.

We've been building RAM for decades. We know how to make it, in large quantities, and reliably. Classic RAM is typically faster than flash-based RAM, and it doesn't wear out under heavy write loads the way flash does. So why not use it?

The big difference between classic RAM and flash-based RAM is that classic RAM, when you remove power, forgets everything. An SSD built with classic RAM would work beautifully -- until you powered the unit off. (And even then you would notice the problem only when you powered it back on.)

The first impression is that such an arrangement would be useless. I'm not so sure, for two reasons.

First, while desktop PCs are powered on and off regularly, servers are not. Servers are typically installed in data centers and kept on, twenty-four hours a day, every day of the year. If the server stays on, the data stays in RAM, and everything works.

Second, it is possible to build classic RAM SSDs with small, auxiliary power supplies. These small power supplies (usually batteries) can keep the RAM active, and therefore keep the data, while main power is not available.

RAM storage units are not new. They date back to the mid-1970s, and have been used on IBM mainframes, minicomputers, and even Apple II computers.

I suspect that at some point, someone will figure out that classic RAM makes sense for servers, and build SSDs for servers. This will be another step in the divergence of desktop PCs and server computers.

Thursday, December 17, 2015

Indentation and brackets in languages

Apple has released Swift, an object-oriented language for developing applications. It may be that Swift marks the end of the C-style indentation and brace syntax.

Why are indentation and brace style important? Maybe we should start with a definition of indentation and brace style, and go from there.

In the far past, programs were punched onto cards and the syntax of the programming language reflected that. FORTRAN reserved columns 1 through 5 for statement labels and column 6 for continuation markers; COBOL reserved columns 1 through 6 for sequence numbers and column 7 for continuation and comment indicators. Columns 73 to 80 were reserved for card sequencing. (The sequence numbers were used to sort the card deck when it was dropped and cards spilled on the floor.)

The limitations of the punch card, and the notion that one and only one statement may appear on a card (or span a few continuation cards), had a heavy influence on the syntax of these languages.

Algol introduced a number of changes. It introduced the 'begin/end' keywords that were later used by Pascal and became the braces in most modern languages. It removed the importance of newlines, allowing multiple statements on a single line, and allowing a single statement to span multiple lines without special continuation markers.

The result was the syntax we have in C, C++, Java, and C# (and a bunch of other languages). Semicolons (not newlines) terminate statements. Braces group statements. Indentation doesn't matter, statements can begin in any column we desire. All of these features come from Algol. (At the beginning of this essay I referred to it as "C-style indentation and brace syntax", but the ideas really originated in Algol.)

The "Algol revolution" threw off the shackles of column-based syntax. Programmers may not have rejoiced, but they did like the new world. They wrote programs gleefully, indenting as they liked and arguing about the "one true brace style".

Some programmers wanted this style:

function_name(params) {
    statements
}

Others wanted:

function_name(params)
{
    statements
}

And yet others wanted:

function_name(params)
    {
    statements
    }

There were some programmers who wanted to use whatever style felt comfortable at the time, even if that meant that their code was inconsistent and difficult to read.

It turns out that the complete freedom of indentation and brace placement is not always a good thing. In the past decades, we have taken some steps in the direction of constraints on indentation and braces.

For years, programming teams have held code reviews. Some of these reviews look at the formatting of the code. Inconsistent indentation is flagged as something to be corrected. Variants of the lint program warn on inconsistent indentation.

Visual Studio, Microsoft's IDE for professionals, auto-formats code. It did so with the old Visual Basic. Today it auto-indents and auto-spaces C# code. It even aligns braces according to the style you choose.

The Python language uses indentation, not braces, to mark code blocks. It reports inconsistent indentation and refuses to run code until the indentation is consistent.
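
As a small illustration (a made-up fragment, not from any real project), Python will refuse to run a function whose indentation is inconsistent:

def total(prices):
    result = 0
    for price in prices:
        result = result + price
      return result   # indented to a level that matches no enclosing block

Python stops at the last line with "IndentationError: unindent does not match any outer indentation level" before executing a single statement; the code must be made consistent before it will run.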

The Go language uses braces to mark code blocks and it requires a specific style of braces (the first style shown above). It won't compile programs that use other styles.

We have designed our processes, our tools, and our languages to care about indentation and brace style. We are gradually moving to language syntax that uses them, that considers them significant.

As programmers, as people who read code, we want consistent indentation. We want consistent brace style. We want these things because it makes the code easier to read.

Which gets us back to Swift.

Swift has restrictions on brace style. It uses brace placement to assist in the determination of statements, visible in the syntax for if statements and for loops (and I suspect while loops). Indentation doesn't matter. Brace style does.

We now have three popular languages (Python, Go, and Swift) that care about indentation or brace style. That, I think, shows a trend. Language designers are beginning to care about these aspects of syntax, and developers are willing to work with languages that enforce them. We will not return to the harsh constraints of FORTRAN and COBOL, but we will retreat from the complete freedom of Algol, Pascal, and C. And I think that the middle ground will allow us to develop and share programs effectively.

Tuesday, December 15, 2015

The new real time

In the 1970s, people in IT pursued the elusive goal of "real time" computing. It was elusive because the term was poorly defined. With no clear objective, any system could be marketed as "real time". Marketing folks recognized the popularity of the term and anything that could remotely be described as "real time" was described as "real time".

But most people didn't need (or want) "real time" computing. They wanted "fast enough" computing, which generally meant interactive computing (not batch processing) that responded to requests quickly enough that clerks and bank tellers could answer customers' questions in a single conversation. Once we had interactive computing, we didn't look for "real time" and interest in the term waned.

To be fair to "real time", there *is* a definition of it, one that specifies the criteria for a real-time system. But very few systems actually fall under those criteria, and only a few people in the industry actually care about the term "real time". (Those that do care about the term really do care, though.)

Today, we're pursuing something equally nebulous: "big data".

Lots of people are interested in big data. Lots of marketers are getting involved, too. But do we have a clear understanding of the term?

I suspect that the usage of the term "big data" will follow an arc similar to that of "real time", because the forces driving the interest are similar. Both "real time" and "big data" are poorly defined yet sound cool. Further, I suspect that, like "real time", most people looking for "big data" are really looking for something else. Perhaps they want better analytics ("better" meaning faster, more frequent, more interaction and drill-down capabilities, or merely prettier graphics) for business analysis. Perhaps they want cheaper data storage. Perhaps they want faster development times and fewer challenges with database management.

Whatever the reason, in a few years (I think less than a decade) we will not be using the term "big data" -- except for a few folks who really need it and who really care about it.

Thursday, December 10, 2015

The types of standards for programming languages

A programming language must have a reference point, a standard, which is used to decide what is in the language and how it behaves. There are different ways to build and distribute this standard:

Independent committee: In the past it was an ANSI committee; today it is an ISO committee. The committee, independent of vendors, defines the language and publishes the standard. Anyone can create an implementation of the standard; the committee often provides tests to verify compliance with the standard.

Benevolent dictator: A single person decides what is in the language and what is not. He (it is usually a male) runs a team of people who develop the implementation and publish it. The implementation is usually open source. Tests are used by the development team to ensure compliance with the standard. There may be more than one implementation published; the development team may publish advance and legacy versions, and other individuals or companies may publish their own implementations.

Corporate closed source: A corporation designs a language and builds and publishes it as a product. The language may change at any time to suit the needs of the corporation. Other entities can publish "clones" of the language, but they do not have access to the source code.

Corporate open source: A corporation designs a language, builds and publishes it as in the "closed source" model, and then provides the source code as open source. Other entities can use the source code. The language can still change to suit the needs of the corporation.

These four models cover almost all programming languages. Looking at some popular programming languages, the grouping is:

Independent committee: FORTRAN, COBOL, C, C++, SQL, Ada, JavaScript

Benevolent dictator: Forth, Pascal, AWK, Perl, Python, Ruby

Corporate closed source: APL, PL/I, Visual Basic, Delphi, VB.NET, SAS, Objective-C, Objective-C++

Corporate open source: Java, C#, Swift

Some languages change over time. For example, BASIC started with the "benevolent dictator" model, but Microsoft's success changed the dominant form to "corporate closed source". Java started as "corporate closed source" and is shifting to "corporate open source".

What's interesting is that the languages governed by independent committee tend to have longer lives. Of the seven languages (Fortran, Cobol, C, C++, SQL, Ada, and JavaScript) all but Ada are in use today. (Yes, Ada may be in use somewhere, on some obscure legacy project, but that is true of just about every language. Ada is, for all intents and purposes, a dead language.)

Languages governed by a benevolent dictator fare less well. Python and Ruby enjoy success today, while Perl declines from its previous popularity. Forth, Pascal, and Awk are used rarely, and I see no activity, no growth, in those languages.

Corporate languages enjoy popularity ... as long as the corporation pushes them. APL and PL/I, developed by IBM, are in the "dead" list. Microsoft's Visual Basic is dead and VB.NET (purported to be a successor to Visual Basic) is languishing. Delphi is used only in legacy applications.

I expect that with Apple's introduction of Swift, Objective-C and Objective-C++ will drop to "dead" status. The Mac OS X platform was the only place they were used. The current index at tiobe.com confirms this drop.

What does all of this mean? For anyone developing a large, long-term project, the selection of a language is important. Committee-governed languages last longer than other languages.

Notice that Java is not a committee-governed language. It is managed by Oracle.

Tuesday, December 8, 2015

The PC Market Stabilizes

Once again we have a report of declining sales of personal computers, and once again some folks are worrying that this might signal the end of the personal computer. While the former is true, the latter certainly is false.

The decline in sales signals not an abandonment of the personal computer, but a change in the technology growth of PCs.

To put it bluntly, PCs have stopped growing.

By "PC" or "personal computer" I refer to desktop and laptop computers that run Windows. Apple products are excluded from this group. I also exclude ChromeBooks, smartphones, and tablets.

Sales of personal computers are declining because demand is declining. Why is demand declining? Let's consider the factors that drive demand:

Growth of the organization: When a business grows and increases staff, it needs more PCs.

Replacement of failing equipment: Personal computers are cheap enough to replace rather than repair. When a PC fails, the economically sensible thing to do is to replace it.

New features deemed necessary: Some changes in technology are considered important enough to warrant the replacement of working equipment. In the past, this has included CD drives, the Windows (or perhaps OS/2) operating system, new versions of Windows, the 80386 and Pentium processors, and VGA monitors (to replace older monitors).

The recent economic recession saw many companies reduce their ranks and only now are they considering new hires. Thus, the growth of organizations has been absent as a driver of PC sales.

The basic PC has remained unchanged for the past several years, with the possible exception of laptops, which have gotten not more powerful but simply thinner and lighter. The PC one buys today is very similar to the PC of a few years ago. More importantly, the PC of today has no new compelling feature - no larger hard drive (well, perhaps larger, but the old one was large enough), no faster processor, no improved video. (Remember, I am excluding Apple products in this analysis. Apple has made significant changes to its hardware.)

The loss of these two drivers of PC sales means that the one factor that forces the sales of PCs is the replacement of failing equipment. Personal computers do fail, but they are, overall, fairly reliable. Thus, replacement of equipment is a weak driver for sales.

In this light, reduced sales are not a surprise.

The more interesting aspect of this analysis is that the technology leaders who introduced changes (Microsoft and Intel) have apparently decided that PCs are now "good enough" and we don't need to step up the hardware. Microsoft is content to sell (or give away) Windows 10 and designed it to run on existing personal computers. Intel designs new processors, but has introduced no new "must have" features. (Or if they have, the marketing of such features has been remarkably quiet.)

Perhaps Microsoft and Intel are responding to their customers. Perhaps the corporate users of computers have decided that PCs are now "good enough", and that PCs have found their proper place in the corporate computing world. That would be consistent with a decline in sales.

I expect the sales of PCs to continue to decline. I also expect corporations to continue to use PCs, albeit in a reduced role. Some applications will move to smart phones and tablets. Other applications will move to virtualized PCs (or virtual desktops). And some applications will remain on old-fashioned desktop (or laptop) personal computers.

* * * * *

Some have suggested that the decline of PC sales may also be explained, in part, by the rise of Linux. When faced with the prospect of replacing an aged PC (because it is old enough that Windows does not support it), people are installing Linux. Thus, some sales of new PCs are thwarted by open source software.

I'm certain that this is happening, yet I'm also convinced that it happens in very small numbers. I use Linux on old PCs myself. Yet the market share of Linux is in the one- and two-percent range. (For desktop PCs, not servers.) This is too small to account for much of the annual decline in sales.

Sunday, November 22, 2015

The Real Problem with the Surface RT

When Microsoft introduced the Surface RT, people responded with disdain. It was a Windows tablet with a limited version of Windows. The tablet could run specially-compiled applications and a few "real" Windows applications: Internet Explorer, Word, Excel, and Powerpoint.

Many folks, including Microsoft, believe that the problem with the Surface RT was that it was underpowered. It used an ARM chip, not Intel, for the processor. The operating system was not the standard Windows but Windows RT, compiled for the ARM processor and excluding the .NET framework.

Those decisions meant that the millions (billions?) of existing Windows applications could not run on the Surface RT. IE, Word, Excel, and Powerpoint ran because Microsoft built special versions of those applications.

The failure of the Surface RT was not in the design, but in the expectations of users. People, corporations, and Microsoft all expected the Surface RT to be another "center of computing" -- a device that provided computing services. It could have been -- Microsoft provided tools to develop applications for it -- but people were not willing to devote the time and effort to design, code, and test applications for an unproven platform.

The Surface RT didn't have to be a failure.

What made the Surface RT a failure was not that it was underpowered. What made it a failure was that it was overpowered.

Microsoft's design allowed for applications. Microsoft provided the core Office applications, and provided development tools. This led to the expectation that the tablet would host applications and perform computations.

A better Surface RT would offer less. It would have the same physical design. It would have the same ARM processor. It might even include Windows RT. But it would not include Word, Excel, or Powerpoint.

Instead of those applications, it would include a browser, a remote desktop client, and SSH. The browser would not be Internet Explorer but Microsoft's new Edge browser, modified to allow for plug-ins (or extensions, or whatever we want to call them).

Instead of a general-purpose computing device, it would offer access to remote computing. The browser allows access to web sites. The remote desktop client allows access to virtual desktops on remote servers. SSH allows for access to terminal sessions, including those on GUI-less Windows servers.

Such a device would offer access to the new, cloud-based world of computing. The name "Surface RT" has been tainted for marketing purposes, so a new name is needed. Perhaps something like "Surface Edge" or "Edgebook" or "Slab" (given Microsoft's recent fascination with four-character names like "Edge", "Code", and "Sway").

A second version could allow for apps, much like a Windows phone or iPhone or Android tablet.

I see corporations using the "Edgebook" because of its connectivity with Windows servers. I'm not sure that individual consumers would want one, but then Microsoft is focussed on the corporate market.

It just might work.

Apple and Microsoft do sometimes agree

In the computing world, Apple and Microsoft are often considered opposites. Microsoft makes software; Apple makes hardware (primarily). Microsoft sells to enterprises; Apple sells to consumers. Microsoft products are ugly and buggy; Apple products are beautiful and "it just works".

Yet they do agree on one thing: The center of the computing world.

Both Apple and Microsoft have built their empires on local, personal-size computing devices. (I would say "PCs" but then the Apple fans would shout "MacBooks are not PCs!" and we don't need that discussion here.)

Microsoft's strategy has been to enable PC users, both individual and corporate. It supplies the operating system and application programs. It supplies software for coordinating teams of computer users (ActiveDirectory, Exchange, Outlook, etc). It supplies office software (word processor, spreadsheet), development tools (Visual Studio, among others), and games. At the center of the strategy is the assumption that the PC will be a computing engine.

Apple's strategy has also been to enable users of Apple products. It designs computing products such as the MacBook, the iMac, the iPad, and the iPhone. Like Microsoft, the center of its strategy is the assumption that these devices will be computing engines.

In contrast, Google and Amazon.com take a different approach. They offer computing services in the cloud. For them, the PCs and tablets and phones are not centers of computing; they are sophisticated input-output devices that feed the computing centers.

That Microsoft's and Apple's strategies revolve around the PC is not an accident. They were born in the microcomputing revolution of the 1970s, and in those days there was no cloud, no web, no internet. (Okay, technically there *was* an internet, but it was limited to a very small number of users.)

Google and Amazon were built in the internet age, and their business strategies reflect that fact. Google provides advertising, search technology, and cloud computing. Amazon.com started by selling books (on the web) and has moved on to selling everything (still on the web) and cloud computing (its AWS offerings).

Google's approach to computing allows it to build Chromebooks, lightweight, low-powered laptops that have just enough operating system to run the Chrome browser. Everything Google offers is on the web, accessible with merely a browser.

Microsoft's PC-centric view makes it difficult to build a Windows version of a Chromebook. While Google can create Chrome OS as a derivative of Linux, Microsoft is stuck with Windows. Creating a light version of Windows is not so easy -- Windows was designed as a complete entity, not as a partitioned, shrinkable thing. Thus, a Windows Cloudbook must run Windows and be a center of computing, which is quite different from a Chromebook.

Yet Microsoft is moving to cloud computing. It has built an impressive array of services under the Azure name.

Apple's progress towards cloud computing is less obvious. It offers storage services called iCloud, but their true cloud nature is undetermined. iCloud may truly be based on cloud technology, or it may simply be a lot of servers. Apple must be using data centers to support Siri, but again, those servers may be cloud-based or may simply be servers in a data center. Apple has not been transparent in this.

Notably, Microsoft sells developer tools for its cloud-based services and Apple does not. One cannot, using Apple's tools, build and deploy a cloud-based app into Apple's cloud infrastructure. Apple remains wedded to the PC (okay, MacBook, iMac, iPad, and iPhone) as the center of computing. One can build apps for Mac OS X and iOS that use other vendors' cloud infrastructures, just not Apple's.

For now, Microsoft and Apple agree on the center of the computing world. For both of them, it is the local PC (running Windows, Mac OS X, or iOS). But that agreement will not last, as Microsoft moves to the cloud and Apple remains on the PC.

Wednesday, November 11, 2015

Big changes happen early

Software has a life cycle: It is born, it grows, and finally dies. That's not news, or interesting. What is interesting is that the big changes in software happen early in its life.

Let's review some software: PC-DOS, Windows, and Visual Basic.

PC-DOS saw several versions, from 1.0 to 6.0. There were intermediate versions, such as versions 3.3 and 3.31, so there were more than six versions.

Yet the big changes happened early. The transition from 1.0 to 2.0 saw big changes in the API, allowing new device types and especially subdirectories. Moving from version 1.0 to 2.0 required almost a complete re-write of an application. Moving applications from version 2.0 to later versions required changes, but not as significant. The big changes happened early.

Windows followed a similar path. Moving from Windows 1 to Windows 2 was a big deal, as was moving from Windows 2 to Windows 3. The transition from Windows 3 to Windows NT was big, as was the change from Windows 3.1 to Windows 95, but later changes were small. The big changes happened early.

Visual Basic versions 1, 2, and 3 all saw significant changes. Visual Basic 4 had some changes but not as many, and Visual Basic 5 and 6 were milder. The big changes happened early. (The change from VB6 to VB.NET was large, but that was a change to another underlying platform.)

There are other examples, such as Microsoft Word, Internet Explorer, and Visual Studio. The effect is not limited to Microsoft. Lotus 1-2-3 followed a similar arc, as did dBase, R:Base, the Android operating system, and Linux.

Why do big changes happen early? Why do the big jumps in progress occur early in a product's life?

I have two ideas.

One possibility is that the makers and users of an application have a target in mind, a "perfect form" of the application, and each generation of the product moves closer to that ideal form. The first version is a first attempt, and successive versions improve upon previous versions. Over the life of the application, each version moves closer to the ideal.

Another possibility is that changes to an application are constrained by the size of the user population. A product with few users can see large changes; a product with many users can tolerate only minor changes.

Both of these ideas explain the effect, yet they both have problems. The former assumes that the developers (and the users) know the ideal form and can move towards it, albeit in imperfect steps (because one never arrives at the perfect form). My experience in software development allows me to state that most development teams (if not all) are not aware of the ideal form of an application. They may think that the first version, or the current version, or the next version is the "perfect" one, but they rarely have a vision of some far-off version that is ideal.

The latter has the problem of evidence. While many applications grow their user base over time and also shrink their changes over time, not all do. Two examples are Facebook and Twitter. Both have grown (to large user bases) and both have seen significant changes.

A third possibility, one that seems less theoretical and more mundane, is that as an application grows, and its code base grows, it is harder to make changes. A small version 1 application can be changed a lot for version 2. A large version 10 application has oodles of code and oodles of connected bits of code; changing any bit can cause lots of things to break. In that situation, each change must be reviewed carefully and tested thoroughly, and those efforts take time. Thus, the older the application, the larger the code base and the slower the changes.

That may explain the effect.

Some teams go to great lengths to keep their code well-organized, which allows for easier changes. Development teams that use Agile methods will re-factor code when it becomes "messy" and reduce the couplings between components. Cleaner code allows for bigger and faster changes.

If changes are constrained not by large code but by messy code, then as more development teams use Agile methods (and keep their code clean) we will see more products with large changes not only early in the product's life but through the product's life.

Let's see what happens with cloud-based applications. These are distributed by nature, so there is already an incentive for smaller, independent modules. Cloud computing is also younger than Agile development, so all cloud-based systems could have been (I stress the "could") developed with Agile methods. It is likely that some were not, but it is also likely that many were -- more than desktop applications or web applications.

Thursday, November 5, 2015

Fleeing to less worse programming languages

My career in programming has seen a number of languages. I moved from one language to another, always moving to a better language than the previous one.

I started with BASIC. After a short detour with assembly language and FORTRAN, I moved to Pascal, and then C, and then C++, which was followed by Java, C#, Perl, Ruby, and Python.

My journey has paralleled the popularity of programming languages in general. We as an industry started with assembly language, then moved to COBOL and FORTRAN, followed by BASIC, Pascal, C, C++, Visual Basic, Java, and C#.

There have been other languages: PL/I, Ada, Forth, dBase, SQL, to name a few. Each has had some popularity (SQL still enjoys it).

We move from one language to another. But why do we move? Do we move away from one or move to a better language?

BASIC was a useful language, but it had definite limitations. It was interpreted, so performance was poor and there was no way to protect source code.

Pascal was compiled, so it had better performance than BASIC and you could distribute the executable and keep the source code private. Yet it was fractured into multiple incompatible dialects, and there was no good way to build large systems.

Each language had its good points, but it also had limits. Moving to the next language meant moving to a language that was better in that it didn't suck as much as the former.

The difference between moving to a better language and moving away from a problematic language is not insignificant. It tells us about the future.

If people are moving away from languages because of problems, then when we arrive at programming languages that have no problems (or no significant problems) then people will stay with them. Switching programming languages has a cost, so the benefit of the new language must outweigh the effort to switch.

In other words, once we get a language that is good enough, we stop.

I'm beginning to think that Java and C# may be good enough. They are general-purpose, flexible, powerful, and standardized. Each is a living language: Microsoft actively improves C# and Oracle improves Java.

If my theory is correct, then businesses and organizations with large code bases in Java or C# will stay with them, rather than move to a "better" language such as Python or Haskell. Those languages may be better, but Java and C# are good enough.

Many companies stayed with COBOL for their financial applications. But that is perhaps not unreasonable, as COBOL is designed for financial processing and other languages are not. Therefore, a later language such as BASIC or Java is perhaps worse at processing financial transactions than COBOL.

C# and Java are built for processing web transactions. Other languages may be better at it, but they are not that much better. Expect the web to stay with those languages.

And as for cloud applications? That remains to be seen. I tend to think that C# and Java are not good languages for cloud applications, and that Haskell and Python are better (that is, less worse). Look for cloud development to use those languages.

Monday, November 2, 2015

Microsoft software on more than Windows

People are surprised -- and some astonished -- that Microsoft would release software for an operating system other than Windows.

They shouldn't be. Microsoft has a long history of providing software on operating systems other than Windows.

What are they?

  • PC-DOS and MS-DOS, of course. But that is, as mathematicians would say, a "trivial solution".
  • Mac OS and Mac OS X. Microsoft supplied "Internet Explorer for Mac" on OS 7, although it has discontinued that product. Microsoft supplies "Office for Mac" as an active product.
  • Unix. Microsoft supplied "Internet Explorer for Unix".
  • OS/2. Before the breakup with IBM, Microsoft worked actively on several products for OS/2.
  • CP/M. Before MS-DOS and PC-DOS there was CP/M, an operating system from the very not-Microsoft company known as Digital Research. Microsoft produced a number of products for CP/M, mainly its BASIC interpreter and compilers for BASIC, FORTRAN, and COBOL.
  • ISIS-II and TEKDOS. Two early operating systems which ran Microsoft BASIC.
  • Any number of pre-PC era computers, including the Commodore 64, the Radio Shack model 100, and the NEC 8201, which all ran Microsoft BASIC as the operating system.

It is true that Microsoft, once it obtained dominance in the market with PC-DOS/MS-DOS (and later Windows) built software that ran only on its operating systems. But Microsoft has a long history of providing software for use on non-Microsoft platforms.

Today Microsoft provides software on Windows, Mac OS X, iOS, Android, and now Chrome. What this means is that Microsoft sees opportunity in all of these environments. And possibly, Microsoft may see that the days of Windows dominance are over.

That Windows is no longer the dominant solution may shock (and frighten) people. The "good old days" of "Windows as the standard" had their problems, and people grumbled and complained about things, but they also had an element of stability. One knew that the corporate world ran on Windows, and moving from one company to another (or merging two companies) was somewhat easy with the knowledge that Windows was "everywhere".

Today, companies have options for their computing needs. Start-ups often use MacBooks (and therefore Mac OS X). Large organizations have expanded their list of standard equipment to include Linux for servers and iOS for individuals. The market for non-Windows software is now significant, and Microsoft knows it.

I see Microsoft's expansion onto platforms other than Windows as a return to an earlier approach, one that was successful in the past. And a good business move today.

Thursday, October 29, 2015

Tablet sales

A recent article on ZDnet.com blamed recent lackluster sales of iPads on... earlier iPads. This seems wrong.

The author posed this premise: Apple's iOS 9 runs on just about every iPad (it won't run on the very first iPad model, but it runs on the others) and therefore iPad owners have little incentive to upgrade. iPad owners behave differently from iPhone owners, in that they (the iPad owners) hold on to their tablets longer than people hang on to their phones.

The latter part of that premise may be true. I suspect that tablet owners do upgrade less frequently than phone owners (in both the Apple and Android camps). While tablets are typically less expensive than phones, iPads are pricey, and iPad owners may wish to delay an expensive purchase. My belief is that people replace phones more readily than tablets because of the relative size of phones and tablets. Tablets, being larger, are viewed as more valuable. That psychology drives us to replace phones faster than tablets. But that's a pet theory.

Getting back to the ZDnet article: There is a hidden assumption in the author's argument. He assumes that the only people buying iPads are previous iPad owners. In other words, everyone who is going to buy an iPad has already purchased one, and the only sales of iPads will be upgrades as a customer replaces an iPad with an iPad. (Okay, perhaps not "only". Perhaps "majority". Perhaps it's "most people buying iPads are iPad owners".)

This is a problem for Apple. It means that they have, rather quickly, reached market saturation. It also means that they are not converting people from Android tablets to Apple tablets.

I don't know the numbers for iPad sales and new sales versus upgrades. I don't know the corresponding numbers for Android tablets either.

But if the author's assumption is correct, and the tablet market has become saturated, it could make things difficult for Apple, Google (Alphabet?), and ... Microsoft. Microsoft is trying to get into the tablet market (in hardware and in software). A saturated market would mean little interest in Windows tablets.

Or maybe it means that Microsoft will be forced to offer something new, some service that compels one to look seriously at a Windows tablet.

Sunday, October 25, 2015

Refactoring is almost accepted as necessary

The Agile Development process brought several new ideas to the practice of software development. The most interesting, I think, is the notion of re-factoring as an expected activity.

Refactoring is the re-organization of code, making it more readable and eliminating redundancies. It is an activity that serves the development team; it does not directly contribute to the finished product.

Earlier methods of software development did not list refactoring as an activity. They made the assumption that once written, the software was of sufficient quality to deliver. (Except for defects which would be detected and corrected in a "test" or "acceptance" phase.)

Agile Development, in accepting refactoring, allows for (and encourages) improvements to the code that do not change its behavior. It is a humbler approach, one that assumes that members of the development team will learn about the code as they write it and identify improvements.

This is a powerful concept, and, I believe, a necessary one. Too many projects suffer from poor code quality -- the "technical backlog" or "technical debt" that many developers will mention. The poor code organization slows development, as programmers must cope with fragile and opaque code. Refactoring improves code resilience and improves visibility of important concepts. Refactored code is easier to understand and easier to change, which reduces the development time for future projects.
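
A small, hypothetical example (the function names and the discount rule are invented for illustration) shows the kind of refactoring meant here -- duplicated logic pulled into one place, with no change in behavior:

# Before: the bulk-discount rule is written out twice.
def invoice_total(items):
    total = 0
    for item in items:
        price = item["price"]
        if item["quantity"] >= 10:
            price = price * 0.90          # bulk discount
        total += price * item["quantity"]
    return total

def quote_total(items):
    total = 0
    for item in items:
        price = item["price"]
        if item["quantity"] >= 10:
            price = price * 0.90          # the same rule, copied
        total += price * item["quantity"]
    return total

# After refactoring: the rule lives in one place; the totals are unchanged.
def line_total(item):
    price = item["price"]
    if item["quantity"] >= 10:
        price = price * 0.90              # bulk discount, defined once
    return price * item["quantity"]

def invoice_total(items):
    return sum(line_total(item) for item in items)

def quote_total(items):
    return sum(line_total(item) for item in items)

Nothing about the program's behavior changes; what changes is that the next modification to the discount rule touches one function instead of two.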

I suspect that all future development methods will include refactoring as a task. Agile Development, as good as it is, is not the perfect method for all projects. It is suited to projects that are exploratory in nature, projects that do not have a specific delivery date for a specific set of features.

Our next big software development method may be a derivative of Waterfall.

Agile Development (and its predecessor, Extreme Programming) was, in many ways, a rebellion against the bureaucracy and inflexibility of Waterfall. Small teams, especially in start-up companies, adopted it and were rewarded. Now, the larger, established, bureaucratic organizations envy that success. They think that adopting Agile methods will help, but I have yet to see a large organization successfully merge Agile practices into their existing processes. (And large, established, bureaucratic organizations will not convert from Waterfall to pure Agile -- at least not for any length of time.)

Instead of replacing Waterfall with Agile, large organizations will replace Waterfall with an improved version of Waterfall, one that keeps the promise of a specific deliverable on a specific date yet adds other features (one of which will be refactoring). I'm not sure who will develop it; the new process (let's give it a name and call it "Alert Waterfall") may come from IBM or Microsoft or Amazon.com, or some other technical leader.

Yet it will include the notion of refactoring, and with it the implicit declaration that code quality is important -- that it has value. And that will be a victory for developers and managers everywhere.

Thursday, October 22, 2015

Windows 10 means a different future for PCs

Since the beginning, PCs have always been growing.  The very first IBM PCs used 16K RAM chips (for a maximum of 64K on the CPU board); these were quickly replaced by PCs with 64K RAM chips (which allowed 256K on the CPU board).

We in the PC world are accustomed to new releases of bigger and better hardware.

It may have started with that simple memory upgrade, but it continued with hard drives (the IBM PC XT), enhanced graphics, higher-capacity floppy disks, and a more capable processor (the IBM PC AT), and an enhanced bus, even better graphics, and even better processors (the IBM PS/2 series).

Improvements were not limited to IBM. Compaq and other manufacturers revised their systems and offered larger hard drives, better processors, and more memory. Every year saw improvements.

When Microsoft became the market leader, it played an active role in the specification of hardware. Microsoft also designed new operating systems for specific minimum platforms: you needed certain hardware to run Windows NT, certain (more capable) hardware for Windows XP, and even more capable hardware for Windows Vista.

Windows 10 may change all of that.

Microsoft's approach to Windows 10 is different from previous versions of Windows. The changes are twofold. First, Windows 10 will see a constant stream of updates instead of the intermittent service packs of previous versions. Second, Windows 10 is "it" for Windows -- there will be no later release, no "Windows 11".

With no Windows 11, people running Windows 10 on their current hardware should be able to keep running it. Windows Vista forced a lot of people to purchase new hardware (which was one of the objections to Windows Vista); Windows 11 won't force that because it won't exist.

Also consider: Microsoft made it possible for just about every computer running Windows 8 or Windows 7 (or possibly Windows Vista) to upgrade to Windows 10. Thus, Windows 10 requires just as much hardware as those earlier versions.

What may be happening is that Microsoft has determined that Windows is as big as it is going to be.

This makes sense for desktop PCs and for servers running Windows.

Most servers running Windows will be in the cloud. (They may not be now, but they will be soon.) Cloud-based servers don't need to be big. With the ability to "spin up" new instances of a server, an overworked server can be given another instance to handle the load. A system can provide more capacity with more servers. It is not necessary to make the server bigger.

Desktop PCs, either in the office or at home, run a lot of applications, and these applications (in Microsoft's plan) are moving to the cloud. You won't need a faster machine to run the new version of Microsoft Word -- it runs in the cloud and all you need is a browser.

It may be that Microsoft thinks that PCs have gotten as powerful as they need to get. This is perhaps not an unreasonable assumption. PCs are powerful and can handle every task we ask of them.

As we shift our computing from PCs and discrete servers to the cloud, we eliminate the need for improvements to PCs and discrete servers. The long line of PC growth stops. Instead, growth will occur in the cloud.

Which doesn't mean that PCs will be "frozen in time", forever unchanging. It means that PC *growth* will stop, or at least slow to a glacial pace. This has already happened with CPU clock frequencies and bus widths. Today's CPUs are about as fast (in terms of clock speed) as CPUs from 2009. Today's CPUs use a 64-bit data path, which hasn't changed since 2009. PCs will grow, slowly. Desktop PCs will become physically smaller. Laptops will become thinner and lighter, and battery life will increase.

PCs, as we know them today, will stay as we know them today.

Sunday, October 18, 2015

More virtual, less machine

A virtual machine, in the end, is really an elaborate game of "let's pretend". The host system (often called a hypervisor) persuades an operating system that a physical machine exists, and the operating system works "as normal", driving video cards that do not really exist and responding to timer interrupts created by the hypervisor.

Our initial use of virtual machines was to duplicate our physical machines. Yet in the past decade, we have learned about the advantages of virtual machines, including the ability to create (and destroy) virtual machines on demand. These abilities have changed our ideas about computers.

Physical computers (that is, the real computers one can touch) often serve multiple purposes. A desktop PC provides e-mail, word processing, spreadsheets, photo editing, and a bunch of other services.

Virtual computers tend to be specialized. We often build virtual machines as single-purpose servers: web servers, database servers, message queue servers, ... you get the idea.

Our operating systems and system configurations have been designed around the desktop computer, the one serving multiple purposes. Thus, the operating system has to provide all possible services, including those that might never be used.

But with specialized virtual servers, perhaps we can benefit from a different approach. Perhaps we can use a specialized operating system, one that includes only the features we need for our application. For example, a web server needs an operating system and the web server software, and possibly some custom scripts or programs to assist the web server -- but that's it. It doesn't need to worry about video cards or printing. It doesn't need to worry about programmers and their IDEs, and it doesn't need to have a special debug mode for the processor.

Message queue servers are also specialized, and if they keep everything in memory then they need little in the way of file systems and reading or writing files. (They may need enough to bootstrap the operating system.)

All of our specialized servers -- and maybe some specialized desktop or laptop PCs -- could get along with a specialized operating system, one that uses the components of a "real" operating system, and just enough of those components to get the job done.

We could change policy management on servers. Our current arrangement sees each server as a little stand-alone unit that must receive policies and updates to those policies. That means that the operating system must be able to receive the policy updates. But we could change that. We could, upon instantiation of the virtual server, build in the policies that we desire. If the policies change, instead of sending an update, we create a new virtual instance of our server with the new policies. Think of it as "server management meets immutable objects".
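
A minimal sketch of that idea, in Python (the ServerImage type and the policy names are hypothetical, purely for illustration):

from dataclasses import dataclass, replace

@dataclass(frozen=True)          # immutable: instances cannot be modified in place
class ServerImage:
    name: str
    version: int
    policies: tuple              # policies baked in at build time

def with_policies(image, new_policies):
    # A policy change produces a new image (and, in practice, new server
    # instances); the old image is never patched, it is simply retired.
    return replace(image, version=image.version + 1, policies=tuple(new_policies))

web_v1 = ServerImage("web-server", 1, ("no-remote-root",))
web_v2 = with_policies(web_v1, ["no-remote-root", "password-rotation-90d"])

Deploying instances built from web_v2 and destroying those built from web_v1 replaces the "send an update" step entirely.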

The beauty of virtual servers is not that they are cheaper to run, it is that we can throw them away and create new ones on demand.

Languages become legacy languages because of their applications

Programming languages have brief lifespans and a limited set of destinies.

COBOL, invented in 1959, was considered passé in the microcomputer age (1977 to 1981, prior to the IBM PC).

Fortran, from the same era as COBOL, saw grudging acceptance and escaped the revulsion given COBOL, possibly because COBOL was specific to accounting and business applications and Fortran was useful for scientific and mathematical applications.

BASIC, created in the mid-1960s, was popular through the microcomputer and PC-DOS ages but did not transition to Microsoft Windows. Its eponymous successor, Visual Basic, was a very different language and it was adored in the Windows era but reviled in the .NET era.

Programming languages have one of exactly two fates: despised as "legacy" or forgotten. Yet it is not the language (its style, syntax, or constructs) that defines it as a legacy language -- it is the applications written in the language.

If a language doesn't become popular, it is forgotten. The languages A-0, A-1, B-0, Autocode, Flow-matic, and BACAIC are among dozens of languages that have been abandoned.

If a language does become popular, then we develop applications -- useful, necessary applications -- in it. Those useful, necessary applications eventually become "legacy" applications. Once enough of the applications written in a language are legacy applications, the language becomes a "legacy" language. COBOL suffered this fate. We developed business systems in it, and those systems are too important to abandon, yet also too complex to convert to another language, so COBOL lives on. But we don't build new systems in COBOL, and we, the later programmers, don't like COBOL.

The true mark of legacy languages is the open disparagement of them. When a sufficient number of developers refuse to work with them (the languages), then they are legacy languages.

Java and C# are approaching the boundary of "legacy". They have been around long enough for enough people to have written enough useful, necessary applications. These applications are now becoming legacy applications: large, difficult to understand, and written in the older versions of the language. It is these applications that will doom C# and Java to legacy status.

I think we will soon see developers declining to learn Java and C#, focussing instead on Python, Ruby, Swift, Haskell, or Rust.

Wednesday, October 14, 2015

The inspiration of programming languages

Avdi posted a blog about programming languages, bemoaning the lack of truly inspirational changes in languages. He says:
... most programming languages I’ve worked with professionally were born from much less ambitious visions. Usually, the vision amounted to “help programmers serve the computer more easily”, or sometimes “be like $older_language, only a little less painful”.
He is looking, instead, for:
systems that flowed out of a distinct philosophy and the intent to change the world
Which got me thinking: Which languages are truly innovative, and which are merely derivative, merely improvements on a previous language? They cannot all be derivatives, because there must have been some initial languages to start the process.

What inspires languages?

Some languages were built for portability. The designers wanted languages to run on multiple platforms. Many languages run on multiple platforms, but few were designed for that purpose. The languages for portability are:
  • Algol (a language designed for use in different cultural contexts)
  • NELIAC (a version of Algol, designed for machine-independent operation)
  • COBOL (the name comes from "Common Business Oriented Language")
  • Ada (specified as the standard for Department of Defense systems)
  • Java ("write once, run everywhere")
  • JavaScript (portable across web browsers)
Other languages were designed for proprietary use:
  • C# (a Java-like language specific to Microsoft)
  • Swift (a language specific to Apple)
  • Visual Basic and VB.NET (specific to Microsoft Windows)
  • SAS (proprietary to SAS Corporation)
  • VBScript (proprietary to Microsoft's Internet Explorer)
A few languages were designed to meet the abilities of new technologies:
  • BASIC (useful for timesharing; COBOL and FORTRAN were not)
  • JavaScript (useful for browsers)
  • Visual Basic (needed for programming Windows)
  • Pascal (needed to implement structured programming)
  • PHP (designed for building web pages)
  • JOSS (useful for timesharing)
BASIC and JOSS may have been developed simultaneously, and perhaps one influenced the other. (There are a number of similarities.) I'm considering them independent projects.

All of these are good reasons to build languages. Now let's look at the list of "a better version of X", where people designed languages to improve an existing language:
  • Assembly language (better than machine coding)
  • Fortran I (a better assembly language)
  • Fortran II (a better Fortran I)
  • Fortran IV (a better Fortran II -- Fortran III never escaped the development lab)
  • S (a better Fortran)
  • R (a better S)
  • Matlab (a better Fortran)
  • RPG (a better version of assembly language, initially designed to generate report programs)
  • FOCAL (a better JOSS)
  • BASIC (a better Fortran, suitable for timesharing)
  • C (a better version of 'B')
  • B (a better version of 'BCPL')
  • BCPL (a better version of 'CPL')
  • C++ (a better version of C)
  • Visual C++ (a version of C++ tuned for Windows, and therefore 'better' for Microsoft)
  • Delphi (a better version of Visual C++ and Visual Basic)
  • Visual Basic (a version of BASIC tuned for Windows)
  • Pascal (a better version of Algol)
  • Modula (a better version of Pascal)
  • Modula 2 (a better version of Modula)
  • Perl (a better version of AWK)
  • Python (a better version of ABC)
  • ISWIM (a better Algol)
  • ML (a better ISWIM)
  • OCaml (a better ML)
  • dBase II (a better RETRIEVE)
  • dBase III (a better dBase II)
  • dBase IV (a better dBase III)
  • Simula (a better version of Algol)
  • Smalltalk-71 (a better version of Simula)
  • Smalltalk-80 (a better version of Smalltalk-71)
  • Objective-C (a combination of C and Smalltalk)
  • Go (a better version of C)
Which is a fairly impressive list. It is also a revealing list. It tells us about our development of programming languages. (Note: the term "better" meant different things to the designers of different languages. "Perl is a 'better' AWK" does not (necessarily) use the same connotation as "Go is a 'better' C".)

We develop programming languages much like we develop programs: incrementally. One language inspires another. Very few languages are born fully-formed, and very few bring forth new programming paradigms.

There are a few languages that are truly innovative. Here are my nominees:

Assembly language: A better form of machine coding, but different enough in that it uses symbols, not numeric codes. That change makes it a language.

COBOL: The first high-level language as we think of them today, with a compiler and keywords and syntax that does not depend on column position.

Algol: The original "algorithmic language".

LISP: A language that is not parsed but read; the syntax is that of the already-parsed tree.

Forth: A language that uses 'words' to perform operations and lets one easily define new 'words'.

Eiffel: A language that used object-oriented techniques and introduced design-by-contract, a technique that is used by very few languages.

Brainfuck: A terse language that is almost impossible to read and sees little use outside of the amusement of programmers.

These are, in my opinion, the ur-languages of programming. Everything else is derived, one way or another, from these.

It is not necessary to change the world with every new programming language; we can make improvements by building on what exists. Derived languages are not the mashing of different technologies as shown in "Mad Max" and other dystopian movies. (Well, not usually.) They can be useful and they can be elegant.

Thursday, October 8, 2015

From multiprogramming to timesharing

Multiprogramming boosted the efficiency of computers, yet it was timesharing that improved the user experience.

Multiprogramming allowed multiple programs to run at the same time. Prior to multiprogramming, a computer could run one and only one program at a time. (Very similar to PC-DOS.) But multiprogramming was focussed on CPU utilization and not on user experience.

To be fair, there was little in the way of "user experience". Users typed their programs on punch cards, placed the deck in a drawer, and waited for the system operator to transfer the deck to the card reader for execution. The results would be delivered in the form of a printout, and users often had to wait hours for the report.

Timesharing was a big boost for the user experience. It built on multiprogramming, running multiple programs at the same time. Yet it also changed the paradigm. Multiprogramming let a program run until an input-output operation, and then switched control to another program while the first waited for its I/O operation to complete. It was an elegant way of keeping the CPU busy, and therefore improving utilization rates.

With timesharing, users interacted with the computer in real time. Instead of punch cards and printouts, they typed on terminals and got their responses on those same terminals. That change required a more sophisticated approach to the sharing of resources. It wouldn't do to allow a single program to monopolize the CPU for minutes (or even a single minute), as could happen under multiprogramming. Instead, the operating system had to frequently yank control from one program and give it to another, allowing each program to run a little bit in each "time slice".
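
To make the time-slice idea concrete, here is a minimal sketch in Python (purely illustrative; no real timesharing system was built this way, and the "programs" are just generators I invented for the example) of a round-robin scheduler that gives each program a fixed slice of work before pre-empting it:

from collections import deque

def program(name, total_steps):
    """A pretend program: each yield represents one unit of CPU work."""
    for step in range(1, total_steps + 1):
        yield f"{name}: step {step} of {total_steps}"

def round_robin(programs, slice_size=3):
    """Give each program a fixed time slice, then pre-empt it and move on."""
    ready = deque(programs)
    while ready:
        prog = ready.popleft()
        for _ in range(slice_size):        # the "time slice"
            try:
                print(next(prog))
            except StopIteration:          # program finished; drop it
                break
        else:
            ready.append(prog)             # slice used up; back of the line

round_robin([program("editor", 5), program("compiler", 7)])

Each program gets a little of the processor in turn, which is the effect a timesharing user sees: everyone appears to be running at once.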

Multiprogramming focussed inwards, on the efficiency of the system. Timesharing focussed outwards, on the user experience.

In the PC world, Microsoft focussed on the user experience with early versions of Windows. Windows 1.0, Windows 2, Windows 3.0, and Windows 95 all made great strides in the user experience. But other versions of Windows focussed not on the user experience but on the internals: security, user accounts, group policies, and centralized control. Windows NT, Windows 2000, Windows XP all contained enhancements for the enterprise but not for the individual user.

Apple has maintained focus on the user, improving (or at least changing) the user experience with each release of Mac OSX. This is what makes Apple successful in the consumer market.

Microsoft focussed on the enterprise -- and has had success with enterprises. But enterprises don't want cutting-edge user interfaces, or GUI changes (improvements or otherwise) every 18 months. They want stability. Which is why Microsoft has maintained its dominance in the enterprise market.

Yet nothing is constant. Apple is looking to make inroads into the enterprise market. Microsoft wants to get into the consumer market. Google is looking to expand into both markets. All are making changes to the user interface and to the internals.

What we lose in this tussle for dominance is stability. Be prepared for changes to the user interface, to update mechanisms, and to the basic technology.

Sunday, October 4, 2015

Amazon drops Chromecast and Apple TV

Amazon.com announced that it would stop selling Chromecast and Apple TV products, a move that has raised a few eyebrows. Some have talked about anti-trust actions.

I'm not surprised by Amazon.com's move, and yet I am surprised.

Perhaps an explanation is in order.

The old order saw Microsoft as the technology leader, setting the direction for the use of computers at home and in the office. That changed with Apple's introduction of the iPhone and later iPads and enhanced MacBooks. It also changed with Amazon.com's introduction of cloud computing services. Google's rise in search technologies and its introduction of Android phones and tablets was also part of the change.

The new order sees multiple technology companies and no clear leader. As each player moves to improve its position, it, from time to time, blocks other players from working with its technologies. Apple MacBooks don't run Windows applications, and Apple iPhones don't run Android apps. Android tablets don't run iOS apps. Movies purchased through Apple's iTunes won't play on Amazon.com Kindles (and you cannot even move them).

The big players are building walled gardens, locking in user data (documents, music, movies, etc.).

So it's no surprise that Amazon.com would look to block devices that don't serve its purposes, and in fact serve other walled gardens.

What's surprising to me is the clumsiness of Amazon.com's announcement. The move is a bold one, obvious in its purpose. Microsoft, Google, and Apple have been more subtle in their moves.

What's also surprising is Amazon.com's attitude. My reading of the press and blog entries is one of perceived arrogance.

Amazon.com is a successful company. They are well-respected for their sales platform (web site and inventory) and for their web services offerings. But they enjoy little in the way of customer loyalty, especially on the sales side. Arrogance is something they cannot afford.

Come to think of it, their sales organization has taken a few hits of late, mostly with employee relations. This latest incident will do nothing to win them new friends -- or customers.

It may not cost them customers, at least in the short term. But it is a strategy that I would recommend they reconsider.

Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

Early computers ran in "batch mode" - a non-interactive mode that often saw input arrive on punch cards or magnetic tape, instead of people typing at terminals (much less working at their own, smaller computers, as we do today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, etc. Each program was a "job" with its program, input data, and output data.

The advantage of batch mode processing is that the job runs as an independent unit and it can be scheduled. Your collection of programs could be planned, as each used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or more often during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they created. Those increases meant an increase in the size of data for processing, and that meant increased processing time to run their computer jobs.

If you have spare computing time, you simply run jobs longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make computers more efficient. One of the first methods was called "multiprogramming" and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. While these are all tasks that any modern operating system handles, in its day multiprogramming was a significant change.

It was also successful. It took the time spent waiting for input/output tasks and re-allocated it to processing. The result was an increase in available processing time, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
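
Here is a toy version of that idea in Python (again, purely illustrative; the job names and the generator trick are my own, not how any real operating system was built). Each "job" runs until it requests I/O, at which point it voluntarily hands the processor back to the scheduler:

from collections import deque

def job(name, work):
    """A pretend batch job: 'cpu' items just compute, 'io' items give up the CPU."""
    for kind in work:
        if kind == "io":
            # Requesting I/O: yield control and let the (separate) device work.
            yield f"{name}: waiting for I/O"
        else:
            print(f"{name}: computing")

def cooperative_scheduler(jobs):
    """Run each job until it gives up the processor by asking for I/O."""
    ready = deque(jobs)
    while ready:
        current = ready.popleft()
        try:
            print(next(current))     # runs until the job's next I/O request
            ready.append(current)    # I/O started; job goes to the back
        except StopIteration:
            pass                     # job finished; nothing to re-queue

cooperative_scheduler([
    job("inventory-update", ["cpu", "io", "cpu", "io"]),
    job("payroll-report", ["cpu", "cpu", "io", "cpu"]),
])

The key point is that the job, not the operating system, decides when to give up the processor; that is why this style is called "cooperative".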

Early versions of Windows (through Windows 3.x, and Windows 95 for its older 16-bit programs) used a similar technique to switch between programs.

Later operating systems used "pre-emptive task switching", giving programs small amounts of processing time and then suspending one program and activating another. This was the big change for Windows NT.

Multiprogramming was driven by cost reduction (or cost avoidance) and focussed on internal operations. It made computing more efficient in the sense that one got "more computer" for the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.

Sunday, September 27, 2015

The (eventual) rise of style-checkers

The IT world changes technology and changes practices. We have changed platforms from mainframes to personal computers, from desktop PCs to web servers, and now we're changing from web servers to cloud computing. We've changed programming languages from assembly to Cobol and Fortran, to Basic, to Pascal and C, to C++, to Java and C#. We're now changing to JavaScript and Python.

We change our coding styles, from "unstructured" to structured and from structured to object-oriented.

We also change our development processes. We started with separate tools (editors, compilers, debuggers) and then shifted to the 'IDE' - the integrated development environment. We added version control systems.

The addition of version control is an interesting study. We had the tools for some time, yet we (as an industry) did not use them immediately. I speculate that it was management that spurred the use of version control, and not individual programmers. Version control offers little benefit to the individual; it offers more to managers.

The use of version control systems allows for larger teams and larger projects. Without version control, team members must coordinate their changes very carefully. Files are kept in a common directory, and updates to those files must ensure a consistent set of source code. It is easy for two programmers to start working on different changes. Both begin by copying the common code to their private workspaces. They make changes, and the first one done copies his changes into the common workspace. When the second programmer finishes, he copies his changes to the common area, overwriting the changes from the first programmer. Thus, the work from the first programmer "disappears".
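
To see how easily the "last copy wins" problem occurs, here is a tiny Python sketch that simulates the scenario above (the programmer names and the file name are invented for illustration; no real version control tool is involved):

# The shared directory and the two private workspaces, modeled as dictionaries.
shared = {"report.c": "original code"}

# Both programmers begin by copying the common code to private workspaces.
alice_copy = dict(shared)
bob_copy = dict(shared)

# Each makes an independent change to the same file.
alice_copy["report.c"] = "original code + Alice's bug fix"
bob_copy["report.c"] = "original code + Bob's new feature"

# Alice finishes first and copies her work back to the common area.
shared.update(alice_copy)

# Bob finishes later and does the same, overwriting Alice's change.
shared.update(bob_copy)

print(shared["report.c"])   # "original code + Bob's new feature"
                            # Alice's bug fix has silently disappeared.

A version control system prevents this by refusing the second update (or forcing a merge) instead of silently overwriting the first.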

These problems can be avoided by carefully checking prior to copying to the common area. For a small team (fewer than fifteen people) this requires effort and discipline, but it is possible. For a larger team, the coordination effort is daunting.

An additional benefit of version control systems: the illusion of accountability. With controlled access and logged activity, project managers could see who made which changes.

Project managers, not individual developers, changed the process to use version control. That's as it should be: project managers are the people to define the process. (They can use input from developers, however.)

We have style checking programs for many languages. A quick search shows style checkers for C, C++, C#, Java, Perl, Python, Ruby, Ada, Cobol, and even Fortran. Some checkers are simple affairs, checking nothing more than indentation and line length. Others are comprehensive, identifying overly complex modules.

Yet style checkers are not used in a typical development project. The question is: when will style-check programs become part of the normal tool set?

Which is another way of asking: when (if ever) will a majority of development teams use style-check programs as part of their process?

I think that the answer is: yes, eventually, but not soon.

The same logic for version control systems has to apply to style checkers. It won't be individuals who bring style checkers into the process; it will be project managers.

Which means that the project managers will have to perceive some benefit from style checkers. There is a cost to using them. It is a change to the process, and some (many) people are resistant to change. Style checkers enforce a certain style on the code, and some (many) developers prefer their own style. Style checkers require time to run, time to analyze the output, and time to change the code to conform, all of which take time away from the main task of writing code.

The benefits of style checkers are somewhat hazy, and less clear than version control systems. Version control systems fixed a clear, repeating problem ("hey, where did my changes go?"). Style checkers do no such thing. At best, they make code style consistent across the project, which means that the code is easier to read, changes are easier to make, and developers can move from one section of the code to another. Style checkers invest effort now for a payback in the future.
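
As a sense of scale, the "simple affairs" mentioned earlier really can be small. Here is a minimal sketch of such a checker in Python; it flags only overlong lines and trailing whitespace, and it is an illustration, not a replacement for real tools such as pylint or pycodestyle:

import sys

MAX_LINE_LENGTH = 80   # an arbitrary limit for this sketch

def check_file(path):
    """Report lines that are too long or that carry trailing whitespace."""
    problems = 0
    with open(path, encoding="utf-8") as source:
        for number, line in enumerate(source, start=1):
            line = line.rstrip("\n")
            if len(line) > MAX_LINE_LENGTH:
                print(f"{path}:{number}: line exceeds {MAX_LINE_LENGTH} characters")
                problems += 1
            if line != line.rstrip():
                print(f"{path}:{number}: trailing whitespace")
                problems += 1
    return problems

if __name__ == "__main__":
    total = sum(check_file(path) for path in sys.argv[1:])
    sys.exit(1 if total else 0)

The comprehensive checkers add analyses such as complexity measures, but the workflow cost described above is the same: someone has to run the tool, read its output, and act on it.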

I suspect that the adoption of style checkers will be slow, and while led by project managers, it will be encouraged by developers. Or some developers. I suspect that the better developers will be more comfortable with style checkers and will want to work on projects with style checkers. It may be that development teams will form two groups: one with style checkers and one without. The development teams that use style checkers will tend to hire other developers who use style checkers, and the development teams that don't use style checkers will tend to hire developers who don't use style checkers. Why? Because a developer in the "wrong" environment will be uncomfortable with the process - and the code. (The same thing happened with version control systems.)

For style checkers to be adopted, I see the following preconditions:
  • The style checkers have to exist (at least for the team's platform and language)
  • The style checkers have to be easy to use
  • The recommendations from style checkers have to be "tuneable" - you can't be swamped with too many messages at once
  • The team has to be willing to improve the code
  • The project management has to be willing to invest time now for payment later
We have the tools, and they are easy to use (well, some of them). I think teams are willing to improve the code. What we need now is a demonstration that well-maintained code is easier to use in the long term.

Thursday, September 24, 2015

An imaginary Windows version of the Chromebook

Acer and HP have cloudbooks - laptop computers outfitted with Windows and a browser - but they are not really the equivalent of a Chromebook.

A Chromebook is a lightweight laptop computer (in both physical weight and computing power) equipped with a browser and just enough of an operating system to run the browser. (And some configuration screens. And the ssh program.) As such, they have minimal administrative overhead.

Cloudbooks - the Acer and HP versions - are lightweight laptops equipped with the full Windows operating system. Since they have the entire Windows operating system, they have the entire Windows administrative "load".

Chromebooks have been selling well (possibly due to their low prices). Cloudbooks have been selling... well, I don't know. There are only a few models from Acer and a few models from HP; far fewer than the plethora of Chromebooks from numerous manufacturers. My guess is that they are selling in only modest quantities.

Would a true "Windows Chromebook" sell? Possibly. Let's imagine one.

It would have to use a different configuration than the current cloudbooks. It would have to be a lightweight laptop with just enough of an operating system to run the browser. A Windows cloudbook would need a browser (let's pick the new Edge browser) and a stripped-down version of Windows that is just enough to run it.

I suspect that the components of Windows are cross-dependent and one cannot easily build a stripped-down version. Creating such a version of Windows would require the re-engineering of Windows. But since this is an imaginary device, let's imagine a smaller, simpler version of Windows.

This Windows cloudbook would have to match the price of the Chromebooks. That should be possible for hardware; the licensing fees for Windows may push the price upwards.

Instead of locally-installed software, everything would run in the browser. To compete with Google Docs, our cloudbook would have Microsoft Office 365.

But then: Who would buy it?

I can see five possible markets: enterprises, individual professionals, home users, students, and developers.

Enterprises could purchase cloudbooks and issue them to employees. This would reduce the expenditures for PC equipment but might require different licenses for professional software. Typical office jobs that require Word and Excel could shift to the web-based versions of those products. Custom software may have to run in virtual desktops accessed through the company's intranet. Such a configuration may make it easier for a more mobile workforce, as applications would run from servers and data would be stored on servers, not local PCs.

Individual professionals might prefer a cloudbook to a full Windows laptop. Then again, they might not. (I suspect most independent professionals using Windows are using laptops and not desktops.) I'm not sure what value the professional receives by switching from laptop to cloudbook. (Except, maybe, a lighter and less expensive laptop.)

Home users with computers will probably keep using them, and purchase a cloudbook only when they need it. (Such as when their old computer dies.)

Students could use cloudbooks as easily as they use Chromebooks.

Developers might use cloudbooks, but for them they would need tools available in their browser. Microsoft has development tools that run in the browser, and so do other companies.

But for any of these users, I see them using a Chromebook just as easily as using a Windows cloudbook. Microsoft Office 365 runs in the Chrome and Firefox browsers on Mac OSX and on Linux. (There are apps for iOS and Android, although limited in capabilities.)

There is no advantage to using a Windows cloudbook  -- even our imaginary cloudbook -- over a Chromebook.

Perhaps Microsoft is working on such an advantage.

Their Windows RT operating system was an attempt at a reduced-complexity configuration suitable for running a tablet (the ill-fated Surface RT). But Microsoft departed from our imagined configuration in a number of ways. The Surface RT:

- was released before Office 365 was available
- used special versions of Word and Excel
- had a complex version of Windows, reduced in size but still requiring administration

People recognized the Surface RT for what it was: a low-powered device that could run Word and Excel and little else. It had a browser, and it had the ability to run apps from the Microsoft store, but the store was lacking. And while limited in use, it still required administration.

A revised cloudbook may get a warmer reception than the Surface RT. But it needs to focus on the browser, not locally-installed apps. It has to have a simpler version of Windows. And it has to have something special to appeal to at least one of the groups above -- probably the enterprise group.

If we see a Windows cloudbook, look for that special something. That extra feature will make cloudbooks successful.

Sunday, September 20, 2015

Derivatives of programming languages

Programmers are, apparently, unhappy with their tools. Specifically, their programming languages.

Why do I say that? Because programmers frequently revise or reinvent programming languages.

If we start with symbolic assembly language as the first programming language, we can trace the development of other languages. FORTRAN, in its beginning, was a very powerful macro assembler (or something like it). Algol was a new vision of a programming language, in part influenced by FORTRAN. Pascal was developed as a simpler language, as compared to Algol.

Changes to languages come in two forms. One is an enhancement, a new version of the same language. For example, Visual Basic had multiple versions, yet it was essentially the same language. FORTRAN changed, from FORTRAN IV to FORTRAN 66 to Fortran 77 (and later versions).

The other form is a new, separate language. C# was based on Java, yet was clearly not Java. Modula and Ada were based on Pascal, yet are quite different. C++ was a form of C that added object-oriented programming.

Programmers are just not happy with their languages. Over the half-century of programming, we have had hundreds of languages. Only a small fraction have gained popularity, yet we keep tuning them, adjusting them, and deriving new languages from them. And programmers are quite willing to revamp an existing language to meet the needs of the day.

There are two languages that are significant exceptions: COBOL and SQL. Neither of these has been used (to my knowledge) to develop other languages. At least not popular ones. Each has had new versions (COBOL-61, COBOL-74, COBOL-85, SQL-86, SQL-89, SQL-92, and so on) but none have spawned new, different languages.

There have been many languages that have had a small following and never been used to create a new language. It's one thing for a small language to live and die in obscurity. But COBOL and SQL are anything but obscure. They drive most business transactions in the world. They are used in all organizations of any size. One cannot work in the IT world without being aware of them.

So why is it that they have not been used to create new languages?

I have a few ideas.

First, COBOL and SQL are popular, capable, and well-understood. Both have been developed for decades, they work, and they can handle the work. There is little need for a "better COBOL" or a "better SQL".

Second, COBOL and SQL have active development communities. When a new version of COBOL is needed, the COBOL community responds. When a new version of SQL is needed, the SQL community responds.

Third, the primary users of COBOL and SQL (businesses and governments) tend to be large and conservative. They want to avoid risk. They don't need to take a chance on a new idea for database access. They know that new versions of COBOL and SQL will be available, and they can wait for a new version.

Fourth, COBOL and SQL are domain-specific languages, not general-purpose. They are suited to financial transactions. Hobbyists and tinkerers have little need for COBOL or a COBOL-like language. When they experiment, they use languages like C or Python ... or maybe Forth.

The desire to create a new language (whether brand new or based on an existing language) is a creative one. Each person is driven by his own needs, and each new language probably has different root causes. Early languages like COBOL and FORTRAN were created to let people be more productive. The urge to help people be more productive may still be there, but I think there is a bit of fun involved. People create languages because it is fun.

Wednesday, September 16, 2015

Collaboration is an experiment

The latest wave in technology is collaboration. Microsoft, Google, and even Apple have announced products to let multiple people work on documents and spreadsheets at the same time. For them, collaboration is The Next Big Thing.

I think we should pause and think before rushing into collaboration. I don't say that it is bad. I don't say we should avoid it. But I will say that it is a different way to work, and we may want to move with caution.

Office work on PCs (composing and editing documents, creating spreadsheets, preparing presentations) has been, due to technology, solitary work. The first PCs had no networking capabilities, so work had to be individual. Even with the hardware and basic network support in operating systems, applications were designed for single users.

Yet it was not technology alone that made work solitary. The work was solitary prior to PCs, with secretaries typing at separate desks. Offices and assignments were designed for independent tasks, possibly out of a desire for efficiency (or efficiency as perceived by managers).

Collaboration (on-line, real-time, multiple-person collaboration as envisioned in this new wave of tools) is a different way of working. For starters, multiple people have to work on the same task at the same time. That implies that people agree on the order in which they perform their tasks, and the time they devote to them (or at least the order and time for some tasks).

Collaboration also means the sharing of information. Not just the sharing of documents and files, but the sharing of thoughts and ideas during the composition of documents.

We can learn about collaboration from our experiences with pair programming, in which two programmers sit at one computer and develop a program. The key lessons I have learned are:

  • Two people can share information effectively; three or more are less effective
  • Pair program for a portion of the day, not the entire day
  • Programmers share with multiple techniques: by talking, pointing at the screen, and writing on whiteboards
  • Some pairs of people are more effective than others
  • People need time to transition from solitary-only to pair-programming

I think the same lessons will apply to most office workers.

Collaboration tools may be effective with two people, but more people working on a single task may be, in the end, less effective. Some people may be "drowned out" by "the crowd".

People will need ways to share their thoughts, beyond simply typing on the screen. Programmers working together can talk; people working in a shared word processor will need some other communication channel such as a phone conversation or chat window.

Don't expect people to collaborate for the entire day. It may be that some individuals are better at working collaboratively than others, due to their psychological make-up. But those individuals will have been "selected out" of the workforce long ago, due to the solitary nature of office work.

Allow for transition time to the new technique of collaborative editing. Workers have honed their skills at solitary composition over the years. Changing to a new method requires time -- and may lead to a temporary loss of productivity. (Just as transitioning from typewriters to word processors had a temporary loss of productivity.)

Collaboration is a new way of working. There are many unknowns, including its eventual effect on productivity. Don't avoid it, but don't assume that your team can adopt it overnight. Approach it with open eyes and gradually, and learn as you go.

Sunday, September 13, 2015

We program to the interface, thanks to Microsoft

Today's markets for PCs, for smart phones, and for tablets show a healthy choice of devices. PCs, phones, and tablets are all available from a variety of manufacturers, in a variety of configurations. And all run just about everything written for the platform (Android tablets run all Android applications, Windows PCs run all Windows applications, etc.).

We don't worry about compatibility.

It wasn't always this way.

When IBM delivered the first PC, it provided three levels of interfaces: hardware, BIOS, and DOS. Each level was capable of some operations, but the hardware level was the fastest (and some might argue the most versatile).

Early third-party applications for the IBM PC were programmed against the hardware level. This was an acceptable practice, as the IBM PC was considered the standard for computing, against which all other computers were measured. Computers from other vendors used different hardware and different configurations and were thus not compatible. The result was that the third-party applications would run on IBM PCs and IBM PCs only, not on systems from other vendors.

Those early applications encountered difficulties as IBM introduced new models. The IBM PC XT was very close to the original PC, and just about everything ran -- except for a few programs that made assumptions about the amount of memory in the PC. The IBM PC AT used a different keyboard and a different floppy disk drive, and some software (especially those that used copy-protection schemes) would not run or sometimes even install. The EGA graphics adapter was different from the original CGA graphics adapter, and some programs failed to work with it.

The common factor in the failures of these programs was their design: they all communicated directly with the hardware and made assumptions about it. When the hardware changed, their assumptions were no longer valid.

We (the IT industry) eventually learned to write to the API, the high-level interface, and not address hardware directly. This effort was due to Microsoft, not IBM.

It was Microsoft that introduced Windows and won the hearts of programmers and business managers. IBM, with its PS/2 line of computers and OS/2 operating system, struggled to maintain control of the enterprise market, but failed. I tend to think that IBM's dual role in supplying hardware and software contributed to that failure.

Microsoft supplied only software, and it sold almost no hardware. (It did provide things such as the Microsoft Mouse and the Microsoft Keyboard, but these saw modest popularity and never became standards.) With its focus on software, Microsoft made its operating system run on various hardware platforms (including processors such as DEC's Alpha and Intel's Itanium) and Microsoft focussed on drivers to provide functionality. Indeed, one of the advantages of Windows was that application programmers could avoid the effort of supporting multiple printers and multiple graphics cards. Programs would communicate with Windows and Windows would handle the low-level work of communicating with hardware. Application programmers could focus on the business problem.

The initial concept of Windows was the first step in moving from hardware to an API.

The second step was building a robust API, one that could perform the work necessary. Many applications on PCs and DOS did not use the DOS interface because it was limited, compared to the BIOS and hardware interfaces. Microsoft provided capable interfaces in Windows.

The third step was the evolution of Windows. Windows 3 evolved into Windows 3.1 (which included networking), Windows 95 (which included a new visual interface), and Windows 98 (which included support for USB devices). Microsoft also developed Windows NT (which provided pre-emptive multitasking) and later Windows 2000, and Windows XP.

With each generation of Windows, less and less of the hardware (and DOS) was available to the application program. Programs had to move to the Windows API (or a Microsoft-supplied framework such as MFC or .NET) to keep functioning.

Through all of these changes, Microsoft provided specifications to hardware vendors who used those specifications to build driver programs for their devices. This ensured a large market of hardware, ranging from computers to disk drives to printers and more.

We in the programming world (well, the Microsoft programming world) think nothing of "writing to the interface". We don't look to the hardware. When faced with a new device, our first reaction is to search for device drivers. This behavior works well for us. The early marketing materials for Windows were correct: application programmers are better suited to solving business problems than working with low-level hardware details. (We *can* deal with low-level hardware, mind you. It's not about ability. It's about efficiency.)

Years from now, historians of IT may recognize that it was Microsoft's efforts that led programmers away from hardware and toward interfaces.

Thursday, September 10, 2015

Apple and Microsoft do not compete

The tech press has been building the case that Apple is "going for the enterprise market" and that Microsoft is "responding". I'm not convinced of this argument.

The evidence is thin. Exhibit A for Apple is the new iPad Pro, which has a larger screen, a faster processor, and an optional keyboard. The thinking is that this new iPad Pro competes with Microsoft's Surface tablet -- which it does -- and therefore Apple is in direct competition with Microsoft.

Exhibit B is Microsoft's partnership with Dell to sell computers to enterprises. Here, the thinking is (apparently) that Microsoft would only make such an agreement if it felt threatened by Apple.

Such competition may make for good news, but it makes little sense. Apple and Microsoft are in two different markets. Apple sells hardware that happens to come with some software; Microsoft sells software that happens to run on hardware. It's true that Microsoft is moving from the single-sale model to a subscription model, but they are still selling software. (Primarily. Microsoft does sell the Surface tablets. They are a tiny portion of Microsoft's sales.)

Microsoft and Apple are building a synergistic system, one which sees Apple selling hardware and Microsoft selling software. Enterprises may wish to purchase iPads and iPhones for their employees and still use Microsoft apps on those devices. Enterprises have a long history with Microsoft and lots of documents and spreadsheets in Word and Excel. Apple may have Pages and Numbers (its competing word processor and spreadsheet), but they are not the same as Microsoft's Word and Excel. The future (for enterprises) may very well be Microsoft apps running on Apple hardware.

One might assume that Apple has the upper hand in such a hardware/software combination. I disagree. While apps run in the iOS ecosystem at Apple's whim -- Apple can revoke any app at any time -- such a move would not benefit Apple. Enterprises are bound to their data first, their software second, and their hardware last. Apple could "pull the plug" on Microsoft apps, hoping that enterprises would switch to Apple's products, but I think the reaction would be different. Enterprise managers would be angry, and the target of their anger would be Apple. They would view Apple as selfish and dangerous -- and purchases of Apple equipment would drop to near zero.

Such a situation does not mean that Microsoft can be a bully. They have improved their reputation by expanding Microsoft software offerings to the iOS and Android platforms and maintaining a long relationship with the Mac OSX platform. They cannot arbitrarily "pull the plug" on their iOS apps. Such a move would be frowned upon. (They could, however, discontinue their support of iOS in response to arbitrary moves by Apple, such as a change in iTunes charges.)

Apple and Microsoft are not in direct competition. (Despite competing hardware and software products.) They stand to gain much more by civilized behavior.

Monday, September 7, 2015

New programming languages

Sometimes we create programming languages to solve problems in programming. (For example, we created Pascal to use structured programming techniques.) Sometimes we create programming languages to take advantage of new hardware or system platforms.

Consider BASIC. It was made possible (some might say necessary) by the invention of timesharing systems.

Prior to timesharing, computing had to be done local to the processor. Computers were the stereotypical large, centralized (expensive) box one sees in movies from the 1950s and 1960s. Programs and data were supplied on punch cards or magnetic tape, which had to be processed on large (also expensive) readers. You could run a program, but only with the assistance of the system operator who would actually schedule the program for execution and load your input cards or tapes. There was no such thing as a "personal" computer.

Timesharing brought computing to remote terminals, small (relatively inexpensive) printing devices that let one issue commands to the computer without the assistance of an operator. The existing languages at the time (COBOL and FORTRAN) were not suitable for such an environment because they assumed the existence of card readers and tape drives.

BASIC was inspired by FORTRAN, but it allowed for self-contained programs which could be modified by the user. The programs are self-contained in that they can include their data; COBOL and FORTRAN (and other languages) require data from an external source. With a self-contained program, no system operator was needed! (Well, not after the system operator started the timesharing service.) Users could connect, enter their programs (data included), run them, and see the results. This combination of capabilities, all in one package, required a new language.

BASIC is not alone in being constructed in response to a system platform. There are others.

SQL was created for databases. We had the databases, and we needed a common, powerful language to manipulate them. Prior to SQL, each database had its own API. Moving from one database to another was a major effort -- so large that often people stayed with their database rather than switch.

JavaScript was created for web browsers. We had web pages and web browsers, and we wanted a way to manipulate objects on the web page. JavaScript (later with HTML enhancements and CSS) gave us the power to build complex web pages.

These languages (BASIC, SQL, and JavaScript) all filled a vacuum in computing. They also have had long periods of popularity. BASIC's popularity began shortly after its creation, and it moved from timesharing to microcomputers such as the Radio Shack TRS-80 and the Commodore 64. It was part of the original IBM PC, baked into the ROM so you could run BASIC without DOS (or disks!). SQL has been popular since the 1980s. JavaScript is in all of the major web browsers.

BASIC was challenged by other languages on small systems (FORTRAN, APL, Pascal, PL/M, C, and even COBOL). We eventually shifted from BASIC to Visual Basic and C++ (and later, Java, Perl, and Python), but only after PCs became large enough to support those languages; at that point, a program that contained its own data, and a language that contained its own editor, were no longer needed. SQL remains popular, and no other languages have challenged it as the interface to databases.

Perhaps their popularity was due to the fact that these languages were designed to meet the needs of the specific platform. Perhaps they were so well designed, such a good fit, that other languages could not dislodge them. Perhaps we can expect JavaScript to have a long, productive life too. It has been challenged by CoffeeScript and TypeScript, but neither has made a dent in the popularity of JavaScript.