Sunday, November 22, 2015

The Real Problem with the Surface RT

When Microsoft introduced the Surface RT, people responded with disdain. It was a Windows tablet with a limited version of Windows. The tablet could run specially-compiled applications and a few "real" Windows applications: Internet Explorer, Word, Excel, and PowerPoint.

Many folks, including Microsoft, believe that the problem with the Surface RT was that it was underpowered. It used an ARM chip, not Intel, for the processor. The operating system was not the standard Windows but Windows RT, compiled for the ARM processor and excluding the .NET framework.

Those decisions meant that the millions (billions?) of existing Windows applications could not run on the Surface RT. IE, Word, Excel, and PowerPoint ran because Microsoft built special versions of those applications.

The failure of the Surface RT was not in the design, but in the expectations of users. People, corporations, and Microsoft all expected the Surface RT to be another "center of computing" -- a device that provided computing services. It could have been -- Microsoft provided tools to develop applications for it -- but people were not willing to devote the time and effort to design, code, and test applications for an unproven platform.

The Surface RT didn't have to be a failure.

What made the Surface RT a failure was not that it was underpowered. What made it a failure was that it was overpowered.

Microsoft's design allowed for applications. Microsoft provided the core Office applications, and provided development tools. This led to the expectation that the tablet would host applications and perform computations.

A better Surface RT would offer less. It would have the same physical design. It would have the same ARM processor. It might even include Windows RT. But it would not include Word, Excel, or PowerPoint.

Instead of those applications, it would include a browser, a remote desktop client, and SSH. The browser would not be Internet Explorer but Microsoft's new Edge browser, modified to allow for plug-ins (or extensions, or whatever we want to call them).

Instead of a general-purpose computing device, it would offer access to remote computing. The browser allows access to web sites. The remote desktop client allows access to virtual desktops on remote servers. SSH allows for access to terminal sessions, including those on GUI-less Windows servers.
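
To make that concrete, here is a rough sketch -- mine, not Microsoft's -- of the kind of remote terminal session the SSH piece would provide. It is written in Python using the paramiko library; the server name and credentials are placeholders, and the only assumption is that the remote server accepts SSH connections.

    # A sketch, not a product spec: open a terminal session on a remote
    # server over SSH and run one command there. All of the real computing
    # happens on the far end; the local device only displays the result.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("server.example.com", username="admin", password="secret")  # placeholders

    # "systeminfo" is the sort of command a GUI-less Windows server understands.
    stdin, stdout, stderr = client.exec_command("systeminfo")
    print(stdout.read().decode())

    client.close()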

Such a device would offer access to the new, cloud-based world of computing. The name "Surface RT" has been tainted, so for marketing purposes a new name is needed. Perhaps something like "Surface Edge" or "Edgebook" or "Slab" (given Microsoft's recent fascination with four-character names like "Edge", "Code", and "Sway").

A second version could allow for apps, much like a Windows phone or iPhone or Android tablet.

I see corporations using the "Edgebook" because of its connectivity with Windows servers. I'm not sure that individual consumers would want one, but then Microsoft is focused on the corporate market.

It just might work.

Apple and Microsoft do sometimes agree

In the computing world, Apple and Microsoft are often considered opposites. Microsoft makes software; Apple makes hardware (primarily). Microsoft sells to enterprises; Apple sells to consumers. Microsoft products are ugly and buggy; Apple products are beautiful and "it just works".

Yet they do agree on one thing: The center of the computing world.

Both Apple and Microsoft have built their empires on local, personal-size computing devices. (I would say "PCs" but then the Apple fans would shout "MacBooks are not PCs!" and we don't need that discussion here.)

Microsoft's strategy has been to enable PC users, both individual and corporate. It supplies the operating system and application programs. It supplies software for coordinating teams of computer users (Active Directory, Exchange, Outlook, etc.). It supplies office software (word processor, spreadsheet), development tools (Visual Studio, among others), and games. At the center of the strategy is the assumption that the PC will be a computing engine.

Apple's strategy has also been to enable users of Apple products. It designs computing products such as the MacBook, the iMac, the iPad, and the iPhone. Like Microsoft, the center of its strategy is the assumption that these devices will be computing engines.

In contrast, Google and Amazon take a different approach. They offer computing services in the cloud. For them, the PCs and tablets and phones are not centers of computing; they are sophisticated input-output devices that feed the computing centers.

That Microsoft's and Apple's strategies revolve around the PC is not an accident. They were born in the microcomputing revolution of the 1970s, and in those days there was no cloud, no web, no internet. (Okay, technically there *was* an internet, but it was limited to a very small number of users.)

Google and Amazon were built in the internet age, and their business strategies reflect that fact. Google provides advertising, search technology, and cloud computing. Amazon started by selling books (on the web) and has moved on to selling everything (still on the web) and to cloud computing (its AWS offerings).

Google's approach to computing allows it to build Chromebooks, light, low-powered laptops that have just enough operating system to run the Chrome browser. Everything Google offers is on the web, accessible with merely a browser.

Microsoft's PC-centric view makes it difficult to build a Windows version of a Chromebook. While Google can create Chrome OS as a derivative of Linux, Microsoft is stuck with Windows. Creating a light version of Windows is not so easy -- Windows was designed as a complete entity, not as a partitioned, shrinkable thing. Thus, a Windows Cloudbook must run Windows and be a center of computing, which is quite different from a Chromebook.

Yet Microsoft is moving to cloud computing. It has built an impressive array of services under the Azure name.

Apple's progress towards cloud computing is less obvious. It offers storage services called iCloud, but their true cloud nature is undetermined. iCloud may truly be based on cloud technology, or it may simply be a lot of servers. Apple must be using data centers to support Siri, but again, those servers may be cloud-based or may simply be servers in a data center. Apple has not been transparent in this.

Notably, Microsoft sells developer tools for its cloud-based services and Apple does not. One cannot, using Apple's tools, build and deploy a cloud-based app into Apple's cloud infrastructure. Apple remains wedded to the PC (okay, MacBook, iMac, iPad, and iPhone) as the center of computing. One can build apps for Mac OS X and iOS that use other vendors' cloud infrastructures, just not Apple's.

For now, Microsoft and Apple agree on the center of the computing world. For both of them, it is the local PC (running Windows, Mac OS X, or iOS). But that agreement will not last, as Microsoft moves to the cloud and Apple remains on the PC.

Wednesday, November 11, 2015

Big changes happen early

Software has a life cycle: It is born, it grows, and finally dies. That's not news, or interesting. What is interesting is that the big changes in software happen early in its life.

Let's review some software: PC-DOS, Windows, and Visual Basic.

PC-DOS saw several versions, from 1.0 to 6.0. There were intermediate versions, such as versions 3.3 and 3.31, so there were more than six versions.

Yet the big changes happened early. The transition from 1.0 to 2.0 saw big changes in the API, allowing new device types and especially subdirectories. Moving from version 1.0 to 2.0 required almost a complete re-write of an application. Moving applications from version 2.0 to later versions required changes, but not as significant. The big changes happened early.

Windows followed a similar path. Moving from Windows 1 to Windows 2 was a big deal, as was moving from Windows 2 to Windows 3. The transition from Windows 3 to Windows NT was big, as was the change from Windows 3.1 to Windows 95, but later changes were small. The big changes happened early.

Visual Basic versions 1, 2, and 3 all saw significant changes. Visual Basic 4 had some changes but not as many, and Visual Basic 5 and 6 were milder. The big changes happened early. (The change from VB6 to VB.NET was large, but that was a change to another underlying platform.)

There are other examples, such as Microsoft Word, Internet Explorer, and Visual Studio. The effect is not limited to Microsoft. Lotus 1-2-3 followed a similar arc, as did dBase, R:Base, the Android operating system, and Linux.

Why do big changes happen early? Why do the big jumps in progress occur early in a product's life?

I have two ideas.

One possibility is that the makers and users of an application have a target in mind, a "perfect form" of the application, and each generation of the product moves closer to that ideal form. The first version is a first attempt, and successive versions improve upon previous versions. Over the life of the application, each version moves closer to the ideal.

Another possibility is that changes to an application are constrained by the size of the user population. A product with few users can see large changes; a product with many users can tolerate only minor changes.

Both of these ideas explain the effect, yet they both have problems. The former assumes that the developers (and the users) know the ideal form and can move towards it, albeit in imperfect steps (because one never arrives at the perfect form). My experience in software development allows me to state that most development teams (if not all) are not aware of the ideal form of an application. They may think that the first version, or the current version, or the next version is the "perfect" one, but they rarely have a vision of some far-off version that is ideal.

The latter has the problem of evidence. While many applications grow their user bases over time and also shrink their changes over time, not all do. Two examples are Facebook and Twitter. Both have grown (to large user bases) and both have seen significant changes.

A third possibility, one that seems less theoretical and more mundane, is that as an application grows, and its code base grows, it is harder to make changes. A small version 1 application can be changed a lot for version 2. A large version 10 application has oodles of code and oodles of connected bits of code; changing any bit can cause lots of things to break. In that situation, each change must be reviewed carefully and tested thoroughly, and those efforts take time. Thus, the older the application, the larger the code base and the slower the changes.

That may explain the effect.

Some teams go to great lengths to keep their code well-organized, which allows for easier changes. Development teams that use Agile methods will re-factor code when it becomes "messy" and reduce the couplings between components. Cleaner code allows for bigger and faster changes.
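
To show the flavor of that kind of change, here is a small, made-up Python example (the class names are invented): a report that builds its own database object is re-worked to accept any source of rows, which loosens the coupling between the two pieces without changing what the report computes.

    # Before: Report is tightly coupled to one concrete database class.
    class SalesDatabase:
        def read_rows(self):
            return [("widget", 3), ("gadget", 5)]   # stand-in for a real query

    class Report:
        def __init__(self):
            self.db = SalesDatabase()               # hard-wired dependency

        def total(self):
            return sum(qty for _, qty in self.db.read_rows())

    # After: the dependency is passed in, so the report works with any source
    # of rows (a database, a file, a test fixture), and each piece can change
    # without dragging the other along.
    class ReportRefactored:
        def __init__(self, row_source):
            self.row_source = row_source            # injected dependency

        def total(self):
            return sum(qty for _, qty in self.row_source.read_rows())

    print(Report().total())                           # 8
    print(ReportRefactored(SalesDatabase()).total())  # still 8 -- behavior unchanged

The report computes the same total before and after; what changed is how much of the rest of the system must be touched (and re-tested) the next time either piece needs to change.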

If changes are constrained not by large code but by messy code, then as more development teams use Agile methods (and keep their code clean) we will see more products with large changes not only early in the product's life but through the product's life.

Let's see what happens with cloud-based applications. These are distributed by nature, so there is already an incentive for smaller, independent modules. Cloud computing is also younger than Agile development, so all cloud-based systems could have been (I stress the "could") developed with Agile methods. It is likely that some were not, but it is also likely that many were -- more than desktop applications or web applications.

Thursday, November 5, 2015

Fleeing to less worse programming languages

My career in programming has seen a number of languages. I moved from one language to another, always moving to a better language than the previous one.

I started with BASIC. After a short detour with assembly language and FORTRAN, I moved to Pascal, and then C, and then C++, which was followed by Java, C#, Perl, Ruby, and Python.

My journey has paralleled the popularity of programming languages in general. We as an industry started with assembly language, then moved to COBOL and FORTRAN, followed by BASIC, Pascal, C, C++, Visual Basic, Java, and C#.

There have been other languages: PL/I, Ada, Forth, dBase, SQL, to name a few. Each has had some popularity (SQL still enjoys it).

We move from one language to another. But why do we move? Do we move away from one or move to a better language?

BASIC was a useful language, but it had definite limitations. It was interpreted, so performance was poor and there was no way to protect source code.

Pascal was compiled, so it had better performance than BASIC, and you could distribute the executable and keep the source code private. Yet it was fractured into multiple distributions, and there was no good way to build large systems.

Each language had its good points, but it also had limits. Moving to the next language meant moving to a language that was better in that it didn't suck as much as the former.

The difference between moving to a better language and moving away from a problematic language is not insignificant. It tells us about the future.

If people are moving away from languages because of problems, then when we arrive at programming languages that have no problems (or no significant problems) then people will stay with them. Switching programming languages has a cost, so the benefit of the new language must outweigh the effort to switch.

In other words, once we get a language that is good enough, we stop.

I'm beginning to think that Java and C# may be good enough. They are general-purpose, flexible, powerful, and standardized. Each is a living language: Microsoft actively improves C# and Oracle improves Java.

If my theory is correct, then businesses and organizations with large code bases in Java or C# will stay with them, rather than move to a "better" language such as Python or Haskell. Those languages may be better, but Java and C# are good enough.

Many companies stayed with COBOL for their financial applications. But that is perhaps not unreasonable, as COBOL is designed for financial processing and other languages are not. Therefore, a later language such as BASIC or Java is perhaps worse at processing financial transactions than COBOL.

C# and Java are built for processing web transactions. Other languages may be better at it, but they are not that much better. Expect the web to stay with those languages.

And as for cloud applications? That remains to be seen. I tend to think that C# and Java are not good languages for cloud applications, and that Haskell and Python are better (that is, less worse). Look for cloud development to use those languages.

Monday, November 2, 2015

Microsoft software on more than Windows

People are surprised -- and some astonished -- that Microsoft would release software for an operating system other than Windows.

They shouldn't be. Microsoft has a long history of providing software on operating systems other than Windows.

Which operating systems?

  • PC-DOS and MS-DOS, of course. But that is, as mathematicians would say, a "trivial solution".
  • Mac OS and Mac OS X. Microsoft supplied "Internet Explorer for Mac" on OS 7, although it has discontinued that product. Microsoft supplies "Office for Mac" as an active product.
  • Unix. Microsoft supplied "Internet Explorer for Unix".
  • OS/2. Before the breakup with IBM, Microsoft worked actively on several products for OS/2.
  • CP/M. Before MS-DOS and PC-DOS there was CP/M, an operating system from the very not-Microsoft company known as Digital Research. Microsoft produced a number of products for CP/M, mainly its BASIC interpreter and compilers for BASIC, FORTRAN, and COBOL.
  • ISIS-II and TEKDOS. Two early operating systems which ran Microsoft BASIC.
  • Any number of pre-PC era computers, including the Commodore 64, the Radio Shack model 100, and the NEC 8201, which all ran Microsoft BASIC as the operating system.

It is true that Microsoft, once it obtained dominance in the market with PC-DOS/MS-DOS (and later Windows) built software that ran only on its operating systems. But Microsoft has a long history of providing software for use on non-Microsoft platforms.

Today Microsoft provides software on Windows, Mac OS X, iOS, Android, and now Chrome. What this means is that Microsoft sees opportunity in all of these environments. And possibly, Microsoft may see that the days of Windows dominance are over.

That Windows is no longer the dominant solution may shock (and frighten) people. The "good old days" of "Windows as the standard" had their problems, and people grumbled and complained about things, but they also had an element of stability. One knew that the corporate world ran on Windows, and moving from one company to another (or merging two companies) was somewhat easy with the knowledge that Windows was "everywhere".

Today, companies have options for their computing needs. Start-ups often use MacBooks (and therefore Mac OS X). Large organizations have expanded their list of standard equipment to include Linux for servers and iOS for individuals. The market for non-Windows software is now significant, and Microsoft knows it.

I see Microsoft's expansion onto platforms other than Windows as a return to an earlier approach, one that was successful in the past. And a good business move today.

Thursday, October 29, 2015

Tablet sales

A recent article on ZDNet blamed the lackluster sales of iPads on... earlier iPads. This seems wrong.

The author posed this premise: Apple's iOS 9 runs on just about every iPad (it won't run on the very first iPad model, but it runs on the others) and therefore iPad owners have little incentive to upgrade. iPad owners behave differently from iPhone owners, in that they (the iPad owners) hold on to their tablets longer than people hang on to their phones.

The latter part of that premise may be true. I suspect that tablet owners do upgrade less frequently than phone owners (in both the Apple and Android camps). While tablets are typically less expensive than phones, iPads are pricey, and iPad owners may wish to delay an expensive purchase. My belief is that people replace phones more readily than tablets because of the relative sizes of phones and tablets. Tablets, being larger, are viewed as more valuable. That psychology drives us to replace phones faster than tablets. But that's a pet theory.

Getting back to the ZDNet article: There is a hidden assumption in the author's argument. He assumes that the only people buying iPads are previous iPad owners. In other words, everyone who is going to buy an iPad has already purchased one, and the only sales of iPads will be upgrades, as a customer replaces an iPad with an iPad. (Okay, perhaps not "only". Perhaps "majority". Perhaps it's "most people buying iPads are iPad owners".)

This is a problem for Apple. It means that they have, rather quickly, reached market saturation. It also means that they are not converting people from Android tablets to Apple tablets.

I don't know the numbers for iPad sales and new sales versus upgrades. I don't know the corresponding numbers for Android tablets either.

But if the author's assumption is correct, and the tablet market has become saturated, it could make things difficult for Apple, Google (Alphabet?), and ... Microsoft. Microsoft is trying to get into the tablet market (both in hardware and in software). A saturated market would mean little interest in Windows tablets.

Or maybe it means that Microsoft will be forced to offer something new, some service that compels one to look seriously at a Windows tablet.

Sunday, October 25, 2015

Refactoring is almost accepted as necessary

The Agile Development process brought several new ideas to the practice of software development. The most interesting, I think, is the notion of re-factoring as an expected activity.

Refactoring is the re-organization of code, making it more readable and eliminating redundancies. It is an activity that serves the development team; it does not directly contribute to the finished product.
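
For anyone who has not watched it happen, here is a tiny, made-up Python example of the sort of change meant here: the results are identical before and after, but a calculation that was written twice now lives in one place.

    # Before refactoring: the same discount arithmetic appears in two places.
    def member_price(price):
        return round(price - price * 0.10, 2)

    def coupon_price(price):
        return round(price - price * 0.25, 2)

    # After refactoring: one helper holds the shared idea; the callers differ
    # only in the rate they name. Callers see exactly the same results.
    def discounted(price, rate):
        return round(price - price * rate, 2)

    def member_price_v2(price):
        return discounted(price, 0.10)

    def coupon_price_v2(price):
        return discounted(price, 0.25)

    assert member_price(20.00) == member_price_v2(20.00)
    assert coupon_price(20.00) == coupon_price_v2(20.00)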

Earlier methods of software development did not list refactoring as an activity. They made the assumption that once written, the software was of sufficient quality to deliver. (Except for defects which would be detected and corrected in a "test" or "acceptance" phase.)

Agile Development, in accepting refactoring, allows for (and encourages) improvements to the code that do not change the behavior of the code. It is a humbler approach, one that assumes that members of the development team will learn about the code as they write it and identify improvements.

This is a powerful concept, and, I believe, a necessary one. Too many projects suffer from poor code quality -- the "technical backlog" or "technical debt" that many developers will mention. The poor code organization slows development, as programmers must cope with fragile and opaque code. Refactoring improves code resilience and improves visibility of important concepts. Refactored code is easier to understand and easier to change, which reduces the development time for future projects.

I suspect that all future development methods will include refactoring as a task. Agile Development, as good as it is, is not the perfect method for all projects. It is suited to projects that are exploratory in nature, projects that do not have a specific delivery date for a specific set of features.

Our next big software development method may be a derivative of Waterfall.

Agile Development (and its predecessor, Extreme Programming) was, in many ways, a rebellion against the bureaucracy and inflexibility of Waterfall. Small teams, especially in start-up companies, adopted it and were rewarded. Now, the larger, established, bureaucratic organizations envy that success. They think that adopting Agile methods will help, but I have yet to see a large organization successfully merge Agile practices into their existing processes. (And large, established, bureaucratic organizations will not convert from Waterfall to pure Agile -- at least not for any length of time.)

Instead of replacing Waterfall with Agile, large organizations will replace Waterfall with an improved version of Waterfall, one that keeps the promise of a specific deliverable on a specific date yet adds other features (one of which will be refactoring). I'm not sure who will develop it; the new process (let's give it a name and call it "Alert Waterfall") may come from IBM or Microsoft, or some other technical leader.

Yet it will include the notion of refactoring, and with it the implicit declaration that code quality is important -- that it has value. And that will be a victory for developers and managers everywhere.