Wednesday, January 28, 2015

The mobile/cloud revolution has no center

In some ways, the mobile/cloud market is a re-run of the PC revolution. But not completely.

The PC revolution of the 1980s (which saw the rise of the IBM PC, PC-DOS, and related technologies) introduced new hardware that was cheaper and easier to use than the previous technologies of mainframes and minicomputers. Today's mobile/cloud revolution shares that aspect, with cloud-based services and mobile devices cheaper than their PC-based counterparts. It's also much easier to use a phone or tablet than a PC -- just ask anyone who has had to install software on one.

The early PC systems, while cheaper and easier to use, were much less capable than the mainframe and minicomputer systems. People ran large corporations on mainframes and small businesses on minicomputers; PCs were barely able to print and handle a few spreadsheets. It was only after PC-compatible networks and network-aware software (Windows 3.1, Microsoft Exchange) that one could consider running a business on PCs. Mobile/cloud shares this attribute, too. Phones and tablets are network-aware, of course, but the whole "mobile and cloud" world is too new, too different, too strange to be used for business. (Except for some hard-core folks who insist on doing it.)

Yet the two revolutions are different. The PC revolution had a definite center: the IBM PC at first, and Windows later. The 1980s saw IBM as the industry leader: IBM PCs were the standard unit for business computing. Plain IBM PCs at first, and then IBM PC XT units, and later IBM PC-compatibles. There were lots of companies offering personal computers that were not IBM-compatible; these offerings (and their companies) were mostly ignored. Everyone wanted "in" on the IBM PC bandwagon: software makers, accessory providers, and eventually clone manufacturers. It was IBM or nothing.

The mobile/cloud revolution has no center, no one vendor or technology. Apple devices are popular but no vendors are attempting to sell clones in the style of PC clones. To some extent, this is due to Apple's nature and their proprietary and closed designs for their devices. (IBM allowed anyone to see the specs for the IBM PC and invited other vendors to build accessories.)

Apple is not the only game in town. Google's Android devices compete handily with the Apple iPhone and iPad. Google also offers cloud services, something Apple does not. (Apple's iCloud product is convenient storage but it is not cloud services. You cannot host an application in it.)

Microsoft is competing in the cloud services area with Azure, and doing well. It has had less success with its Surface tablets and Windows phones.

Other vendors offer cloud services (Amazon.com, IBM, Oracle, SalesForce) and mobile devices (BlackBerry). Today's market sees lots of technologies. It is a far cry from the 1980s "IBM or nothing" mindset, which may show that consumers of IT products and services have matured.

When there is one clear leader, the "safe" purchasing decision is easy: go with the leader. If your project succeeds, no one cares; if your project fails, you can claim that even "big company X" couldn't handle the task.

The lack of a clear market leader makes life complicated for those consumers. With multiple vendors offering capable but different products and services, one must have a good understanding of the project before selecting a vendor. Success is still success, but failure allows others to question your ability.

Multiple competing technologies also means competition at a higher level. In the PC revolution, IBM and Compaq competed on technology, but the basic platform (the PC) was a known quantity. In mobile/cloud, we see new technologies from start-up companies (containers, for example) and new technologies from the established vendors (cloud management tools, the Swift programming language).

The world of mobile and cloud has no center, and as such it can move faster than the old PC world. Keep that in mind when building systems and selecting vendors. Be prepared for bumps and turns.

Not just innovation, but successful innovation

We've heard of "The Innovator's Dilemma", Clayton Christensen's book that describes how market leaders focus on the needs of their existing customers and are overtaken by smaller, more innovative companies. That certainly seems to apply to the IT industry. Consider:

IBM, in the 1960s, introduced the System/360, an innovation. (It was a general-purpose computer when most computers were designed and built for specific purposes.) Later, it offered the System/370, which was a bigger, better version of the System/360. Both were successful, yet IBM missed the innovation of minicomputers from DEC, Wang, and Data General. IBM offered the old design, yet customers wanted innovation. (To be fair, IBM later offered its own minicomputers, different from its mainframes, and customers did buy lots of them.)

IBM introduced the IBM PC and instantly became the leader for the PC market, eclipsing Apple, Radio Shack, Commodore, and the dozens of other manufacturers. IBM enjoyed success for a number of years. The IBM PC XT was a better version of the PC, and the IBM PC AT was a bigger, better version of the IBM PC (and PC XT). Yet IBM stumbled with the PCjr and the PS/2 lines, and never recovered. Compaq took the lead with its Deskpro line of PCs. IBM was offering innovation, yet customers wanted the old designs.

DEC was successful in the minicomputer business, yet failed in the PC business. Its early personal computers were smaller versions of its minicomputers; compared to PCs they were expensive and complicated. DEC was offering the old designs, yet customers wanted someone else's design.

Commodore built success with its PET, CBM, and especially its C-64 models. It failed with its Amiga, a very innovative computer. Commodore offered innovation, yet customers wanted the IBM PC.

It seems that successful innovation requires two components: a break from past designs and a demand from customers.

Apple innovates -- successfully. From the Apple II to the Macintosh to the iPod (the innovation there was really iTunes) to the iPad, Apple has introduced products that break from its past *and* that meet customer demand. The genius of Apple is that many of its products don't fill a demand from customers, but create the demand. Few people realized that they wanted iPhones or iPads (or Macintosh computers) until they saw one.

There is a lesson here for Microsoft. The PC market is changing; innovation is needed. Microsoft, if it wants to remain a market leader, must innovate successfully. It must introduce new products and services that meet -- or create -- customer demand.

The same lesson holds for other technology companies. IBM, Oracle, and even Red Hat should look to successful innovation.

Wednesday, January 21, 2015

Fluidity and complexity of programming languages

We have oodles of programming languages: Python, Java, C#, C++, and dozens more. We have tutorials. We have help pages. We have people who track the popularity of languages.

Almost all languages are fluid; their syntax changes. Some languages (COBOL, C++, Forth) change slowly. Other languages (Visual Basic, Java, C#) change more frequently.

Some languages are simple: C, Python, and Pascal come to mind. Others are complicated: COBOL, C++, and Ada are examples.

* * * * *

It strikes me that the languages with simple syntax are the languages with a single person leading the effort. Languages that were designed (and maintained) by an individual (or perhaps a pair of closely working individuals) tend to be simple. Python is led by Guido van Rossum. Pascal was designed by Niklaus Wirth. C was designed by Dennis Ritchie (working closely with Ken Thompson). Eiffel was created by Bertrand Meyer.

In contrast, languages with complex syntax tend to be designed by committee. COBOL and Ada were designed by committees (for the federal government, no less!). C++, while a descendant of C, has a very complex syntax. Bjarne Stroustrup did much of the work, but the C++ standards committee had a lot of say in the final specification.

The one example of a complex language not designed by committee is Perl, which shows that this is a tendency, not an absolute.

* * * * *

It also strikes me that the languages that change most rapidly are the proprietary languages developed and maintained by commercial companies. Languages such as Visual Basic, Java, and C# have changed rapidly over their lifetimes.

Languages developed by individuals do change, but slowly, and often at the hands of entities other than the original developers. BASIC (another simple language, developed by Kemeny and Kurtz) was later enhanced (made more complex) by Microsoft. Pascal was simple and was enhanced by Borland for its Turbo Pascal product. When Niklaus Wirth wanted to make changes to Pascal, he created the language Modula-2 (and later, Oberon).

Programming languages designed by committee with strong standards (COBOL, Ada) tend to change slowly, due to the nature of committees.

* * * * *

Languages built within commercial entities need not be complex. They may start simple and grow into something complicated. Java and C# were both developed by individuals, and that initial simplicity shows through in their current (more complex) designs.

* * * * *

What can we expect in the future? I see little activity in (or call for) committees to design new languages. This is possibly a long-term risk, as committee-built languages, once adopted as standards, tend to be durable, stable, and cross-platform (COBOL, C++, Ada).

I do see individuals developing languages. Python and Ruby have strong followings. JavaScript is popular. I expect other individuals to create new languages for distributed computing. These languages will be simple, specific, and show little change over time.

I also see commercial vendors building languages. Apple recently introduced Swift. If they follow the trend for vendor-specific languages, we can expect changes to the Swift language, perhaps as often as every year (or with every release of MacOS or iOS). Microsoft is looking to build its cloud and mobile offerings; new versions of C# may be in the works. Oracle is working on Java; recent changes have fixed the code base and tools, and new versions may change the language. Google is building the Go and Dart languages. Can Google leverage them for advantage?

The Dart language is in an interesting position. It is intended as a replacement for JavaScript, yet it compiles to JavaScript. It must remain simpler than JavaScript; if it becomes more complex, programmers will simply use JavaScript instead of the harder Dart.

* * * * *

In short, I expect programming languages from vendors to change moderately rapidly. I expect programming languages from individuals to change less rapidly. Whether a programming language changes over time may affect your choice. You decide.

Tuesday, January 13, 2015

Functional programming exists in C++

Some programmers of C++ may look longingly at the new functional programming languages Haskell or Erlang. (Or perhaps at the elder languages of Common Lisp or Scheme.) Functional programming-ness is a Shiny New Thing, and C++ long ago lost the Shiny New Thing luster it gained from object-oriented programming.

Yet C++ programmers, if they look closely enough, can find a little bit of functional programming in their language. Hidden in the C++ specification is a tiny aspect of functional programming. It occurs in C++ constructor initializers.

Initializers are specifications for the initialization of member variables in a constructor. The C++ language provides for default initialization of member variables; initializers override these defaults and let the programmer specify the initialization explicitly.

Given the class:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(void);
};

one can store two integers in an object of type MyInts. The old-style C++ method is to provide 'setter' and 'getter' functions to allow the setting and retrieval of values. Something like:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(void);
    void setA1(int a1) { a1_ = a1; };
    int getA1(void) const { return a1_; };
    void setA2(int a2) { a2_ = a2; };
    int getA2(void) const { return a2_; };
};

The new-style C++ (which has been around for years) dispenses with the 'setter' functions and uses initializers and parameters in the constructor:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(int a1, int a2) : a1_(a1), a2_(a2) {};
    int getA1(void) const { return a1_; };
    int getA2(void) const { return a2_; };
};

The result is an efficiently constructed object. Another result is an immutable object, as one cannot change its state after construction. (The 'setter' functions are gone.) That may or may not be what you want, although in my experience it probably is what you want.
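
As an aside (and not something the example above requires), one can go a step further and declare the member variables const. The compiler then enforces the immutability itself: const members must be set in the initializer list, and assignment in the constructor body will not compile. A quick sketch, using a variant of the class above:

class MyConstInts {
private:
    const int a1_;
    const int a2_;
public:
    MyConstInts(int a1, int a2) : a1_(a1), a2_(a2) {};
    int getA1(void) const { return a1_; };
    int getA2(void) const { return a2_; };
};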

Initializers are interesting. One cannot do just anything in an initializer. You can provide a constant value. You can provide a constructor for a class (if your member variable is an object). You can call a function that provides a value, but it should be either a static function (not a member function) or a function outside of the class. (Calling a member function of the same class is risky: the object is not yet fully constructed, and reading a member that has not yet been initialized is undefined behavior. It may work, or it may not.)
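
Here is a small sketch of those cases. (The class and the helper functions are invented for illustration; they are not part of the MyInts example above.)

#include <string>

static int defaultSize(void) { return 42; }     // static, non-member function

int computeLimit(int n) { return n * 2; }       // free function outside the class

class Widget {
private:
    int count_;
    int size_;
    int limit_;
    std::string name_;
public:
    Widget(int n)
        : count_(0),                // a constant value
          size_(defaultSize()),     // a value from a static function
          limit_(computeLimit(n)),  // a value from a free function
          name_("gadget")           // a constructor for a member object
    {};
};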

These restrictions on initializers enforce one of the attributes of functional programming: immutable objects. In my example, I eliminated the 'setter' functions to make objects of MyInts immutable, but that was an intentional effect. I could have left the 'setter' functions in place, and then objects of MyInts would be mutable.

Initializers brook no such nonsense. You have one opportunity to set the value of a member variable (you cannot initialize a member variable more than once). Once it is set, it cannot be changed during initialization. You cannot call a function that has the side effect of changing a member variable that has been previously set. (Such a call would be to a member function, and while permitted by the compiler, you should avoid it.)
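
The compiler enforces the "at most once" rule directly. A small sketch (the class here is invented for illustration):

class Once {
private:
    int a_;
public:
    // This will not compile: 'a_' appears twice in the initializer list.
    // Once(int x) : a_(x), a_(x + 1) {};

    Once(int x) : a_(x) {};    // each member may be initialized at most once
};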

Initializers provide a small bit of functional programming inside C++. Who would have thought?

Technically, the attributes I have described are not functional programming, but merely immutable objects. Functional programming allows one to treat functions as first class citizens of the language, creating them and passing them to other functions as needed. The initializers in C++ do not allow such constructs.

Sunday, January 11, 2015

The answer is not a bigger computer

The different providers of Infrastructure-as-a-Service (IaaS) or virtualized computers keep increasing the size of their computers. Each news release offers configurations with more cores and more memory. But I think they (and the people who buy the services) are missing the point.

The power of virtualized computers is not in larger computers.

Traditionally in computing, bigger was better. When we had single mainframes managing the data for companies, a larger computer meant faster turnaround on reports and more powerful analyses. When we had personal computers on desktops, faster computers meant faster computations in spreadsheets.

When the unit of computing is a single device (a mainframe or a PC), then bigger is better (usually).

But with cloud computing, the unit of computing is not a single device -- it is the cloud itself, which consists of a number of devices. A variable number of virtualized devices, actually. Cloud-based systems are not single application programs running on single computers; they are systems of small programs running on multiple servers. The expansion capabilities of cloud are not based on a single computer. They are not based on expanding the processor, or increasing the memory.

Cloud computing expands by increasing the number of computers.

Looking for increased performance through more cores or more memory is not the "way of the cloud". If you have a large process, one that demands lots of processing and memory, then you are not doing cloud right.

Moving to cloud means re-thinking the way we build applications. It means systems assembled from services, not large monolithic programs. Look for the large processes and break them into smaller ones. It may not be easy. Our legacy systems were designed around single-device thinking, and optimized for single devices. To be successful in cloud computing, our thinking has to change.

Thursday, January 8, 2015

Hardwiring the operating system

I tend to think of computers as consisting of four conceptual parts: hardware, operating system, application programs, and my data.

I know that computers are complex objects, and each of these four components has lots of subcomponents. For example, the hardware is a collection of processor, memory, video card, hard drive, ports to external devices, and "glue" circuitry to connect everything. (And even that is omitting some details.)

These top-level divisions, while perhaps not detailed, are useful. They allow me to separate the concerns of a computer. I can think about my data without worrying about the operating system. I can consider application programs without bothering with hardware.

It wasn't always this way. Oh, it was for personal computers, even those from the pre-IBM PC days. Hardware like the Altair was sold as a computing box with no operating system or software. Gary Kildall at Digital Research created CP/M to run on the various hardware available and designed it to have a dedicated unit for interfacing with the hardware. (That dedicated unit was the Basic Input-Output System, or 'BIOS'.)

It was the very early days of computers that saw a close relationship between hardware, software, and data. Very early computers had no operating systems (operating systems were themselves designed to separate the application program from the hardware). Computers were specialized devices, tailored to the task.

IBM's System/360 is recognized as the first general-purpose computer: a single computer that could be programmed for different applications and used within an organization for multiple purposes. That computer started us on the march to separating hardware and software.

The divisions are not simply for my benefit. Many folks who work to design computers, build applications, and provide technology services find these divisions useful.

The division of computers into these four components allows for any one of the components to be swapped out, or moved to another computer. I can carry my documents and spreadsheets (data) from my PC to another one in the office. (I may 'carry' them by sending them across a network, but you get the idea.)

I can replace a spreadsheet application with a different spreadsheet application. Perhaps I replace Excel 2010 with Excel 2013. Or maybe change from Excel to another PC-based spreadsheet. The new spreadsheet software may or may not read my old data, so the interchangeability is not perfect. But again, you get the idea.

More than half a century later, we are still separating computers into hardware, operating system, application programs, and data.

And that may be changing.

I have several computing devices. I have a few PCs, including one laptop I use for my development work and e-mail. I have a smart phone, the third I have owned. I have a bunch of tablets.

For my PCs, I have installed different operating systems and changed them over time. The one Windows PC started with Windows 7. I upgraded it to Windows 8 and it now runs Windows 8.1. My Linux PCs have all had different releases of Ubuntu, and I expect to update them with the next version of Ubuntu. Not only do I get major versions, but I receive minor updates frequently.

But the phones and tablets are different. The phones (an HTC and two Samsung phones) have run a single operating system since I took them out of the box. (I think one of them got an update.) One of my tablets is an old Viewsonic gTablet running Android 2.2. There is no update to a later version of Android -- unless I want to 'root' the tablet and install another variant of Android like Cyanogen.

PCs get new versions of operating systems (and updates to existing versions). Tablets and phones get updates for applications, but not for operating systems. At least nowhere near as frequently as PCs.

And I have never considered (seriously) changing the operating system on a phone or tablet.

Part of this change is due, I believe, to the change in administration. We who own PCs administer them and decide when to update software. But we who think we own phones and tablets do *not* administer those devices. We do not decide when to update applications or operating systems. (Yes, there are options to disable or defer updates in Android and iOS.)

It is the equipment supplier, or the cell service provider, who decides to update operating systems on these devices. And they have less incentive to update the operating system than we do. (I suspect updates to operating systems generate a lot of calls from customers, either because they are confused or the update broke some functionality.)

So I see the move to smart phones and tablets, and the corresponding shift of administration from user to provider, as a step toward synchronizing hardware and operating system. And once hardware and operating system are synchronized, they are not two items but one. We may, in the future, see operating systems baked into devices with no (or limited) ways to update them. Operating systems may become part of the device, burned into ROM.

Tuesday, January 6, 2015

The cloud brings change

Perhaps the greatest effect that cloud computing has on operations is change -- or more specifically, a move away from the strict policies against change.

Before cloud technology, operations viewed hardware and software as things that should change rarely and under strictly controlled conditions. Hardware was upgraded only when needed, and only after long planning sessions to review the new equipment and ensure it would work. Software was updated only during specified "downtime windows", typically early in the morning on a weekend when demand would be low.

The philosophies of "change only when necessary" and "only when it won't affect users" were driven by the view of hardware and software. Before cloud computing, most people had a mindset that I call the "mainframe model".

In this "mainframe model," there is one and only one computer. In the early days of computing, this was indeed the case. Turning the computer off to install new hardware meant that no one could use it. Since the entire company ran on it, a hardware upgrade (or a new operating system) meant that the entire company had to "do without". Therefore, updates had to be scheduled to minimize their effect and they had to be carefully planned to ensure that everything would work upon completion.

Later systems, especially web systems, used multiple web servers and often multiple data centers with failover, but people kept the mainframe model in their head. They carefully scheduled changes to avoid affecting users, and they carefully planned updates to ensure everything would work.

Cloud computing changes that. With cloud computing, there is not a single computer. There is not a single data center (if you're building your system properly). By definition, computers (usually virtualized) can be "spun up" on demand. Taking one server offline does not affect your customers; if demand increases you simply create another server to handle the additional workload.

The ability to take servers offline means that you can relax the scheduling of upgrades. You do not need to wait until the wee hours of the morning. You do not need to upgrade all of your servers at once. (Although running servers with multiple versions of your software can cause other problems. More on that later.)

Netflix has taken the idea further, creating tools that deliberately break individual servers. By doing so, Netflix can examine the failures and design a more robust system. When a single server fails, other servers take over the work -- or so Netflix intends. If the other servers don't pick up the workload, Netflix knows it has a problem and changes its code.

Cloud technology lets us use a new model, what I call the "cloud model". This model allows for failures at any time -- not just from servers failing but from servers being taken offline for upgrades or maintenance. Those upgrades could be hardware, virtualized hardware, operating systems, database schemas, or application software.

The cloud model allows for change. It requires a different set of rules. Instead of scheduling all changes for a single downtime window, it distributes changes over time. It mandates that newer versions of applications and databases work with older versions, which probably means smaller, more incremental changes. It also encourages (perhaps requires) software to administer the changes and ensure that all servers get the changes. Instead of growing a single tree you are growing a forest.

The fact that cloud technology brings change, or changes our ideas of changes, should not be a surprise. Other technologies and techniques (agile development, dynamic languages) have been moving in the same direction. Even Microsoft and Apple are releasing products more quickly. Change is upon us, whether we want it or not.