Thursday, February 5, 2015

The return of multi-platform, part two

Building software to run on one platform is hard. Building software to run on multiple platforms is harder. So why do it? The short answer: because you have to.

When Microsoft dominated IT, one could live entirely within the Microsoft world. Microsoft provided the operating system (Windows), the programming languages (Visual Basic, Visual C++, C#, and others), the programming tools (Visual Studio), the database (SQL Server), the web server (IIS), the authentication server (Active Directory), the office suite (Word, Excel, PowerPoint, Outlook), the search engine (Bing), cloud services (Azure), ... everything you needed. For any question you could pose (in IT), Microsoft had an answer.

But the world is a large place, larger than any one vendor and any one vendor's offerings. Ask enough questions, try to do enough things, and you eventually find a question for which the vendor has no answer (or no suitable answer).

Microsoft's world ends at mobile devices. The Surface tablets and Windows phones have seen dismal acceptance. Instead, people (and companies) have adopted devices from Apple and Google. If you want a solution in the mobile space, you have to work with those two. (Well, you can limit your offerings to the Microsoft platforms, at the cost of a large portion of the market. You may be unwilling to make that trade-off.)

Microsoft has decided to expand to multiple platforms. They offer Word and Excel on Android and iOS devices, which means that Microsoft is *not* willing to limit themselves to Windows mobile devices. They are not willing to make the trade-off.

Beyond office applications, Microsoft has started to open its .NET framework and CLR runtime for other platforms (notably Linux).

With Microsoft embracing the notion of multi-platform, other vendors may soon follow. I suspect Apple will remain a "closed, everything Apple" company -- but they focus on consumers, not enterprises. Vendors of enterprise software (IBM, Oracle, SAS, etc.) will look to operate on multiple platforms. IBM has supported Linux for quite some time.

Microsoft's support of multiple platforms gives legitimacy to the notion. It's now "okay" to support multiple platforms.

Sunday, February 1, 2015

The return of multi-platform, part one

A long time ago, there was a concept known as "multi-platform". This concept was an attribute of programs. The idea was that a single program could run on computer systems of different designs. This is not a simple thing to implement. Different systems are, well, different, and programs are built for specific processors and operating systems.

The computing world, for the past two decades, has been pretty much a Windows-only place. As such, programs on the market had to run on only one platform: Windows. That uniformity has simplified the work of building programs. (To anyone involved in the creation of programs, the idea that building programs is an easy task may be hard to believe. But I'm not claiming that building programs for a single platform is simple -- I'm claiming that it is simpler than building programs for multiple platforms.)

Programs require a user interface (or an API), processing, access to memory, and access to storage devices. Operating systems provide many of those services, so instead of tailoring a program to a specific processor, memory, and input-output devices, one can tailor it to the operating system. Thus we have programs that are made for Windows, or for MacOS, or for Linux.

If we want a program to run on multiple platforms, we need it to run on multiple operating systems. So how do we build a program that can run on multiple operating systems? We've been working on answers for a number of years. Decades, actually.

The early programming languages FORTRAN and COBOL were designed for computers from different manufacturers. (Well, COBOL was. FORTRAN was an IBM creation that was flexible enough to implement on non-IBM systems.) They were standardized, which meant that a program written in FORTRAN could be compiled and run on an IBM system, and compiled and run on a system from another vendor.

The "standard language" solution has advantages and disadvantages. It requires a single language standard and a set of compilers for each "target" platform. For COBOL and FORTRAN, the compiler for a platform was (generally) made by the platform vendor. The hardware vendors had incentives to "improve" or "enhance" their compilers, adding features to the language. The idea was to get customers to use one of their enhancements; once they were "hooked" it would be hard to move to another vendor. So the approach was less "standard language" and more "standard language with vendor-specific enhancements", or not really a standard.

The C and C++ languages overcame the problem of vendor enhancements with strong standards committees. These committees prevented vendors from "improving" the languages by creating a "floor equals ceiling" standard that prohibited enhancements. For C and C++, a compliant compiler must do exactly what the standard says, and no more than that.

The more recent programming languages Java, Perl, Python, and Ruby use a different approach. They each have run-time engines that interpret or compile the code. Unlike the FORTRAN and COBOL implementations, the implementations of these later languages are not provided by the hardware vendors or operating system vendors. Instead, they are provided by independent organizations that are not beholden to vendors.

The result is that we now have a set of languages that let us write programs for multiple platforms. We can write a Java program and run it on Windows, or MacOS, or Linux. We can do the same with Perl. And with Python. And with... well, you get the idea.

Programs for multiple platforms weaken the draw of any one operating system or hardware platform. If my programs are written in Visual Basic, I must run them on Windows. But if they are written in Java, I can run them on any platform.

With the fragmentation of the tech world and the rise of alternative platforms, a multi-platform program is a good thing. I expect to see more of them.

Wednesday, January 28, 2015

The mobile/cloud revolution has no center

In some ways, the mobile/cloud market is a re-run of the PC revolution. But not completely.

The PC revolution of the 1980s (which saw the rise of the IBM PC, PC-DOS, and related technologies) introduced new hardware that was cheaper and easier to use than the previous technologies of mainframes and minicomputers. Today's mobile/cloud revolution shares that aspect, with cloud-based services and mobile devices cheaper than their PC-based counterparts. It's much easier to use a phone or tablet than it is to use a PC -- ask the person who installs software.

The early PC systems, while cheaper and easier to use, were much less capable than the mainframe and minicomputer systems. People ran large corporations on mainframes and small businesses on minicomputers; PCs were barely able to print and handle a few spreadsheets. It was only after PC-compatible networks and network-aware software (Windows 3.1, Microsoft Exchange) that one could consider running a business on PCs. Mobile/cloud shares this attribute, too. Phones and tablets are network-aware, of course, but the whole "mobile and cloud" world is too new, too different, too strange to be used for business. (Except for some hard-core folks who insist on doing it.)

Yet the two revolutions are different. The PC revolution had a definite center: the IBM PC at first, and Windows later. The 1980s saw IBM as the industry leader: IBM PCs were the standard unit for business computing. Plain IBM PCs at first, and then IBM PC XT units, and later IBM PC-compatibles. There were lots of companies offering personal computers that were not IBM-compatible; these offerings (and their companies) were mostly ignored. Everyone wanted "in" on the IBM PC bandwagon: software makers, accessory providers, and eventually clone manufacturers. It was IBM or nothing.

The mobile/cloud revolution has no center, no one vendor or technology. Apple devices are popular but no vendors are attempting to sell clones in the style of PC clones. To some extent, this is due to Apple's nature and their proprietary and closed designs for their devices. (IBM allowed anyone to see the specs for the IBM PC and invited other vendors to build accessories.)

Apple is not the only game in town. Google's Android devices compete handily with the Apple iPhone and iPad. Google also offers cloud services, something Apple does not. (Apple's iCloud product is convenient storage but it is not cloud services. You cannot host an application in it.)

Microsoft is competing in the cloud services area with Azure, and doing well. It has had less success with its Surface tablets and Windows phones.

Other vendors offer cloud services (Amazon.com, IBM, Oracle, SalesForce) and mobile devices (BlackBerry). Today's market sees lots of technologies. It is a far cry from the 1980s "IBM or nothing" mindset, which may show that consumers of IT products and services have matured.

When there is one clear leader, the "safe" purchasing decision is easy: go with the leader. If your project succeeds, no one cares; if your project fails you can claim that even "big company X" couldn't handle the task.

The lack of a clear market leader makes life complicated for those consumers. With multiple vendors offering capable but different products and services, one must have a good understanding of the projects before selecting a vendor. Success is still success, but failure allows others to question your ability.

Multiple competing technologies also means competition at a higher level. In the PC revolution, IBM and Compaq competed on technology, but the basic platform (the PC) was a known quantity. In mobile/cloud, we see new technologies such as containers from start-up companies, and new technologies such as cloud management and the Swift programming language from the established vendors.

The world of mobile and cloud has no center, and as such it can move faster than the old PC world. Keep that in mind when building systems and selecting vendors. Be prepared for bumps and turns.

Not just innovation, but successful innovation

We've heard of "The Innovator's Dilemma", Clayton Christensen's book that describes how market leaders focus on customer needs and are surpassed by other, more innovative companies. That certainly seems to apply to the IT industry. Consider:

IBM, in the 1960s, introduced the System/360, an innovation. (It was a general-purpose computer when most computers were designed and built for specific purposes.) Later, it offered the System/370 which was a bigger, better version of the System/360. They were successful, yet IBM missed the innovation of minicomputers from DEC, Wang, and Data General. IBM offered the old design, yet customers wanted innovation. (To be fair, IBM later offered its own minicomputers, different from its mainframes, and customers did buy lots of them.)

IBM introduced the IBM PC and instantly became the leader for the PC market, eclipsing Apple, Radio Shack, Commodore, and the dozens of other manufacturers. IBM enjoyed success for a number of years. The IBM PC XT was a better version of the PC, and the IBM PC AT was a bigger, better version of the IBM PC (and PC XT). Yet IBM stumbled with the PCjr and the PS/2 lines, and never recovered. Compaq took the lead with its Deskpro line of PCs. IBM was offering innovation, yet customers wanted the old designs.

DEC was successful in the minicomputer business, yet failed in the PC business. Its early personal computers were smaller versions of its minicomputers; compared to PCs they were expensive and complicated. DEC was offering the old designs, yet customers wanted someone else's design.

Commodore built success with its PET, CBM, and especially its C-64 models. It failed with its Amiga, a very innovative computer. Commodore offered innovation, yet customers wanted the IBM PC.

It seems that successful innovation requires two components: a break from past designs and a demand from customers.

Apple innovates -- successfully. From the Apple II to the Macintosh to the iPod (the innovation there was really iTunes) to the iPad, Apple has introduced products that break from its past *and* that meet customer demand. The genius of Apple is that many of its products don't fill a demand from customers, but create the demand. Few people realized that they wanted iPhones or iPads (or Macintosh computers) until they saw one.

There is a lesson here for Microsoft. The PC market is changing; innovation is needed. Microsoft, if it wants to remain a market leader, must innovate successfully. It must introduce new products and services that will meet a customer demand.

The same lesson holds for other technology companies. IBM, Oracle, and even Red Hat should look to successful innovation.

Wednesday, January 21, 2015

Fluidity and complexity of programming languages

We have oodles of programming languages: Python, Java, C#, C++, and dozens more. We have tutorials. We have help pages. We have people who track the popularity of languages.

Almost all languages are fluid; their syntax changes. Some languages (COBOL, C++, Forth) change slowly. Other languages (Visual Basic, Java, C#) change more frequently.

Some languages are simple: C, Python, and Pascal come to mind. Others are complicated: COBOL, C++, and Ada are examples.

* * * * *

It strikes me that the languages with simple syntax are the languages with a single person leading the effort. Languages that were designed (and maintained) by an individual (or perhaps a pair of closely working individuals) tend to be simple. Python is led by Guido van Rossum. Pascal was designed by Niklaus Wirth. C was designed by Dennis Ritchie (with Brian Kernighan co-authoring the defining book). Eiffel was created by Bertrand Meyer.

In contrast, languages with complex syntax tend to be designed by committee. COBOL and Ada were designed by committee (for the federal government, no less!). C++, while a descendant of C, has a very complex syntax. While Bjarne Stroustrup did much of the work, the C++ standard committee had a lot of say in the final specification.

The one example of a complex language not designed by committee is Perl, which shows that this is a tendency, not an absolute.

* * * * *

It also strikes me that the languages that change rapidly are the proprietary languages developed and maintained by commercial companies. Languages such as Visual Basic, Java, and C# have changed rapidly over their lifetimes.

Languages developed by individuals do change, but slowly, and often by entities other than the original developers. BASIC (another simple language, developed by Kemeny and Kurtz) was later enhanced (made complex) by Microsoft. Pascal was enhanced by Borland for their Turbo Pascal product. When Niklaus Wirth wanted to make changes to Pascal, he created the language Modula-2 (and later, Oberon).

Programming languages designed by committee with strong standards (COBOL, Ada) tend to change slowly, due to the nature of committees.

* * * * *

Languages built within commercial entities need not be complex. They may start simple and grow into something complicated. Java and C# were both developed by individuals, and that initial simplicity shows through their current (more complex) design.

* * * * *

What can we expect in the future? I see little activity in (or call for) committees to design languages. This is possibly a long-term risk, as committee-built languages, once adopted as standards, tend to be durable, stable, and cross-platform (COBOL, C++, Ada).

I do see individuals developing languages. Python and Ruby have strong followings. JavaScript is popular. I expect other individuals to create new languages for distributed computing. These languages will be simple, specific, and show little change over time.

I also see commercial vendors building languages. Apple recently introduced Swift. If they follow the trend for vendor-specific languages, we can expect changes to the Swift language, perhaps as often as every year (or with every release of MacOS or iOS). Microsoft is looking to build its cloud and mobile offerings; new versions of C# may be in the works. Oracle is working on Java; recent changes have fixed the code base and tools, new versions may change the language. Google is building Go and Dart languages. Can Google leverage them for advantage?

The Dart language is in an interesting position. It is a replacement for JavaScript, and it compiles to JavaScript. It must remain simpler than JavaScript; if it becomes more complex, programmers will simply use JavaScript instead of a more complicated Dart.

* * * * *

In short, I expect programming languages from vendors to change moderately rapidly. I expect programming languages from individuals to change less rapidly. Whether a programming language changes over time may affect your choice. You decide.

Tuesday, January 13, 2015

Functional programming exists in C++

Some programmers of C++ may look longingly at the new functional programming languages Haskell or Erlang. (Or perhaps the elder languages of Common Lisp or Scheme.) Functional programming-ness is a Shiny New Thing, and C++ long ago lost its Shiny New Thing luster of object-oriented programming.

Yet C++ programmers, if they look closely enough, can find a little bit of functional programming in their language. Hidden in the C++ specification is a tiny aspect of functional programming. It occurs in C++ constructor initializers.

Initializers are specifications for the initialization of member variables in a constructor. The C++ language provides for default initialization of member variables; initializers override these defaults and let the programmer specify actions.

Given the class:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(void);
};

one can store two integers in an object of type MyInts. The old-style C++ method is to provide 'setter' and 'getter' functions to allow the setting and retrieval of values. Something like:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(void);
    void setA1(int a1) { a1_ = a1; };
    int getA1(void) const { return a1_; };
    void setA2(int a2) { a2_ = a2; };
    int getA2(void) const { return a2_; };
};

The new-style C++ (been around for years, though) dispenses with the 'setter' functions and uses initializers and parameters in the constructor:

class MyInts {
private:
    int a1_;
    int a2_;
public:
    MyInts(int a1, int a2) : a1_(a1), a2_(a2) {};
    int getA1(void) const { return a1_; };
    int getA2(void) const { return a2_; };
};

The result is an efficiently-constructed object. Another result is an immutable object, as one cannot change its state after construction. (The 'setter' functions are gone.) That may or may not be what you want, although in my experience it probably is what you want.

Initializers are interesting. One cannot do just anything in an initializer. You can provide a constant value. You can provide a constructor for a class (if your member variable is an object). You can call a function that provides a value, but it should be either a static function or a function outside of the class, not a member function. (Calling a member function of the same class before the object is fully initialized is risky: it may read members that have not yet been set. It may work, or it may not.)

These restrictions on initializers enforce one of the attributes of functional programming: immutable objects. In my example, I eliminated the 'setter' functions to make objects of MyInts immutable, but that was an intentional effect. I could have left the 'setter' functions in place, and then objects of MyInts would be mutable.

Initializers brook no such nonsense. You have one opportunity to set the value of a member variable (you cannot initialize a member variable more than once). Once it is set, it cannot be changed during the initialization. You cannot call a function that has a side effect of changing a member variable that has been previously set. (Such a call would be to a member function, and while permitted by the compiler, it should be avoided.)

Initializers provide a small bit of functional programming inside C++. Who would have thought?

Technically, the attributes I have described are not functional programming, but merely immutable objects. Functional programming allows one to treat functions as first class citizens of the language, creating them and passing them to other functions as needed. The initializers in C++ do not allow such constructs.

Sunday, January 11, 2015

The answer is not a bigger computer

The different providers of Infrastructure-as-a-Service (IaaS) or virtualized computers keep increasing the size of their computers. Each news release offers configurations with more cores and more memory. But I think they (and the people who buy the services) are missing the point.

The power of virtualized computers is not in larger computers.

Traditionally in computing, bigger was better. When we had single mainframes managing the data for companies, a larger computer meant faster returns on reports and more powerful analyses. When we had personal computers on desktops, faster computers meant faster computations in spreadsheets.

When the unit of computing is a single device (a mainframe or a PC), then bigger is better (usually).

But with cloud computing, the unit of computing is not a single device -- it is the cloud itself, which consists of a number of devices. A variable number of virtualized devices, actually. Cloud-based systems are not single application programs running on single computers; they are systems of small programs running on multiple servers. The expansion capabilities of cloud are not based on a single computer. They are not based on expanding the processor, or increasing the memory.

Cloud computing expands by increasing the number of computers.

Looking for increased performance through more cores or more memory is not the "way of the cloud". If you have a large process, one that demands lots of processing and memory, then you are not doing cloud right.

Moving to cloud means re-thinking the way we build applications. It means systems assembled from services, not large monolithic programs. Look for the large processes and break them into smaller ones. It may not be easy. Our legacy systems were designed around single-device thinking, and optimized for single devices. To be successful in cloud computing, our thinking has to change.