Thursday, September 19, 2019

The PC Reverse Cambrian Explosion

The Cambrian Explosion is a term from paleontology. It describes a massive increase in the diversity of life that occurred roughly half a billion years ago. Life on earth went from a measly few thousand species to an enormous profusion of new forms in the blink of a geologic eye.

Personal Computers have what I call a "PC Reverse Cambrian Explosion" or PC-RCE. It occurred in the mid-1980s, which some might consider to be half a billion years ago. In the PC-RCE, computers went from hundreds of different designs to one: the IBM PC compatible.

In the late 1970s and very early 1980s, there were lots of designs for small computers. These included the Apple II, the Radio Shack TRS-80, the Commodore PET and CBM machines, and others. There was a great diversity of hardware and software, including processors and operating systems. Some computers had floppy disks, although most did not. Many computers used cassette tape for storage, and some had neither cassette nor floppy disk. Some computers had built-in displays, and others required that you get your own terminal.

By the mid 1980s, that diversity was gone. The IBM PC was the winning design, and the market wanted that design and only that design. (Except for a few stubborn holdouts.)

One might think that the IBM PC caused the PC-RCE, but I think it was something else.

While the IBM PC was popular, other manufacturers could not simply start making compatible machines (or "clones" as they were later called). The hardware for the IBM PC was "open" in that the connectors and bus specification were documented, and this allowed manufacturers to make accessories for IBM PCs. But the software (the operating system and, importantly, the ROM BIOS) was not open. While both had documented interfaces, the code itself could not be copied without running afoul of copyright law.

Other computer manufacturers could not make IBM PC clones. Their choices were limited to 1) selling non-compatible PCs in a market that did not want them, or 2) going into another business.

Yet we now have many vendors of PCs. What happened?

The first part of the PC-RCE was the weakening of the non-IBM manufacturers. Most went out of business. (Apple survived by offering compelling alternative designs and focusing on the education market.)

The second part was Microsoft's ability to sell MS-DOS to other manufacturers. Microsoft made custom versions for the non-compatible hardware from Tandy, Victor, Zenith, and others. While "compatible with MS-DOS" wasn't the same as "compatible with the IBM PC", it allowed those other manufacturers to offer MS-DOS on their machines.

A near-empty market allowed upstart Compaq to introduce its Compaq Portable, which was the first system not made by IBM and yet compatible with the IBM PC. It showed that there was a way to build IBM PC "compatibles" legally and profitably. Compaq was successful because it offered a product not available from IBM (a portable computer) that was also compatible (it ran popular software) and used premium components and designs to justify a hefty price tag. (Several thousand dollars at the time.)

The final piece was the Phoenix BIOS. This was the technology that allowed other manufacturers to build compatible PCs at low prices. Compaq had built their own BIOS, making it compatible with the API specified in IBM's documents, but it was an expensive investment. The Phoenix BIOS was available to all manufacturers, which let Phoenix amortize the cost over a larger number of PCs, for a lower per-unit cost.

The market maintained demand for the IBM PC design, but it wasn't fussy about the manufacturer. Customers bought "IBM compatible PCs" with delight. (Especially if the price was lower than IBM's.)

Those events (weakened suppliers, an operating system, a legal path forward, and the technology to execute it) made the PC the one and only design, and killed off the remaining designs. (Again, except for Apple. And Apple came close to extinction on several occasions.)

Now, this is all nice history, but what does it have to do with us folks living today?

The PC-RCE gave us a single design for PCs. That design has evolved over the decades, and just about every piece of the original IBM PC has mutated into something else, but the PCs on the market have remained uniform. At first, IBM specified the design, with the IBM PC, the IBM PC XT, and the IBM PC AT. Later, Microsoft specified the design with its "platform specification" for Windows. Microsoft could do this due to its dominance of the market for operating systems and office software.

Today, the PC design is governed by various committees and standards organizations. They specify the design for things like the BIOS (or its replacement, UEFI), the power supply, and connectors for accessories. Individual companies still have sway; Intel designs the processors and support circuitry used in most PCs. Together, these organizations provide a single design which allows for modest variation among manufacturers.

That uniformity is starting to fracture.

Apple's computers joined the PC design in the mid-2000s. The "white MacBook" with an Intel processor was a PC design -- so much so that Windows and Linux can run on it. Yet today, Apple is moving their Macs and MacBooks in a direction different from the mainstream market. Apple-designed chips control certain parts of their computers, and these chips are not provided to other manufacturers. (Apple's iPhones and iPads are unique designs, with no connection to the PC design.)

Google is designing its Chromebooks and slowly moving them away from the "standard" PC design.

Microsoft is building Surface tablets and laptops with its proprietary designs, close to PCs yet not quite identical.

We are approaching a time when we won't think of PCs as completely interchangeable. Instead, we will think of them in terms of manufacturers: Apple PCs, Microsoft PCs, Google PCs, etc. There will still be a mainstream design; Dell and Lenovo and HP want to sell PCs.

The "design your own PC" game is for serious players. It requires a significant investment not only in hardware design but also in software. Apple has been playing that game all along. Microsoft and Google are big enough that they can join. Other companies may get involved, using Linux (or NetBSD as Apple did) as a base for their operating systems.

The market for PCs is fragmenting. In the future, I see a modest number of designs, not the hundreds that we had in 1980. The designs will be similar but not identical, and, more importantly, not compatible -- at least in hardware.

A future with multiple hardware platforms will be a very different place. We have enjoyed a single (evolving) platform for the past four decades. A world with multiple, incompatible platforms will be a new experience for many. It will affect not only hardware designers, but everyone involved with PCs, from programmers to network administrators to purchasing agents. Software may follow the fragmentation, and we could see applications that run on one platform and not others.

A fragmented market will hold challenges. Once committed to one platform, users will find it hard to move to a different one. (Just as it is difficult today to move from one operating system to another.) Instead of changing just the operating system, one will have to change the hardware, the operating system, and possibly the applications.

It may also be a challenge for Linux and open source software. They have used the common platform as a means of expansion. Will we see specific versions of Linux for specific platforms? Will Linux avoid some platforms as "too difficult" to implement? (The Apple MacBooks, with their extra chips for security, may be a challenge for Linux.)

The fragmentation I describe is a possible future -- it's not here today. I wouldn't panic, but I wouldn't ignore it, either. Keep buying PCs, but keep your eyes on them.

Friday, September 13, 2019

Apple hardware has nowhere to go

Apple has long sold products based on advanced design.

But now Apple has a problem: there is very little room left to advance. The iPhone is "done" -- it is as good as it is going to get. This week's announcements about new iPhones were, in brief, all about the cameras. There were some mentions of higher-resolution screens (better than "retina" resolution, which was already as good as the human eye could resolve), longer battery life, and a new color (green).

The iPhone is not the only product that has little "runway".

The MacBook also has little room to grow. It is as good as a laptop can get, and competitors are just as good -- at least in terms of hardware. There is little advantage to the MacBook.

The Mac (the desktop) is a pricey device for the upper end of developers, and a far cry from a workstation for "the rest of us". But it, like the MacBook, is comparable to competing desktops. There is little advantage to the Mac.

Apple knows this. Their recent move into services (television and music) shows that they see better opportunities in areas other than hardware.

But how to keep demand for those pricey Apple devices? Those shiny devices are how Apple makes money.

It is quite possible that Apple will limit their services to Apple devices. They may also limit the development tools for services to Apple devices (Macs and MacBooks with macOS). Consumers of Apple services (music, television) will have to use Apple devices. Developers of services for the Apple platform will have to use Apple devices.

Why would Apple do that? For the simple reason that they can charge premium prices for their hardware. Anyone who wants "in" to the Apple set of services will have to pay the entry fee.

It also separates Apple from the rest of computing, which carries its own risks. The mainstream platforms could move in a direction without Apple.

Apple has always maintained some distance from mainstream computing, which gives Apple the cachet of "different". Some distance is good, but too much distance puts Apple on their own "island of computing".

Being on one's own island is nice while the tourists keep coming. If the tourists stop visiting, the island becomes a lonely place.

Wednesday, September 4, 2019

Don't shoot me, I'm only the OO programming language!

There has been a lot of hate for object-oriented programming of late. I use the word "hate" with care, as others have described their emotions as such. After decades of success with object-oriented programming, now people are writing articles with titles like "I hate object-oriented programming".

Why such animosity towards object-oriented programming? And why now? I have some ideas.

First, consider the age of object-oriented programming (OOP) as the primary paradigm for programming. I put the acceptance of OOP somewhere after the introduction of Java (in 1995) and before Microsoft's C# and .NET initiative (in 1999), which makes OOP about 25 years old as the mainstream paradigm -- or one generation of programmers.

(I know that object-oriented programming was around much earlier than C# and Java, and I don't mean to imply that Java was the first object-oriented language. But Java was arguably the first OOP language to win broad acceptance across the programming community.)

So it may be that the rejection of OOP is driven by generational forces. Object-oriented programming, for new programmers, has been around "forever" and is an old way of looking at code. OOP is not the shiny new thing; it is the dusty old thing.

Which leads to my second idea: What is the shiny new thing that replaces object-oriented programming? To answer that question, we have to answer another: what does OOP do for us?

Object-oriented programming, in brief, helps developers organize code. It is one of several techniques to organize code. Others include Structured Programming, subroutines, and functions.

Subroutines are possibly the oldest technique for organizing code. They date back to the days of assembly language, when code that was executed more than once was invoked with a "branch" or "call" or "jump to subroutine" opcode. Instead of repeating code (and using precious memory), common code could be stored once and invoked as often as needed.

Functions date back at least to Fortran; they consolidate common code that computes and returns a value.
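
To make the distinction concrete, here is a rough sketch in Python (chosen purely for illustration; subroutines and functions predate it by decades). The first routine plays the role of a subroutine -- common code stored once and invoked wherever needed -- and the second plays the role of a function, returning a value to its caller.

    # A "subroutine": common code stored once and invoked wherever it is needed,
    # instead of repeating the same statements throughout the program.
    def print_report_header(title):
        print("=" * 40)
        print(title.center(40))
        print("=" * 40)

    # A "function": common code that computes and returns a value to its caller.
    def average(values):
        return sum(values) / len(values)

    print_report_header("Monthly Sales")            # invoked as often as needed
    print("Average sale:", average([150.0, 200.0, 175.0]))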

For two decades (from the mid-1950s to the mid-1970s), subroutines and functions were the only ways to organize code. In the mid-1970s, the structured programming movement introduced an additional way to organize code, with IF/THEN/ELSE and WHILE statements (and an avoidance of GOTO). These techniques worked at a more granular level than subroutines and functions. Structured programming organized code "in the small" and subroutines and functions organized code "in the medium". Notice that we had no way (at the time) to organize code "in the large".
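
A small sketch of the same idea, again in Python for illustration (the structured programming movement itself worked in languages like ALGOL and Pascal): the logic inside the routine is organized "in the small" with IF and WHILE rather than GOTO, while the routine itself organizes code "in the medium".

    # Structured programming "in the small": the logic inside the routine uses
    # IF and WHILE (and an early RETURN) -- no GOTO, no labels.
    def first_negative(numbers):
        index = 0
        while index < len(numbers):
            if numbers[index] < 0:
                return index
            index += 1
        return -1

    # The routine itself organizes code "in the medium": callers invoke it by name.
    position = first_negative([3, 7, -2, 9])        # returns 2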

Techniques to organize code "in the large" did arrive. One attempt was dynamic-link libraries (DLLs), popularized by Microsoft Windows although similar mechanisms existed in earlier operating systems. Another was Microsoft's COM, which organized the DLLs. Neither was particularly effective at organizing code.

Object-oriented programming was effective at organizing code at a level higher than procedures and functions. And it has been successful for the past two-plus decades. OOP let programmers build large systems, sometimes with thousands of classes and millions of lines of code.
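
As a rough sketch of how OOP organizes code one level up (a toy example in Python; the class here is invented for illustration): related data and the functions that operate on it are bundled into a class, and callers deal with that class as a single unit.

    # OOP one level up: related data and the functions that operate on it are
    # grouped into a class; callers treat the class as a single unit.
    class Account:
        def __init__(self, owner, balance=0.0):
            self.owner = owner
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    account = Account("Alice")
    account.deposit(100.0)
    account.withdraw(40.0)                          # account.balance is now 60.0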

So what technique has arrived that displaces object-oriented programming? How has the computer world changed, that object-oriented programming would become despised?

I think it is cloud programming and web services, and specifically, microservices.

OOP lets us organize a large code base into classes (and namespaces which contain classes). The concept of a web service also lets us organize our code, at a level higher than procedures and functions. A web service can be a large thing, using OOP to organize its innards.

But a microservice is different from a large web service. A microservice is, by definition, small. A large system can be composed of multiple microservices, but each microservice must be a small component.

Microservices are small enough that they can be handled by a simple script (perhaps in Python or Ruby) that performs a few specific tasks and then exits. Small programs don't need classes and object-oriented programming. Object-oriented programming adds cost to simple programs with no corresponding benefit.
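
Here is a sketch of what such a microservice-sized job can look like -- a plain Python script with a few functions, no classes, that does its work and exits. (The task, file format, and names are hypothetical.)

    # A microservice-sized job as a plain script: read input, do one task, exit.
    # Plain functions are enough at this scale; no classes required.
    # (The file format and column names are hypothetical.)
    import csv
    import json
    import sys

    def load_orders(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def total_by_customer(orders):
        totals = {}
        for order in orders:
            customer = order["customer"]
            totals[customer] = totals.get(customer, 0.0) + float(order["amount"])
        return totals

    if __name__ == "__main__":
        orders = load_orders(sys.argv[1])           # e.g. a path to orders.csv
        json.dump(total_by_customer(orders), sys.stdout, indent=2)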

Programmers building microservices in languages such as Java or C# may feel that object-oriented programming is being forced upon them. Both Java and C# are object-oriented languages, and they mandate classes in your program. A simple "Hello, world!" program requires the definition of at least one class, with at least one static method.
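
For contrast, the entire equivalent program in a language that does not mandate classes -- Python, in this sketch -- needs no class and no static method:

    # The whole program; no class and no static method are required.
    print("Hello, world!")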

Perhaps languages that are not object-oriented are better for microservices -- languages such as Python, Ruby, or even Perl. If performance is a concern, the compiled languages C and Go are available. (It might be that the recent interest in C is driven by the development of cloud applications and the microservices behind them.)

Object-oriented programming was (and still is) an effective way to manage code for large systems. With the advent of microservices, it is not the only way. Using object-oriented programming for microservices is overkill. OOP requires overhead that is not helpful for small programs; if your microservice is large enough to require OOP, then it isn't a microservice.

I think this is the reason for the recent animosity towards object-oriented programming. Programmers have figured out that OOP doesn't mix with microservices -- but they don't know why. They feel that something is wrong (which it is), but they don't have the ability to shake off the established programming practices and technologies (perhaps because they don't have the authority).

If you are working on a large system, and using microservices, give some thought to your programming language.