
Thursday, June 16, 2022

Consolidation of processors, and more

We're in an age of consolidation. PCs are moving to the ARM processor as a standard. Apple has already replaced their entire line with ARM-based processors. Microsoft has built ARM-based laptops. The advantages of ARM (lower production cost, lower heat dissipation) make such a move worthwhile.

If this consolidation extends to all manufacturers, then we would see a uniform processor architecture, something that we have not seen in the PC era. While the IBM PC set a standard with the Intel 8088 processor, other computers at the time used other processors, mostly the Zilog Z-80 and the MOS Technology 6502. When Apple shifted to the Macintosh line, it changed to the Motorola 68000 processor.

Is consolidation limited to processors?

There are, today, four major operating systems: Windows, macOS, z/OS, and Linux. Could we see a similar consolidation among operating systems? Microsoft is adding Linux to Windows with WSL, which melds Linux into Windows. Apple's macOS is based on BSD Unix, which is not that far from Linux. IBM's mainframes run Linux in virtual machines alongside z/OS. IBM might, one day, replace z/OS with Linux; they certainly have the ability to build such a replacement.

If both of these consolidations were to occur, then we would see a uniform processor architecture and a uniform operating system, something that has not occurred in the computing age.

(I'm not so dreamy-eyed that I believe this would happen. I expect Microsoft, Apple, and IBM to keep some degree of proprietary extensions to their systems. But let's dream a little.)

What effect would a uniform processor architecture and a uniform operating system have on programming languages?

At first glance, one might think that there would be no effect. Programming languages are different things from processors and operating systems, handling different tasks. Different programming languages are good at different things, and we want to do different things, so why not keep different programming languages?

It is true that different programming languages are good at different things, but that doesn't mean that each and every programming language has unique strengths. Several programming languages have capabilities that overlap, some in multiple areas, and some almost completely: C# and VB.NET, for example, or C# and Java, two object-oriented languages that are good for large-scale projects.

With a single processor architecture and a single operating system, Java loses one of its selling points. Java was designed to run on multiple platforms. Its motto was "Write Once, Run Everywhere." In the mid-1990s, such a goal made sense. There were different processors and different operating systems. But with a uniform architecture and uniform operating system, Java loses that point. The language remains a solid performer, so the loss is not fatal. But the argument for Java weakens.

A pair of overlapping languages is VB.NET and C#. Both are made by Microsoft, and both are made for Windows. Or were made for Windows; they are now available on multiple platforms. They overlap quite a bit. Do we need both? Anything one can do in C# one can also do in VB.NET, and the reverse is true. There is some evidence that Microsoft wants to drop VB.NET -- although there is also evidence that developers want to keep programming in VB.NET. That creates tension for Microsoft.

I suspect that specialty languages such as SQL and JavaScript will remain. SQL has embedded itself in databases, and JavaScript has embedded itself in web browsers.

What about other popular languages? What about COBOL, and FORTRAN, and Python, and R, and Delphi (which oddly still ranks high in the Tiobe index)?

I see no reason for any of them to go away. Each has a large base of existing code; converting those programs to another language would be a large effort with little benefit.

And I think that small, niche languages will remain. Programming languages such as AWK will remain because they are small, easy to use, good at what they do, and they can be maintained by a small team.
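To illustrate that niche strength, here is a hypothetical one-liner of the kind AWK is good at (the data and column layout are made up for the example; it assumes a POSIX shell with awk installed):

```shell
# Sum the second column of a small two-column report: task name, minutes spent.
# awk splits each line into fields; $2 is the second field.
printf 'build 12\ntest 30\ndeploy 5\n' | awk '{ total += $2 } END { print total }'
# prints 47
```

A task like this needs no project setup, no compiler, and no libraries -- which is precisely why such small languages endure.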

The bottom line is that the decision is not practical and logical, but emotional. We have multiple programming languages not because different languages are good at different things (although they are) but because we want multiple programming languages. Programmers become comfortable with programming languages; different programmers choose different programming languages.

Wednesday, April 14, 2021

In USB-C, the C is for confusion

USB-C has added a lot to our tech world: faster transfers of data, more capabilities, and, unfortunately, a bit of confusion.

To fully understand USB, one must understand the situation prior to USB, to the days of the first personal computers. (That is, the late 1970s, and prior to the IBM PC.)

In that early age, each manufacturer was free (more or less) to define their own connectors and communication protocols. Computer makers used the connectors that were available: the DB-25 for telecommunications and the Centronics connector for printers. (The DB-25 was part of the RS-232 standard, and the Centronics design would later be adopted as the IEEE-1284 standard.)

The RS-232 standard was for communications on phone lines, with terminals connected to modems at one end and computers (mainframes and minicomputers) connected to modems on the other end. The cables connecting terminals and modems were well defined. Using them to connect personal computers to printers (and other devices) was not so well defined. Each computer had its own interpretation of the standard, and each printer (or other device) had its own interpretation of the standard. Connecting computers and devices required (all too often) custom cables, so that one cable was useful for computer A to talk to device B, but it could not be used for computer C to talk to device B, or even computer A to talk to device D.

The situation with the Centronics interface on printers was somewhat better. The connector and the protocol were well-defined. But the standard applied only to the printer; it said nothing about the computer. Thus computer makers were able to pick any convenient connector for their end of the cable, and here, too, cables were specific to the computer. A Centronics-compatible printer would need cable A to talk to computer A and cable B to talk to computer B -- because the connectors on computers A and B were different.

Every pair of devices needed its own cable. Some cables were symmetrical, in that the connectors on both ends were the same. That did not mean the cable was reversible. For some devices, the cable was reversible -- it could be oriented either way. For other devices, one end had to be plugged in to the computer and the other end had to be plugged in to the device. Some connectors were symmetrical in that they could be oriented either way in their port -- a connector could be unplugged, flipped 180 degrees, and plugged back in to the same device. A few worked this way; most did not. The result was that cables had to be labelled with notes such as "computer end" and "modem end", or "this side up", or "this side towards power connector".

It was a mess.

The IBM PC brought, if not sanity, at least some standardization to this world. IBM defined a set of connectors for its PC: DIN for keyboard, DE-9 female for video, DE-9 male for serial communications, and DB-25 female for parallel communication. Later, with the PS/2, IBM defined the mini-DIN connector for keyboard and mouse, and the DE-15 female for video (the VGA connector that persists to this day). In addition to connectors, IBM defined the communication protocols, and other manufacturers adopted them. Just about every device on the market was changed to be "IBM-compatible".

But personal computers were not limited to video, serial, and parallel. Over time, we added network connections, scanners, and external drives. IBM did not have an adapter for each, so manufacturers were, once again, creating their own designs for connectors and cables. Eventually, network connections settled on the RJ-45 connector that is used today, but only after a plethora of connectors and cable types had been tried. There were no standards for scanners or external disks.

Some fifteen years after IBM's definition of the PC, USB arrived.

The vision of USB was a single connector and a single cable for all devices, and a single discovery protocol for communication. The acronym 'USB' is from "Universal Serial Bus".

The first USB standard did a fairly good job of it. The original connectors, USB-A and USB-B, were used in pairs: each cable had one and only one of each. USB-A is the common rectangular connector, now used for older devices. USB-B is the rarer square connector, apparently used only on printers and scanners.

Later USB standards adopted smaller connectors for the 'B' end of the cable. These smaller connectors were used for cameras and phones. For a while, there were various mini-B and micro-B connectors, with different numbers of wires and slightly different sizes. Today's smart phones (except for iPhones) use a micro-B connector.

The advantage of the A-B cable is twofold: a standard connector pair and unambiguous orientation. The USB-A connector is used for 'host' devices such as computers and charging stations, and the USB-B connector is used for the 'client' device. (Portable rechargeable batteries have an interesting arrangement: a USB-B connector for charging the battery and a USB-A port for providing charge to a client device such as a phone.)

In all situations, the A-B cable works and one knows how to orient the cable. The 'A' connector goes to the host device, and can be inserted in only one orientation. The 'B' connector goes to the client device and it, too, can be inserted in only one orientation.

The biggest problem of the A-B arrangement was, as far as I can tell, that the orientation of the 'A' connector was not obvious, and one could easily reverse the rectangular connector and attempt to attach it in the wrong orientation.

Now let us look at the USB-C arrangement. USB-C uses a different connector (an oval shape) from the previous 'A' and 'B' connectors. This 'C' connector, like the 'A' connector, can be inserted into a port in either orientation. But unlike the 'A' connector, the 'C' connector can be fully inserted either way, and -- theoretically -- the cable works in either orientation. Not only that, the cable has 'C' connectors on both ends, so one can attach either end of the cable to either device -- one does not have to care about the direction of the cable -- theoretically.

I add those 'theoretically' disclaimers because in practice, USB-C does not always work. Some cables work between two devices, and other cables do not. 'Thunderbolt' USB-C cables are different from plain USB-C cables. (We're back to 'this cable for those devices'.)

Some cables work between two devices, but only when the cable is properly oriented. That is, one end of the cable must always be attached to a specific device. The 'reversibility' of the cable has been lost. (Worse than before, as both ends of the cable look the same. We're back to labels saying 'attach to computer'.)

Some cables work, but only when the connectors are oriented properly in their respective ports. The 'reversibility' of the connector has been lost. (More labels for 'this side up'.)

We have also lost the notion of unambiguous direction, which is important for power. An early adopter of a USB-C laptop and a USB-C phone reported: "I connected my phone to my laptop via USB-C. Now my phone is trying to charge my laptop!"

USB-C has the one advantage of a smaller port. That's good for the makers of laptops and the makers of phones, I suppose. But the confusion about types of cables, and orientation of cables, and orientation of connectors is a cost.

Perhaps this confusion is only temporary. There was confusion with the initial implementations for the first USB devices. Over time, we, as an industry, figured out how to make USB-A and -B work. Maybe we need some time to figure out how to make USB-C work.

Or maybe we won't. Maybe the problems with USB-C are too complex, too close to the design. It is possible that USB-C will always have these problems.

If that is the case, we can look to a new design for USB. USB-D, anyone?

Thursday, May 5, 2016

Where have all the operating systems gone?

We used to have lots of operating systems. Every hardware manufacturer built their own operating systems. Large manufacturers like IBM and DEC had multiple operating systems, introducing new ones with new hardware.

(It's been said that DEC became a computer company by accident. They really wanted to write operating systems, but they needed processors to run them, and compilers and editors to give them something to do, so they ended up building everything. It's a reasonable theory, given the number of operating systems they produced.)

In the 1970s, CP/M was an attempt at an operating system for different hardware platforms. It wasn't the first; Unix had been designed for multiple platforms before it. It wasn't the only one; the UCSD p-System used a virtual processor, much like the Java virtual machine, and ran on various hardware.

Today we also see lots of operating systems. Commonly used ones include Windows, Linux, Mac OS, iOS, Android, Chrome OS, and even watchOS. But are they really different?

Android and Chrome OS are really variants of Linux. Linux itself is a clone of Unix. Mac OS is derived from the Berkeley Software Distribution of Unix (BSD). iOS and watchOS are, according to Wikipedia, "Unix-like", and I assume that they are slim versions of Mac OS with added components.

Which means that our list of commonly-used operating systems becomes:

  • Windows
  • Unix

That's a rather small list. (I'm excluding the operating systems used for special purposes, such as embedded systems in automobiles or machinery or network routers.)

I'm not sure that this reduction in operating systems, this approach to a monoculture, is a good thing. Nor am I convinced that it is a bad thing. After all, a common operating system (or two commonly-used operating systems) means that lots of people know how they work. It means that software written for one variant can be easily ported to another variant.

I do feel some sadness at the loss of the variety of earlier years. The early days of microcomputers saw wide variations of operating systems, a kind of Cambrian explosion of ideas and implementations. Different vendors offered different ideas, in hardware and software. The industry had a different feel from today's world of uniform PCs and standard Windows installations. (The variances between versions of Windows, or even between the distros of Linux, are much smaller than the differences between a Data General minicomputer and a DEC minicomputer.)

Settling on a single operating system is a way of settling on a solution. We have a problem, and *this* operating system, *this* solution, is how we address it. We've settled on other standards: character sets, languages (C# and Java are not that different), storage devices, and keyboards. Once we pick a solution and make it a standard, we tend to not think about it. (Is anyone thinking of new keyboard layouts? New character sets?) Operating systems seem to be settling.