Wednesday, April 28, 2021

Be wary of Apple's M2 processor

The success of Apple's new M1 processor in Mac, MacBook, and iMac computers sent shockwaves through the industry. The M1's performance is much better than that of most Intel processors. Apple (and its fans) are gleeful; Intel (and its fans, if it has any) are glum.

We now have news that Apple is readying a successor processor. Pundits predict the name will be either 'M1X' or 'M2', depending on the increase in capabilities over the M1.

An M1X processor will see a modest set of changes: more cores, and some minor improvements overall.

An M2 processor, on the other hand, will see a significant number of improvements: certainly more cores, faster memory (DDR5?), and a better GPU.

My guess -- and this is a pure, wild guess -- is that an M2 processor will have a design flaw. I do not work with Apple or its suppliers, and I could be wrong.

My guess is that if Apple releases an 'M2' processor (an M1 with lots of changes), then some nontrivial problem will surface after its introduction. A problem that is not detected by Apple's quality assurance efforts, yet one that is not insignificant, even if it does not render the processor unusable.

The culprit here is the "second-system effect" (described by Fred Brooks in The Mythical Man-Month), which occurs after the first system is a success. In brief, after its success with the M1 processor, Apple becomes overconfident -- or over-ambitious -- with the M2 processor, and misses a flaw in the design.

What that flaw will be, I do not know. It could be insufficient heat dissipation, leading to overheating in some circumstances. It could be a flaw in floating-point arithmetic, as with Intel's Pentium FDIV bug of 1994. It could be a problem in the isolation between processes, allowing one process to see the data of a different process.
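
To make the floating-point possibility concrete: the Pentium flaw was exposed by a handful of specific divisions that returned visibly wrong results. Here is a minimal sketch, in Python, of the kind of spot-check an early adopter could run on any new machine. The first test pair is the classic FDIV example; nothing in the sketch is specific to Apple's design.

    # A floating-point spot-check in the spirit of the tests that exposed the
    # 1994 Pentium FDIV flaw. For exact real numbers, x - (x / y) * y is zero;
    # with correct double-precision arithmetic the residual is tiny, while a
    # flawed divider can be off by a visible margin.

    def division_looks_correct(x: float, y: float, tolerance: float = 1e-6) -> bool:
        return abs(x - (x / y) * y) < tolerance

    # 4195835 / 3145727 is the best-known division that tripped the Pentium bug;
    # the other pairs are arbitrary values included as a baseline.
    test_pairs = [(4195835.0, 3145727.0), (1.0, 3.0), (22.0, 7.0)]

    for x, y in test_pairs:
        status = "ok" if division_looks_correct(x, y) else "SUSPECT"
        print(f"{x} / {y}: {status}")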

Should Apple release new computers with a new 'M2' processor, my advice is: Wait. Don't be the first to use them -- at least not for critical applications. Let others test them, for at least a few months.

If you want to try a few as a research project, go ahead. I'm okay with that. You may find that they perform well for you. Or you may find that they don't. Testing new equipment before committing to production is a reasonable and responsible activity.

I will say here that I am biased. I think Apple has stayed too long with the 1970s model of computing, with all computing being local and nothing in the cloud. I also think that it designs products for appearance and not function. (Its displays are designed to be touched and its keyboards are designed to be visually appealing, which in my mind is backwards.) I respect its performance with ARM chips in iPhones, iPads, and Macintosh computers. But Apple is not infallible.

Wednesday, April 14, 2021

In USB-C, the C is for confusion

USB-C has added much to our tech world: faster data transfers, more capabilities, and, unfortunately, a bit of confusion.

To fully understand USB, one must understand the situation prior to USB, back in the days of the first personal computers. (That is, the late 1970s, prior to the IBM PC.)

In that early age, each manufacturer was free (more or less) to define their own connectors and communication protocols. Computer makers used the connectors that were available: the DB-25 for telecommunications and the Centronics connector for printers. (The DB-25 was part of the RS-232 standard, and the Centronics design would later be adopted as the IEEE-1284 standard.)

The RS-232 standard was for communications over phone lines, with terminals connected to modems at one end and computers (mainframes and minicomputers) connected to modems at the other end. The cables connecting terminals and modems were well defined. Using them to connect personal computers to printers (and other devices) was not so well defined. Each computer had its own interpretation of the standard, and so did each printer (or other device). Connecting computers and devices required (all too often) custom cables, so that one cable was useful for computer A to talk to device B, but it could not be used for computer C to talk to device B, or even for computer A to talk to device D.

The situation with the Centronics interface on printers was somewhat better. The connector and the protocol were well-defined. But the standard applied only to the printer; it said nothing about the computer. Thus computer makers were able to pick any convenient connector for their end of the cable, and here, too, cables were specific to the computer. A Centronics-compatible printer would need cable A to talk to computer A and cable B to talk to computer B -- because the connectors on computers A and B were different.

Every pair of devices needed its own cable. Some cables were symmetrical, in that the connectors on both ends were the same. That did not mean the cable was reversible. For some devices, the cable was reversible -- it could be oriented either way. For other devices, one end had to be plugged into the computer and the other end had to be plugged into the device. Some connectors were symmetrical in that they could be oriented either way in their port -- a connector could be unplugged, flipped 180 degrees, and plugged back into the same device. A few worked this way; most did not. The result was that cables had to be labelled with notes such as "computer end" and "modem end" or "this side up" or "this side towards power connector".

It was a mess.

The IBM PC brought, if not sanity, at least some standardization to this world. IBM defined a set of connectors for its PC: DIN for keyboard, DE-9 female for video, DE-9 male for serial communications, and DB-25 female for parallel communication. Later, with the PS/2, IBM defined the mini-DIN connector for keyboard and mouse, and the DE-15 female for video (the VGA connector that persists to this day). In addition to connectors, IBM defined the communication protocols, and other manufacturers adopted them. Just about every device on the market was changed to be "IBM-compatible".

But personal computers were not limited to video, serial, and parallel. Over time, we added network connections, scanners, and external drives. IBM did not have an adapter for each, so manufacturers were, once again, creating their own designs for connectors and cables. Eventually, network connections settled on the RJ-45 connector that is used today, but only after a plethora of connectors and cable types were tried. There were no standards for scanners or external disks.

Some fifteen years after IBM's definition of the PC, USB arrived.

The vision of USB was a single connector and a single cable for all devices, and a single discovery protocol for communication. The acronym 'USB' is from "Universal Serial Bus".
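
The discovery part of that vision is the piece that has held up best: the host can enumerate every attached device and ask each one to identify itself. As a small illustration, here is a sketch in Python using the third-party pyusb library (an assumption on my part: pyusb and a libusb backend are installed); it lists the vendor and product IDs of every USB device the host can see.

    # Enumerate every USB device visible to the host, using the third-party
    # pyusb library (an assumption: pyusb and a libusb backend are installed).
    # Each device reports a vendor ID and a product ID during enumeration;
    # that single discovery protocol is the 'universal' part of USB.
    import usb.core

    for dev in usb.core.find(find_all=True):
        print(f"bus {dev.bus} device {dev.address}: "
              f"vendor 0x{dev.idVendor:04x}, product 0x{dev.idProduct:04x}")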

The first USB standard did a fairly good job of delivering that vision. The original connectors, USB-A and USB-B, were used in pairs: each cable had one and only one of each. USB-A is the common, rectangular connector now used for older devices. USB-B is the rarer square connector that is apparently used only on printers and scanners.

Later USB standards adopted smaller connectors for the 'B' end of the cable. These smaller connectors were used for cameras and phones. For a while, there were various mini-B and micro-B connectors, with different numbers of wires and slightly different sizes. Today's smart phones (except for iPhones) use a micro-B connector.

The advantage of the A-B cable is twofold: standardization and unambiguous orientation. The USB-A connector is used for 'host' devices such as computers and charging stations, and the USB-B connector is used for the 'client' device. (Portable rechargeable batteries have an interesting arrangement: a USB-B connector for charging the battery and a USB-A port for providing charge to a client device such as a phone.)

In all situations, the A-B cable works and one knows how to orient the cable. The 'A' connector goes to the host device, and can be inserted in only one orientation. The 'B' connector goes to the client device and it, too, can be inserted in only one orientation.

The biggest problem of the A-B arrangement was, as far as I can tell, that the orientation of the 'A' connector was not obvious, and one could easily reverse the rectangular connector and attempt to attach it in the wrong orientation.

Now let us look at the USB-C arrangement. USB-C uses a different connector (an oval shape) than the previous 'A' and 'B' connectors. This 'C' connector, like the 'A' connector, can be offered to a port in either orientation. But unlike the 'A' connector, the 'C' connector actually seats fully either way, and -- theoretically -- the cable works in either orientation. Not only that, the cable has 'C' connectors on both ends, so one can attach either end of the cable to either device -- one does not have to care about the orientation of the cable -- theoretically.

I add those 'theoretically' disclaimers because in practice, USB-C does not always work. Some cables work between two devices, and other cables do not. 'Thunderbolt' USB-C cables are different from plain USB-C cables. (We're back to 'this cable for those devices'.)

Some cables work between two devices, but only when the cable is properly oriented. That is, one end of the cable must always be attached to a specific device. The 'reversibility' of the cable has been lost. (Worse than before, as both ends of the cable look the same. We're back to labels saying 'attach to computer'.)

Some cables work, but only when the connectors are oriented properly in their respective ports. The 'reversibility' of the connector has been lost. (More labels for 'this side up'.)

We have also lost the notion of unambiguous direction, which is important for power. An early adopter of a USB-C laptop and a USB-C phone reported: "I connected my phone to my laptop via USB-C. Now my phone is trying to charge my laptop!"

USB-C has the one advantage of a smaller port. That's good for the makers of laptops and the makers of phones, I suppose. But the confusion about types of cables, and orientation of cables, and orientation of connectors is a cost.

Perhaps this confusion is only temporary. There was confusion with the initial implementations for the first USB devices. Over time, we, as an industry, figured out how to make USB-A and -B work. Maybe we need some time to figure out how to make USB-C work.

Or maybe we won't. Maybe the problems with USB-C are too complex, too close to the design. It is possible that USB-C will always have these problems.

If that is the case, we can look to a new design for USB. USB-D, anyone?

Monday, April 5, 2021

The Golden Age of Programming

Was there a "golden age" of programming?  A time that was considered the best of times for programmers?

I assert that there was, and that it is now. Moreover, I assert that we have always been in a golden age of programming, from the early days of computing up to now.

I will use my personal history to explain.

I started programming in the mid 1970s, in high school. Our town's high school was fortunate enough (or wealthy enough) to have a minicomputer which ran timesharing BASIC, and the school used it for teaching programming. It was a DEC PDP-8/e computer with a DECwriter and three Teletypes, so up to four people could use it at once.

For me, this was a golden age of computing. It was infinitely better than what I had before (which was nothing) and it was better (in my mind) than older computers I had read about, mainframe computers that accepted programs on punch cards and required either FORTRAN or COBOL. I had read a few books on FORTRAN and COBOL and decided at that tender age that those languages were not for me, and that BASIC was a much better programming language. (I have since changed my opinion about programming languages.)

Shortly after my experience with timeshare BASIC, my father brought home a microcomputer for the family to use. It was a Heathkit H-89 (technically a WH-89, as it was already assembled) and it could be programmed in Assembly language and in BASIC. Other languages could be added, including a subset of C and a subset of Pascal. (It was also possible to purchase FORTRAN and COBOL for it, but those were expensive, so we stayed with Assembly and BASIC and C and Pascal.)

Programming on the H-89 at home was much better than programming on the PDP-8/e at school. The home computer was available twenty-four hours a day, while the computer at school was available only after classes. The home computer had a CRT display, so it did not need paper like the Teletypes on the school computer. The CRT was also faster, and it had some rudimentary graphics capabilities.

That was a golden age of programming.

The early 1980s saw the introduction of the IBM PC and PC-DOS, and with them the introduction of new languages. The IBM PC came with BASIC, and other languages such as dBase and R:base and Clarion were available, as well as COBOL and FORTRAN.

That was a golden age of programming.

The 1990s saw the adoption of C and later C++ as programming languages. The full version of C (not a subset) was available on PCs, and I worked for a company that used C (and later C++) to build applications.

The 1990s also saw the introduction of Microsoft Windows and programming languages tailored for it. There were Visual Basic and Visual C++ from Microsoft, which came with complete IDEs that included editors and debuggers. Borland offered its own C++ with an IDE. There was PowerBuilder, which let one build client-server applications to take advantage of the networking capabilities in Windows.

That was a golden age of programming.

Today, we have oodles of programming languages. We have Go and Swift and Python and R. We have C# and F#, VB.NET and Java. We also still have FORTRAN and COBOL.

We have excellent tools to support programmers. We have editors, debuggers, IDEs, and syntax checkers. We have sophisticated version control systems that allow for coordinated development by teams and across multiple teams.

This is a golden age of programming. And I predict that it won't be the last.