Tuesday, May 18, 2021

The new bold colors of iMacs are for Apple, not you

I must admit that when I first saw Apple's new iMacs and the bold colors that Apple assigned to them, I was puzzled. Why would anyone want those iMacs?

Not that the colors are unappealing. They are, in fact, quite nice.

But why put such colors on iMacs? That is, why put such colors on computers that are not portable?

I understand the reasoning for colors on laptops. Bright or bold colors (and the Apple logo) on laptops make sense. MacBook owners identify with their laptops. In an earlier age, when MacBooks were white, their owners festooned them with stickers and artwork. Today, carrying around a MacBook lets everyone else know that one is in the club of Cool Apple Kids.

But that logic doesn't work for iMacs. People don't (as a general rule) carry their iMacs from home to the office, or use them at the local coffee shop.

And why, of all places, did Apple decide to put the colors on the back of the display? That is the one place that the user isn't looking. Users of iMacs -- at least the users who I know -- look at the display and rarely look at the back of the unit. Most folks position the iMac on a desk up against a wall, where no one can see the back of the iMac.

After a bit of puzzling, I arrived at an answer.

The colors on the iMac are not for the user.

The colors on the iMac are for Apple.

Apple's positioning of colors on the back of an iMac, and the use of bold colors, makes sense from a certain point of view -- advertising. Specifically, advertising in the corporate environment.

It's true that iMacs used in a home will be positioned on desks against a wall. But that doesn't hold for the corporate environment, with its open office plans where people sit around desks that are little more than flat tables.

In those offices, people do see the backs of computers (or displays, if the CPU is on the desk or below on the floor).

By using bold colors, Apple lets everyone in an office quickly see that a new computer has arrived. All of the other computers are black; the new Apple iMacs are red, or blue, or green, or yellow. A new iMac in an office shouts out to the entire office "I'm an Apple iMac!" -- no, better than that, it shouts "I'm a new Apple iMac!".

This is advertising, and I think it will be effective. Once one person gets a new iMac, many other folks in the office will want new iMacs. "If Sam can get a new iMac, why can't I?" will be the thinking.

Notice that this advertising is targeted for offices. It doesn't work in the home. (Although in the home, with everyone knowing what everyone else has, bold colors are not necessary to generate demand.) This advertising works in offices, especially those offices where equipment is associated with status. iMacs are the Cool New Thing, and the Very Cool People always have the Cool New Thing.

Apple is leveraging its brand well.

Monday, May 10, 2021

Large programming languages considered harmful

I have become disenchanted with the C# programming language. When it was introduced in 2001, I liked the language. But the last few years have seen me less interested. I finally figured out why.

The reason for my disenchantment is the size of C#. The original version was a medium-sized language. It was an object-oriented language, and in many ways a copy of Java (which was also a medium-sized language in 2001).

Over the years, Microsoft has released new versions of C#. Each new version added features, and increased the capabilities of the language. But as Microsoft increased the capabilities, it also increased the size of the language.

The size of a programming language is an imprecise concept. It is more than a simple count of the keywords, or the number of rules for syntax. The measure I like to use is a rough guess of how much space it requires in the head of a programmer; how much brainpower is required to learn the language and how many neurons are needed to remember the different concepts, keywords, and rules of the language.

Such a measure has not been made with any tools, at least not that I know of. All I have is a rough estimate of a language's size. But that rough estimate is good enough to classify languages into small (BASIC, AWK, original FORTRAN), medium (Ruby, Python), and large (COBOL, C#, and Perl).
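One crude way to make this rough estimate concrete is to count a language's reserved keywords. Keyword counts badly understate true "size" (they ignore syntax rules, the standard library, and idioms), but they do reproduce the small/medium/large ordering. Here is a minimal sketch in Python; the counts for the other languages are hand-collected approximations of mine, not authoritative figures.

```python
# Crude proxy for "language size": count reserved keywords.
# Python's count is measured from the interpreter itself; the
# others are approximate, illustrative numbers (an assumption).
import keyword

approx_keywords = {
    "BASIC (classic)": 20,
    "Python": len(keyword.kwlist),   # measured, varies slightly by version
    "Java": 50,
    "C#": 77,                        # reserved plus contextual keywords
    "COBOL": 300,                    # the reserved-word list is enormous
}

for lang, count in sorted(approx_keywords.items(), key=lambda kv: kv[1]):
    print(f"{lang:18} ~{count} keywords")
```

Even this toy measurement puts BASIC at one end, COBOL at the other, and Python in the middle, matching the small/medium/large buckets above.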

It may seem natural that languages expand over time. Languages other than C# have been expanded: Java (by Sun and later Oracle), Visual Basic (by Microsoft), C++ (by committee), Perl, Python, Ruby, and even languages such as COBOL and Fortran.

But such expansions of languages worry me. The source of my worry goes back to the "language wars" of the early days of computing.

In the 1960s, 1970s, and 1980s programmers argued (passionately) over programming languages. C vs Pascal, BASIC vs FORTRAN, Assembly language vs... everything.

Those arguments were fueled, mostly in my opinion, by the high cost of switching. Programming languages were not free. Compilers and interpreters were sold (or licensed). Changing languages meant spending for the new language -- and abandoning the investment in the old. And that meant that, once invested in a language, you were loath to give it up. And that meant you would defend that choice of programming language. People would rather fight than switch.

In the 2000s, thanks to open source, compilers and interpreters became free. The financial cost of changing from one language to another disappeared. And that meant that people could switch programming languages. And that meant that people could switch rather than fight.

So why am I worried, now, in 2021, about a new round of language wars?

The reason is the size of programming languages. More specifically, the size of the environment for any one programming language. That environment includes the language, the compiler (or interpreter), the standard library (or common packages used for development), and the IDE. Each of these components requires some amount of effort to learn and remember.

As each of these environments grows, the effort to learn it grows. And that means that the effort to switch from one language to another also grows. Changing from C# to Python, for example, requires not only learning the Python syntax, it also requires learning the common packages that are necessary for effective Python programs and also learning the IDE (probably PyCharm, which is quite different from Visual Studio).

We are rebuilding the barriers between programming languages. The old barrier was financial: it cost a lot to switch from one language to another. The new barrier is not financial but technical: the tools are free but the time to learn them is significant.

Barriers to switching programming languages can put us back in the position of defending our choices. Once again, programmers may rather fight than switch.

Monday, May 3, 2021

The fall and possible rise of UML

Lots of folks are discussing UML, and specifically the death of UML. What killed UML? Lots of people have different ideas. I have some ideas too. Rather than pin the failure on one reason, I have a bunch.

First, our methods changed, and UML was not a good fit with the newer methods. UML was created in the world of large-scale waterfall projects. It works well with those projects, with design up front (disparagingly called "Big Design Up Front") as a precursor to coding. UML does not work well with Agile methods, where design and coding occur in parallel. UML assigns value to code; the idea of up-front design is to build the right code from the start and not revise it. In the UML world, changes to code are expensive and to be avoided. UML attaches similar value to the designs themselves, with the same desire to avoid changes to designs. (Although changes to designs are preferred over changes to code.)

UML works well with object-oriented programming, but not with cloud computing (small scripts instead of big code).

Second, UML entailed costs. UML notation was difficult to learn. Or at least required some time to learn. The tools took time to learn, and they also cost significant sums. The mindset was "invest now (by learning UML and buying the tools) to prevent more costly mistakes later". At the time, there were charts showing the cost of a mistake, and comparing the cost of detecting the mistake at different points in the project. A mistake detected early (say in requirements or design) was less expensive than a mistake detected later (say in coding or testing). Mistakes detected after deployment were the most expensive. This effect justified the expense of UML tools.

But UML tools were expensive, and not everyone on the team got UML tools. The tools were reserved for the designers; coders were limited to printed copies of UML diagrams. This led to the notion that designers were special and worth more than programmers. (The elite received UML tools; the plebes did not.) This in turn led to resentment by programmers.

A third (and often overlooked) reason was the expense for designers. When programmers performed both design and programming, their salaries covered both activities. UML formalized the design process and required a subteam of designers, and each of those designers required a salary. (And they often wanted salaries higher than those of programmers.)

A fourth (and also often overlooked) reason was the added delay to the development process.

UML created an additional step in the waterfall process. Theoretically, it did not, because UML was simply formalized design documents. But in practice, UML did create an additional step.

Before UML, a project would have the formal steps of requirements, design, coding, testing, and deployment. That's what managers thought they had. In reality, the steps were slightly different than those formal steps. The actual steps were requirements, design and coding, testing, and deployment.

Before UML: requirements -> design and code -> test -> deploy

Notice that the steps of design and code are one step, not two. It was an activity performed by the programming team. As it was a single team, people could move from designing to coding and back again, revising the design as they developed the code.

UML and a formal design deliverable changed the process to the five steps the managers thought they had:

With UML: requirements -> design -> code -> test -> deploy

UML forced the separation of design from coding, and in doing so, changed the (informal) four-step process to a five-step process.

Programmers were used to designing as well as programming. With UML, programmers could not unilaterally change the design; they had to push back against the design. This set up conflicts between designers and programmers. Sometimes the designers "gave in" and allowed a change; other times they "held fast" and programmers had to build something they considered wrong. In either case, such differences introduced delays and political struggles when there were none before.

Those are my observations for UML, and why it failed: new methods not suitable for UML, direct expense of tools and training, direct expense of designers, and a slower development process.

* * * * *

In a way, I am sorry for the loss of UML. I think it can be a helpful tool. But not in the way it was implemented.

UML was added to a project as a design aid, and one that occurred prior to coding. Perhaps it is better to have UML as a diagnostic instead of an aspiration. That is, instead of creating UML and then generating code from UML, create code and then generate UML from the code.

In this way, UML could be a kind of "super lint" that reports on the design of the system.

There was "round-tripping" which allowed for UML to be converted to code, and then that code converted back to UML. That is not the same; it leaves UML as the center for design. And round-tripping never really worked the way we needed. A one-way code-to-UML diagnostic puts code at the center and UML as a tool to assist the programmers. (That's my bias as a programmer showing.)

A code-to-UML diagnostic could be helpful to Agile projects, just as 'lint' and other style checkers are. The tools may be less expensive (we've gotten better at tools, and a diagnostic tool is easier to build than a UML editor). We would not have a separate design team, avoiding that expense (and the associated politics). And a diagnostic tool would not slow the development process -- or at least not so much.
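To show the idea is not far-fetched, here is a minimal sketch of such a one-way code-to-UML diagnostic in Python. It inspects the classes handed to it and emits PlantUML class-diagram text (public methods and inheritance arrows). A real "super lint" would parse source files and report far more; this uses runtime reflection and the example `Shape`/`Circle` classes are invented for illustration.

```python
# Minimal sketch: generate PlantUML class-diagram text from live
# Python classes. One-way only -- code is the source of truth.
import inspect

def classes_to_plantuml(namespace):
    lines = ["@startuml"]
    classes = [obj for obj in namespace.values() if inspect.isclass(obj)]
    for cls in classes:
        lines.append(f"class {cls.__name__} {{")
        # List public callables as UML methods.
        for name, member in vars(cls).items():
            if callable(member) and not name.startswith("_"):
                lines.append(f"  +{name}()")
        lines.append("}")
    # Draw inheritance arrows between the classes we were given.
    for cls in classes:
        for base in cls.__bases__:
            if base in classes:
                lines.append(f"{base.__name__} <|-- {cls.__name__}")
    lines.append("@enduml")
    return "\n".join(lines)

# Hypothetical example input: two small classes with inheritance.
class Shape:
    def area(self): ...

class Circle(Shape):
    def area(self): ...
    def circumference(self): ...

print(classes_to_plantuml({"Shape": Shape, "Circle": Circle}))
```

The output includes `Shape <|-- Circle`, which PlantUML renders as the familiar inheritance arrow. Run periodically against a codebase, a diff of this text would flag design drift, much as 'lint' flags style drift.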

Maybe we will see such a tool. If we do, it will have to be developed by the open-source community. (That is, an individual who wants to scratch an itch, much like Perl, or Python, or Linux.) I don't see a large corporation building one; I don't see a business model for it.

Anyone want to scratch an itch?

Wednesday, April 28, 2021

Be wary of Apple's M2 processor

The success of Apple's new M1 processor in Mac, MacBook, and iMac computers sent shockwaves through the industry. Performance of the M1 is much better than most processors from Intel. Apple (and its fans) are gleeful; Intel (and its fans, if it has any) are glum.

We now have news that Apple is readying a successor processor. Pundits predict the name will either be 'M1X' or 'M2', depending on the increase in capabilities over the M1 processor.

An M1X processor will see a modest set of improvements: an increased number of cores, and some minor improvements overall.

An M2 processor, on the other hand, will see a significant number of improvements. Certainly more cores, and faster memory (DDR5?) and a better GPU.

My guess -- and this is a pure, wild guess -- is that an M2 processor will have a design flaw. I do not work with Apple or its suppliers. And, I could be wrong.

My guess is that if Apple releases an 'M2' processor (an M1 with lots of changes) then there will be some nontrivial problem that surfaces after its introduction. A problem that is not detected by Apple's quality assurance efforts, yet a problem significant enough to render the processor unusable for some purposes.

The culprit here is the "second system effect" which occurs after the first system is a success. In brief, after its success with the M1 processor, Apple becomes overconfident -- or over-ambitious -- with the M2 processor, and misses a flaw in the design.

What that flaw will be I do not know. It could be insufficient heat dissipation, leading to overheating in some circumstances. It could be a flaw in floating-point arithmetic. It could be a problem in the security between different processes, allowing one process to see the data of a different process.

Should Apple release new computers with a new 'M2' processor, my advice is: Wait. Don't be the first to use them -- at least not for critical applications. Let others test them, for at least a few months.

If you want to try a few as a research project, go ahead. I'm okay with that. You may find that they perform well for you. Or you may find that they don't. Testing new equipment before committing to production is a reasonable and responsible activity.

I will say here that I am biased. I think Apple has stayed too long with the 1970s model of computing, with all computing being local and nothing in the cloud. I also think that it designs products for appearance and not function. (Its displays are designed to be touched and its keyboards are designed to be visually appealing, which in my mind is backwards.) I respect its performance with ARM chips in iPhones, iPads, and Macintosh computers. But Apple is not infallible.

Wednesday, April 14, 2021

In USB-C, the C is for confusion

USB-C has added to our tech world. Faster transfers of data, more capabilities, and, unfortunately, a bit of confusion.

To fully understand USB, one must understand the situation prior to USB, to the days of the first personal computers. (That is, the late 1970s, and prior to the IBM PC.)

In that early age, each manufacturer was free (more or less) to define their own connectors and communication protocols. Computer makers used the connectors that were available: the DB-25 for telecommunications and the Centronics connector for printers. (The DB-25 was part of the RS-232 standard, and the Centronics design would later be adopted as the IEEE-1284 standard.)

The RS-232 standard was for communications on phone lines, with terminals connected to modems at one end and computers (mainframes and minicomputers) connected to modems on the other end. The cables connecting terminals and modems were well defined. Using them to connect personal computers to printers (and other devices) was not so well defined. Each computer had its own interpretation of the standard, and each printer (or other device) had its own interpretation of the standard. Connecting computers and devices required (all too often) custom cables, so that one cable was useful for computer A to talk to device B, but it could not be used for computer C to talk to device B, or even computer A to talk to device D.

The situation with the Centronics interface on printers was somewhat better. The connector and the protocol were well-defined. But the standard applied only to the printer; it said nothing about the computer. Thus computer makers were able to pick any convenient connector for their end of the cable, and here too, cables were specific to the computer. A Centronics-compatible printer would need cable A to talk to computer A and cable B to talk to computer B -- because the connectors on computers A and B were different.

Every pair of devices needed its own cable. Some cables were symmetrical, in that the connectors on both ends were the same. That did not mean the cable was reversible. For some devices, the cable was reversible -- it could be oriented either way. For other devices, one end had to be plugged in to the computer and the other end had to be plugged in to the device. Some connectors were symmetrical in that they could be oriented either way in their port -- a connector could be unplugged, flipped 180 degrees, and plugged back in to the same device. A few worked this way; most did not. The result was that cables had to be labelled with notes such as "computer end" and "modem end" or "this side up" or "this side towards power connector".

It was a mess.

The IBM PC brought if not sanity at least some standardization to this world. IBM defined a set of connectors for its PC: DIN for keyboard, DE-9 female for video, DE-9 male for serial communications, and DB-25 female for parallel communication. Later, with the PS/2, IBM defined the mini DIN connector for keyboard and mouse, and the DE-15 female for video (the VGA connector that persists to this day). In addition to connectors, IBM defined the communication protocols, and other manufacturers adopted them. Just about every device on the market was changed to be "IBM-compatible".

But personal computers were not limited to video, serial, and parallel. Over time, we added network connections, scanners, and external drives. IBM did not have an adapter for each, so manufacturers were, once again, creating their own designs for connectors and cables. Eventually, network connectors settled on the RJ-45 that is used today, but only after a plethora of connectors and cable types were tried. There were no standards for scanners or external disks.

Some fifteen years after IBM's definition of the PC, USB arrived.

The vision of USB was a single connector and a single cable for all devices, and a single discovery protocol for communication. The acronym 'USB' is from "Universal Serial Bus".

The first USB standard did a fairly good job of it. The original connectors, USB-A and USB-B, were used in pairs: each cable had one and only one of each connector. USB-A is the common, rectangular connector still used for older devices. USB-B is the rarer square connector that is apparently used only on printers and scanners.

Later USB standards adopted smaller connectors for the 'B' end of the cable. These smaller connectors were used for cameras and phones. For a while, there were various mini-B and micro-B connectors, with different numbers of wires and slightly different sizes. Today's smart phones (except for iPhones) use a micro-B connector.

The advantage of the A-B cable is twofold: standard and unambiguous orientation. The USB-A connector is used for 'host' devices such as computers and charging stations, and the USB-B connector is used for the 'client' device. (Portable rechargeable batteries have an interesting arrangement of a USB-B connector for charging the battery and a USB-A port for providing charge to a client device such as a phone.)

In all situations, the A-B cable works and one knows how to orient the cable. The 'A' connector goes to the host device, and can be inserted in only one orientation. The 'B' connector goes to the client device and it, too, can be inserted in only one orientation.

The biggest problem of the A-B arrangement was, as far as I can tell, that the orientation of the 'A' connector was not obvious, and one could easily reverse the rectangular connector and attempt to attach it in the wrong orientation.

Now let us look at the USB-C arrangement. USB-C uses a different connector (an oval shape) than the previous 'A' and 'B' connectors. This 'C' connector, like the 'A' connector, can be inserted into a port in either orientation. But unlike the 'A' connector, the 'C' connector actually lets one insert the cable fully, and -- theoretically -- the cable works in either orientation. Not only that, the cable has 'C' connectors on both ends, so one can attach either end of the cable to either device -- one does not have to care about the orientation of the cable -- theoretically.

I add those 'theoretically' disclaimers because in practice, USB-C does not always work. Some cables work between two devices, and other cables do not. 'Thunderbolt' USB-C cables are different from plain USB-C cables. (We're back to 'this cable for those devices'.)

Some cables work between two devices, but only when the cable is properly oriented. That is, one end of the cable must always be attached to a specific device. The 'reversibility' of the cable has been lost. (Worse than before, as both ends of the cable look the same. We're back to labels saying 'attach to computer'.)

Some cables work, but only when the connectors are oriented properly in their respective ports. The 'reversibility' of the connector has been lost. (More labels for 'this side up'.)

We have also lost the notion of unambiguous direction, which is important for power. An early adopter of a USB-C laptop and a USB-C phone reported: "I connected my phone to my laptop via USB-C. Now my phone is trying to charge my laptop!"

USB-C has the one advantage of a smaller port. That's good for the makers of laptops and the makers of phones, I suppose. But the confusion about types of cables, and orientation of cables, and orientation of connectors is a cost.

Perhaps this confusion is only temporary. There was confusion with the initial implementations for the first USB devices. Over time, we, as an industry, figured out how to make USB-A and -B work. Maybe we need some time to figure out how to make USB-C work.

Or maybe we won't. Maybe the problems with USB-C are too complex, too close to the design. It is possible that USB-C will always have these problems.

If that is the case, we can look to a new design for USB. USB-D, anyone?

Monday, April 5, 2021

The Golden Age of Programming

Was there a "golden age" of programming?  A time that was considered the best of times for programmers?

I assert that there was, and that it is now. Moreover, I assert that we have always been in a golden age of programming, from the early days of computing up to now.

I will use my personal history to explain.

I started programming in the mid 1970s, in high school. Our town's high school was fortunate enough (or wealthy enough) to have a minicomputer which ran timesharing BASIC, and the school used it for teaching programming. It was a DEC PDP-8/e computer with a DECwriter and three Teletypes, so up to four people could use it at once.

For me, this was a golden age of computing. It was infinitely better than what I had before (which was nothing) and it was better (in my mind) than older computers I had read about, mainframe computers that accepted programs on punch cards and required either FORTRAN or COBOL. I had read a few books on FORTRAN and COBOL and decided at that tender age that those languages were not for me, and that BASIC was a much better programming language. (I have since changed my opinion about programming languages.)

Shortly after my experience with timeshare BASIC, my father brought home a microcomputer for the family to use. It was a Heathkit H-89 (technically a WH-89, as it was already assembled) and it could be programmed in Assembly language and in BASIC. Other languages could be added, including a subset of C and a subset of Pascal. (It was also possible to purchase FORTRAN and COBOL for it, but those were expensive, so we stayed with Assembly and BASIC and C and Pascal.)

Programming on the H-89 at home was much better than programming on the PDP-8/e at school. The home computer was available twenty-four hours a day, while the computer at school was available only after classes. The home computer had a CRT display, so it did not need paper like the Teletypes on the school computer. The CRT was also faster, and it had some rudimentary graphics capabilities.

That was a golden age of programming.

The early 1980s saw the introduction of the IBM PC and PC-DOS, and with them the introduction of new languages. The IBM PC came with BASIC, and other languages such as dBase and R:base and Clarion were available, as well as COBOL and FORTRAN.

That was a golden age of programming.

The 1990s saw the adoption of C and later C++ as programming languages. The full version of C (not a subset) was available on PCs, and I worked for a company that used C (and later C++) to build applications.

The 1990s also saw the introduction of Microsoft Windows and programming languages tailored for it. There were Visual Basic and Visual C++ from Microsoft, which came with complete IDEs that included editors and debuggers. Borland offered its own C++ with IDE. There was PowerBuilder, which let one build client-server applications to take advantage of the networking capabilities in Windows.

That was a golden age of programming.

Today, we have oodles of programming languages. We have Go and Swift and Python and R. We have C# and F#, VB.NET and Java. We also still have FORTRAN and COBOL.

We have excellent tools to support programmers. We have editors, debuggers, IDEs, and syntax checkers. We have sophisticated version control systems that allow for coordinated development by teams and across multiple teams.

This is a golden age of programming. And I predict that it won't be the last.

Monday, March 8, 2021

Google replaces Google Pay with Google Pay to make customers pay

Google caused a stir this week when it announced a complete re-vamp of its Google Pay service. The old Google Pay was web-based and allowed for free payments by using debit accounts (the ACH network). The new Google Pay is SMS-based and imposes fees on all transactions, including debit accounts. (And since it is built on top of SMS, the new Google Pay works only on cell phones. It does not work on desktops, laptops, or wifi-only tablets.)

To make matters more confusing, the old service was named "Google Pay". The new service is also named "Google Pay".

Reaction has been sharp. The discussions thus far run along the lines of "Google kills another usable service" to "this change brings no benefit to the customer". Both ideas are correct, but no one has asked why Google would make such a change.

So why would Google make such a change?

A few reasons come to mind.

First, an SMS-based payment system works well in India, which is a large market for Google. A payment system in India, especially one in which every transaction provides income to Google, would help Google's bottom line.

That doesn't explain why Google would eliminate a working payment system in the US and force its current customers to change to the new Google Pay. The change requires not only installing the new app but also re-registering (with a phone number) and rebuilding contact lists.

I think that more is going on than an evil plan to make users dance through hoops. Which brings me to my second idea.

I think Google, internally, is reviewing its offerings and looking at ways to increase profits. It may be that Google (or Alphabet, it's not clear who is calling the shots here) is looking to grow revenues. Google is doing this by either reducing or eliminating its free services, replacing them with paid-for services.

Google started as a search engine, but wowed the world with its e-mail system. It offered free e-mail with extra large storage (1GB per account) at a time when most companies charged for e-mail and limited storage to hundreds of kilobytes, perhaps a megabyte or two. The few free e-mail services offered even less storage.

After e-mail, Google offered many other services, most of them free. But over time, the free services have become... not free. Services that were free for individuals had costs for organizations. Of course, the accounts for organizations had extra features that individual users never needed, such as administrative functions and group permissions. It was easy to justify the fees.

Google charging more fees is, I think, the way of the future. Google has a large user base, and is most likely counting on people staying with Google, paying relatively small fees. If some customers leave, Google will probably not miss them.

It may be that paying for services is the way of the future. It may shock some users, and there may be much shouting and gnashing of teeth, but it is a definite possibility.

What interests me is the notion that Google is doing this not because they want to, and not because they can, but because they have to. Is it possible that advertising revenue (Google's current source of income) is no longer sufficient to power Google?