Sunday, August 14, 2022

Where has the FUD gone?

A long time ago, the IT industry was subjected to FUD (Fear, Uncertainty, Doubt) advertising campaigns.

FUD was a marketing strategy used by IBM. When a competitor announced a new product (or worse, a breakthrough product), IBM would announce a similar product, but only in broad terms. The idea was to encourage people to wait for IBM's product, thereby reducing sales of the competing product. (It also created hype for the IBM product.)

To be effective, a FUD announcement had to describe a future product. It created doubt and uncertainty, and fear of making a bad choice of technology.

I haven't seen a FUD announcement for a long time.

They were common when IBM was the dominant manufacturer of computing equipment. The FUD campaigns may have ended shortly after the introduction of the PS/2 line of computers (itself a product with a FUD announcement). The market rejected the PS/2's Micro Channel Architecture, and accepted Compaq's use of ISA (the old PC standard architecture) with its 80386 processors. (Although the market did adopt IBM's VGA display and hard-shelled 1.44 MB "floppy" disk as standards.)

Compaq didn't use FUD announcements to take the lead from IBM. It delivered products, and its announcements for those products were specific. The message was "our products have this technology and you can buy them today".

There is one company today that makes something similar to FUD announcements. But its announcements are different from the old-style FUD of yesteryear.

That company is Apple.

Apple is the one company that announces future products. It does so in a number of ways, from its annual marketing events to its reliable product schedule. (Everyone knows that Apple releases new iPhones in September, for example.)

Apple's FUD campaign seems to be accidental. I don't think that Apple is playing the same game that IBM played in the 1980s. IBM made FUD announcements to deter people from buying products from other companies. Apple may be doing the same thing, but instead of affecting other companies, I think it is Apple itself that suffers from these announcements.

Apple announcing a new processor for its MacBook line doesn't deter people from buying Windows laptops. They need laptops, they have already chosen Windows (for whatever reason), and they are going to buy them. Very few people change their purchasing decision from Windows to a yet-to-be-defined MacBook.

But a lot of Apple users defer their purchases. Many Apple users, in the middle of planning an upgrade, put off their purchase until the new MacBook is released.

Trade publications advise people to defer the purchase of Mac Minis, based on Apple's announcements and regular product schedule.

This behavior is the same as IBM's FUD from thirty years ago -- but with the difference that the (unintentional) target is the company itself.

It may be that Apple is aware of customers deferring their purchases. Perhaps Apple wants that behavior. After all, if Apple withheld new product information until the release of the product, those who recently purchased the older version may feel betrayed by Apple. It may be that Apple is forgoing immediate revenue in exchange for customer goodwill.

I'm happy to live in a world without FUD announcements. The IT world with FUD had challenges for planning, and a constant fear of missing out on important technology. The current world is a much more relaxed place. (That may sound odd to technology managers, but believe me, today's world is much better than the older one.)

Tuesday, August 9, 2022

E-mail Addresses Considered Harmful

PCWorld lost (temporarily) their YouTube account because their e-mail address changed.

YouTube, like many web services, uses e-mail addresses for customer IDs. This, I think, is a poor practice.

Many web services and many cloud services create dependencies on an e-mail address. Your account ID is your e-mail address. (This is a cheap way to ensure unique IDs.) When I update my e-mail address on these sites, I am changing my ID.

IDs should be unique, short, and permanent. E-mail addresses are unique, and they are usually short, but they are not permanent. E-mail addresses can change. Specifically, e-mail addresses can change outside of the control of the organization that uses them as IDs. I changed my main e-mail address recently, and had to go through all of my accounts (I keep a list) and update each of them.

For most sites, I was able to change my e-mail address. Some sites let me change my contact e-mail address but did not allow me to change my ID. Those sites send e-mails to my new address, but I must use my old e-mail address to log in. Other sites let me change my e-mail address as ID, but kept sending notifications to my old e-mail address. (Their web site stores the e-mail addresses for notifications as a copy of the login ID, and those e-mail addresses are not updated when the ID is changed.)
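The second failure mode described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular site's code: at sign-up the notification address is stored as a copy of the login ID, and a later change to the login ID misses the copy.

```python
# Hypothetical sketch of a site that copies the login e-mail into a
# separate notification field at sign-up, then updates only the login
# ID when the user changes addresses.

accounts = {}

def sign_up(email):
    # The notification address starts life as a copy of the login ID.
    accounts[email] = {"login_id": email, "notify_to": email}

def change_login_email(old_email, new_email):
    # Buggy: the login ID changes, but the copied notification
    # address is left pointing at the old mailbox.
    record = accounts.pop(old_email)
    record["login_id"] = new_email
    accounts[new_email] = record

sign_up("user@old-provider.example")
change_login_email("user@old-provider.example", "user@new-provider.example")

record = accounts["user@new-provider.example"]
print(record["login_id"])   # user@new-provider.example
print(record["notify_to"])  # user@old-provider.example -- still the old address
```

The fix is either to store the address exactly once, or to update every copy of it in the same transaction as the ID change.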

Clearly, different web sites have different ideas about the separation of ID and e-mail.

Companies running web services, or cloud services, should carefully select their IDs for customers. How do they use those IDs? Are they stored in databases? Are they keys in databases? If a customer changes their e-mail address, what happens to the records with the old e-mail address? How does a new e-mail address affect queries? Do the two e-mail addresses appear as two different customers?

This is why database keys (and user IDs) should be unique and permanent.

Banks and insurance companies understand this. I am a customer of a few insurance companies and several banks. All of them -- without exception -- use their own IDs for my account. Not e-mail addresses.

The underlying concept here is ownership. When I open an account with a bank and they ask me to provide an ID (not an e-mail), they are really asking me to pick an ID from a (very) large set of unused IDs that conform to their rules (so many letters and digits). I pick the ID, but they own it. They can change it if they want. (I've never seen that happen, but it could.) And if they change it, nothing else in my electronic life changes.
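The bank-style scheme described above can be sketched as follows. The rules here (ten characters, uppercase letters and digits) are hypothetical; the point is that the service draws the ID from an ID space it owns, rather than borrowing a value the customer rents from someone else.

```python
import secrets
import string

# Hypothetical ID rules: ten characters, uppercase letters and digits.
ALPHABET = string.ascii_uppercase + string.digits

def new_customer_id(existing_ids, length=10):
    # Draw random candidates from the (very) large set of conforming IDs
    # until we find one not already in use. The service, not the customer,
    # controls this value and could remap it later without side effects.
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if candidate not in existing_ids:
            return candidate

ids = set()
cid = new_customer_id(ids)
ids.add(cid)
print(len(cid), cid.isalnum())  # 10 True
```

Because the service owns the value, it can change it without touching anything else in the customer's electronic life, which is exactly the property an e-mail address lacks.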

An e-mail address, in contrast, is owned by the e-mail provider (GMail, Yahoo, Microsoft, etc.). I don't own it, I merely "rent" it. Anyone I give it to, either a friend, colleague, or web service, is only borrowing it. It can be withdrawn from circulation at any time, either by me or the e-mail service.

Building a service on data that you don't own is risky. I understand the appeal of e-mail addresses as IDs. (It is easy, everyone else does it, it doesn't require our own code to create new IDs for customers, and anyway customers don't want another ID for our service and that is a disincentive for them to use our new service so use the e-mail addresses because the folks in marketing want it that way.)

Yet I must balance those appealing factors with the risks. For individuals, the e-mail address may be an acceptable ID. For corporate accounts, e-mail addresses as IDs pose risks to the customer. (Just ask PCWorld.)

In essence, using e-mail addresses as IDs is simple for the service, but imposes risks on customers. That may not be the best of business practices.


Thursday, August 4, 2022

Eggs and baskets

PCWorld, a venerable trade publication-now-website of the IT realm, recently lost its YouTube video channel. The channel was disabled (or suspended? or deleted?) and no content was available. For more than eight days.

From what I can discern, IDG's YouTube account was controlled by an IDG e-mail address. Everything worked until IDG was purchased by Foundry. Foundry changed all of IDG's e-mail addresses to Foundry addresses but didn't change the account at YouTube, and YouTube, seeing no activity on the IDG e-mail address (or maybe getting bounce messages), cancelled the account.

Thus, the PCWorld video channel was unavailable for over a week.

Why didn't PCWorld restore its channel? Or make its content available on another service? 

My guess is that IDG stored all of their video content on YouTube. That is, the only copy was on YouTube. IDG probably relied on YouTube to keep backup copies and multiple servers for disaster recovery. In short, IDG followed the pattern for cloud-based computing.

The one disaster for which IDG failed to prepare was the account cancellation.

I must say here that a lot of this is speculation on my part. I don't work for PCWorld, or at IDG (um, Foundry) or at YouTube. I don't know that the sequence I have outlined is what actually happened.

My point is not to identify exactly what happened.

My point is this: cloud solutions, like any other type of technology, can be fragile. They can be fragile in ways that we do not expect.

The past half-century of computing has shown us that computers fail. They fail in many ways, from physical problems to programming errors to configuration mistakes. Those failures often cause problems with data, sometimes deleting all of it, sometimes deleting part of it, and sometimes modifying (incorrectly) part of the data. We have a lot of experience with failures, and we have built a set of good practices to recover from those failures.

Cloud-based solutions do not eliminate the need for those precautions. While cloud-based solutions offer protection against some problems, they introduce new problems.

Such as account cancellation.

Businesses (and people, often), when entering into agreements, look for some measure of security. Businesses want to know that the companies they pick to be suppliers will be around for some time. They avoid "fly by night" operations.

A risk in cloud-based solutions is account closure. The risk is not that Google (or Oracle) will go out of business, leaving you stranded. The risk is that the cloud supplier will simply stop doing business with you.

I have seen multiple stories about people or businesses who have had their accounts closed, usually for violating the terms of service. When said people or businesses reach out to the cloud provider (a difficult task in itself, as they don't provide phone support) the cloud provider refuses to discuss the issue, and refuses to provide any details about the violation. From the customer's perspective, the results are very much as if the cloud provider went out of business. But this behavior cannot be predicted from the normal signal of "a reliable business that will be around for a while".

It may take some time, and a few more stories about sudden, unexplained and uncorrectable account closures, but eventually people (and businesses) will recognize the risk and start taking preventative actions. Actions such as keeping local copies of data, backups of that data (not local and not on the main cloud provider), and a second provider for fail-over.

In other words:

Don't put all of your eggs in one cloud basket.

Tuesday, July 26, 2022

Mental models of computers

In the old movie "The Matrix", there is a scene in which the character Cypher is looking at code and another character, Neo, asks why he looks at the code and not the presentation-level view. Cypher explains that the code is better, because the presentation level is designed to fool us humans. This is the moment that Neo re-thinks his view of computers.

That scene (some years after the debut of the movie) got me thinking about my view of computers.

My mental model of computers is based on text. That is, I think of a computer as a device that processes text and talks to other devices that process text. The CPU processes text, terminals display text to users and accept text via the keyboard, printers print text, and disks store text. (Disks also store data in binary form, but for me that is simply a strange form of text.)

This model is perhaps not surprising, as my early experiences with computers were with text-oriented devices and text-oriented programs. Those computers included a DEC PDP-8 running BASIC; a DECsystem-10 running TOPS-10 with FORTRAN, Pascal, and a few other text-oriented languages; and a Heathkit H-89 running HDOS (an operating system much like DEC's RT-11) with BASIC, assembly language, FORTRAN, C, and Pascal.

The devices I used to interact with computers were text terminals. The PDP-8 used Teletype ASR-33s, which had large mechanical keyboards (way more mechanical than today's mechanical keyboards) and printed text on a long continuous roll of paper. The DECsystem-10 and the H-89 both used CRT terminals (no paper) and mostly text with a few special graphics characters.

In those formative years, all of my experience with computers was for programming. That is, the primary purpose of a computer was to learn programming and to do programming. Keep in mind that this was before much of the technical world we have today. There was no Google, no Netflix, no Windows. Spreadsheets were the new thing, and even they were text-oriented. The few graphs that existed in computing were made on special (read that as "expensive and rare") equipment that few people had.

In my mind, back then, computers were for programming; programming was a process that used text, and computers used text, so the two were a good match.

Programming today is still a text-oriented process. The "source code" of programs, the version that we humans write and that computers either compile or interpret into executed code, is text. One can write programs in the Windows "Notepad" program. (One must save them to disk and then tell the compiler to convert that saved file, but that is simply the process to get a program to run.)

So what does this have to do with "The Matrix" and specifically why is that one scene important?

It strikes me that while my experience with computers started with programming and text-oriented devices, not everyone (especially nowadays) has that same experience. Today, people are introduced to computers with cell phones, or possibly tablets. A few may get their first experience on a laptop running Windows or macOS.

All of these are different from my text-based introduction. And all of these are graphics-based. People today, when they first encounter computers, encounter graphical interfaces, and use computers for many things other than programming. People today must have a very different mental model about computers. I saw computers as boxes that processed text. Today, most people see computers as boxes that process graphics, and sound, and voice.

What a shock it must be for someone today to start to learn programming. They are taken out of their comfortable mental model and forced to use text. Some classes begin with simple "hello, world" programs that not only use text source code but also produce text output. How primitive this must seem to people familiar with graphical interfaces! (Some classes begin with simple programs that present web pages, which is a bit better in that the output is familiar, but the source code is still text.)

But this different mental model may be a problem for people entering the programming world. They are moving from a graphical world to a text-based world, and that transition can be difficult. Modern IDE programs ease the transition by allowing many operations in a graphical environment, but the source code remains text.

Do people revolt? Do they reject the text-oriented approach to source code? I imagine that some find the change in mental models difficult, perhaps too difficult, and they abandon programming.

A better question is: Why has no one created a graphics-oriented programming language? Not just a programming language in an IDE -- we already have those. I'm thinking of a new approach to programming, something very different from the text approach of today.

It might be that programming has formed a self-reinforcing loop. Only programmers can create new programming languages and programming environments, and these programmers (obviously) are comfortable with the text paradigm. Perhaps they see no need to make such a large change.

Or it might be that the text model is the best model for programming. Programming is the organization of ideas into clearly specified collections and operations, and text handles that task better than graphics. Visual representations of collections and operations can be clear, but they can also be ambiguous. (But then, text representations can be ambiguous too, so I'm not sure that there is a clear advantage for text.)

Or possibly we simply have not seen the right person to come along, with the right mix of technical skills, graphics abilities, and desire for a visual programming language. It may be that graphical programming languages are possible, and that we just haven't invented them.

I want to think it is the last of these reasons, because that means there is a lot more for us to learn about programming. The introduction of a visual programming language will open new vistas for programming, and applications, and computing.

I want to think that there will always be something new for the programmers.

Wednesday, July 20, 2022

The Macbook camera may always be second-rate

A recent article on MacWorld complained about Apple's "solution" to the MacBook webcam problem, namely using the superior camera in the iPhone.

It is true that iPhones have better cameras than MacBooks. But why?

I can think of no technical reason.

It's not that the iPhone camera won't fit in the MacBook. The MacBook has plenty of space. It is the iPhone that puts space at a premium.

It's not that the iPhone camera won't work with a MacBook processor. The iPhone camera works in the iPhone with its A12 (or is it A14?) processor. The MacBook has an M1 or an M2 processor, using very similar designs. Getting the iPhone camera to work with an M1 processor should be relatively easy.

It's not a matter of power. The MacBook has plenty of electricity. It is the iPhone that must be careful with power consumption.

It's not that the MacBook developers don't know how to properly configure the iPhone camera and get data from it. (The iPhone developers certainly know, and they are just down the hall.)

It's not a supply issue. iPhone sales dwarf MacBook sales (in units, as well as dollars). Diverting cameras from iPhones to MacBooks would probably not even show in inventory reports.

So let's say that the reason is not technical.

Then the reason must be non-technical. (Obviously.)

It could be that the MacBook project lead wants to use a specific camera for non-technical reasons. Pride, perhaps, or ego. Maybe the technical lead, on a former project, designed the camera that is used in the MacBook, and doesn't want to switch to someone else's camera. (I'm not convinced of this.)

Maybe Apple has a lot of already-purchased cameras and wants to use them, rather than discarding them. (I'm not believing this, either.)

I think the reason may be something else: marketing.

When Apple sells a MacBook with an inferior camera, and it provides the "Continuity Camera" service to allow an iPhone to be used as a camera for that MacBook, Apple has now given the customer a reason to purchase an iPhone. Or if the customer already has an iPhone, a reason to stay with the iPhone and not switch to a different brand.

It's not a nice idea. In fact, it's rather cynical: Apple deliberately providing a lesser experience in MacBooks for the purpose of selling more iPhones.

But it's the only one that fits.

Maybe I'm wrong. Maybe Apple has a good technical reason for supplying inferior cameras in MacBooks.

I hope that I am. Because I want Apple to be a company that provides quality products, not inferior products carefully crafted to increase sales of other Apple products.

Thursday, July 14, 2022

Two CPUs

Looking through some old computer magazines from the 1980s, I was surprised at the number of advertisements for dual-CPU boards.

We don't see dual-CPU configurations now. I'm not talking about dual cores (or multiple cores) but dual CPUs. Two actual, and different, CPUs. In the 1980s, the common mix was a Z-80 and an 8086 on the same board. Sometimes it was an 8085 and an 8086.

Dual-CPU boards were popular as the industry transitioned from 8-bit processors (8080, Z-80, 6502, and 6800) to 16-bit processors (8088 and 8086, mainly). A dual-CPU configuration allowed one to test the new CPU while still keeping the old software (and data) available.

Dual-CPU configurations did not use two CPUs at the same time. They allowed for one or the other CPU to be active, to run an 8-bit operating system or a 16-bit operating system, much like today's dual-boot configuration for multiple operating systems on a single PC.

Computers at the time were more expensive and more modular than they are today. Today's computers have a motherboard with CPU (often with integrated graphics), slots for memory, slots for GPU, SATA ports for disks, and a collection of ports (video, USB, ethernet, sound, and sometimes PS/2 keyboard and mouse). All of those items are part of the motherboard.

In contrast, computers in the 1980s (especially those not IBM-compatible) used a simple backplane with a bus and separate cards for CPU, memory, floppy disk interface, hard disk interface, serial and parallel ports, real-time clock, and video display. It was much easier to replace the CPU and keep the rest of the computer.

So why don't we see dual-CPU configurations today? Why don't we see (for example) a PC with an Intel processor and an ARM processor?

There are a number of factors.

One is the cost of hardware. Computers today are much less expensive than computers in the 1980s, especially after accounting for inflation. Today, one can get a basic laptop for $1000, and a nice one for $2000. In the 1980s, $1000 would get a disk subsystem (interface card and drives) but the cost of the entire computer was upwards of $2000 (in 1980 dollars).

Another factor is the connection between hardware and operating system. Today, operating systems are tightly bound to hardware. A few people use grub or BootCamp to run different operating systems on the same computer, but they are few. For most people, the hardware and the operating system are a set, both coming in the same box.

In the 1980s, booting from a floppy disk (instead of an internal fixed disk) put a degree of separation between hardware and operating system. One could easily insert a different disk to boot a different operating system.

Computers today are small. Laptops and micro-size desktops are convenient to plunk down almost anywhere. It is easy enough to find space for a second computer. That was not the case in the 1980s, when the CPU was the size of today's large tower unit, and one needed a large terminal with CRT and keyboard. Space for a second computer was a luxury.

Finally, today's processors are sophisticated, with high integration with the other electronics on the motherboard. The processors of the 1980s were quite isolationist in their approach to other circuitry, and it was (relatively) easy to design the circuits to allow for two CPUs and to enable one and not the other. The connections for today's Intel processors and today's ARM processors are much more sophisticated, and the differences between Intel and ARM are pronounced. A two-CPU system today requires much more than the simple enable/disable circuits of the past.

All of those factors (the expense of hardware, the ease of replacing a single-CPU card with a dual-CPU card, limited space, the ease of changing operating systems, and the ability to link two unlike CPUs on a single card) meant that the dual-CPU configuration was the better choice in the 1980s. But all of those factors have changed, so in 2022 the better choice is to use two different computers.

I suppose one could leverage the GPU slot of a PC and install a special card with an ARM processor, but I'm not sure that a card in the GPU slot can operate as a bus master and control the ports and memory of the motherboard. Even if it could, such a board would have limited appeal, and would probably not be a viable product.

I think we won't see a dual-CPU system. We won't see a PC with an Intel processor and an ARM processor, with boot options to use one or the other. But maybe we don't need dual-CPU PCs. We can still explore new processors (by purchasing relatively inexpensive second PCs) and we can transition from older CPUs to newer ones (by purchasing relatively inexpensive replacement PCs).

Thursday, June 23, 2022

Web pages are big

Web pages load their own user interface code.

There are two immediate consequences of this.

First, each web page must load its own libraries to implement that interface. That slows the loading of web pages. The libraries include more than code; they include stylesheets, fonts, graphics, images and lots of HTML to tie it all together. The code often includes bits to identify the type of device (desktop web browser, mobile web browser, etc.) and functions to load assets when needed ("lazy loading").

I imagine that lots of this code is duplicated from page to page on a web site, and lots of the functions are similar to corresponding functions on other web sites.

Second, each web page has its own user interface, its own "look and feel", to use a term from the 1980s.

Each web page (or perhaps more accurately, each web site) has its own appearance, and its own conventions.

Even the simple convention of "login and logout links are in the top right corner" is not all that common. Of the dozens of web sites that I frequent, many have the "login" and "logout" links in the top right corner, but many others do not. Some have the links close to the top (but not topmost) and close to the right side (but not rightmost). Some web sites bury the "login" and "logout" links in menus. Some web sites put one of the "login" and "logout" links in a menu, but leave the other on the page. Some web sites put the "login" link in the center of their welcome page. And there are other variations.

Variation in the user interface is not evil, but it is inconsistent, and it increases the mental effort to visit different web sites. But why should the owners of each web site care? As long as customers come to their web site (and pay them), then the web site is working, according to the company. The fact that it is not consistent with other web sites is not a problem (for them).

Web sites have to load all of their libraries, which increases overall load time for the site. The companies running the web sites probably care little, as the cost is imposed on their customers. The attitude that many companies take is probably (I say "probably" because I have not spoken to companies about this) that the user (the customer), if dissatisfied with load time, can purchase a faster computer or a faster internet service. The company feels no obligation to improve the experience for the customer.

* * * * *

The situation of individual, differing user interfaces is not unique. In the 1980s, prior to Microsoft Windows, PC software had different user interfaces. The word processors of the time (WordPerfect, WordStar, and even Microsoft Word which had a version for MS-DOS) each had their own "look and feel". The spreadsheets of the time (Lotus 1-2-3, Quattro Pro, and Microsoft Multiplan) each had their own user interfaces, different from each other and different from the user interfaces for word processors. Database packages (dBase, R:Base, Clipper, Condor) each had their own... you get the idea.

Windows offered consistency in the user interface. (It also offered graphics, which is what I think really sold Windows over IBM's OS/2, but that's another topic.) With Windows, programs started the same way, appeared the same way, and provided a set of common functions for opening files, printing, copying and pasting data, and more.

Windows arrived at an opportune time. Computers were fairly common, people (and companies) were using them for serious work, and applications had their various user interfaces. Windows offered consistency across applications, and reduced the effort to learn new applications. A spreadsheet was different from a word processor, but at least someone who was familiar with a word processor under Windows could perform basic operations in a spreadsheet under Windows. People, when learning new applications, could focus on those aspects that were unique to the new applications, not the common operations.

The result was that people learned to use computers more rapidly than in the earlier age of MS-DOS. Windows was sold on the reduction of effort (and therefore costs) in using computers.

* * * * *

Will we see a similar transition for the web? Will someone come along and sell a unified interface for web apps, advertising a lower cost of use? 

In a sense, we have. The apps on smart phones have a more consistent user interface than web sites. This is due to Apple's and Google's efforts, providing libraries for common UI functions and guidelines for application appearance.

But I don't see a unifying transition for web sites in traditional (desktop) browsers. Each company wants its own look and feel, its own brand presence. It doesn't care that web sites take a long time to load, and it probably doesn't care that web sites require a lot of expensive maintenance. Microsoft was able to sell Windows from a position of strength, in a market that had few options. With the web, any company can set up a web site and offer it to the world. There is no convenient choke point, and there is no company strong enough to offer a user interface that could meet the needs of the myriad web sites in existence.

Which means that we are stuck with large web pages, long download times, and different interfaces for different web sites.