Tuesday, November 28, 2023

Today's AI means QA for data

Some time ago, I experimented with n-grams. N-grams are a technique for reading an existing text and producing a second text that is similar but not the same. The technique splits the original text into pieces: two letters at a time for 2-grams, three letters for 3-grams, and so on. It computes the frequency of each combination of letters and then generates new text, selecting each letter based on how often it follows the letters that precede it.

For 2-grams, the word 'every' is split into 'ev', 've', 'er', and 'ry'. When generating text, the program sees that 'e' is followed by either 'v' or 'r' and builds text with that same pattern. That's with an input of one word. With a larger input, the letter 'e' is followed by many different letters, each with its own frequency.

Using a program (in C, I believe) that read text, split it into n-grams, and generated new text, I experimented with names of friends. I gave the program a list of names and the program produced a list of names that were recognizable as names, but not the names of the original list. I was impressed, and considered it pretty close to magic.
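
To make the idea concrete, here is a minimal sketch in Python. It is not the original C program; the sample names, function names, and defaults are my own illustrative choices.

    import random
    from collections import defaultdict

    def build_model(text, n=2):
        """Record which letter follows each (n-1)-letter prefix, keeping
        duplicates so that frequent continuations are chosen more often."""
        model = defaultdict(list)
        for i in range(len(text) - n + 1):
            prefix, following = text[i:i + n - 1], text[i + n - 1]
            model[prefix].append(following)
        return model

    def generate(model, length=30):
        """Start from a random prefix and repeatedly append a sampled next letter."""
        prefix_len = len(next(iter(model)))
        out = random.choice(list(model))
        for _ in range(length):
            choices = model.get(out[-prefix_len:])
            if not choices:
                break
            out += random.choice(choices)
        return out

    names = "alice bob carol david erin frank grace heidi"
    print(generate(build_model(names, n=2)))

For 2-grams the prefix is a single letter, which matches the 'every' example above; raising n to 3 or 4 tends to produce output that looks more like the original names.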

It strikes me that the AI model behind ChatGPT uses a similar technique, but with words instead of individual letters. Given a large input, or rather a condensation of the frequencies of words (the 'weights'), it can generate text using the frequencies of words that follow other words.

There is more to ChatGPT, of course, as the output is not simply random text but text about a specified topic. But let's focus on the input data, the "training text". That text is half of what makes ChatGPT possible. (The other half being the code.)

The training text enables, and also limits, the text generated by ChatGPT. If the training text (used to create the weights) were limited to Shakespeare's plays and sonnets, for example, any output from ChatGPT would strongly resemble Shakespeare's work. Or if the training were limited to the Christian Bible, then the output would be in the style of the Bible. Or if the training text were limited to lyrics of modern songs, then the output would be... you get the idea.

The key point is this: The output of ChatGPT (or any current text-based AI engine) is defined by the training text.

Therefore, any user of text-based AI should understand the training text for the AI engine. And this presents a new aspect of quality assurance.

For the entire age of automated data processing, quality assurance has focussed on code. The subject of scrutiny has been the program. The input data has been important, but generally obtained from within the organization or from reputable sources. It was well understood and considered trustworthy.

And for the entire age of automated data processing, the tests have been pointed at the program and the data that it produces. All of the test procedures have been designed around the program and its output. There was little consideration given to the input data, and almost no tests for it. (With the possible exception of checks for completeness of input data, and input sets for unusual cases.)

I think that this mindset must change. We must now understand and evaluate the data that is used to train AI models. Is the data appropriate for our needs? Is the data correct? Is it marked with the correct metadata?

With a generally-available model such as ChatGPT, where one does not control the training data, nor does one have visibility into the training data, such analyses are not possible. We have to trust that the administrators of ChatGPT have the right data.

Even with self-hosted AI engines, where we control the training data, the effort is significant. The work includes collecting the data, verifying its provenance, marking it with the right metadata, updating it over time, and removing it when it is no longer appropriate.
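
As a rough illustration of what that kind of data QA might look like, here is a small Python sketch that checks each training record for provenance and licensing metadata before admitting it to a training set. The field names and rules are my own assumptions, not any standard.

    from datetime import date

    REQUIRED_FIELDS = {"source", "license", "collected_on", "review_due"}
    ALLOWED_LICENSES = {"public-domain", "licensed", "internal"}

    def validate_record(record):
        """Return a list of problems; an empty list means the record passes."""
        problems = ["missing field: " + f
                    for f in sorted(REQUIRED_FIELDS - record.keys())]
        if record.get("license") not in ALLOWED_LICENSES:
            problems.append("unknown or missing license")
        due = record.get("review_due")
        if due is not None and due < date.today():
            problems.append("review date has passed -- re-verify or remove")
        return problems

    record = {"source": "corporate wiki", "license": "internal",
              "collected_on": date(2020, 9, 1), "review_due": date(2023, 6, 1)}
    print(validate_record(record))   # flags the stale review date

A real pipeline would add checks for duplicates, accuracy, and topical fit, but even this much is more scrutiny than input data usually receives today.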

It strikes me that the work is somewhat similar to that of a librarian, managing books in a library. New books must be added (and catalogued), old books must be removed.

Perhaps we will see "Data Librarian" as a new job title.

Monday, October 30, 2023

Apple M3

I found Apple's presentation "Scary Fast" to be not scary but somewhat disturbing. Or perhaps "disappointing" is the better adjective.

Apple's M3 processors are impressive, but perhaps not as impressive as the "Scary Fast" presentation implies. I'm not saying that Apple is lying or presenting false information, but they are picking the information very carefully.

Apple compares the M3 to the M1 processor, and the MacBook Pro (with an M3 Pro) to the fastest Intel-based MacBook.

I find those comparisons odd. Why compare the M3 to the old M1 processor? Why not compare it to the M2? And comparing an M3-based MacBook Pro to an Intel-based MacBook seems even odder. (Is anyone still using Intel-based MacBooks? Other than me?)

But Apple's cherry-picking for performance comparisons is not the major disappointment.

The big issue, the one issue that I think Apple misses completely, is that its hardware is outrunning its software. Apple's M3 processors are fast and capable, but so were Apple's M2 processors. The M2 processors were so powerful that the low-end, plain M2 processor was more than enough for almost everyone. If I were equipping an office with Apple devices, I would give everyone a MacBook Air, or possibly a low-end MacBook Pro. Those are enough for just about all typical office work. (Folks editing video or running large test sets might benefit from the higher-end processors, but they are a small portion of the audience.)

Apple's hardware is faster than everyone else's, just as high-end sports cars are faster than the average automobile. But for most people, average automobiles are good enough. Most people don't want the expenses of a high-end sports car, nor can they take advantage of the speed. Apple's M3 processors are fast, but pure speed translates into performance for only a few users. It is quite likely that today (that is, with no M3 processors in the field) most people have computers that are more than fast enough for their needs.

Apple concentrates on hardware and invests little in a number of other areas:

- Cloud-based computing
- Artificial intelligence
- Design of programming languages, multi-threaded applications, parallel tasks, and coordination of distinct processes
- Programming tools, from command-line tools to IDEs
- Automated testing of GUI programs
- Voice recognition

That's not to say that Apple has done nothing in these areas. My point is that Apple has done a small amount and relies on others to do the work in these areas. And that work isn't getting done. Apple's obsession with hardware is costing them opportunities in these areas. It holds Apple back, preventing it from growing the technology. It also holds us back, because we have to wait for Apple.

Tuesday, October 10, 2023

Apple, TSMC, and 3nm chips

The news from a few months back is that Apple has purchased all of TSMC's capacity for 3nm chips for one year. That's a pretty impressive deal. It gives Apple exclusive access to TSMC's latest chip technology, locking out all other PC manufacturers. It also shows that Apple is planning on a lot of sales in the coming year.

Yet I see a dark side to this arrangement.

First, it places a cap on Apple's sales in the year. Apple has "maxed out" its chip source; it cannot get more from TSMC. Apple's growth is now constrained by TSMC's growth, which has been less than planned. (TSMC's new fabrication plant in Arizona has been delayed and cannot produce multi-chip assemblies.)

With a cap on production, Apple must choose carefully which chips it wants from TSMC. What percentage will be M2 chips? M2 Pro chips? A17 chips for iPhones? If Apple guesses wrong, it could have a lot of unsold inventory for one product and be unable to meet sales demand for another.

Second, it makes allies of PC manufacturers (anyone who isn't Apple) and chip manufacturers (anyone who isn't TSMC). TSMC may have difficulty winning business from Lenovo, Dell, and even Microsoft. The arrangement probably doesn't help Apple's relationships with Intel and Samsung, either.

Third, it shows that Apple's latest processors are not second-sourced. (Second-sourcing was a common practice in the 1980s. It reduced risk to the customer and to the primary manufacturer.) Not having a second source for its processors means that any disruption to manufacturing will directly affect the finished products. If TSMC cannot deliver, Apple has nowhere to turn.

It may be that Apple's chips cannot be second-sourced. I don't know the details, but it may be that Apple provided specifications for the chips, and TSMC designed the chip layout. If that is the case, then it is most likely that TSMC owns the layouts, not Apple, and for Apple to get chips from Intel or Samsung those companies would have to start with the specifications and design their own chips. That's a lengthy process, and might take longer than the expected lifetime of the chips. (The M1 chip is all but obsolete, and the M3 is already replacing the M2 chip. The "A" series chips have similarly rapid turnover.)

So Apple purchasing all of TSMC's capacity for a year sounds impressive -- and it is -- but it also reveals weaknesses in Apple's position.


Monday, September 11, 2023

Google raises prices, which may be a good thing

Google has raised prices for several of its services. The annual rates for Workspace, YouTube Premium, and Nest are all going up. The internet is not happy, of course. Yet I see a benefit in these price increases, and not just to Google. I think consumers may benefit from them.

It does sound odd. How can consumers -- who pay these prices -- benefit from increases? Wouldn't they benefit more from decreases in prices?

My answer is: not necessarily.

My thinking is this:

While Google is a large company, with many products and services, most of its revenue comes from advertising.  One could say that Google is an advertising company that has a few side projects that are supported by advertising revenue.

The model was: make a lot of money in advertising and offer other services.

Google was -- and is -- wealthy enough that it could give away e-mail and storage. When Google first offered its Gmail service, it allowed each user up to one gigabyte of storage, an amount that was unheard-of at the time.

It's tempting to want this model to continue. It gives us "something for nothing". But letting advertising pay for everything else has a downside.

When a service depends on revenue from advertising, it is natural for the company to expect that service to help advertising. If the service doesn't, then that service is either changed or discontinued. (Why continue to offer a service that costs money to maintain but doesn't help with revenue?)

Google has a reputation for cancelling projects. Perhaps those projects were cancelled because they did not provide revenue via advertising -- or didn't help the advertising group gain customers, or better market data, or something else.

When a service is funded by advertising, that service is beholden to advertising.

In contrast, when a service has its own revenue -- enough revenue to generate a profit -- then that service is somewhat isolated from the advertising budget. If YouTube Premium costs $X to run and brings in $Y in revenue (and Y is greater than X) then YouTube Premium has a good argument to continue, despite what the folks in advertising want.

The same goes for other services like Nest and Google's cloud storage.

I expect that no one enjoys increasing prices. I certainly don't. But I recognize the need for services to be independent, and free of the influence of other lines of business. Higher revenue leads to services that are stronger and longer-lasting. (Or so I like to think.)

I may grumble about the increase in prices for services. But I grumble with restraint.

Thursday, August 10, 2023

The Apple Mac Pro is a Deluxe Coffee Maker

Why does Apple offer the Mac Pro? It's expensive and it offers little more than the Mac Studio. So why is it in the product line? Several have asked this question, and I have an idea.

But before discussing the Apple Mac Pro, I want to talk about coffee makers. Specifically the marketing of coffee makers.

Coffee makers are, at best, a commodity. One puts coffee, water, and electricity in, and after a short time coffee comes out. The quality of the coffee is, I believe, dependent on the coffee and the water, not the mechanism.

As bland as they are, there is one legend about coffee makers. It has to do with the marketing of coffee makers, and it goes something like this:

(Please note that I am working from a dim memory and much -- if not all -- of this legend may be wrong. But the idea will serve.)

A company that made and sold coffee makers was disappointed with sales, and wanted to increase its profits. They brought in a consultant to help. The consultant looked at the product line (there were two, a basic model and a fancy model), sales figures, sales locations, advertising, and various other things.  The consultant then met with the big-wigs at the company and presented recommendations.

The executives at the company were expecting to hear about marketing strategies, advertising, and perhaps pricing. And the consultant did provide recommendations along those lines.

"Add a third coffee maker to your product line," he said. "Make it a deluxe model with all of the features of your current models, and a few more, even features that people won't use. Sell it for an expensive price."

The executives were surprised to hear this. How could a third coffee maker, especially an expensive one, improve sales? Customers were not happy with the first two; a third would be just as bad.

"No," said the consultant. "The deluxe model will improve sales. It won't have many sales itself, but it will encourage people to buy the fancy (not deluxe) model. Right now your customers see that fancy model as expensive, and a poor value. A third model, with lots of features and a high price will convince customers that the fancy (not deluxe) model is a bargain."

The company tried this strategy ... and it worked! Just as the consultant said. Sales of the deluxe model were dismal, but sales of the (now) middle-tier fancy (not deluxe) model perked up. (Pun intended.)

We often forget that sales is about psychology as well as features.

Now let's consider Apple and the Mac Pro. The Mac Pro is not a good bargain. It performs only slightly better than the Mac Studio, yet it carries a much higher price tag. The Mac Pro has features that are ... questionable at best. (PCI slots that won't take graphics cards. Don't forget the wheels!)

Perhaps -- just perhaps -- Apple is using the Mac Pro to boost sales of the Mac Studio. Priced the way it is, the Mac Pro makes the Mac Studio a much more attractive option.

I suspect that if Apple had no Mac Pro and put the Mac Studio at the top of its product line, then a lot of people would argue for the Mac Mini as the better option. Those same people can make the same argument with the Mac Pro and convince themselves to buy the Mac Studio.

So maybe the Mac Pro isn't a Mac Pro at all. Maybe it is a deluxe coffee maker.

Thursday, August 3, 2023

We'll always have source code

Will AI change programming? Will it eliminate the need for programmers? Will we no longer need programs, or rather, the source code for programs? I think we will always have source code, and therefore always have programmers. But perhaps not as we think of programmers and source code today.

But first, let's review the notions of computers, software, and source code.

Programming has been with us almost as long as we have had electronic computers.

Longer than that, if we include the punch cards used by the Jacquard loom. But let's stick to electronic computers and the programming of them.

The first digital electronic computers were built in the 1940s. They were programmed not by software but by wires -- connecting various wires to various points to perform a specific set of computations. There was no concept of a program -- at least not one for the computer. There were no programming languages and there was no notion of source code.

The 1950s saw the introduction of the stored-program computer. Instead of wiring plug-boards, program instructions were stored in cells inside the computer. We call these instructions "machine code". When programming a computer, machine code is slightly more convenient than wiring plug-boards, but not by much. Machine code consists of a number of instructions, each of which resides at a distinct, sequential location in memory. The processor executes the program by simply reading one instruction from a starting location, executing it, and then reading the next instruction at the next memory address.

Building a program in machine code took a lot of time and required patience and attention to detail. Changing a program often meant inserting instructions, which meant that the programmer had to recalculate all of the destination addresses for loops, branches, and subroutines. With stored-program computers, there was the notion of programming, but not the notion of source code.

Source code exists to be processed by a computer and converted into machine code. We first had source code with symbolic assemblers. Assemblers were (and still are) programs that read a text file and generate machine code. Not just any text file, but a text file that follows specific rules for content and formatting, and that specifies a series of machine instructions as text rather than as numbers. The assembler did the grunt work of converting "mnemonic" codes to numeric machine codes. It also converted numeric and text data to the proper representation for the processor, and calculated the destinations for loops, branches, and subroutines. Revising a program written in assembly language was much easier than revising machine code.
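
To make that concrete, here is a toy two-pass assembler in Python for an invented instruction set. The mnemonics, opcode values, and one-word-per-instruction layout are illustrative assumptions, not any real processor or assembler.

    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "JUMP": 0x03, "HALT": 0xFF}

    def assemble(lines):
        labels, program = {}, []
        # First pass: work out the address that each label refers to.
        addr = 0
        for line in lines:
            if line.endswith(":"):
                labels[line[:-1]] = addr
            else:
                addr += 1
        # Second pass: convert each mnemonic (and its operand) to a numeric word.
        for line in lines:
            if line.endswith(":"):
                continue
            op, *args = line.split()
            if args:
                operand = labels[args[0]] if args[0] in labels else int(args[0])
            else:
                operand = 0
            program.append((OPCODES[op] << 8) | operand)
        return program

    print(assemble(["start:", "LOAD 7", "ADD 1", "JUMP start", "HALT"]))
    # prints [263, 513, 768, 65280] -- "JUMP start" resolved to address 0

The second pass shows the two chores described above: translating mnemonics into numbers, and calculating the destination addresses for branches.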

Later languages such as FORTRAN and COBOL converted higher-level text into machine code. They, too, had source code.

Early C compilers converted code into assembly code, which then had to be processed by an assembler. The sequence looked like this:

    C source code --> [compiler] --> assembly source code --> [assembler] --> machine code

I've listed both the C code and the assembly code as "source code", but in reality only the C code is the source code. The assembly code is merely an intermediate form of the code, something generated by machine and later read by machine.

A better description of the sequence is:

    C source code --> [compiler] --> assembly code --> [assembler] --> machine code

I've changed the "assembly source code" to "assembly code". The adjective "source" is not really correct for it. The C program (at the left) is the one and only source.

Later C compilers omitted this intermediate step and generated machine code directly. The sequence became:

    C source code --> [compiler] --> machine code

Now let's consider AI. (You didn't forget about AI, did you?)

AI can be used to create programs in two ways. One is to enhance a traditional programming IDE with AI, and thereby assist the programmer as he (or she) is typing. That's no different from our current process; all we have done is made the editor a bit smarter.

The other way is to use AI directly and ask it to create the program. In this method, a programmer (or perhaps a non-programmer) provides a prompt text to an AI engine and the AI engine creates the entire program, which is then compiled into machine code. The sequence looks like this:

    AI prompt text --> [AI engine] --> source code --> [compiler] --> machine code

Notice that the word "source" has sneaked back into the middle of the stream. The term doesn't belong there; that code is intermediate and not the source. A better description is:

    Source AI prompt text --> [AI engine] --> intermediate code --> [compiler] --> machine code

This description puts the "source" back on the first step of the process. That prompt text is the true source code. One may argue that a prompt text is not really source code, that it is not specific enough, or not Turing-complete, or not formatted like a traditional program. I think that it is the source code. It is created by a human and it is the text used by the computer to generate the machine code that we desire. That makes it the source.

Notice that in this new process with AI, we still have source code. We still have a way for humans to instruct computers. I've been writing about source code as if it were written. Source code has always been written (or typed, or keypunched) in the past. It is possible that future systems will recognize human speech and build programs from that (much like on several science fiction TV programs). If so, those spoken words will be the source code.

AI may change the programming world. It may upend the industry. It may force many programmers to learn new skills, or to retire. But humans will always want to express their desires to computers. The way they express them may be through text, or through speech, or (in some far-off day) through direct neural links. Those thoughts will be source code, and we will always have it. The people who create that source code are programmers, so we will always have them.

We will always have source code and programmers, but source code and programming will change over time.

Thursday, July 20, 2023

Hollywood's blind spot

Hollywood executives are probably correct in that AI will have a significant effect on the movie industry.

Hollywood executives are probably underestimating the effect that AI will have on the movie industry.

AI, right now, can create images. Given some prompting text, an AI engine can form an image that matches the description in the text. The text can be simple, such as "a zombie walking in an open field", or it can be more complex.

It won't be long before AI can make not a single image but a video. A video is nothing more than a collection of images, each different from the previous in minor ways. When played back at 24 frames per second, the human mind perceives the images not as individual images but as motion. (This is how movies on film work, and how movies on video tape work.) I'm sure people are working on "video from AI" right now -- and they may already have it.

A movie is, essentially, a collection of short videos. If AI can compose a single video, then AI can compose a collection of videos. The prompting text for a movie might resemble a traditional movie script -- with some formatting changes and additional information about costumes, camera angles, and lighting.

Thus, with enough computing power, AI can start with an enhanced, detailed script and render a movie. Let's call this a "script renderer".

A script renderer makes the process of moviemaking cheap and fast. It is the word processor of the twenty-first century. And just as word processors upended the office jobs of the twentieth century, the script renderer will upend the movie jobs of this century. Word processors (the software on commonplace computers) replaced people and equipment: secretaries, proofreaders, typewriters, carbon paper, copy machines, and Wite-out erasing fluid.

Script renderers (okay, that's a clumsy term and we'll probably invent something better) will do similar things for movies. If an AI can make a movie from a script, then movie makers don't need equipment (cameras, lights, costumes, sets, props, microphones) and the people who handle that equipment. It may be possible for a single individual to write a script, send it through a renderer, and get a movie. What's more, just as word processors let one print a document, review it, make changes, and print it again, a script renderer will let one render a movie, view it, make changes, and render it again -- perhaps all in a few hours.

Hollywood executives, if they have seen this far ahead, may be thinking that their studios will be much more profitable. They won't need to pay actors, or camera operators, or build sets, or ... lots of other things. All of those expenses disappear, but the revenue from the movies remains.

But here's what they don't see: Making a movie will simply be a matter of computing power. Anyone with a computer and access to a sufficiently powerful AI will be able to convert a script into a movie.

Today, anyone can start a newsletter. Or print invitations to a party. Or their own business cards.

Tomorrow, anyone will be able to make a movie. It won't be easy; one still needs a script with the right details, and one should have a compelling story and good dialog. But it will be much easier than it is today.

And create movies they will. Not just movies, but TV episodes, mini-series, and perhaps even short videos like the old Flash Gordon serials.

I suspect that the first wave of "civilian movies" will be built on existing materials. Fans of old "Star Trek" shows will create new episodes with new stories but using the likenesses of the original actors. The studios will sue, of course, but it won't be a simple case of copyright infringement. The owners of the old shows will have to build a case on different grounds. (They will probably prevail, if only because the amateurs cannot pay the court costs.)

The second wave will be different. It will be new material, away from the copyrighted and trademarked properties. But it will still be amateurish, with poor dialog and awkward pacing.

The third wave of non-studio movies will be better, and will be the real threat to today's movie studios. These movies will have higher quality, and will obtain some degree of popularity. That will get the attention of Hollywood executives, because now these "civilian" movies will compete with "real" movies.

Essentially, AI removes the moat around movie studios. That moat is the equipment, sound stages, and people needed to make a movie today. When the moat is gone, lots of people will be able to make movies. And lots will.


Thursday, July 13, 2023

Streaming services

Streaming services have a difficult business model. The cost of producing (or licensing) content is high, and the revenue from subscriptions or advertisements is low. Fortunately, the ratio of subscribers to movies is high, and the ratio of advertisements to movies is also high. Therefore, the streaming services can balance revenue and costs.

Streaming services can increase their revenue by adjusting subscription fees. But the process is not simple. Raising subscription fees does raise income per subscriber, but it may cause some subscribers to cancel their subscription. Here, economics comes into play, with the notion of the "demand curve", which measures (or attempts to measure) the willingness of customers to pay at different price levels.
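
A toy calculation shows the trade-off. The subscriber counts below are invented for illustration; the point is only that total revenue can peak at a middle price.

    # Invented demand figures: subscribers willing to pay at each monthly price.
    subscribers_at_price = {10: 1_000_000, 12: 900_000, 15: 700_000, 20: 400_000}

    for price, subs in subscribers_at_price.items():
        print(f"${price}/month: {subs:,} subscribers, ${price * subs:,} monthly revenue")
    # With these numbers, revenue peaks at $12 -- beyond that point,
    # cancellations outweigh the higher fee.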

Streaming services can decrease their costs by removing content. For licensed content (that is, movies and shows that are made by other companies) the streaming service pays a fee. If they don't "carry" those movies or shows, then they don't have to pay. Cancelling the license reduces their cost.

For content that the service produces, the costs are more complex. There is the cost of production, which is a "sunk cost" -- the money has been spent, whether the service carries the movie/show or not. There are also ongoing costs, in the form of residual payments, which are paid to the actors, writers, and other contributors while the movie or show is made available. Thus, a service that has produced a movie can reduce its costs (somewhat) by not carrying said movie.

That's the basic economics of streaming services, a very simplified version. Now let's look at streaming services and the value that they provide to viewers.

I divide streaming services into two groups. Some services make their own content, and other services don't. The situation is somewhat more complicated, because the content-making services also license content from others. Netflix, Paramount+, and Roku all run streaming services, all make their own content, and all license other content to show on their service. Tubi, Frndly, and Pluto TV make no content and simply license content from others.

The content-producing services, in my mind, are the top-tier services. Disney+ makes its own content and buys (permanently) other content to add to its library, and is recognized as a top-tier service. Netflix, Paramount+, and Peacock create their own content (and license some) and I consider them top-tier services.

The services that don't produce content, the services that simply license content and then make it available, are the second-tier services. They are second-tier because their content is available for a limited time. They don't own content; they can only rent it. Therefore, content will be available for some amount of time, and then disappear from the service. (Roku, for example, had the original "Bionic Woman" series, but it is not available now.)

For second-tier services, content comes and goes. There is no guarantee that a specific movie or show will be available in the future. Top-tier services, in contrast, have the ability to keep movies and shows available. They don't, and I think that damages their brand.

Services damage their brand when they remove content, in three ways.

First, they reduce the value of their service. If a service reduces the number of movies and shows available, then they have reduced their value to me. This holds in an absolute sense, and also in a relative sense. If Disney+ removes movies, and Paramount+ keeps its movies, then Disney+ drops in value relative to Paramount+.

Second, they break their image of "all things X". When Paramount+ dropped the series "Star Trek: Prodigy", they lost the right to claim to be home to all things Star Trek. (I don't know that Paramount+ has ever made this claim. But they cannot make it now.)

Third, the services lose the image of consistency. On a second-tier service, which lives off of licensed (essentially rented) content, I expect movies and shows to come and go. I expect a top-tier service to be predictable. If I see that it has a movie available this month, I expect them to have it next month, and six months from now, and a year from now. I expect the Disney+ service to have all of the movies that Disney has made over the years, now and in the future. I expect the Paramount+ service to have all of the Star Trek movies and TV shows, now and in the future.

By dropping content, the top-tier services become more like the second-tier services. When Netflix, or Max, or Peacock remove content, they become less reliable, less predictable, less... premium.

Which they may want to consider when setting their monthly subscription rates.


Wednesday, July 5, 2023

Twitter, Elon Musk, and dignity

A lot has been said about Elon Musk's actions at Twitter. I will add a little more, with some ideas that I have not seen anywhere else. (Also, I recognize that Musk has stepped aside and is letting Linda Yaccarino run the show. But I don't know if Musk is still involved.)

Musk's behavior at Twitter has been described as chaotic, petulant, and just plain wrong. He has made wide-sweeping decisions, and made them hastily and with little respect for the long-time employees at Twitter. Those decisions have had consequences.

I'm going to focus not on the decisions, and not on the consequences, but on the process. Musk is running Twitter as if it were a start-up, a company with an idea of a product or service, perhaps a prototype or minimum viable product, and few or no customers. Start-ups need to find a product or service that resonates with customers, something that makes customers ready to pay for the product or service. It is common for a start-up to try several (sometimes quite varied) approaches.

A start-up looking for its product (or its value proposition, to use MBA-speak) needs to move quickly. It has limited resources and it does not have the luxury of waiting for multiple levels of bureaucracy to review decisions and slowly reach a consensus. The CEO must make decisions quickly and with minimal delay.

That's the behavior I see in Musk at Twitter: unilateral, arbitrary decisions made with no advance notice.

While such behavior is good (and sometimes necessary) at start-ups, it is not good at established companies. Established companies are, well, established. They have well-defined products and services. They have a base of customers who pay them money on a regular basis. Those customers have expectations, based on the previous actions of the company.

Arbitrary changes to products and services, made on short notice, do not sit well with those customers. Customers want predictability, just as you and I want predictability from our internet providers and streaming services.

(Note to self: a future column might discuss consistency and predictability for streaming services.)

Back to customers of Twitter: They want predictability, and Musk is not providing it.

The users of Twitter, distinct from the customers who pay for advertising, also want consistency and predictability. Arbitrary changes can drive users away, which reduces advertising view counts, which reduces advertising rates, which reduces income from advertising.

It seems to me that Musk is well-suited to run a start-up, and poorly suited to run an established company.

(Note to self: a future column might discuss the transition from start-up to established company.)

Perhaps the best action that Musk can take is to remove himself from the management of Twitter and let others run the company. He has done that, to some extent. He should step completely aside. I'm not commenting on Yaccarino's competency to run Twitter; that is another topic.

Sometimes the best way to solve a problem is to let others handle it.

Tuesday, May 2, 2023

The long life of the hard disk

It was in 1983 that IBM introduced the IBM XT -- the IBM PC with the built-in hard disk. While hard disks had been around for years, the IBM XT made them visible to the vast PC market. Hard disks were expensive, so there was a lot of advertising and also a lot of articles about the benefits of hard disks.

Those benefits were: faster operations (booting PC-DOS, loading programs, reading and writing files), more security (you can't lose a hard disk the way you can lose a floppy), and more reliability compared to floppy disks.

The hard disk didn't kill the floppy disk. Floppies remained popular for some time, and they began to fade only after Apple introduced the iMac G3 (in 1998) without a floppy drive. Even after Apple's move, floppies lingered for years.

Floppy disks did gradually lose market share, and in 2010 Sony stopped manufacturing floppy disks. But hard disks remained a staple of computing.

Today, in 2023, the articles are now about replacing hard disks with solid-state disks. The benefits? Faster boot times, faster program loading times, faster reading and writing. (Sound familiar?) Reliability isn't an issue, nor is the possibility of losing media.

Apple again leads the market in moving from older storage technology. Their product line (from iPhones and iPads to MacBooks and iMacs) all use solid-state storage. Microsoft is moving in that direction too, pressuring OEMs and individuals to configure PCs to boot Windows from an internal SSD rather than a hard disk. It won't be long before hard disks are dropped completely and not even manufactured.

But consider: the hard disk (with various interfaces) was the workhorse of storage for PCs from 1983 to 2023 -- forty years.

Floppy disks (in the PC world) were used from 1977 to 2010, somewhat less than forty years. But they were used prior to PCs, so maybe their span was also forty years.

Does that mean that SSDs will be used for forty years? We've had them since 1978 (if you count the very early versions) but they moved into the mainstream of computing in 2017 with Intel's Optane products. Forty years after 2017 puts us in 2057. But that would be the end of SSDs -- their replacement should arrive earlier than that, possibly fifteen years earlier.

Tuesday, April 25, 2023

Chromebook life spans

At the start of the Covid pandemic, back in 2020, lots of schools needed a way for students to attend from home. They selected Chromebooks. Chromebooks were less expensive than Windows PCs or MacBooks, easier to administer, and less vulnerable to hacking (by bad guys and students alike).

Schools bought a lot of them.

And now, those same schools are learning that Chromebooks come with expiration dates. Many of them have three-year life spans.

The schools -- or rather, the people who manage budgets for schools -- are not happy.

There is a certain amount of caveat emptor here -- due diligence that the school IT and budget administrators failed to exercise -- but I would rather focus on the life spans of Chromebooks.

Three years isn't all that long in the IT world. How did Google (who designs the Chromebook specification) select that term?

(We should note that not all Chromebooks have three-year life spans. Some Chromebooks expire after five or even seven years. It is the schools that selected the three-year Chromebooks that are unhappy. But let's focus on the three-year term.)

(We should also note that the Chromebook life span is for updates. The Chromebooks continue to work; Google simply stops updating ChromeOS and the Chrome browser. That may or may not be an issue; I myself used an old Chromebook for years after its expiration date. Eventually, web sites decided that the old version of Chrome was not worth talking to, and I had to replace the Chromebook.)

I have an idea about the three-year life span. I don't work at Google, and have no contacts there, so I'm speculating. I may be wrong.

It seems to me that Google selected the three-year life span to tailor Chromebooks not to schools but to large corporations. Large corporations (or maybe IT vendors), back in the 1990s, convinced Congress to adjust the depreciation schedules for IT equipment, reducing the expected life to three years. This change had two effects. First, the accelerated schedule lets corporations write off the expense of IT equipment faster. Second, IT vendors convinced large corporations to replace their IT equipment every three years. (The argument was that it was cheaper to replace old PCs rather than maintain them.)

With corporations replacing PCs every three years, it made sense for Google to build their Chromebooks to fit that schedule. While PCs did not have built-in expiration dates, corporations were happy to replace their PCs on that three-year schedule.

A three-year expiration gave Google several advantages. They could design the Chromebooks with less expensive components. They could revise the ChromeOS operating system rapidly and not worry about backwards compatibility. Google could sell the idea of planned obsolescence to the makers of Chromebooks (HP, Dell, Lenovo, etc.) as a market that would provide a steady demand.

Again, this is all speculation. I don't know that Google planned any of this.

But it is consistent.

Schools are upset that Chromebooks have such a short supported life. Google made Chromebooks with those short support life spans because the target was corporations. Corporations replaced IT equipment every three years because of tax laws and the perceived costs of maintaining older hardware.

If we take away anything from this, perhaps we should note that Google was focussed on the corporate market. Other users, such as schools or non-profits or individuals, were possibly not considered in their calculations.

Monday, March 27, 2023

The Long Tail

One of the ideas from the dot-com boom was that of the "long tail". If one ranks their customers by sales, then the best customers -- the ones that order the most -- are clumped to the left, and the customers that order the least are in a long thin grouping to the right. That long, thin grouping is "the long tail".

This idea came about in the mid-2000s. It was considered revolutionary by some (mostly those who pushed the idea) and it was, in retrospect, a different way of doing business. It relied on a reduction of cost to serve customers.

Prior to "the long tail", businesses marked certain customers as unprofitable, and built their pricing to discourage those customers. (Banks, for instance, want a minimum amount when opening a new account. There is a cost to manage the account, and the profits from a low-balance account fall below that cost.)

The "long tail" advocates recognized that computers and the internet allowed for low-cost interactions with customers, and saw profit in small purchases. The efficiencies of self-service, or fully automated service, allowed for customers with smaller or fewer purchases. New businesses were formed, and became successful.

Yet there were some aspects of long-tail businesses that weren't predicted. Those aspects are the absence of customer service and the treatment of customers as disposable entities.

Consider Google. The tech giant has several businesses in the long tail of computer services. They include e-mail, data storage, data processing, and office tools such as word processors and spreadsheets. Companies can sign up for paid plans (the "fat" part of the tail) and individuals can sign up for free or nominally free plans (the "long" part of the tail).

But notice that the plans for individuals (free e-mail and spreadsheets, for example) have nothing in the way of customer support. There is no help line to call. There is no support by e-mail. (There are web pages with documentation, and there is a web forum staffed by volunteers, which has some answers to some questions.)

Customer support is expensive, and the individual plans (either free or low-cost) generate so little profit for Google (on an individual basis) that a single 10-minute support call would wipe out years of profit.

Long-tail businesses don't offer real-person support because they cannot afford to.

But the headaches for the customers in long-tail products and services don't end there.

Google has, on more than one occasion, cancelled a person's account. The common explanation is that the person did something that violated the terms of service. What, exactly, the person did is not specified, and there is no action listed to correct the problem.

I am picking on Google here, but this happens to users of other services, too.

What makes it bad for Google's customers (and Google, indirectly) is Google's wide variety of services. A violation of terms and conditions in one service can cause Google to suspend a person's account, which removes access to all of Google's services, including e-mail, spreadsheets, data storage, and more. Google provides no information to the customer, and the customer is effectively cut off from all services.

Google has discarded the customer. It seems a harsh resolution, but it is a low-cost one. Google does not spend the time discussing options with the customer; the cost of cutting off service is, essentially, zero. Google loses the minimal profits from that one customer, but those profits don't cover the cost of investigations and discussions. It is cheaper to dispose of the customer than to retain them. There are lots more customers.

Long-tail businesses don't value customer retention.

These are two results of long-tail business models. I won't say that they are wrong. The economic conditions seem to require their existence.

And the market does offer alternatives. Microsoft, for example, offers support (limited, but more than Google) for its Microsoft 365 services. In doing so, they move the customer from the thin part of the tail to a thicker part.

The old adage "You get what you pay for" seems to apply here.

Wednesday, February 22, 2023

Paying for social media

Twitter has implemented a monthly charge for its "Twitter Blue" service. Facebook (or Meta) has announced something similar.

Apple introduced its "Tracking Transparency" initiative (which allows users to disable tracking by apps) and that changed the market. Advertisers are apparently paying less to Facebook (and possibly Twitter) because of this change.

It was perhaps inevitable that Twitter and Facebook would look to replace that lost revenue. And where to look? To the users, of course! Thus the subscription fees were invented.

Twitter's fee is $8 per month, and Meta's is $12 per month. (Both are higher when purchased on an Apple device.)

Meta's price seems high. I suspect that Meta will introduce a second tier, with fewer features, and with a lower monthly rate.

Twitter and Meta must be careful. They may think that they are competing with streaming services and newspaper subscriptions. Streaming services have different pricing, from ad-supported services that charge nothing to Netflix and HBOmax, which charge $15 per month (or thereabouts).

But newspapers and streaming services are different from social media. Netflix, HBOmax, and the other streaming services create content (or buy the rights to content) and provide it to viewers. Newspapers create content (or buy the rights) and present it to readers. For both, the flow of information is one-way: from the service to the user.

Social media operates differently. Users create the content, with posts and updates. That information is of interest to family, friends, and colleagues. The value to users is not merely in content, but in the network of connections. A social media site with lots of your friends is interesting to you; a site with only a few is less interesting, and a site with no friends is of no interest.

Meta and Twitter face a different challenge than Netflix and HBOmax. If streaming services raise prices or do other things to drive away customers, the value for the remaining customers remains the same. But if Facebook or Twitter drive away users, then they are reducing the value of the service to the remaining users. Meta and Twitter (and any other social media site) must act carefully when introducing changes.

I tend to think that these new fees are the result of necessity, and not of simple greed. That is, Twitter and Facebook need the revenue. If that is the case, then we users of web sites and social media may be in for more fees. It seems that simple, non-targeted advertising doesn't work for web sites, and targeted advertising (with no data sent to advertisers) doesn't work either.

Advertisements coupled with detailed user information did work, in that it provided enough revenue to web sites. That arrangement was ended by Apple's "Tracking Transparency" initiative.

We're now in a "next phase" of social media, one in which users will pay for the service. (Or some users will pay, and other users will pay higher amounts for additional services, and some users may pay nothing.)

Thursday, February 16, 2023

Unstoppable cannon balls, immovable posts, and Apple

In the mid 20th Century, Martin Gardner wrote a series of articles for Scientific American. His column was called "Mathematical Games"; the content was less math and more puzzles, riddles, and brain teasers. One such brain teaser went something like this:

"Assume that there are unstoppable cannon balls. These cannon balls are different from the normal variety in that once shot from a cannon, they do not stop. They push aside any object in their way. Also assume that there are immovable posts. These posts are different from the normal variety in that they do not move, for any reason. Now, what happens when an unstoppable cannon ball strikes an immovable post?"

Readers of Mr. Gardner's columns had to wait for answers, which appeared in the magazine's next issue. I won't make you, dear readers, wait that long. The answer to the riddle of the unstoppable cannon ball and the immovable post is simple: they cannot exist together. If one has an unstoppable cannon ball, then by definition the universe cannot have an immovable post. Or, if one has an immovable post, then again by definition one cannot have an unstoppable cannon ball.

While that answer may be disappointing, it has a certain wisdom. That wisdom may help Apple.

With the introduction of the M1 and M2 processor lines, Apple has entered into the realm of brain teasers. They don't have an unstoppable cannon ball or an immovable post, but they have built similar things in their product line.

The problem for Apple is the Mac Pro computer. The Mac Pro is Apple's premium computer; it sports the best processor, the fastest disks, the speediest memory, and -- of course -- the highest price tag. But it has one thing that other computers in Apple's product line no longer have: the ability to replace components. The Mac Pro is the only computer that lets the user replace memory, add disks, and add GPU cards. Apple computers (not phones, not tablets) in the past have allowed for upgrades. My vintage Apple Powerbook G4 allows one to replace memory, disk drives, and battery. The original Macbook allowed for the same.

Over the years, Apple changed their products and gradually removed the ability to change components. Today's Macbook laptops and non-Pro Mac computers are all fully encased; there is no way to open them and swap components. (At least not for the average user.)

The M1 and M2 system-on-chip processors make upgrades or changes impossible. Everything is on the chip: CPU, GPU, memory, storage, and more.

The benefit of the everything-on-one-chip design is performance. When components are housed in separate chips (such as CPU in one, memory in another, and GPU in yet another) then one must provide connecting wires. These wires (or traces on the system board) run from one component to the next. Driving the signals across these wires requires extra circuitry -- dedicated transistors to raise the voltage of signals from the on-chip levels to the levels for the system board. Corresponding receiver circuits adapt the signals from board-level voltages to on-chip voltages. Each of those drivers and receivers slows the signal. (It's not much, but at the frequencies of today's computers, those small delays add up to significant delays.)

The distance from one component to another also causes delays. Again, each delay is small, but over the billions of operations, they add up.

Which brings us back to the unstoppable cannon ball and the immovable post.

The older designs with discrete components are the immovable post. By itself, this is not a problem.

With the system-on-chip designs of M1 and M2, Apple has built, essentially, an unstoppable cannon ball. They have left the universe of swappable components and entered the universe of system-on-chip.

You cannot have both. You cannot have a computer that has all components on a single chip, and still allows for pieces to be upgraded.

Now, you can have some computers in your line with swappable components, and others with system-on-chip designs. In that sense, you can have both.

But you cannot have a single computer with both. A computer is either totally integrated or it has replaceable components. Keep in mind that the total integration design has the much better performance.

Apple wants its Mac Pro to have replaceable components and the best performance of the line. Apple wants the Mac Pro to be the top of its product line, with the best performance (and the priciest of price tags). I don't see a way to make this happen.

The performance of Apple's M2 Ultra processor is good. Really good. Better than the old, Intel-based, swappable component Mac Pro. A new, Intel-based, swappable component Mac Pro (using the latest processors and memory chips) could be faster than the old one, but not by much. It *may* be a little faster than the M2 Ultra, but it won't be *much* faster. It certainly won't be the flagship product that Apple wants.

Apple can build computers based on the M1 or M2 processor, and they will have top performance, but they won't have replaceable components. (The unstoppable cannon ball.) Apple can build computers with replaceable components (either Intel or AMD processors, or discrete processors based on the M1/M2 CPU) but they won't have the performance. (The immovable post.)

The idea of a top-tier computer system with replaceable parts is now a thing of the past. It probably has always been a thing of the past, as high performance computers have always integrated as much as possible. The notion of replaceable parts came from the hobbyist market and the original IBM PC, which wisely traded performance for flexibility. In the 1980s, when we had a poor understanding of what we wanted from computers, flexibility was the better choice.

Today, we have very definite ideas about our computers. We don't need to experiment with different video cards and memory configurations. We don't need to add network cards to some but not all computers. (Our manufacturers also have much better processes, and computer components are much more reliable. Computers run, and we have little need to replace a failed component.)

Apple could offer a Mac computer that has replaceable parts. It would be a low-end computer, not the high-end Mac Pro. I suspect that Apple will not make such a computer. It would be more expensive to produce, have a larger support effort (customers making mistakes and asking questions), and have limited appeal in the Apple fan base.

But Apple cannot build a high-end Mac Pro with replaceable components. It won't have the performance, and the Mac Pro is all about performance.

I think that Apple will build a Mac Pro, but with the "Extreme" variant of an M2 (or possibly M3) processor. The Mac Pro will be the only computer in Apple's line with the "Extreme" variant; other computers will use the plain, "Pro", "Max", or "Ultra" version of its processors. The new Mac Pro won't have replaceable parts, but it will have superior performance. People may be surprised, but I won't be one of them.

Thursday, February 9, 2023

AI answers may improve traditional search

Isaac Asimov, the writer of science and science fiction, described his experience with publishing houses as a writer. People had warned him to stay away from the publishing world, telling him that it was full of unscrupulous opportunists who would take advantage of him. Yet his experience was a good one; the publishers, editors, and others he worked with were (for the most part) honest, hard-working, and ethical.

Asimov had a conjecture about this. He surmised that for some time prior to his arrival as a writer, the publishing industry did have a large number of unscrupulous opportunists, and they gave the industry a bad reputation. He further theorized that when he started as an author, those individuals had moved on to a different industry. Not because of his arrival, but because there was a newer, larger, and more lucrative industry to take advantage of individuals. It was the movie industry that provided a better "home" for those individuals. Once they saw that movies were the richer target, they abandoned the publishing industry, and left the ethical people (who really wanted to work in publishing) behind.

I don't recall that Asimov proved his conjecture, but it has a good feel to it.

What does this have to do with software? Well, not much for the programming world, but maybe a lot for the online search world.

Search engines (Google, Bing, Duck-duck-go, and others) make a valiant attempt to provide good results, but web sites use tricks to raise a web site's ranking in the search engines. The result is that today, in 2023, many searches work poorly. Searches to purchase something work fairly well, and some searches for answers (when does the Superbowl start) tend to be relevant, but many queries return results that are not helpful.

As I see it, web site operators, in their efforts to increase sales, have hired specialists to optimize their rankings in search engines, leading to an endless race to outdo the competition. The result is that search engines provide little in the way of "organic" results and too many "sponsored" or "optimized" responses.

The situation with search engines is, perhaps, similar to the pre-Asimov era of publishing: full of bad operators that distort the product.

So what happens with the new AI-driven answer engines?

If people switch from the old search engines to the new answer engines, we can assume that the money will follow. That is, the answer engines will be popular, and lead to lots of ad revenue. When the revenue shifts from search engines to answer engines, the optimizations will also shift to answer engines. Which means that the efforts to game search engines will stop, and search engines can drift back to organic results.

This change occurs only if the majority of users switch to the answer engines. If a sizable number of people stay on the older search engines, then the gains from optimizing results will remain, and the optimization games will continue.

I'm hoping that most people do switch to the new answer engines, and a small number of people -- just enough for search engines to remain in business -- keep using the older engines.

Wednesday, February 1, 2023

To build and to maintain

I had the opportunity to visit another country recently (which one doesn't matter) and I enjoyed the warmer climate and the food. I also had the opportunity to observe another country's practices for building and maintaining houses, office buildings, roads, bridges, and other things.

The United States is pretty good at building things (roads, bridges, buildings, and such) and also good at maintaining them. The quality of construction and the practices for maintenance vary, of course, and overall governments and large corporations are better at them than small companies or individuals.

In the country I visited, the level of maintenance was lower. The culture is such that people are good at building things but less concerned with maintaining them. This was apparent in things like signs in public parks: once installed, they were left exposed to the elements, where they faded and broke in the sun and wind.

My point is not to criticize the country or its culture, but to observe that maintaining something is quite different from building it.

That difference also applies to software. The practices of maintaining software are different from the practices of constructing software.

Software does not wear or erode like physical objects. Buildings expand and contract, develop leaks, and suffer damage. Software, stored in bits, does not expand and contract. It does not develop leaks (memory leaks aside). It is impervious to wind, rain, and fire. So why do I say that software needs maintenance?

I can make two arguments for maintenance of software. The first argument is a cyber-world analog of damage: The technology platform changes, and sometimes the software must change to adapt. A Windows application, for example, may have been designed for one version of Windows. Windows, though, is not a permanent platform; Microsoft releases new versions with new capabilities and other changes. While Microsoft makes a considerable effort to maintain compatibility, there are times when changes are necessary. Thus, maintenance is required.

The second argument is less direct, but perhaps more persuasive. The purpose of maintenance (for software) is to ensure that the software continues to run, possibly with other enhancements or changes. Yet software, when initially built, can be assembled via shortcuts and poor implementations -- what we commonly call "technical debt". Often, those choices were made to allow for rapid delivery.

Once the software is "complete" -- or at least functional -- maintenance can be the act of reducing technical debt, with the goal of allowing future changes to be made quickly and reliably. This is not the traditional meaning of maintenance for software, yet it seems to correspond well with the maintenance of "real world" objects such as automobiles and houses. Maintenance is work performed to keep the object running.
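
If that sounds abstract, here is a minimal sketch in Python of what debt-reducing maintenance can look like. It is entirely hypothetical -- the tax-rate calculation and its names are invented for illustration. The behavior does not change; a shortcut taken for rapid delivery is simply replaced with something a future change can accommodate.

    # Original "shortcut" version: a magic number baked into the calculation,
    # and probably copied into other places in the program.
    def invoice_total_v1(amounts):
        return sum(a * 1.06 for a in amounts)

    # Maintained version: the rate has a name, lives in one place, and can be
    # overridden, so a future rate change is a one-line edit instead of a hunt
    # through the code.
    DEFAULT_TAX_RATE = 0.06

    def invoice_total_v2(amounts, tax_rate=DEFAULT_TAX_RATE):
        return sum(amounts) * (1 + tax_rate)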

If we accept this definition of maintenance for software, then we have a closer alignment of software with real-world objects. It also provides a purpose for maintenance: to ensure the long-term viability of the software.

Let's go back to the notions of building and maintaining. They are very different, as anyone who has maintained software (or a house, or an automobile) can attest.

Building a thing (software or otherwise) requires a certain set of skills and experience.

Maintaining that thing requires a different set of skills and experience. Which probably means that the work for maintenance needs a different set of management techniques, and a different set of measurements.

And building a thing in such a way that it can be maintained requires yet another set of skills and experience. And that implies yet another set of management techniques and measurements.

All of this may be intuitively obvious (like solutions to certain mathematics problems were intuitively obvious to my professors). Or perhaps not obvious (like solutions to certain math problems were to me). In either case, I think it is worth considering.

Monday, January 16, 2023

The end of more

From the very beginning, PC users wanted more. More pixels and more colors on the screen. More memory. Faster processors. More floppy disks. More data on floppy disks. (Later, it would be more data on hard disks.)

When IBM announced the PC/XT, we all longed for the space (and convenience) of its built-in hard drive. When IBM announced the PC/AT we envied those with the more powerful 80286 processor (faster! more memory! protected mode!). When IBM announced the EGA (Enhanced Graphics Adapter) we all longed for the higher-resolution graphics. With the PS/2, we wanted the reliability of 3.5" floppy disks and the millions of colors on a VGA display.

The desire for more didn't stop in the 1980s. We wanted the 80386 processor, and networks, and more memory, and faster printers, and multitasking. More programs! More data!

But maybe -- just maybe -- we have reached a point that we don't need (or want) more.

To quote a recent article in Macworld:

"Ever since Apple announced its Apple silicon chip transition, the Mac Pro is the one Mac that everyone has anxiously been awaiting. Not because we’re all going to buy one–most of the people reading this (not to mention me, my editor, and other co-workers) won’t even consider the Mac Pro. It’s a pricey machine and the work that we do is handled just as well by any Mac in the current lineup".

Here's the part I find interesting:

"the work that we do is handled just as well by any Mac in the current lineup"

Let that sink in a minute.

The work done in the offices of Macworld (which I assume is typical office work) can be handled by any of Apple's Mac computers. That means that the lowliest Apple computer can handle the work. Therefore, Macworld, being a commercial enterprise and wanting to reduce expenses, should be equipping its staff with low-end MacBook Air laptops or Mac mini desktops. To do otherwise would be wasteful.

It is not just the Apple computers that have outpaced computing needs. Low end Windows PCs also handle most office work. (I myself am typing this on a Dell desktop that was made in 2007.)

The move from 32-bit processing to 64-bit processing had a negligible effect on many computing tasks. Microsoft Word, for example, ran just as well in 32-bit Windows as it did in 64-bit Windows. The move to 64-bit processing did not improve word processing.

There are some who do still want more. People who play games want the best performance from not only video cards but also central processors and memory. Folks who edit video want performance and high-resolution displays.

But the folks who need, really need, high performance are a small part of the PC landscape. Many demanding computational tasks can be handled better by cloud-based systems. Only a few tasks require local, high-performance processing.

The majority of PC users can get by with a low-end PC. The majority of PC users are content. One may look at a new PC with more memory or more pixels, but the envy has dissipated. We have enough colors, enough pixels, and enough storage.

If we have reached "peak more" in PCs, what does that mean for the future of PCs?

An obvious change is that people will buy PCs less frequently. With no urge to upgrade, people will keep their existing equipment longer. Corporations that buy PCs for employees may continue on a "replace every three years" schedule, but that was driven by depreciation rules and tax laws. Small mom-and-pop businesses will probably keep computers until a replacement is necessary (I suspect that they have been doing that all along). Some larger corporations may choose to defer PC replacements, noting that cash outlays for new equipment are still cash outlays, and should be minimized.

PC manufacturers will probably focus on other aspects of their wares. PC makers will strive for better battery life, durability, or ergonomic design. They may even offer Linux as an alternative to Windows.

It may be that our ideas about computing are changing. It may be that instead of local PCs that do everything, we are now looking at cloud computing (and perhaps older web applications) and seeing a larger expanse of computing. Maybe, instead of wanting faster PCs, we will shift our desires to faster cloud-based systems.

If that is true, then the emphasis will be on features of cloud platforms. They won't compete on pixels or colors, but they may compete on virtual processors, administration services, availability, and supported languages and databases. Maybe we won't be envious of new video cards and local memory, but envious instead of uptime and automated replication. 

Monday, January 9, 2023

After the GUI

Some time ago (perhaps five or six years ago) I watched a demo of a new version of Microsoft's Visual Studio. The new version (at the time) had a new feature: the command search box. It allowed the user to search for a command in Visual Studio. Visual Studio, like any Windows program, used menus and icons to activate commands. The problem was that Visual Studio was complex and had a lot of commands -- so many commands that the menu structure to hold them all was enormous, and searching for a command was difficult. Many times, users failed to find the command.

The command search box solved that problem. Instead of searching through menus, one could type the name of the command and Visual Studio would execute it (or maybe tell you the path to the command).

I also remember, at the time, thinking that this was not a good idea. I had the distinct impression that the command search box showed that the GUI paradigm had failed, that it worked up to a point of complexity but not beyond that point.

In one sense, I was right. The GUI paradigm does fail after a certain level of complexity.

But in another sense, I was wrong. Microsoft was right to introduce the command search box.

Microsoft has added the command search box to the online versions of Word and Excel. These command boxes work well, once you get acquainted with them. And you must get acquainted with them; some commands are available only through the command search box, and not through the traditional GUI.

Looking back, I can see the benefit of changing the user interface, and changing it in such a way as to make a new type of user interface.

The first user interface for personal computers was the command line. In the days of PC-DOS and CP/M-86, users had to type commands to invoke actions. There were some systems (such as the UCSD p-System) that used full-screen text displays as their interface, but these were rare. Most systems required the user to learn the commands and type them.

Apple's Macintosh and Microsoft's Windows used a GUI (Graphical User Interface) which provided the possible commands on the screen. Users could click on an icon to open a file, another icon to save the file, and a third icon to print the file. The icons were visible, and more importantly, they were the same across all programs. Rarely used commands were listed in menus, and one could quickly look through the menu to find a command.

Graphical User Interfaces with icons and buttons and menus worked, until they didn't. They were adequate for simple programs such as the early versions of Word and Excel, but they were difficult to use on complex programs that offered dozens (hundreds?) of commands.

The command search box addresses that problem. A program that uses the command search box, instead of displaying all possible commands in icons and buttons and menus, shows the commonly-used commands in the GUI and hides the less-used commands in the search box.

The search box is also rather intelligent. Enter a word or a phrase and the application shows a list of commands that are either what you want or close to it. It is, in a sense, a small search engine tuned to the commands for the application. As such, you don't have to remember the exact command.

This is a departure from the original concept of "show all possible actions". Some may consider it a refinement of the GUI; I think of it as a separate form of user interface.

I think that it is a separate form of interface because this concept could be applied to the traditional command line. (Command line interfaces are still around. Ask any user of Linux, or any admin of a server.) Today's command line interfaces are pretty much the same as the original ones from the 1970s, in that you must type the command from memory.

Some command shell programs now prompt you with suggestions to auto-complete a command. That's a nice enhancement. I think another enhancement could be something similar to the command search box of Microsoft Excel: a command that takes a phrase and reports matches. Such an option does not require graphics, so I think that this search-based interface is not tied to a GUI.
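
To show what I mean, here is a small sketch in Python of a command search for a command line program. It is only an illustration of the idea, not how Excel or any shell actually implements it; the command list is invented, and the matching leans on the standard library's difflib for the "close enough" cases.

    import difflib

    # A toy command catalog; a real application would have hundreds of entries.
    COMMANDS = [
        "freeze panes",
        "insert pivot table",
        "protect sheet",
        "set print area",
    ]

    def search_commands(phrase, limit=5):
        """Return commands that contain the phrase or are close to it."""
        phrase = phrase.lower().strip()
        # Exact substring matches first...
        hits = [name for name in COMMANDS if phrase in name]
        # ...then near misses, so a slightly wrong memory still finds the command.
        for name in difflib.get_close_matches(phrase, COMMANDS, n=limit, cutoff=0.5):
            if name not in hits:
                hits.append(name)
        return hits[:limit]

    print(search_commands("pivot"))        # ['insert pivot table']
    print(search_commands("freeze pane"))  # ['freeze panes']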

Command search boxes are the next step in the user interface. They follow the first two designs: the command line (where you must memorize commands and type them exactly) and the GUI (where you can see all of the commands in icons and menus). Command search boxes don't require every command to be visible (as in a GUI), and they don't require the user to recall each command exactly (as in a command line). They really are something new.

Now all we need is a name that is better than "command search box".

Monday, January 2, 2023

Southwest airlines and computers

Southwest Airlines garnered a lot of attention last week. A large winter storm caused delays on a large number of flights, a problem with which all of the airlines had to cope. But Southwest had a more difficult time of it, and people are now jumping to conclusions about Southwest and its IT systems.

Before I comment on the conclusions to which people are jumping, let me explain what I know about the problem.

The problem in Southwest's IT systems, from what I can tell, has little to do with the age of their programs or the programming languages that they chose. Instead, the problem is caused by a mix of automated and manual processes.

Southwest, like all airlines, must manage its aircraft and crews. For a large airline, this is a daunting task. Airplanes fly across the country, starting at one point and ending at a second point. Many times (especially for Southwest) the planes stop at intermediate points. Not only do airplanes make these transits, but crews do as well. The pilots and cabin attendants go along for the ride, so to speak.

Southwest, or any airline, cannot simply assign planes and crews at random. They must take into account various constraints. Flight crews, for example, can work for so many hours and then they must rest. Aircraft must be serviced at regular intervals. The distribution of planes (and crews) must be balanced -- an airline cannot end its business day with all of its aircraft and crews on the west coast, for example. The day must end with planes and crews positioned to start the next day.

For a very small airline (say one with two planes) this scheduling can be done by hand. For an airline with hundreds of planes, thousands of employees, and thousands of flights each day, the task is complex. It is no surprise that airlines use computers to plan the assignment of planes and crews. Computers can track all of the movements and ensure that constraints are respected by the plan.
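
To give a feel for why computers are needed, here is a minimal sketch in Python of just one such constraint check -- crew duty hours. Everything in it (the limit, the crew names, the flight hours) is invented for illustration; a real scheduling system enforces many more rules and typically relies on optimization software rather than a simple loop.

    # Assumed duty limit per crew per day (a made-up number for illustration).
    MAX_DUTY_HOURS = 9.0

    def duty_violations(assignments):
        """assignments: list of (crew_id, flight_hours) tuples for one day."""
        totals = {}
        problems = []
        for crew, flight_hours in assignments:
            totals[crew] = totals.get(crew, 0.0) + flight_hours
            if totals[crew] > MAX_DUTY_HOURS:
                problems.append(f"{crew} exceeds {MAX_DUTY_HOURS} duty hours")
        return problems

    day_plan = [("crew-1", 4.5),
                ("crew-1", 5.0),   # pushes crew-1 over the assumed limit
                ("crew-2", 3.0)]
    print(duty_violations(day_plan))   # ['crew-1 exceeds 9.0 duty hours']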

But the task does not end with the creation of a set of flight assignments. During each day, random events can happen that delay a flight. Delays can be caused by headwinds, inclement weather, or sick passengers. (I guess crew members, being people, can get sick, too.)

Delays in one flight may mean delays in subsequent flights. Airlines may swap crews or planes from one planned flight to another, or they may simply wait for the late equipment. Whatever the reason, and whatever the change, the flight assignments have to be recalculated. (Much like a GPS system in your car recalculates the route when you miss an exit or a turn, except on a much larger scale.)

Southwest's system has two main components: an automated system and a manual process. The automated system handles the scheduling of aircraft and crews. The manual process handles the delays, and provides information to the automated system.

During the large winter storm, a large number of flights were delayed. So many flights were delayed that the manual process for updating information was overwhelmed -- people could not track and input the information fast enough to keep the automated system up to date.

A second problem happened on the automated side. So many people visited the web site (to check the status of flights) that it, too, could not handle all of the requests.

This is what I think happened. (At least, this makes sense to me.)

A number of people have jumped to the conclusion that Southwest's IT systems were antiquated and outdated, and that this led to the breakdown. Some people have jumped further and concluded that Southwest's management actively prevented maintenance and enhancements of their IT systems to increase profits and dividend payouts.

I'm not willing to blame Southwest's management, at least not without evidence. (And I have seen none.)

I will share these thoughts:

1. Southwest's IT systems -- even if they are outdated -- worked for years (decades?) prior to this failure.

2. All systems fail, given the right conditions.

One can argue that Southwest's system, a combination of automated and manual processes, could be redesigned to have more work handled by the automated side. It would require some way to track flights and record crews and planes arriving at a destination. Such changes are not trivial, and should be made with care.

One can argue that Southwest's IT systems use old programming techniques (and maybe even old programming languages), and that Southwest should modernize their code. I find this argument unpersuasive, as newer programming languages, and code written in those languages, are not necessarily better (or more reliable) than the old code.

One can argue that Southwest's IT system could not scale up to handle the additional demand, and that Southwest should use cloud technologies to better meet variable demand. That is also a weak argument; moving to cloud technologies will not automatically make a system scalable.

Clearly this event was an embarrassment for Southwest, as well as a loss of some customer goodwill. (Not to mention the expense of refunds.) Given that a large winter storm could happen again (if not this year then possibly next year), Southwest may want to make adjustments to its scheduling systems and processes. But I would caution them against a large-scale re-write of their entire system. Such large projects tend to fail. Instead, I would recommend small, incremental improvements to their databases, their web sites, and their scheduling systems.

Whatever course Southwest chooses, I hope that it is executed with care, and with respect for the risks involved.