Thursday, August 10, 2023

The Apple Mac Pro is a Deluxe Coffee Maker

Why does Apple offer the Mac Pro? It's expensive and it offers little more than the Mac Studio. So why is it in the product line? Several have asked this question, and I have an idea.

But before discussing the Apple Mac Pro, I want to talk about coffee makers. Specifically the marketing of coffee makers.

Coffee makers are, at best, a commodity. One puts coffee, water, and electricity in, and after a short time coffee comes out. The quality of the coffee is, I believe, dependent on the coffee and the water, not the mechanism.

As bland as they are, there is one legend about coffee makers. It has to do with the marketing of coffee makers, and it goes something like this:

(Please note that I am working from a dim memory and much -- if not all -- of this legend may be wrong. But the idea will serve.)

A company that made and sold coffee makers was disappointed with sales, and wanted to increase its profits. They brought in a consultant to help. The consultant looked at the product line (there were two, a basic model and a fancy model), sales figures, sales locations, advertising, and various other things.  The consultant then met with the big-wigs at the company and presented recommendations.

The executives at the company were expecting to hear about marketing strategies, advertising, and perhaps pricing. And the consultant did provide recommendations along those lines.

"Add a third coffee maker to your product line," he said. "Make it a deluxe model with all of the features of your current models, and a few more, even features that people won't use. Sell it for an expensive price."

The executives were surprised to hear this. How could a third coffee maker, especially an expensive one, improve sales? Customers were not happy with the first two; a third would be just as bad.

"No," said the consultant. "The deluxe model will improve sales. It won't have many sales itself, but it will encourage people to buy the fancy (not deluxe) model. Right now your customers see that fancy model as expensive, and a poor value. A third model, with lots of features and a high price will convince customers that the fancy (not deluxe) model is a bargain."

The company tried this strategy ... and it worked! Just as the consultant said. Sales of the deluxe model were dismal, but sales of the (now) middle-tier fancy (not deluxe) model perked up. (Pun intended.)

We often forget that sales is about psychology as well as features.

Now let's consider Apple and the Mac Pro. The Mac Pro is not a good bargain. It performs only slightly better than the Mac Studio, yet it carries a much higher price tag. The Mac Pro has features that are ... questionable at best. (PCI slots that won't take graphics cards. Don't forget the wheels!)

Perhaps -- just perhaps -- Apple is using the Mac Pro to boost sales of the Mac Studio. Priced as it is, the Mac Pro makes the Mac Studio a much more attractive option.

I suspect that if Apple had no Mac Pro and put the Mac Studio at the top of its product line, then a lot of people would argue for the Mac Mini as the better option. Those same people can make the same argument with the Mac Pro and convince themselves to buy the Mac Studio.

So maybe the Mac Pro isn't a Mac Pro at all. Maybe it is a deluxe coffee maker.

Thursday, August 3, 2023

We'll always have source code

Will AI change programming? Will it eliminate the need for programmers? Will we no longer need programs, or rather, the source code for programs? I think we will always have source code, and therefore always have programmers. But perhaps not as we think of programmers and source code today.

But first, let's review the notions of computers, software, and source code.

Programming has been with us almost as long as we have had electronic computers.

Longer than that, if we include the punch cards used by the Jacquard loom. But let's stick to electronic computers and the programming of them.

The first digital electronic computers were built in the 1940s. They were programmed not by software but by wires -- connecting various wires to various points to perform a specific set of computations. There was no concept of a program -- at least not one for the computer. There were no programming languages and there was no notion of source code.

The 1950s saw the introduction of the stored-program computer. Instead of wiring plug-boards, program instructions were stored in cells inside the computer. We call these instructions "machine code". When programming a computer, machine code is slightly more convenient than wiring plug-boards, but not by much. Machine code consists of a number of instructions, each of which resides at a distinct, sequential location in memory. The processor executes the program by simply reading one instruction from a starting location, executing it, and then reading the next instruction at the next memory address.
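That fetch-execute cycle can be sketched as a toy in Python. (The opcodes and the machine here are invented for illustration; they do not correspond to any real processor.)

```python
# A toy stored-program machine: memory holds numeric instructions,
# and the processor reads them one after another from sequential cells.
def run(memory):
    acc = 0          # a single accumulator register
    pc = 0           # program counter: address of the next instruction
    while pc < len(memory):
        op, arg = memory[pc]
        pc += 1      # advance to the next sequential cell
        if op == 1:      # LOAD: put a constant in the accumulator
            acc = arg
        elif op == 2:    # ADD: add a constant to the accumulator
            acc += arg
        elif op == 0:    # HALT: stop the machine
            break
    return acc

# "Machine code" is just numbers in memory: LOAD 2, ADD 3, HALT.
program = [(1, 2), (2, 3), (0, 0)]
print(run(program))  # 5
```

The point of the sketch is the sequential read: the processor has no idea of "a program" beyond the number stored in the next cell.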

Building a program in machine code took a lot of time and required patience and attention to detail. Changing a program often meant inserting instructions, which meant that the programmer had to recalculate all of the destination addresses for loops, branches, and subroutines. With stored-program computers, there was the notion of programming, but not the notion of source code.

Source code exists to be processed by a computer and converted into machine code. We first had source code with symbolic assemblers. Assemblers were (and still are) programs that read a text file and generate machine code. Not just any text file, but a text file that follows specific rules for content and formatting, and that specifies a series of machine instructions as text -- not as numbers. The assembler did the grunt work of converting "mnemonic" codes to numeric machine codes. It also converted numeric and text data to the proper representation for the processor, and calculated the destinations for loops, branches, and subroutines. Revising a program written in assembly language was much easier than revising machine code.
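A minimal two-pass assembler shows the idea. (The mnemonics and opcodes below are invented for illustration, matching no real instruction set.) Pass one records the address of each label; pass two emits numeric machine code, substituting addresses for label names -- the grunt work a programmer once did by hand.

```python
# Invented mnemonic-to-opcode table for a toy instruction set.
OPCODES = {"HALT": 0, "LOAD": 1, "ADD": 2, "JMP": 3}

def assemble(lines):
    labels, instructions = {}, []
    for line in lines:                   # pass 1: record label addresses
        line = line.strip()
        if line.endswith(":"):
            labels[line[:-1]] = len(instructions)
        elif line:
            instructions.append(line.split())
    code = []
    for parts in instructions:           # pass 2: emit numeric machine code
        op = OPCODES[parts[0]]
        if len(parts) == 1:
            arg = 0
        elif parts[1] in labels:
            arg = labels[parts[1]]       # branch target: label -> address
        else:
            arg = int(parts[1])
        code.append((op, arg))
    return code

source = ["start:", "LOAD 2", "ADD 3", "JMP start"]
print(assemble(source))  # [(1, 2), (2, 3), (3, 0)]
```

Insert a new instruction anywhere and re-assemble: every label address is recomputed automatically. That is exactly the revision convenience, compared to raw machine code, described above.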

Later languages such as FORTRAN and COBOL converted higher-level text into machine code. They, too, had source code.

Early C compilers converted code into assembly code, which then had to be processed by an assembler. This last sequence looked like this:

    C source code --> [compiler] --> assembly source code --> [assembler] --> machine code

I've listed both the C code and the assembly code as "source code", but in reality only the C code is the source code. The assembly code is merely an intermediate form of the code, something generated by machine and later read by machine.

A better description of the sequence is:

    C source code --> [compiler] --> assembly code --> [assembler] --> machine code

I've changed the "assembly source code" to "assembly code". The adjective "source" is not really correct for it. The C program (at the left) is the one and only source.

Later C compilers omitted this intermediate step and generated machine code directly. The sequence became:

    C source code --> [compiler] --> machine code

Now let's consider AI. (You didn't forget about AI, did you?)

AI can be used to create programs in two ways. One is to enhance a traditional programming IDE with AI, and thereby assist the programmer as he (or she) is typing. That's no different from our current process; all we have done is made the editor a bit smarter.

The other way is to use AI directly and ask it to create the program. In this method, a programmer (or perhaps a non-programmer) provides a prompt text to an AI engine and the AI engine creates the entire program, which is then compiled into machine code. The sequence looks like this:

    AI prompt text --> [AI engine] --> source code --> [compiler] --> machine code

Notice that the word "source" has sneaked back into the middle of the stream. The term doesn't belong there; that code is intermediate and not the source. A better description is:

    Source AI prompt text --> [AI engine] --> intermediate code --> [compiler] --> machine code

This description puts the "source" back on the first step of the process. That prompt text is the true source code. One may argue that a prompt text is not really source code, that it is not specific enough, or not Turing-complete, or not formatted like a traditional program. I think that it is the source code. It is created by a human and it is the text used by the computer to generate the machine code that we desire. That makes it the source.

Notice that in this new process with AI, we still have source code. We still have a way for humans to instruct computers. I've been writing about source code as if it were written. Source code has always been written (or typed, or keypunched) in the past. It is possible that future systems will recognize human speech and build programs from it (much like on several science fiction TV programs). If so, those spoken words will be the source code.

AI may change the programming world. It may upend the industry. It may force many programmers to learn new skills, or to retire. But humans will always want to express their desires to computers. The way they express them may be through text, or through speech, or (in some far-off day) through direct neural links. Those thoughts will be source code, and we will always have it. The people who create that source code are programmers, so we will always have them.

We will always have source code and programmers, but source code and programming will change over time.

Thursday, July 20, 2023

Hollywood's blind spot

Hollywood executives are probably correct in that AI will have a significant effect on the movie industry.

Hollywood executives are probably underestimating the effect that AI will have on the movie industry.

AI, right now, can create images. Given some prompting text, an AI engine can form an image that matches the description in the text. The text can be simple, such as "a zombie walking in an open field", or it can be more complex.

It won't be long before AI can make not a single image but a video. A video is nothing more than a collection of images, each different from the previous in minor ways. When played back at 24 frames per second, the human mind perceives the images not as individual images but as motion. (This is how movies on film work, and how movies on video tape work.) I'm sure people are working on "video from AI" right now -- and they may already have it.

A movie is, essentially, a collection of short videos. If AI can compose a single video, then AI can compose a collection of videos. The prompting text for a movie might resemble a traditional movie script -- with some formatting changes and additional information about costumes, camera angles, and lighting.

Thus, with enough computing power, AI can start with an enhanced, detailed script and render a movie. Let's call this a "script renderer".

A script renderer makes the process of moviemaking cheap and fast. It is the word processor of the twenty-first century. And just as word processors upended the office jobs of the twentieth century, the script renderer will upend the movie jobs of this century. Word processors (the software on commonplace computers) replaced people and equipment: secretaries, proofreaders, typewriters, carbon paper, copy machines, and Wite-out erasing fluid.

Script renderers (okay, that's a clumsy term and we'll probably invent something better) will do similar things for movies. If an AI can make a movie from a script, then movie makers don't need equipment (cameras, lights, costumes, sets, props, microphones) and the people who handle that equipment. It may be possible for a single individual to write a script, send it through a renderer, and get a movie. What's more, just as word processors let one print a document, review it, make changes, and print it again, a script renderer will let one render a movie, view it, make changes, and render it again -- perhaps all in a few hours.

Hollywood executives, if they have seen this far ahead, may be thinking that their studios will be much more profitable. They won't need to pay actors, or camera operators, or build sets, or ... lots of other things. All of those expenses disappear, but the revenue from the movies remains.

But here's what they don't see: Making a movie will simply be a matter of computing power. Anyone with a computer and access to a sufficiently powerful AI will be able to convert a script into a movie.

Today, anyone can start a newsletter. Or print invitations to a party. Or their own business cards.

Tomorrow, anyone will be able to make a movie. It won't be easy; one still needs a script with the right details, and one should have a compelling story and good dialog. But it will be much easier than it is today.

And create movies they will. Not just movies, but TV episodes, miniseries, and perhaps even short videos like the old Flash Gordon serials.

I suspect that the first wave of "civilian movies" will be built on existing materials. Fans of old "Star Trek" shows will create new episodes with new stories but using the likenesses of the original actors. The studios will sue, of course, but it won't be a simple case of copyright infringement. The owners of the old shows will have to build a case on different grounds. (They will probably prevail, if only because the amateurs cannot pay the court costs.)

The second wave will be different. It will be new material, away from the copyrighted and trademarked properties. But it will still be amateurish, with poor dialog and awkward pacing.

The third wave of non-studio movies will be better, and will be the real threat to today's movie studios. These movies will have higher quality, and will obtain some degree of popularity. That will get the attention of Hollywood executives, because now these "civilian" movies will compete with "real" movies.

Essentially, AI removes the moat around movie studios. That moat is the equipment, sound stages, and people needed to make a movie today. When the moat is gone, lots of people will be able to make movies. And lots will.


Thursday, July 13, 2023

Streaming services

Streaming services have a difficult business model. The cost of producing (or licensing) content is high, and the revenue from subscriptions or advertisements is low. Fortunately, the ratio of subscribers to movies is high, and the ratio of advertisements to movies is also high. Therefore, the streaming services can balance revenue and costs.

Streaming services can increase their revenue by adjusting subscription fees. But the process is not simple. Raising subscription fees does raise income per subscriber, but it may cause some subscribers to cancel their subscription. Here, economics comes into play, with the notion of the "demand curve", which measures (or attempts to measure) the willingness of customers to pay at different price levels.
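A toy calculation makes the demand-curve trade-off concrete. (The numbers below are invented for illustration, not real streaming data.) Raising the fee gains revenue per subscriber but loses subscribers, and total revenue peaks somewhere in between:

```python
# Toy linear demand curve: subscribers fall as the monthly fee rises.
# All figures are invented for illustration.
def subscribers(fee):
    # start with 10 million subscribers; lose 500,000 per dollar of fee
    return max(0, 10_000_000 - 500_000 * fee)

def revenue(fee):
    return fee * subscribers(fee)

for fee in (8, 10, 12):
    print(f"${fee}/month -> ${revenue(fee):,} revenue")
# Under these made-up numbers, revenue peaks at a $10 fee;
# raising the price past that point loses more than it gains.
```

A real demand curve is neither linear nor known in advance, which is why the pricing process is not simple.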

Streaming services can decrease their costs by removing content. For licensed content (that is, movies and shows that are made by other companies) the streaming service pays a fee. If they don't "carry" those movies or shows, then they don't have to pay. Cancelling their license reduces their cost.

For content that the service produces, the costs are more complex. There is the cost of production, which is a "sunk cost" -- the money has been spent, whether the service carries the movie/show or not. There are also ongoing costs, in the form of residual payments, which are paid to the actors, writers, and other contributors while the movie or show is made available. Thus, a service that has produced a movie can reduce its costs (somewhat) by not carrying said movie.

That's the basic economics of streaming services, a very simplified version. Now let's look at streaming services and the value that they provide to viewers.

I divide streaming services into two groups. Some services make their own content, and other services don't. The situation is somewhat more complicated, because the content-making services also license content from others. Netflix, Paramount+, and Roku all run streaming services, all make their own content, and all license other content to show on their service. Tubi, Frndly, and Pluto TV make no content and simply license content from others.

The content-producing services, in my mind, are the top-tier services. Disney+ makes its own content and buys (permanently) other content to add to its library, and is recognized as a top-tier service. Netflix, Paramount+, and Peacock create their own content (and license some) and I consider them top-tier services.

The services that don't produce content, the services that simply license content and then make it available, are the second-tier services. They are second-tier because their content is available for a limited time. They don't own content; they can only rent it. Therefore, content will be available for some amount of time, and then disappear from the service. (Roku, for example, had the original "Bionic Woman" series, but it is not available now.)

For second-tier services, content comes and goes. There is no guarantee that a specific movie or show will be available in the future. Top-tier services, in contrast, have the ability to keep movies and shows available. They don't, and I think that damages their brand.

Services damage their brand when they remove content, in three ways.

First, they reduce the value of their service. If a service reduces the number of movies and shows available, then they have reduced their value to me. This holds in an absolute sense, and also in a relative sense. If Disney+ removes movies, and Paramount+ keeps its movies, then Disney+ drops in value relative to Paramount+.

Second, they break their image of "all things X". When Paramount+ dropped the series "Star Trek: Prodigy", they lost the right to claim to be home to all things Star Trek. (I don't know that Paramount+ has ever made this claim. But they cannot make it now.)

Third, the services lose the image of consistency. On a second-tier service, which lives off of licensed (essentially rented) content, I expect movies and shows to come and go. I expect a top-tier service to be predictable. If I see that it has a movie available this month, I expect them to have it next month, and six months from now, and a year from now. I expect the Disney+ service to have all of the movies that Disney has made over the years, now and in the future. I expect the Paramount+ service to have all of the Star Trek movies and TV shows, now and in the future.

By dropping content, the top-tier services become more like the second-tier services. When Netflix, or Max, or Peacock remove content, they become less reliable, less predictable, less... premium.

Which they may want to consider when setting their monthly subscription rates.


Wednesday, July 5, 2023

Twitter, Elon Musk, and dignity

A lot has been said about Elon Musk's actions at Twitter. I will add a little more, with some ideas that I have not seen anywhere else. (Also, I recognize that Musk has stepped aside and is letting Linda Yaccarino run the show. But I don't know if Musk is still involved.)

Musk's behavior at Twitter has been described as chaotic, petulant, and just plain wrong. He has made sweeping decisions, and made them hastily and with little respect for the long-time employees at Twitter. Those decisions have had consequences.

I'm going to focus not on the decisions, and not on the consequences, but on the process. Musk is running Twitter as if it were a start-up, a company with an idea of a product or service, perhaps a prototype or minimum viable product, and few or no customers. Start-ups need to find a product or service that resonates with customers, something that makes customers ready to pay for the product or service. It is common for a start-up to try several (sometimes quite varied) approaches.

A start-up looking for its product (or its value proposition, to use MBA-speak) needs to move quickly. It has limited resources and it does not have the luxury of waiting for multiple levels of bureaucracy to review decisions and slowly reach a consensus. The CEO must make decisions quickly and with minimal delay.

That's the behavior I see in Musk at Twitter: unilateral, arbitrary decisions made with no advance notice.

While such behavior is good (and sometimes necessary) at start-ups, it is not good at established companies. Established companies are, well, established. They have well-defined products and services. They have a base of customers who pay them money on a regular basis. Those customers have expectations, based on the previous actions of the company.

Arbitrary changes to products and services, made on short notice, do not sit well with those customers. Customers want predictability, just as you and I want predictability from our internet providers and streaming services.

(Note to self: a future column might discuss consistency and predictability for streaming services.)

Back to customers of Twitter: They want predictability, and Musk is not providing it.

The users of Twitter, distinct from the customers who pay for advertising, also want consistency and predictability. Arbitrary changes can drive users away, which reduces advertising view counts, which reduces advertising rates, which reduces advertising income.

It seems to me that Musk is well-suited to run a start-up, and poorly suited to run an established company.

(Note to self: a future column might discuss the transition from start-up to established company.)

Perhaps the best action that Musk can take is to remove himself from the management of Twitter and let others run the company. He has done that, to some extent. He should step completely aside. I'm not commenting on Yaccarino's competency to run Twitter; that is another topic.

Sometimes the best way to solve a problem is to let others handle it.

Tuesday, May 2, 2023

The long life of the hard disk

It was in 1983 that IBM introduced the IBM XT -- the IBM PC with the built-in hard disk. While hard disks had been around for years, the IBM XT made them visible to the vast PC market. Hard disks were expensive, so there was a lot of advertising and also a lot of articles about the benefits of hard disks.

Those benefits were: faster operations (booting PC-DOS, loading programs, reading and writing files), more security (you can't lose a hard disk the way you can lose a floppy), and more reliability compared to floppy disks.

The hard disk didn't kill the floppy disk. Floppies remained popular for some time, even after Apple omitted the floppy drive from the iMac G3 in 1998.

Floppy disks did gradually lose market share, and in 2010 Sony stopped manufacturing floppy disks. But hard disks remained a staple of computing.

Today, in 2023, the articles are now about replacing hard disks with solid-state disks. The benefits? Faster boot times, faster program loading times, faster reading and writing. (Sound familiar?) Reliability isn't an issue, nor is the possibility of losing media.

Apple again leads the market in moving from older storage technology. Their product line (from iPhones and iPads to MacBooks and iMacs) all use solid-state storage. Microsoft is moving in that direction too, pressuring OEMs and individuals to configure PCs to boot Windows from an internal SSD rather than a hard disk. It won't be long before hard disks are dropped completely and not even manufactured.

But consider: the hard disk (with various interfaces) was the workhorse of storage for PCs from 1983 to 2023 -- forty years.

Floppy disks (in the PC world) were used from 1977 to 2010, somewhat less than forty years. But they were used prior to PCs, so maybe their span was also forty years.

Does that mean that SSDs will be used for forty years? We've had them since 1978 (if you count the very early versions) but they moved into the mainstream of computing in 2017 with Intel's Optane products. Forty years after 2017 puts us in 2057. But that would be the end of SSDs -- their replacement should arrive earlier than that, possibly fifteen years earlier.

Tuesday, April 25, 2023

Chromebook life spans

At the start of the Covid pandemic, back in 2020, lots of schools needed a way for students to attend from home. They selected Chromebooks. Chromebooks were less expensive than Windows PCs or MacBooks, easier to administrate, and less vulnerable to hacking (by bad guys and students alike).

Schools bought a lot of them.

And now, those same schools are learning that Chromebooks come with expiration dates. Many of them have three-year life spans.

The schools -- or rather, the people who manage budgets for schools -- are not happy.

There is a certain amount of caveat emptor here -- due diligence that the school IT and budget administrators failed to perform -- but I would rather focus on the life spans of Chromebooks.

Three years isn't all that long in the IT world. How did Google (which designs the Chromebook specification) select that term?

(We should note that not all Chromebooks have three-year life spans. Some Chromebooks expire after five or even seven years. It is the schools that selected the three-year Chromebooks that are unhappy. But let's focus on the three-year term.)

(We should also note that the Chromebook life span is for updates. The Chromebooks continue to work; Google simply stops updating ChromeOS and the Chrome browser. That may or may not be an issue; I myself used an old Chromebook for years after its expiration date. Eventually, web sites decided that the old version of Chrome was not worth talking to, and I had to replace the Chromebook.)

I have an idea about the three-year life span. I don't work at Google, and have no contacts there, so I'm speculating. I may be wrong.

It seems to me that Google selected the three-year life span to tailor Chromebooks not to schools but to large corporations. Large corporations (or maybe IT vendors), back in the 1990s, convinced Congress to adjust the depreciation schedules for IT equipment, reducing the expected life to three years. This change had two effects. First, the accelerated schedule lets corporations write off the expense of IT equipment faster. Second, IT vendors convinced large corporations to replace their IT equipment every three years. (The argument was that it was cheaper to replace old PCs rather than maintain them.)

With corporations replacing PCs every three years, it made sense for Google to build their Chromebooks to fit that schedule. While PCs did not have built-in expiration dates, corporations were happy to replace their PCs on that three-year schedule.

A three-year expiration gave Google several advantages. They could design the Chromebooks with less expensive components. They could revise the ChromeOS operating system rapidly and not worry about backwards compatibility. Google could sell the idea of planned obsolescence to the makers of Chromebooks (HP, Dell, Lenovo, etc.) as a market that would provide a steady demand.

Again, this is all speculation. I don't know that Google planned any of this.

But it is consistent.

Schools are upset that Chromebooks have such a short supported life. Google made Chromebooks with those short support life spans because the target was corporations. Corporations replaced IT equipment every three years because of tax laws and the perceived costs of maintaining older hardware.

If we take away anything from this, perhaps we should note that Google was focused on the corporate market. Other users, such as schools or non-profits or individuals, were possibly not considered in their calculations.