Tuesday, May 5, 2026

AI is going to break a lot of companies

Some years ago (never mind exactly) a mentor told me that the difference between a scientist and a businessman was that the scientist wanted to do a thousand things once and a businessman wanted to do one thing a thousand times. That logic still applies today, but AI may present a problem.

Before I talk about AI, let's think about businesses.

Most businesses do like to perform a single thing (such as selling a hamburger) many times. That's a bit of an exaggeration, as hamburger shops like to sell hamburgers with cheese (or without), with fries (or with onion rings), with soda, and sometimes chicken sandwiches instead of hamburgers. But I think you get the basic idea: businesses like to sell a limited set of products or services in a limited set of configurations and make a profit on each transaction.

That idea extends beyond hamburger shops. Auto dealers sell cars, publishers sell books or newspapers, movie studios sell movies, ... the list goes on.

All of these businesses like consistency and stability. While the news changes from day to day and movies have different plots and special effects, the basic idea of selling hamburgers remains the same from day to day, as does the basic idea of selling movie tickets.

Technology has always been a motivator of change. The IBM PC changed the use of computers in businesses. Prior to the IBM PC, most businesses used computers as large, centralized processors of mostly accounting data, with a few exceptions for dedicated word processors or experiments with a TRS-80 or Apple II. After the IBM PC, desktop computers were adopted throughout the business world.

But the rate of technological change was manageable. IBM (and later Microsoft) limited the changes to hardware and software, and kept old things working with new things. When IBM introduced the PC AT, it required a new version of PC DOS, yet it ran almost all programs from the original IBM PC. New versions of Microsoft Windows run almost all programs from the previous versions. The first versions of Windows ran 16-bit DOS programs, too.

Occasionally, a company introduced a product or service that was radically new. Apple brought out the Macintosh computer (and before it the Lisa, an earlier product with similar features), which did not run software from the Apple II. Microsoft released Windows NT, which had a completely different architecture and notably did not run many older programs.

These major shifts occurred infrequently, and we always had the opportunity to stay with the older systems for some time. That transition period allowed for gradual upgrades to systems. Most importantly, it allowed businesses to plan upgrades and to operate in a period of known technology. That period provided stability.

Businesses like stability because they can predict the short-term future, and make plans for investments, expansions, new products, and advertising. They can also adjust their supply chains and automate internal processes.

So how does AI affect stability of technology?

In short, AI causes changes in almost every aspect of business, and AI itself is changing rapidly.

When PCs became available, they first replaced typewriters and dedicated word processing systems. The spreadsheet made it easy to analyze data. Later uses included databases and e-mail. Their uses were limited and specific. Their adoption was measured and gradual.

AI, in contrast, can be used in almost any part of the business, from drafting e-mails to analyzing current business conditions to suggesting changes for supply chain management. That means that many areas of the company can change to use AI. That doesn't provide a measured and gradual change.

AI is inconsistent. A request today gets a certain response, a request tomorrow gets a different response (sometimes slightly different, sometimes significantly different). That doesn't provide stability.
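A toy sketch of why, in Python (illustrative only; the token names and scores are made up, and this is not any vendor's actual implementation): text generation typically samples from a probability distribution over possible next tokens, so identical requests can yield different responses.

```python
import math
import random

def softmax(logits, temperature):
    """Convert raw scores to a probability distribution; temperature controls spread."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature):
    """Draw one token according to the (temperature-adjusted) distribution."""
    probs = softmax(logits, temperature)
    return random.choices(tokens, weights=probs, k=1)[0]

tokens = ["refactor", "rewrite", "patch"]
logits = [2.0, 1.5, 1.0]  # hypothetical model scores for one and the same prompt

# At an ordinary sampling temperature, repeated identical "requests"
# produce different answers.
answers = {sample_token(tokens, logits, temperature=1.0) for _ in range(1000)}

# At a very low temperature, the top-scoring token dominates and
# responses become (nearly) deterministic.
answers_cold = {sample_token(tokens, logits, temperature=0.01) for _ in range(1000)}

print(answers)       # multiple distinct answers
print(answers_cold)  # essentially always the top-scoring token
```

The point is not the arithmetic but the design choice: sampling is deliberate (it makes output varied and "creative"), and it is exactly the property that makes responses unstable from one request to the next.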

The AI engines change frequently. By contrast, the original IBM PC was first delivered in 1981. The PC XT (almost identical, except that it included a hard disk) followed in 1983. The PC AT (with the 80286 processor and a larger hard disk) arrived in 1984. Compaq delivered the 80386-based Deskpro 386 in 1986. All of those did basically the same thing.

Today the AI providers (OpenAI, Microsoft, Google, xAI) update their offerings frequently. Those updates are not trivial; the changes are significant. In some ways, it is like the PC market before IBM announced its PC: multiple vendors, multiple standards, and limited compatibility and consistency.

The frequent changes to AI engines are at odds with companies' desire for a stable, consistent set of technology on which to run their business. I don't see the pace slowing, which means that companies will have to deal with changes (and problems) in their AI purchases. (I also don't see a company that can dominate the market, as IBM dominated the PC market in 1981.)

The changes caused by AI and the ongoing changes in AI products will strain businesses. They will have to adapt to an environment of constant change.

Most won't like it.

Some won't survive it.

Wednesday, April 22, 2026

AI and Enshittification

"Enshittification" is a term that describes a specific corporate economic activity. In short, products and services start out great for consumers, and over time become more expensive and offer lower quality.

My take: "enshittification" is a company adjusting prices and quality to maximize profits. Companies don't do it to irritate users. They don't do it to annoy employees (enshittification happens to employees too; witness the conversion of employer-paid pensions to employee-funded 401(k) plans). Companies "enshittify" products and services to increase profits.

So let's look at AI.

If "enshittification" is simply "maximizing profits", then what will happen to AI services?

The "free" AI services (such as Google's search engine and Microsoft's Bing) will see more advertisements and more sponsored results.

The paid-for services will see... price increases.

I expect that AI services will follow the path set by video streaming services. They will start with simple and inexpensive plans, and then change to include multiple tiers with different capabilities at each (differently priced) tier. Just as video streaming has ad-supported and ad-free tiers, AI will develop tiers, although perhaps not split into ad-supported and ad-free.

And, just like video streaming services, AI services will increase their fees (for all tiers, although perhaps not all at once) over time. And increase them faster than inflation, just like video streaming services.
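As a quick worked example of that compounding, with hypothetical numbers (a $10/month plan, 12% annual service increases versus 3% general inflation):

```python
def compound(price, annual_rate, years):
    """Price after `years` of a fixed annual percentage increase."""
    return price * (1 + annual_rate) ** years

start = 10.00  # hypothetical monthly subscription, in dollars

after_5y_service = compound(start, 0.12, 5)    # 12% yearly price increases
after_5y_inflation = compound(start, 0.03, 5)  # 3% general inflation

print(f"service price after 5 years: ${after_5y_service:.2f}")    # $17.62
print(f"inflation-only after 5 years: ${after_5y_inflation:.2f}") # $11.59
```

A 12% annual increase takes the price up roughly 76% in five years, while 3% inflation adds only about 16%; the gap between the two widens every year the pattern continues.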

How high will prices go? If an AI engine can replace a human at certain jobs, then the price of that AI engine should rise toward the cost of the human it replaces. If employers are willing to pay a certain amount for a human to write code (for example), then those same employers should be willing to pay just as much for an AI bot to write code.

For the AI enthusiasts, these price increases are in a blind spot. I suspect that the enthusiasm for AI in the workplace is driven not by what AI can produce, but by the cost. I also suspect that many employers think that AI costs will remain the same, or roughly the same, over time. They are not expecting the higher-than-inflation cost increases that I am predicting here.

There are, of course, multiple AI services. One can argue that competition will act as a brake on price increases. That doesn't hold for video streaming services, because streaming services are not interchangeable. Each service has its own proprietary content, and that "locks in" customers.

The ability to lock in customers is not immediately obvious for AI services. Today's services are different but also very similar. AI companies may be working on ways to lock in customers, probably by building custom models (or custom weights) for each customer. Or possibly by offering custom APIs for specific capabilities. I'm not sure of the form, but I'm confident that some will be attempted.

In the long term, I think that those who adopt AI will find that it is not as cheap as they thought, and that it will be expensive to move to another (also not so cheap) AI service, and especially difficult to switch back to humans.

Tuesday, March 3, 2026

AI and the mortgage debt crisis of 2008

In 2008, investment banks saw tremendous losses caused by defaults on mortgages. It wasn't just mortgages: investment companies had bundled and repackaged mortgage loans into securities and sold those securities to other investors. Demand for these mortgage-backed securities was high (they paid good interest), and that demand spurred demand for mortgages, which spurred banks to offer (and originate) mortgages to a large number of people, including many to whom they would not normally lend. The problem came when interest rates rose, causing mortgage payments to increase (many were adjustable-rate mortgages), and many mortgage holders could not afford the higher payments. They defaulted on the loans, which triggered failures through the entire chain of investments.

The end products, the mortgage-backed securities, were supposedly top quality. The mortgages upon which they were based were not; the investment bankers had convinced themselves that a combination of mixed-grade mortgages could support a top-grade investment product.
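A toy simulation (with entirely made-up numbers) shows how that belief could look right for years: a senior slice of a pooled set of loans rarely takes losses while defaults are independent, but fails once a shared shock makes defaults correlated, as rising interest rates did.

```python
import random

random.seed(7)  # fixed seed so the illustration is repeatable

def pool_loss(n_loans=100, p_default=0.10, correlated=False):
    """Fraction of a loan pool that defaults in one simulated scenario."""
    if correlated:
        # A shared shock (e.g., rising rates) sometimes pushes many
        # loans toward default at the same time.
        p = 0.50 if random.random() < 0.30 else 0.02
    else:
        p = p_default
    defaults = sum(1 for _ in range(n_loans) if random.random() < p)
    return defaults / n_loans

def senior_tranche_hit(loss_fraction, cushion=0.25):
    """The 'top-grade' slice loses money only after the cushion is exhausted."""
    return loss_fraction > cushion

scenarios = 10_000
independent_hits = sum(senior_tranche_hit(pool_loss()) for _ in range(scenarios))
correlated_hits = sum(senior_tranche_hit(pool_loss(correlated=True))
                      for _ in range(scenarios))

print(f"senior tranche impaired: {independent_hits}/{scenarios} independent, "
      f"{correlated_hits}/{scenarios} correlated")
```

With independent 10% defaults, the 25% cushion almost never runs out, so the senior slice really does behave like a top-grade product. Add a correlated shock and the same structure gets impaired in a large share of scenarios. The averaging worked, until it didn't.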

It was a system that worked, until it didn't.

What does this have to do with AI? Keep in mind the notion of building top-grade products from a composite of mixed-grade products.

AI -- at least AI for programming -- works by training a model on a large dataset of programs and then using that model to generate requested programs. The results are, in a sense, averages of selected items in the provided data (the "training data").

The quality of the output depends on the quality of the input. If I train an AI model on a large set of incorrect programs, the results will match those flawed programs. By training on large sets of programs, AI providers are betting on the "knowledge of the masses"; they assume that a very large collection of programs will be mostly correct. Scanning open source repositories is a common way to build such datasets. Companies with large datasets of their own (such as Microsoft) can use those private datasets for training an AI model.
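To make the "knowledge of the masses" bet concrete, here is a toy simulation in Python (the 80% correctness rate and the snippet labels are invented for illustration): with many examples, the most common version of the code is almost certainly the correct one; with only a handful of examples, the majority can easily be wrong.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the illustration is repeatable

def sample_snippets(n, p_correct=0.8):
    """Simulate n training examples, each correct with probability p_correct."""
    return ["correct" if random.random() < p_correct else "buggy"
            for _ in range(n)]

def consensus(snippets):
    """The 'average' answer: the most common snippet in the data."""
    return Counter(snippets).most_common(1)[0][0]

# Plenty of examples ("hello, world"-style code): consensus is reliable.
print(consensus(sample_snippets(10_001)))

# Very few examples (think oil-rig control software): the majority
# is wrong roughly one time in ten, even though each individual
# example is still 80% likely to be correct.
trials = [consensus(sample_snippets(3)) for _ in range(1000)]
wrong = sum(1 for t in trials if t == "buggy")
print(f"wrong consensus from 3 examples: {wrong}/1000 trials")
```

The mechanism is the same bet the AI providers are making: averaging over a large, mostly-correct corpus converges on correct code, but the convergence guarantee evaporates when the corpus for a given kind of program is thin.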

I think that averaging to correctness works for most requests, but not necessarily for all requests.

I expect that simpler code is more available in code repositories, and complex and domain-specific code is less common. We can see lots and lots of "hello, world" programs, in almost any programming language. We can see lots of simple classes for a customer address (again, in almost any programming language).

We don't see lots of code for obscure applications, or very large applications. There are few publicly available applications to run oil rigs, for example. Or large, multinational accounting systems. Or perhaps even control software for a consumer-grade microwave oven.

There may be a few large, complex programs available in AI training data. But a few (or one) does not draw on "the knowledge of the masses". It is not averaging a large set of mostly-right code into a correct set of code.

Here we can see the parallel between AI for coding and the mortgage securities industry. The latter built (what it thought were) top-grade investment products from mixed-grade mortgages. The former is building (what it and its users think is) quality code from mixed-grade existing code.

But I won't be surprised to learn that AI coding models work for small, simple code and fail for large, complex code.

In other words, AI coding works -- until it doesn't.

Friday, February 6, 2026

Microsoft doesn't know how customers want to use AI

Microsoft has pushed its "Copilot" AI in a lot of places. It's in Windows. It's in Office (excuse me, "Microsoft 365") applications. It's in Visual Studio Code, Visual Studio, and GitHub. If Microsoft has a property, Microsoft has injected Copilot into it.

Little of this (if any) has gone over well with customers. Combined with the injection of advertising, the push of AI has created so much dissatisfaction that customers are leaving Windows for Mac or (gasp) Linux.

A lot has been written (or recorded and posted on YouTube) about this. I won't rehash the arguments here.

What I will ask is this: Why is Microsoft doing this? Why is Microsoft putting Copilot into its products and services willy-nilly, much as it did with the ".NET" label for product names?

I have an idea:

Microsoft doesn't know how customers will use AI, or what they want to do with it.

This is a change for Microsoft. For much of its life, Microsoft has played "catch-up" with technology. After its lead with BASIC, and its fortunate contract with IBM for PC-DOS, Microsoft has been following others. It followed Apple's Macintosh computers with Windows. It followed a number of database providers with SQL Server. It followed Netscape with Internet Explorer. It followed Java with C#. It followed the iPod with the Zune (look it up). It followed Amazon AWS with Azure.

Now Microsoft is following other AI providers with its Copilot. But those other AI providers are different from Apple and Netscape and Sun Microsystems (the makers of Java). Those companies all knew what their customers wanted, and they provided solutions that met those wants.

Today's providers of AI don't know what their customers want. They don't know how to make a profit from AI. But they are popular, and Microsoft is following them, which means that Microsoft doesn't know what its customers want from AI, and Microsoft doesn't know how to make a profit from AI.

I find all of this rather unsettling.


Thursday, January 22, 2026

A flood of used GPUs

It seems to me that we will soon be inundated with a large number of used GPUs. I'm not sure what we are going to do with them, but I suspect that some creative people will devise uses for them.

My idea starts with the data centers used for AI. These are large facilities with lots (thousands, probably tens of thousands) of servers, each with one or more GPUs. Some are being built as I write this, some have just recently been "turned on", and some are getting old (in terms of technology).

GPUs don't last forever. They suffer from two forms of obsolescence. The first is wear. While GPU chips last quite a long time, other parts of the GPU wear more quickly. Fan motors, capacitors, and other discrete electronic components degrade after significant use.

The second form is capacity, or more specifically the availability of newer, faster, more efficient GPUs. We've seen this in the PC gaming market, with new GPUs announced every year. I myself have benefitted from this phenomenon. A while back, when I was living in an apartment in Oregon, someone "donated" an old PC to the recycling area. The PC was mostly complete, with case, power supply, motherboard, memory, and -- interestingly -- a GPU. (The former owner had removed all disk drives, but left everything else.)

GPUs are not cheap, but the former owner thought that the GPU in this PC had little value. I estimate that the former owner had used this PC for about five years.

So let's take that five-year figure and apply it to data centers, specifically data centers for AI, because those use lots of GPUs.

What happens after a data center has been online for five years? Technology advances, and there will be newer, faster, more efficient GPUs on the market. The owners of the data center will look at those new GPUs with envy. (Especially the "more efficient" aspect of the new GPUs.)

I predict that some data centers will see their older GPUs replaced. (Perhaps the entire server, not just the GPU.) Which means that the big tech owners of data centers will have a large pile of used GPUs (or servers) sitting on the side.

What to do with those old GPUs? One could recycle them, and I suspect that many will be, but that costs money. They could be buried in a landfill, but that costs money too.

Which leaves another option: sell them.

Selling used GPUs is tricky. You cannot label them as new (not legally). But I suspect that there will be a market for used, recent-model GPUs. We might see a large number of them on the market, which means the price will be relatively low.

Buying used GPUs is also tricky. Used GPUs may fail quickly, and there is usually no warranty.

If you have been pondering a project that uses a GPU (or a number of GPUs), this may be your opportunity to start it.