Wednesday, April 22, 2026

AI and Enshittification

"Enshittification" is a term that describes a specific corporate economic activity. In short, products and services start out great for consumers, and over time become more expensive and offer lower quality.

My take: "enshittification" is a company adjusting prices and quality to maximize profits. They don't do it to irritate users. They don't do it to annoy employees (enshittification happens to employees too; witness the replacement of employer-paid pensions with employee-paid 401(k) plans). Companies "enshittify" products and services to increase profits.

So let's look at AI.

If "enshittification" is simply "maximizing profits", then what will happen to AI services?

The "free" AI services (such as Google's search engine and Microsoft's Bing) will see more advertisements and more sponsored results.

The paid-for services will see... price increases.

I expect that AI services will follow the path set by video streaming services. They will start with simple and inexpensive plans, and then change to include multiple tiers with different capabilities at each (differently priced) tier. Just as video streaming has ad-supported and ad-free tiers, AI will develop tiers, although perhaps not split into ad-supported and ad-free.

And, just like video streaming services, AI services will increase their fees (for all tiers, although perhaps not all at once) over time. And increase them faster than inflation, just like video streaming services.

How high will prices go? If an AI engine can replace a human at certain jobs, then the price of that AI engine should rise to match the cost of the human it replaces. If employers are willing to pay a certain amount for a human to provide coding (for example), then those same employers should be willing to pay just as much for an AI bot to provide that coding.
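To make this concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (the salary, the overhead factor, the fraction of work replaced) is a hypothetical assumption of mine, not vendor pricing; the point is only that the ceiling is set by the cost of the labor being replaced, not by the cost of running the service.

    # Back-of-the-envelope sketch with hypothetical figures, not vendor pricing.
    # The claim: an AI service's price ceiling approaches the loaded cost of the
    # human labor it replaces, regardless of what the service costs to run.

    developer_salary = 120_000   # assumed annual salary (USD)
    overhead_factor = 1.3        # assumed benefits, taxes, office space
    fraction_replaced = 0.5      # assume the AI covers half of the work

    human_cost = developer_salary * overhead_factor
    price_ceiling = human_cost * fraction_replaced

    print(f"Loaded human cost: ${human_cost:,.0f} per year")     # $156,000
    print(f"AI price ceiling:  ${price_ceiling:,.0f} per year")  # $78,000
    # Under these assumptions an employer would rationally pay up to about
    # $78,000 per year, far above today's subscription prices, which is
    # exactly the room the vendors have to raise fees.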

For the AI enthusiasts, these price increases are in a blind spot. I suspect that the enthusiasm for AI in the workplace is driven not by what AI can produce, but by the cost. I also suspect that many employers think that AI costs will remain the same, or roughly the same, over time. They are not expecting the higher-than-inflation cost increases that I am predicting here.

There are, of course, multiple AI services. One can argue that competition will act as a brake on price increases. That doesn't hold for video streaming services, because streaming services are not interchangeable. Each service has its own proprietary content, and that "locks in" customers.

The ability to lock in customers is not immediately obvious for AI services. Today's services are different but also very similar. AI companies may be working on ways to lock in customers, probably by building custom models (or custom weights) for each customer. Or possibly by offering custom APIs for specific capabilities. I'm not sure of the form, but I'm confident that some form of lock-in will be attempted.

In the long term, I think that those who adopt AI will find that it is not as cheap as they thought, and that it will be expensive to move to another (also not so cheap) AI service, and especially difficult to switch back to humans.

Tuesday, March 3, 2026

AI and the mortgage debt crisis of 2008

In 2008, investment banks saw tremendous losses caused by defaults on mortgages. It wasn't just mortgages; investment companies had bundled and repackaged mortgage loans into securities and sold those securities to other investors. Demand for these mortgage-backed securities was high (they paid good interest), and that demand spurred demand for mortgages, which spurred banks to offer (and originate) mortgages to a large number of people, including many to whom they would normally not give mortgage loans. The problem came when interest rates rose, causing mortgage payments to increase (many were adjustable-rate mortgages), and many mortgage holders could not afford the higher payments. They defaulted on the loans, which triggered failures through the entire chain of investments.

The end products, the mortgage-backed securities, were supposedly top quality. The mortgages upon which they were based were not; the investment bankers had convinced themselves that a combination of mixed-grade mortgages could support a top-grade investment product.

It was a system that worked, until it didn't.

What does this have to do with AI? Keep in mind the notion of building top-grade products from a composite of mixed-grade products.

AI -- at least AI for programming -- works by building a large dataset of programs and then using that dataset to generate requested programs. The results are, in a sense, averages of certain selected items in the provided data (the "training data").

The quality of the output depends on the quality of the input. If I train an AI model on a large set of incorrect programs, the results will match those flawed programs. By training on large sets of programs, AI providers are betting on the "knowledge of the masses"; they assume that a very large collection of programs will be mostly correct. Scanning open source repositories is a common way to build such datasets. Companies with large datasets of their own (such as Microsoft) can use those private datasets for training an AI model.

I think that averaging to correctness works for most requests, but not necessarily for all requests.

I expect that simpler code is more available in code repositories, and complex and domain-specific code is less common. We can see lots and lots of "hello, world" programs, in almost any programming language. We can see lots of simple classes for a customer address (again, in almost any programming language).

We don't see lots of code for obscure applications, or very large applications. There are few publicly available applications to run oil rigs, for example. Or large, multinational accounting systems. Or perhaps even control software for a consumer-grade microwave oven.

There may be a few large, complex programs in AI training data. But a few (or one) is not drawing on "the knowledge of the masses". It is not averaging a large set of mostly-right code into correct code.
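A toy simulation makes the point. Treat each training example as a "vote" that is correct with some probability, and the generated output as the majority vote. Real models do not literally vote, and the 60% figure below is an assumption for illustration only, but the averaging intuition is similar: with thousands of examples the errors wash out; with one or two, they don't.

    import random

    # Toy model of "knowledge of the masses": the output is a majority vote over
    # training examples, each independently correct with probability p.
    # Illustrative assumption only; real models do not literally vote.

    def majority_correct(num_examples, p_correct, trials=10_000):
        """Estimate how often a majority vote over num_examples is correct."""
        wins = 0
        for _ in range(trials):
            votes = sum(random.random() < p_correct for _ in range(num_examples))
            if votes > num_examples / 2:
                wins += 1
        return wins / trials

    random.seed(42)
    for n in (1, 5, 101, 1001):
        rate = majority_correct(n, 0.60)
        print(f"{n:5d} examples, each 60% correct -> majority right {rate:.0%} of the time")
    # Lots of "hello, world" programs: the average is reliable.
    # One or two oil-rig controllers: the average is whatever those few examples were.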

Here we can see the parallel between AI for coding and the mortgage securities industry. The latter built (what it thought were) top-grade investment products from mixed-grade mortgages. The former builds (what it and its users think is) quality code from mixed-grade existing code.

But I won't be surprised to learn that AI coding models work for small, simple code and fail for large, complex code.

In other words, AI coding works -- until it doesn't.

Friday, February 6, 2026

Microsoft doesn't know how customers want to use AI

Microsoft has pushed its "Copilot" AI in a lot of places. It's in Windows. It's in Office (excuse me, "Microsoft 365") applications. It's in Visual Studio Code, Visual Studio, and GitHub. If Microsoft has a property, Microsoft has injected Copilot into it.

Little of this (if any) has gone over well with customers. Combined with the injection of advertising, the push of AI has created so much dissatisfaction that customers are leaving Windows for Mac or (gasp) Linux.

A lot has been written (or recorded and posted on YouTube) about this. I won't rehash the arguments here.

What I will ask is this: Why is Microsoft doing this? Why is Microsoft putting Copilot into its products and services willy-nilly, much as it did with the ".NET" label for product names?

I have an idea:

Microsoft doesn't know how customers will use AI, or what they want to do with it.

This is a change for Microsoft. For much of its life, Microsoft has played "catch-up" with technology. After its lead with BASIC, and its fortunate contract with IBM for PC-DOS, Microsoft has been following others. It followed Apple's Macintosh computers with Windows. It followed a number of database providers with SQL Server. It followed Netscape with Internet Explorer. It followed Java with C#. It followed the iPod with the Zune (look it up). It followed Amazon AWS with Azure.

Now Microsoft is following other AI providers with its Copilot. But those other AI providers are different from Apple and Netscape and Sun Microsystems (the makers of Java). Those companies knew what their customers wanted, and they provided solutions that met those wants.

Today's providers of AI don't know what their customers want. They don't know how to make a profit from AI. But they are popular, and Microsoft is following them, which means that Microsoft doesn't know what its customers want from AI and doesn't know how to make a profit from AI.

I find all of this rather unsettling.

 

Thursday, January 22, 2026

A flood of used GPUs

It seems to me that we will soon be inundated with a large number of used GPUs. I'm not sure what we are going to do with them, but I suspect that some creative people will devise uses for them.

My idea starts with the data centers used for AI. These are large facilities with lots (thousands, probably tens of thousands) of servers, each with one or more GPUs. Some are being built as I write this, some have been just recently "turned on", and some are getting old (in terms of technology).

GPUs don't last forever. They suffer from two forms of obsolescence. The first is wear. While GPU chips last quite a long time, other parts of the GPU wear more quickly. Fan motors, capacitors, and other discrete electronic components degrade after significant use.

The second form is capacity, or more specifically the availability of a newer, faster, more efficient GPU. We've seen this in the PC gaming market, with new GPUs announced every year. I myself have benefitted from this phenomenon. A while back, when I was living in an apartment in Oregon, someone "donated" an old PC to the recycling area. The PC was mostly complete, with case, power supply, motherboard, memory, and -- interestingly -- a GPU. (The former owner had removed all disk drives, but left everything else.)

GPUs are not cheap, but the former owner thought that the GPU in this PC had little value. I estimate that the former owner had used this PC for about five years.

So let's take that five-year figure and apply it to data centers, specifically data centers for AI, because they use lots of GPUs.

What happens after a data center has been online for five years? Technology advances, and there will be newer, faster, more efficient GPUs on the market. The owners of the data center will look at those new GPUs with envy. (Especially the "more efficient" aspect of the new GPUs.)

I predict that some data centers will see their older GPUs replaced. (Perhaps the entire server, not just the GPU.) Which means that the big tech owners of data centers will have a large pile of used GPUs (or servers) sitting on the side.

What to do with those old GPUs? One could recycle them, and I suspect that many will be, but that costs money. They could be buried in a landfill, but that costs money too.

Which leaves another option: sell them.

Selling used GPUs is tricky. You cannot label them as new (not legally). But I suspect that there will be a market for used, recent-model GPUs. We might see a large number of them on the market, which means the price will be relatively low.

Buying used GPUs is also tricky. Used GPUs may fail quickly, and there is usually no warranty.
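If you do take the plunge, a quick health check before trusting a used card is worth the few minutes. The sketch below is one simple way to do it, assuming an NVIDIA card with the driver and the nvidia-smi utility installed; run it once idle and again under load, and watch the temperature, fan speed, and power draw for anything alarming.

    import subprocess

    # Minimal health readout for a used NVIDIA GPU (assumes the NVIDIA driver
    # and nvidia-smi are installed). Run idle and under load; compare readings.

    FIELDS = "name,temperature.gpu,fan.speed,power.draw,memory.total"

    def gpu_snapshot():
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(gpu_snapshot())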

If you have been pondering a project that uses a GPU (or a number of GPUs), this may be your opportunity to start it.

Sunday, December 14, 2025

Microsoft stumbles

Microsoft's attempt to sell AI has been going ... less than spectacularly. It seems that few people want to buy it.

This stumble by Microsoft is a good time to look back at how it has succeeded, and how it has failed, in the past. Microsoft has had a number of successes: BASIC, MS-DOS, Windows, Internet Explorer, Visual Basic, Visual Studio and C#, and Azure. It has also had failures: "Bob" the friendly desktop, "Clippy" the original "AI" assistant, Visual C++, the Zune music player, and Windows Phone (both hardware and software).

Much of Microsoft's success has been built not on technical innovation or product quality. Instead, it was built on marketing and legal agreements.

The first success: BASIC

Microsoft's BASIC was ROM-able; it could be packaged in a ROM and sold as part of a complete PC. That made it attractive to PC manufacturers. A few early PCs used their own versions of BASIC in ROM, but Microsoft's was the most capable. (In this case, Microsoft did have the best product.) Microsoft BASIC became the standard (literally, too; it was adopted by ANSI) and everyone wanted it. It was a success driven by the market but also by licensing agreements.

When a manufacturer didn't buy Microsoft's BASIC -- such as Apple -- Microsoft made a plug-in card complete with a Z-80 processor and a BASIC interpreter in ROM.

The second success: MS-DOS

Microsoft made a contract with IBM to sell it an operating system, and retained the right to sell that operating system to others. When the IBM PC was released, it immediately became popular, as did PC-DOS. (IBM also offered CP/M-86 and the UCSD p-System for the PC, but higher prices discouraged their adoption.)

The success of the IBM PC, and the success of other computers running MS-DOS (early ones not compatible with the PC, later ones compatible), gave Microsoft a revenue stream and a unique place in the market. Microsoft started setting standards for device drivers and for technology to access more than the PC's 1 MB memory range.

This success was due to the licensing agreement with IBM, and later licensing agreements with PC manufacturers. Microsoft negotiated a fee for each PC manufactured, regardless of its operating system. Thus, manufacturers had an incentive to include MS-DOS with the hardware.

The third success: Windows

Microsoft gained power with Windows. Microsoft Office enjoyed superior performance thanks to API calls not available to competitors. There was also the 'tar baby' effect, in which one Windows product (Outlook) required another Windows product (Exchange), or a number of products each required SQL Server.

The fourth success: Internet Explorer

Internet Explorer became popular, and it became the corporate standard. Many web sites advertised "best viewed in IE", and some web sites failed on other browsers.

But since then, Microsoft has had precious few successes. Its notable wins are Azure (capable, but still competing with AWS and Google's cloud services) and the Surface tablet (premium hardware that shows what is possible and keeps the Windows ecosystem alive). IE's success was relatively short-lived. Google's Chrome rose partly as a revolt against Microsoft. Now Chrome runs the web, IE is gone, and even Edge has Chromium inside.

Microsoft has not designed a successful product (one that became dominant) in the past two decades.

Now Microsoft is pushing AI, specifically "agentic AI". And by "pushing" I don't mean "hawking" but "stuffing down users' throats". Windows 11 is getting agentic AI functions whether you want them or not.

Microsoft's early successes gave them a lot of power in the market. With that power apparently came arrogance, not just a sense of "Microsoft knows best" but "you're going to take this new tech whether you want it or not". Which is just what Microsoft is doing with AI and Windows 11.

But now there are reports of people switching from Windows to Apple (or Linux) to avoid the coming AI. This indicates that Microsoft's market position is not as strong as it was, and that people, when pushed, will choose alternatives. Apple is a reasonable alternative, and even Linux and open source software are capable enough for many office and home functions.

If Microsoft wants to succeed, they must become humble and stop pushing tech onto people. They must shift their mindset from "we know best" to "we've got products that people want". Right now, they don't have products (at least with AI) that people want.

Saturday, December 6, 2025

The end of the PC empire

Micron Technology, a large manufacturer of memory DIMMs for PCs, recently announced that it was exiting that business and is redirecting its efforts to memory components for AI server farms.

I think the impact of this announcement is not fully understood.

This change by Micron Technology indicates a larger shift in the industry: away from PCs and towards AI. Away from consumer PCs (desktops and laptops), and away from office PCs as well. The PC, the king of the tech world for decades, has lost its crown.

The IBM PC, announced in 1981, legitimized the then-sputtering tech market for PCs. Before the IBM PC (and for some time after its introduction), PC makers such as Apple, Commodore, and Radio Shack all had to cobble their products together from components available from other systems. Instead of designing the display, the disk, the memory, etc., manufacturers had to survey the market for available components and then design a system with those components. Even the original IBM PC used a keyboard from IBM's System/23 desktop computer system.

But PCs were popular, and manufacturers couldn't ignore the market. They started designing components for PCs. When Microsoft introduced Windows and set hardware standards, the transition was complete: The PC was the center of attention. Standards were set (and followed). Supply chains were built to provide components that met those standards, with robust delivery schedules. One could easily buy components and build PCs.

Some thirty years later, component manufacturers are now looking at the market for AI servers, and they cannot ignore it. Which means that they will pay less attention to the PC market -- or ignore it completely, like Micron is doing.

(I'm rather skeptical of the AI boom, and doubtful that it is sustainable, but that is another question. Micron is placing its bets. I'm assuming that other companies will follow.)

What does this change mean for the PC market? At a minimum, manufacturers of PCs will find it harder to obtain components. Some components will become more expensive. Others will become impossible to find. PC manufacturers may have to submit custom orders for components, or find other sources. It is easy to predict that the price of PCs will rise.

But we may also see fewer PC models, and longer times between announcements of new models. We may see "limited run" announcements in which a new model is available for a limited period of time, or a single production run.

Apple will be somewhat immune to this effect, as they design most components for their PCs. Sourcing components (that is, getting someone else to build Apple-designed components in large quantities) should be possible because Apple's scale is such that one does not ignore it.

But other manufacturers (Dell, LG, HP, etc.) who have relied on the PC supply chain may find their business at risk.

For consumers, I think we will see fewer offerings: Fewer PC models, and fewer configuration options. (Which perhaps may not be such a bad thing. I am often overwhelmed by the number of possibilities when I look for a new PC.)


Tuesday, September 30, 2025

The game changes for H-1B workers

Donald Trump recently raised the application fee for H-1B visas from $2,000 to $100,000. That is a significant change, and it affects the calculations for using H-1B visas to obtain labor from other countries.

The purpose of the H-1B visa program is to admit foreigners with special skills into the US and to use those skills to assist US employers. Over the years, the actual purpose has changed to allow for low-cost workers with some skills (not necessarily rare or special) to work in the US. Employers have used the program as a means of reducing labor expenses.

With the increased fee, the calculations for an H-1B visa change. The fee increase more than wipes out the savings in labor expenses, and the result is that workers on H-1B visas now cost more than comparable native US employees.
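A rough calculation shows the shape of it. The figures below are hypothetical assumptions of mine (salaries vary widely, and how the fee is amortized depends on the visa term), but the conclusion survives reasonable changes to the numbers.

    # Rough annual-cost comparison: H-1B worker vs. comparable US hire.
    # All figures are hypothetical assumptions; salaries and amortization vary.

    h1b_salary = 95_000      # assumed discounted salary for an H-1B worker
    us_salary = 120_000      # assumed salary for a comparable US hire
    visa_fee = 100_000       # new application fee
    old_visa_fee = 2_000     # old application fee
    visa_term_years = 3      # amortize the fee over an assumed three-year term

    new_cost = h1b_salary + visa_fee / visa_term_years
    old_cost = h1b_salary + old_visa_fee / visa_term_years

    print(f"H-1B worker, old fee: ${old_cost:,.0f} per year")   # ~$95,667
    print(f"H-1B worker, new fee: ${new_cost:,.0f} per year")   # ~$128,333
    print(f"US hire:              ${us_salary:,.0f} per year")  # $120,000
    # Under the old fee the H-1B worker was clearly cheaper; under the new fee
    # the advantage disappears, and the H-1B worker costs more.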

Companies have three paths forward (assuming that they do not lobby for exemptions).

First, they can pay the fee and continue to use foreign labor in the US. If they were hiring workers with truly exceptional skills, this is still a reasonable (albeit more expensive) solution.

Second, they can hire native US workers instead of foreign workers. They have an incentive to avoid this option: It is tantamount to admitting that they used the H-1B visas for cheap labor. It is also a more expensive solution, but perhaps less expensive than continuing with foreign workers on H-1B visas.

The third option is to transfer the workers from the US back to their native countries, and continue to assign them work. This is the least expensive path, but it sets up many companies for morale problems.

Companies have, for the past year and more, asked -- or demanded -- that workers cease remote work and instead work in the office. The explanations from companies have been singular: they need the productivity that comes from in-person collaboration.

If a company sends its H-1B workers "back home" and has them work remotely, then the argument for in-person collaboration is severely weakened and morale among the remaining (in-house) workers will plummet. Managers might explain that contract workers (many H-1B visa holders are contract workers) are less important for collaboration, but this sets up a two-tiered mindset for workers and is probably dangerous in the long run.

If managers allow all workers to work remotely, then their argument for "return to office" falls apart and they look like fools. Managers will do almost anything to avoid looking foolish, so we can assume that remote work for all workers will not happen.

Each company will have to find its own way forward with this change to the H-1B visa program. The decision involves direct expenses for labor and visas, policies for working in the office or remotely, and employee morale.