Sunday, December 15, 2024

All the Windows 10 PCs

Microsoft's Windows 11 is not compatible with many of the existing PCs in the world. Microsoft gave no reasons for the incompatibility, but we can deduce one: by requiring a certain level of hardware (mostly processor and memory), Microsoft was able to implement certain security features.

Regardless of the reason, a lot of PCs could not move to Windows 11, and therefore stayed on Windows 10. Soon, support for Windows 10 will stop, and those PCs will not get updates -- not even security updates. (Microsoft does offer extended support for a small fee.)

What's going to happen to all of those Windows 10 PCs? Microsoft recommends that you upgrade to Windows 11, and if that is not possible, replace (or recycle) your PC. Here's what I think will happen.

A large number of Windows 10 PCs (perhaps the majority) will stay on Windows 10. People will continue to use the PC, with Windows 10, to do their normal tasks. They won't get security updates, and they will be okay with that.

Some number of Windows 10 PCs will be replaced. I suspect that this number (as a percentage of Windows PCs) is small. The people who want Windows 11 already have it. A few people may be waiting for the "right time" to upgrade to Windows 11, so they will replace their PCs.

Some number of Windows 10 PCs will be converted to Linux. This may be a smaller percentage than either of the "stay on Windows 10" or "replace" crowds.

I should point out that many PCs that are replaced are then sold to people who resell them. Some will physically destroy the PC, but others simply reformat the disk (or replace the disk) and resell the PC, either with Windows or with Linux. Thus, a PC that is "replaced" can continue its life as a Linux PC.

All in all, the decision by Microsoft to make some PCs obsolete (sort of) will lead to an increase in the number of PCs running Linux.

For me, this decision is personal. I have an old-ish HP laptop which runs Windows 10. It won't run Windows 11 -- even with Microsoft loosening the requirements for Windows 11. I have a decision: keep Windows 10, or switch to Linux. (I like the laptop and want to keep using it.)

Keeping Windows 10 is easy, but offers little benefit. I use Windows for few tasks (most of my work is in Linux) and there are only two items that require Windows: remote access to the office, and administering some old Apple Time Capsules and AirPorts.

My other option is to convert it to Linux. Conversion is also easy -- I've installed Linux on a number of other PCs. Once converted, I may need to use WINE to run the Apple AirPort administration program. (Or I may simply replace the Apple Time Capsules and AirPorts with modern file servers and routers.) Access to the office isn't that important. The office supplies me with an official PC for access; my personal Windows PC is a back-up method for when the official PC fails. (Which it has not done for as long as I can remember.)

So I think I will take up Microsoft's suggestion to get off of Windows 10. But it won't be to go to Windows 11. I have another PC running Windows 11; I don't need two.

Wednesday, September 25, 2024

Back to the Office

Amazon.com is the latest in a series of companies to insist that employees return to the office (RTO).

Some claim that Amazon's motive (and, by extension, the motive of any company that requires employees to work in the office) is really to reduce its workforce. The idea is that employees would rather leave the company than work in the office, and enforcing office-based work is a convenient way to get employees to leave. (Here, "convenient" means "without layoffs".)

I suspect that Amazon (and other companies) are not using RTO as a means to reduce their workforce. It may avoid severance payments and the publicity of layoffs, but it holds other risks. One risk is that the "wrong" number of employees may terminate their employment, either too many or too few. Another risk is that the "wrong" employees may leave; high performers may pursue other opportunities and poor performers may stay. It also selects employees based on compliance (those who stay are the ones who will follow orders) while the independent and confident individuals leave. That last effect is subtle, but I suspect that Amazon's management is savvy enough to understand it.

But while employers are smart enough to not use RTO as a workforce reduction technique, they are still insisting upon it. I'm not sure that they are fully thinking through the reasons they use to justify RTO. Companies have pretty much uniformly claimed that an office-based workforce is more productive, despite studies which show the opposite. Even without studies, employees can often get a feel for productivity, and they can tell that RTO does not improve it. Therefore, by claiming that RTO increases productivity, management loses credibility.

That loss of credibility may be minimal now, but it will hang over management for some time. And in some time, there may be another crisis, similar to the COVID-19 pandemic, that forces companies to close offices. (That crisis may be another wave of COVID, or it may be a different virus such as Avian flu or M-pox, or it may be some other form of crisis. It may be worldwide, nationwide, or regional. But I fear that it is coming.)

Should another crisis occur, one that forces companies to close offices and ask employees to work from home, how will employees react? My guess is that some employees will reduce their productivity. The thinking is: If working in the office improves productivity (and our managers insist that it does), then working at home must reduce productivity (and therefore I will deliver what the company insists must happen).

Corporate managers may get their wish (high productivity by working in the office) although not the way that they want. By explaining the need for RTO in terms of productivity, they have set themselves up for a future loss of productivity when they need employees to work from home (or other locations).

Tuesday, September 17, 2024

Apple hardware is outpacing Apple software

Something interesting about the new iPhone 16: the software isn't ready. Specifically, the AI ("Apple Intelligence") enhancements promised by Apple are still only that: promises.

Apple can develop hardware faster than it can develop software. That's a problem.

Apple has had this problem for a while. The M1 Mac computers first showed this problem. Apple delivered the computers, with their integrated system-on-chip design and more efficient processing, but delivered no software to take advantage of that processing.

It may be that Apple cares little for software. They sell computers -- hardware -- and not software. And it appears that Apple has "outsourced" the development of applications: it relies on third parties to build applications for Macs and iPhones. Oh, Apple delivers some core applications, such as utilities to configure the device, the App Store to install apps, and low-powered applications such as Pages and Numbers. But there is little beyond that.

Apple's software development focuses on the operating system and features for the device: macOS and Pages for the Mac, iPadOS and Stage Manager for the iPad, iOS and FaceTime and Maps for the iPhone. Apple builds no database systems, gives lukewarm support and enhancements to the Xcode IDE, and offers few apps for the iPhone.

I suspect that Apple's ability to develop software has atrophied. Apple has concentrated its efforts on hardware (and done rather well) but has lost its way with software.

That explains the delay for Apple Intelligence on the iPhone. Apple spent a lot of time and effort on the project, and (I suspect) most of that was for the hardware. Updates to iOS for the new iPhone were (probably) fairly easy and routine. But the new stuff, the thing that needed a lot of work, was Apple Intelligence.

And it's late.

Thinking about the history of Apple's software, I cannot remember a similar big feature added by Apple. There is FaceTime, which seems impressive, but I think the iPhone app is rather simple; a lot of the work is in the back end and the scalability of that back end. Stage Manager was (is) also rather simple. Even features of the Apple Watch such as fall detection and SOS calls are not that complex. Operating systems were not that difficult: the original iOS was new, but iPadOS is a fork of it, and watchOS is a fork of it too (I think).

Apple Intelligence is a large effort, a greenfield effort (no existing code), and one that is very different from past efforts. Perhaps it is not surprising that Apple is having difficulties.

I expect that Apple Intelligence will be delivered later than expected, and will have more bugs and problems than most Apple software.

I also expect to see more defects and exploits in Apple's operating systems. Operating systems are not particularly complex (they are as complex as one makes them), but development and maintenance require discipline. One gets that discipline through constant development and constant monitoring of that development. It requires an appreciation of the importance of the software, and I'm not sure that Apple has that mindset.

If I'm right, we will see more and more problems with Apple software. (Slowly at first, and then all at once.) Recovery will require a change in Apple's management philosophy and probably the senior management team.

Sunday, September 8, 2024

Agile, Waterfall, and Risk

For some years (decades, really), software development has used an agile approach to project management. The Agile method uses short iterations that each focus on a single feature, with the entire team reviewing progress and selecting the feature for the next iteration. Over time, a complete system evolves. The advantage is that the entire team (programmers, managers, salespersons, etc.) learns about the business problem, the functions of the system, and the capabilities of the team. The team can change course (hence the name "agile") as it develops each feature.

Prior to Agile, for some years (decades, really), software development used the "waterfall" approach to project management. The Waterfall method starts with a set of requirements and a schedule, and moves through different phases for analysis, design, coding, testing, and deployment. The important aspect is the schedule. The Waterfall method promises to deliver a complete system on the specified date.

This last aspect of Waterfall is quite different from Agile. The Agile method makes no promise to deliver a completed system on a specific date. It does promise that each iteration ends with a working system that implements the features selected by the team. Thus, a system developed with Agile is always working -- although incomplete -- whereas a system developed with Waterfall is not guaranteed to work until the delivery date.

(It has been observed that while the Waterfall method promises a complete, working system on the specified delivery date, it is quite poor at keeping that promise. Many projects overrun both schedule and budget.)

Here is where risk comes into play.

With Agile, the risk is shared by the entire team; key among these are the developers and the managers. An agile project has no specified delivery date, but more often than not senior managers (those above the agile-involved managers) have a date in mind. (And probably a budget, too.) Agile projects can easily overrun these unstated expectations. When they do, the agile-involved managers are part of the group held responsible for the failure. Managers have some risk.

But look at the Waterfall project. When a waterfall project fails (that is, runs over schedule or budget) the managers have a way to distance themselves from the failure. They can say (honestly) that they provided the developers with a list of requirements and a schedule (and a budget) and that the developers failed to meet the "contract" of the waterfall project. Managers can deflect the risk to the development team.

(For some reason, we rarely question the feasibility of the schedule, or the consistency and completeness of the requirements, or the budget assigned to the project. These are considered "good", and any delay or shortcoming is therefore the fault of the developers.)

Managers want to avoid risk -- or at least transfer it to another group. Therefore, I predict that in the commercial space, projects will slowly revert from Agile methods to Waterfall methods.

Thursday, August 1, 2024

Google search is broken, and we all suffer

Google has a problem. That problem is web search.

Google, the long-time leader in web search, recently modified its techniques to use artificial intelligence (AI). The attempt at AI-driven search has led to embarrassing results. One person asked how to keep cheese on pizza, and Google suggested using glue. Another asked about cooking spaghetti, and Google recommended gasoline.

The problem was that Google pointed its AI search engine at the entire web, absorbing posts from various sources. Some of those posts contained text that was a joke or sarcastic. A human would be able to tell that the entries were not to be used in search results, but Google's algorithm isn't human.

Google has rediscovered the principle of "garbage into a computer system yields garbage output".

One might think that Google could simply "pull the plug" on the AI search and revert back to the older mechanisms it used in the past. But here too Google has a problem: the old search algorithms don't work (anymore).

Google started with a simple algorithm for search: count the links pointing to a page. This was a major leap forward in search; previous attempts were curated by hand. Over the years, web designers have "gamed" the Google web crawler to move their web pages up in the results, and Google has countered with changes to its algorithm. The battle continues; there are companies that help with "Search Engine Optimization" or "SEO". Those optimizing companies have gotten quite good at tweaking web sites to appear high in search results. But the battle is lost. Despite Google's size (and clever employees), the SEO companies have won, and the old-style Google search no longer shows meaningful results, mostly just advertisement links.
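As a toy illustration of that original "count the links" idea (this is not Google's actual ranking code, just the principle described above, with invented page names), link counting fits in a few lines, which is also why it is so easy to game: add enough links from pages you control and your page rises.

// A toy "count the inbound links" ranking, with an invented web graph.
#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // Each pair is a link from one (hypothetical) page to another.
    std::vector<std::pair<std::string, std::string>> links = {
        {"blog.example", "docs.example"},
        {"news.example", "docs.example"},
        {"blog.example", "shop.example"},
        {"docs.example", "shop.example"},
        {"spam1.example", "shop.example"},   // an SEO firm can add these at will...
        {"spam2.example", "shop.example"},   // ...and push its client up the list
    };

    // Count inbound links for each page.
    std::map<std::string, int> inbound;
    for (const auto& link : links)
        inbound[link.second]++;

    // Rank pages by inbound-link count, highest first.
    std::vector<std::pair<std::string, int>> ranked(inbound.begin(), inbound.end());
    std::sort(ranked.begin(), ranked.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    for (const auto& page : ranked)
        std::cout << page.first << ": " << page.second << " inbound links\n";
}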

SEO has changed Google search from a generic search engine into a sales lead tool. If you want to purchase something, Google is a great way to find a good price. But if you want something else, Google is much less useful than it used to be. It is no longer a tool for answers to general questions.

That means that search, for the internet, is broken.

It's not completely broken. In fact, "broken" is too strong of a word for the concept. Better choices might be "damaged" or "compromised", or even "inconsistent". Some searches work, and others don't.

Broken, damaged, or inconsistent, Google's search engine has suffered. Its reputation is reduced, and fewer people use it. That's a problem for Google, because the search results page is where Google displays advertisements, and advertisements are Google's major source of income.

A broken Google search is a problem for us all, in two ways.

First, with Google search broken, we (all) must now find alternative means of answering questions. AI might help for some -- although I don't recommend it for recipes -- and that can be a partial replacement. Other search engines (Bing, Yahoo) may work for now, but I expect that they will succumb to the same SEO forces that broke Google. With no single reliable source of information, we must now turn to multiple sources (Stack Exchange, Red Hat web pages, and maybe the local library), which means more work for us.

Second, the defeat of the Google whale by the SEO piranhas is another example of "this is why we cannot have nice things". It is the tragedy of the commons, with individuals acting selfishly and destroying a useful resource. Future generations will look back, possibly in envy, at the golden age of Google and a single source of reliable information.

Monday, July 22, 2024

CrowdStrike, Windows blue screens, and the future

A small defect in CrowdStrike, a Windows security application, has caused a widespread problem with thousands, perhaps millions, of PCs running Windows.

Quite a few folks have provided details about the problem, and how it happened, so I won't repeat them here.

Instead, I have some ideas about what will happen: what will happen at Microsoft, and what will happen at all of the companies that use CrowdStrike.

Microsoft long ago divided Windows into two spaces: one space for user programs and another space for system processes. The system space includes device drivers.

Applications in the user space can do some things, but not everything. They cannot, for example, interact directly with devices, nor can they access memory outside of their assigned range of addresses. If they do attempt to perform a restricted function, Windows stops the program -- before it causes harm to Windows or another application.

User-space applications cannot cause a blue screen of death.
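To make the distinction concrete, here is a minimal sketch (ordinary C++, not Windows-specific, and certainly not CrowdStrike's code) of a user-space program doing something it is not allowed to do:

// A user-space program misbehaving: it writes to an address it does not own.
// The operating system terminates this one process (an access violation on
// Windows, a segmentation fault on Linux) and nothing else is harmed.
#include <iostream>

int main()
{
    int* p = reinterpret_cast<int*>(0x10);  // an address outside this process's valid memory
    *p = 42;                                // the OS stops the program here
    std::cout << "never reached\n";
}

The same mistake inside a kernel-mode (system-space) component takes the whole machine down with it, and that is what a blue screen of death is.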

If an error in CrowdStrike caused a blue screen of death (BSOD), then CrowdStrike must run in the system space. This makes sense, as CrowdStrike must access a lot of things to identify attacks, things normal applications do not look at. CrowdStrike runs with elevated privileges.

I'm guessing that Microsoft, as we speak, is thinking up ways to restrict third-party applications that must run with elevated privileges such as CrowdStrike. Microsoft won't force CrowdStrike into the user space, but Microsoft also cannot allow CrowdStrike to live in the system space where it can damage Windows. We'll probably see an intermediate space, one with more privileges than user-space programs but not all the privileges of system-space applications. Or perhaps application spaces with tailored privileges, each specific to the target application.

The more interesting future is for companies that use Microsoft Windows and applications such as CrowdStrike.

These companies are -- I imagine -- rather disappointed with CrowdStrike. So disappointed that they may choose to sue. I expect that management at several companies are already talking with legal counsel.

A dispute with CrowdStrike will be handled as a contract dispute. But I'm guessing that CrowdStrike, like most tech companies, specified arbitration in its contracts, and limited damages to the cost of the software.

Regardless of contract terms, if CrowdStrike loses, they could be in severe financial hardship. But if they prevail, they could also face a difficult future. Some number of clients will move to other providers, which will reduce CrowdStrike's income.

Other companies will start looking seriously at the contracts from suppliers, and start making adjustments. They will want the ability to sue in court, and they will want damages if the software fails. When the maintenance period renews, clients will want a different set of terms, one that imposes risk upon CrowdStrike.

CrowdStrike will have a difficult decision: accept the new terms or face further loss of business.

This won't stop at CrowdStrike. Client companies will review terms of contracts with all of their suppliers. The "CrowdStrike event" will ripple across the industry. Even companies like Adobe will see pushback to their current contract terms.

Supplier companies that agree to changes in contract terms will have to improve their testing and deployment procedures. Expect to see a wave of interest in process management, testing, verification, static code analysis, and code execution coverage. And, of course, consulting companies and tools to help in those efforts.

Client companies may also review the licenses for open source operating systems and applications. They may also attempt to push risk onto the open source projects. This will probably fail; open source projects make their software available at no cost, so users have little leverage. A company can choose to replace Python with C#, for example, but the threat of "we will stop using your software and pay you nothing instead of using your software and paying you nothing" has little weight.

Therefore this shift in contract terms will occur only in the commercial space, not in open source, at least not at first. That may change in the future, as the new terms in the commercial space become the norm.

Thursday, June 6, 2024

What to do with an NPU

Microsoft announced "Copilot PC", a new standard for hardware. It includes a powerful Neural Processing Unit (NPU) along with the traditional (yet also powerful) CPU and GPU. The purpose of this NPU is to support Microsoft's Copilot+, an application that uses "multiple state-of-the-art AI models ... to unlock a new set of experiences that you can run locally". It's clear that Microsoft will add generative AI to Windows and Windows applications. (It's not so clear that customers want generative AI or "a new set of experiences" on their PCs, but that is a different question.)

Let's put Windows to the side. What about Linux?

Linux is, if I may use the term, a parasite. It runs on hardware designed for other operating systems (such as Windows, macOS, or even Z/OS). I fully expect that it will run on these new "Copilot+ PCs", and when running, it will have access to the NPU. The question is: will Linux use that NPU for anything?

I suppose that before we attempt an answer, we should review the purpose of an NPU. A Neural Processing Unit is designed to perform calculations for a neural network. A neural network is a collection of nodes with connections between nodes. It has nothing to do with LANs or WANs or telecommunication networks.

The calculations of a neural network can be performed on a traditional CPU, but they are a poor match for the typical CPU. The calculations are a better match for a GPU, which is why so many people ran neural networks on them -- GPUs performed better than CPUs.

NPUs are better at the calculations than GPUs (and much better than CPUs), so if we have a neural network, its calculations would run fastest on an NPU. Neural Processing Units perform a specialized set of computations.
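Those computations are, for the most part, long runs of multiply-and-accumulate operations over large arrays of numbers. A minimal sketch of one dense layer of a toy neural network (plain C++, with made-up weights) shows the kind of loop an NPU is built to run in parallel:

// One dense layer: a matrix-vector multiply followed by a simple activation.
// An NPU (or a GPU) runs these multiply-accumulate loops in parallel;
// a general-purpose CPU grinds through them one element at a time.
#include <cstddef>
#include <iostream>
#include <vector>

std::vector<float> dense_layer(const std::vector<std::vector<float>>& weights,
                               const std::vector<float>& inputs)
{
    std::vector<float> outputs;
    for (const auto& row : weights) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += row[i] * inputs[i];                 // multiply-accumulate
        outputs.push_back(sum > 0.0f ? sum : 0.0f);    // ReLU activation
    }
    return outputs;
}

int main()
{
    // Made-up weights and inputs, just to exercise the loop.
    std::vector<std::vector<float>> weights = { {0.5f, -0.2f}, {0.1f, 0.8f} };
    std::vector<float> inputs = {1.0f, 2.0f};

    for (float v : dense_layer(weights, inputs))
        std::cout << v << '\n';
}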

One application that uses those computations is the AI that we hear about today. And it may be that Linux, when detecting an NPU, will route computations to it, and those computations will be for artificial intelligence.

But Linux doesn't have to use an NPU for generative AI, or other commercial applications of AI. A neural network is, at its essence, a pattern-matching mechanism, and while AI as we know it today is a pattern-matching application (and therefore well served by NPUs), it is not the only pattern-matching application. It is quite possible (and I think probable) that the open-source community will develop non-AI applications that take advantage of the NPU.

I suspect that this development will happen in Linux and in the open source community, and not in Windows or the commercial market. Those markets will focus on the AI that is being developed today. The open source community will drive the innovation of neural network applications.

We are early in the era of neural networks. So early that I think we have no good understanding of what they can do, what they cannot do, and which of those capabilities match our personal or business needs. We have yet to develop the "killer app" of AI, the equivalent of the spreadsheet. "VisiCalc" made it obvious that computers were useful; once we had seen it, we could justify the purchase of a PC. We have yet to find the "killer app" for AI.


Thursday, May 16, 2024

Apple's Pentium moment

In the 1990s, as the market shifted from the 80486 processor to the newer Pentium processor, Intel had a problem. On some Pentium processors, certain floating-point division operations returned slightly incorrect results. It was called the "FDIV bug". What made this a problem was that the error was detected only after a significant number of Pentium processors had been sold inside PCs.
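The widely circulated check for the FDIV bug fit in a few lines. Here is a sketch of that check (not Intel's diagnostic code, just the arithmetic that made the rounds at the time):

// The classic FDIV check. On a correct FPU the residue is (essentially) zero;
// the affected Pentiums returned a slightly wrong quotient, and the residue
// came out to roughly 256.
#include <iostream>

int main()
{
    double x = 4195835.0;
    double y = 3145727.0;
    double residue = x - (x / y) * y;
    std::cout << residue << '\n';
}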

Now that Apple is designing its own processors (not just the M-series for Mac computers but also the A-series for phones and tablets), Apple faces the risk of a similar problem.

It's possible that Apple will have a rather embarrassing problem with one of its processors. The question is: how will Apple handle it?

In my not-so-happy prediction, the problem will be more than an exploit that allows data to be extracted from the protected vault in the processor, or memory to be read across processes. It will be more severe. It will be a problem with the instruction set, much like Intel's FDIV problem.

If we assume that the situation will be roughly the same as the Intel problem, then we will see:

- A new processor (or a set of new processors) from Apple
- These processors will have been released; they will be in at least one product and perhaps more
- The problem will be rare, but repeatable. If one creates a specific sequence, one can see the problem

Apple may be able to correct the problem with an update. If it can, then Apple's course is easy: an apology and an update. Apple may take some minor damage to its reputation, which will fade over time.

Or maybe the problem cannot be fixed with an update. The error might be "hard-coded" into the chip. Apple now has a few options, all of them bad but some less bad than others.

It can fix the problem, build a new set of processors, and then assemble new products and offer free replacements. Replacing the defective units is expensive for Apple, in the short term. It probably creates the most customer loyalty, which can improve revenue and profits in the longer term.

Apple could build a new set of products and instead of offering free replacements, offer high trade-in values for the older units. Less expensive in the short term, but less loyalty moving forward.

I'm not saying that this will happen. I'm saying that it may happen. I have no connection with Apple (other than as a customer) and no insight into their design process and quality assurance procedures.

Intel, when faced with the FDIV bug, handled it poorly. Yet Intel survives today, so its response was not fatal. Let's see what Apple does.

Sunday, May 5, 2024

iPad gets an M4

There is a lot of speculation about Apple's forthcoming announcement. Much of it has to do with new models of the iPad and the use of a new M4 processor. Current models of iPad have M2 processors; Apple has not released an M3 iPad. People have tried to suss out the reason for Apple making such a jump.

Here's my guess: Apple is using the M4 processors because it has to. Or rather, using M3 processors in the new iPads has a cost that Apple doesn't want to pay.

I must say here that I am not employed by Apple, or connected with Apple, or with any of its suppliers. I have no contacts, no inside information. I'm looking at publicly available information and my experience with inventory management (which itself is quite limited).

My guess is based on the process of manufacturing processors. They are made in large batches, the larger the better ('better' as in 'lower unit costs').

Apple has a stock of M3 processors on hand, and possibly some outstanding orders for additional processors.

Apple also has projections for the sales of its various products, and therefore projections for the reduction of its inventory and the allocation of future orders. I'm pretty sure that Apple has gotten good at making these projections. It has projections for MacBooks, iMacs, iPhones, and iPads.

My guess is that Apple has enough M3 processors (on hand or in future orders) for the projected sales of MacBooks and iMacs, and that it does not have enough M3 processors for the sale of MacBooks, iMacs, and iPads.

Apple could increase its orders for M3 processors. But my second guess is that the minimum order quantity is much larger than the projected sales of iPads. (The iPad models have low sales numbers.) Therefore, ordering M3 processors for iPads means ordering a lot of M3 processors. Many more processors than are needed for iPad sales, and probably for the MacBook and iMac line. (The MacBooks and iMacs will switch to M4 processors soon, possibly in September.)

Apple doesn't want to over-order M3 processors and pay for processors that it will never use. Nor does it want to order a small batch, with the higher unit cost.

So instead, Apple puts M4 processors in iPads. The M4 production batches are just starting, and Apple can expect a number of future batches. Diverting a small number of M4 processors to the iPad is the least cost option here.
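A back-of-the-envelope sketch of that trade-off, with entirely made-up quantities and unit costs (Apple's real numbers are unknown to me):

// Invented figures; only the shape of the comparison matters. When the minimum
// order quantity far exceeds projected iPad sales, an extra M3 batch costs more
// than diverting M4 parts from batches that will be produced anyway.
#include <iostream>

int main()
{
    const long long projected_ipad_units = 2'000'000;   // hypothetical
    const long long m3_minimum_order     = 10'000'000;  // hypothetical batch size
    const long long m3_unit_cost         = 50;          // hypothetical dollars
    const long long m4_unit_cost         = 60;          // hypothetical dollars

    // Option 1: order a full M3 batch and use only a fraction of it.
    const long long extra_m3_batch = m3_minimum_order * m3_unit_cost;

    // Option 2: divert M4 parts from batches already planned for Macs.
    const long long diverted_m4s = projected_ipad_units * m4_unit_cost;

    std::cout << "Extra M3 batch: $" << extra_m3_batch << '\n';
    std::cout << "Diverted M4s:   $" << diverted_m4s << '\n';
}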

That's my idea of the reason for M4 processors in iPads. Not because Apple wants to use AI on the iPads, or make the iPad a platform suitable for development, or switch iPads to macOS. The decision is not driven by features, but by inventory costs.


Tuesday, April 23, 2024

Apple is ready for AI

I have been critical of Apple, and more specifically its designs with the M-series processors. My complaint is that the processors are too powerful, that even the simplest M1 processor is more than capable of handling tasks of an average user. (That is, someone who browses the web, reads and sends e-mail, and pays bills.)

The arrival of "AI" has changed my opinion. The engines that we call "artificial intelligence" require a great deal of processing, memory, and storage, which is just what the M-series processors have. Apple is ready to deploy AI on its next round of computers, powered by M4 processors. Those processors, merely speculative today, will most likely arrive in 2025 with companion hardware and software that includes AI-driven features.

Apple is well positioned for this. Their philosophy is to run everything locally. Applications run on the Mac, not in the cloud. Apps run on iPhones and iPads, not in the cloud. Apple can sell the benefits of AI combined with the benefits of privacy, as nothing travels across the internet.

This is different from the Windows world, which has seen applications and apps rely on resources in the cloud. Microsoft Office has been slowly morphing into a set of cloud-based applications. (There is a version one can install on a local PC, but I suspect that parts of that version use cloud-based resources.)

I'm not sure how Microsoft and other application vendors will respond. Will they shift back to local processing? (Such a move would require a significant increase in processing power on the PC.) Will they continue to move to the cloud? (That will probably require additional security, and marketing, to convince users that their data is safe.)

Microsoft's response may be driven by the marketing offered by Apple. If Apple stresses privacy, Microsoft will (probably) counter with security for cloud-based applications. If Apple stresses performance, Microsoft may counter with cloud-based data centers and distributed processing.

In any case, it will be interesting to see the strategies that both companies use.

Tuesday, April 2, 2024

WFH and the real estate crisis

Over the past decades, companies (that is, employers) have shifted responsibilities (and risk) to their employees.

Employer companies replaced company-funded (and company-managed) pension plans with employee-funded (and employee-managed) 401-k retirement plans.

Employer companies have shifted the cost of medical insurance to employees. The company-run (and company-funded) benefit plan is a thing of the past. Today, the hiring process includes a form for the employee to select insurance options and authorize payment via payroll deduction.

Some companies have shifted other forms of risk to employees. Restaurants and fast-food companies, subject to large swings in demand during the day, have shifted staffing risk to employees. Instead of a steady forty-hour workweek with eight-hour shifts, employers now schedule employees with "just in time" methods, informing employees of their schedule one day in advance. Employees cannot plan for many activities, as they may (or may not) be scheduled to work in any future day.

In all of these changes, the employer shifted the risk to the employees.

Now we come to another form of risk: real estate. It may seem strange that real estate could be a risk. And by itself it isn't; the risk is in the loans that companies take out to buy real estate.

Many companies cannot afford the loans for their buildings. Here's why: A sizable number of companies have allowed employees to work from home (or locations of their own choosing), and away from the office. As a result, those companies need less office space than they needed in the past. So they rent less space, and the companies that own the buildings collect less rent.

It's not the tenant companies that have the risk of real estate loans -- it's the landlord companies. They made the loans and purchased the buildings.

But risk is risk, and it won't take long for landlord companies to shift this risk away from themselves. But this shift won't be easy, and it won't be like the previous shifts.

A building involves two (or perhaps more) companies: one that owns the building (the landlord company), and a second (the tenant company) that leases space within it. (The owner could be a group of companies in a joint effort. And a large building could have more than one tenant.)

But notice that this risk has two levels of corporations: the landlord company and the tenant company. The landlord company has employees, but they are not the employees who work in the building. Shifting the risk to them makes no sense.

The employees who work in the building are employees of the tenant company, and they have no direct connection to the landlord company. The landlord company cannot shift the risk to them, either.

Thus, the shift of risk (if a shift does occur) must move between the two companies. For the risk of real estate, the landlord company must shift the risk to its tenant companies.

That shift is difficult. It occurs not between an employer and employee, but between two companies. Shifting risk from employer to employee is relatively easy, due to the imbalance of power between the two. Shifting risk between companies is difficult: the tenant company can hire lawyers and fight the change.

If the landlord company is successful, and does manage to shift the risk to its tenant company, then one might assume that the tenant company would want to shift the risk to its employees. That shift is also difficult, because there is little to change in the employment arrangement. Medical benefits and pension plans were easy to change, because employees were receiving something. With the risk of building ownership (or more specifically the risk of a lower value for the building) the employee is currently receiving... nothing. They have no share in the building, no part of the revenue, no interest in the transaction.

Savvy readers will have already thought of other ways of hedging the risk of real estate loans (or the risk of reduced demand for real estate). There are other ways; most involve some form of insurance. With them, the landlord company purchases a policy or some other instrument. The risk is shifted to a third company (the insurer) with payments.

I expect that the insurance option will be the one adopted by most companies. It works, it follows existing patterns of business, and it offers predictable payments to mitigate risk.

Sometimes you can shift risk to employees. Sometimes you can't.

Thursday, March 7, 2024

The doom of C++

The C++ programming language is successful. It has been popular for decades. It has been used in large, important systems. It has grown over the years, adding features that keep it competitive with other languages.

And it may be on its way out.

Other languages have challenged C++. Java was an early challenger, back in the 1990s. The latest challenger is Rust, and there is a good case for it. But I think the demise of C++ will not be caused by another language, but by the C++ language itself.

Or more specifically, changes to the language.

Consider a recent change to the C++ standard, to add variadic parameters to template functions.

I understand the need for variable -- excuse me, variadic -- parameters.

And I understand the desire by the C++ committee to minimize changes to the C++ syntax and grammar rules.

But the resulting code looks like this:

#include <cctype>
#include <concepts>
#include <tuple>

void dummy(auto&&...) {}           // accepts any number of arguments, of any types

template<std::same_as<char> ...C>  // a pack of type parameters, each constrained to char
void
expand(C...c)
{
  std::tuple<C...> tpl(c...);      // the pack expanded into tuple elements

  const char msg[] = { C(std::toupper(c))..., '\0' };  // and into array initializers
  dummy(msg, c...);                                    // and into call arguments
}

This code is not easy to read. In fact, I find it a candidate for the long-ago obfuscation contest.

This enhancement is not alone. Most changes to the C++ specification over the past two decades have made C++ harder to read. The result is that we have lost the readability of the original C, and of the early C++ syntax. This is a problem.

While it was possible to write obscure and difficult-to-read C code (and C++ code), it wasn't inevitable. It was, with care, possible to write code that was readable. Not merely readable by those who knew C or C++, but by almost anyone familiar with a programming language or the concepts of programming. (Although the concept of pointers was necessary.)

The changes to the C++ standard have resulted in code that is, in a word, ugly. This ugliness now is inevitable -- one cannot avoid it.

Programmers dislike two types of programming languages: wordy (think COBOL) and ugly (what C++ is becoming).

(Programmers also dislike programming languages that use the ':=' operator. That may fall under the category of 'ugly'.)

The demise of C++ won't be due to some new language, or government dictates to use memory-safe programming languages, or problems in applications.

Instead, C++ will be abandoned because it will be considered "uncool". As the C++ standards committee adds new features, it maintains compatibility at the cost of increasing ugliness of code. It is a trade-off that has long-term costs. Those costs may result in the demise of C++.

Thursday, February 15, 2024

Ad blockers and the future of the internet

YouTube has gotten a lot of notice recently. They have set procedures to ... discourage ... the use of ad blockers. Viewers with ad blockers are limited to three videos. The internet is up in arms, of course, because their favorite free video site is no longer free. (Viewers who subscribe to YouTube Premium and pay a monthly fee are not shown ads and are not limited to three videos.)

But I think YouTube may be showing us the future.

Some thoughts on advertisements:

There are two reasons to use ads: one is to present an image to the viewer, the other is to get the viewer to click (or tap) on the link and go to the sponsor's web site. (I will ignore the installation of malware.)

Displaying the image is desired. Image-only advertisements are similar to ads in newspapers and magazines (the old, paper format of newspapers and magazines).

Users clicking on ads is also desired, as it presents products or services or subscriptions to the viewer, with the ability to complete the sale.

Advertisements are useful to the web site because they provide revenue. The advertiser pays the hosting web site a small fee for each display of the ad (each "impression"), or each click-through, or each click-through that results in a sale. (Or some combination of those.)
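A minimal sketch of that arithmetic, with invented traffic numbers and rates (real contracts vary widely):

// Revenue for one day on one hypothetical site: a small fee per thousand
// impressions, a larger fee per click-through, and a commission on each sale.
#include <iostream>

int main()
{
    const long long impressions      = 100'000;  // invented
    const long long clicks           = 800;      // invented
    const long long sales            = 20;       // invented
    const double fee_per_thousand    = 2.00;     // paid per 1,000 impressions
    const double fee_per_click       = 0.40;     // paid per click-through
    const double commission_per_sale = 5.00;     // paid per completed sale

    const double revenue = (impressions / 1000.0) * fee_per_thousand
                         + clicks * fee_per_click
                         + sales * commission_per_sale;

    std::cout << "Ad revenue for the day: $" << revenue << '\n';
}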

I understand the use of advertisements. They provide revenue to the host web site, and the company running that web site needs to pay for the operation and upkeep of the site. Even a small, simple, static web site must be hosted on a server somewhere, and that server costs money to buy (or rent) and needs electricity and a network connection.

There are, of course, other ways to pay for a web site. One can limit content to subscribers who pay a monthly or annual fee. (The subscription model.) One can get a grant from a patron. One can ask viewers for contributions. (The public broadcast model.) One can run the web site as a hobby, paying for it oneself. Those methods are all valid but not always feasible.

A description of ad blockers: 

Ad blockers work in one of two configurations: as a plug-in to a browser, or as a process on a separate device that monitors network traffic and drops requests to web sites deemed to be "ad servers".

The browser plug-in is the most common. It sits in the browser and watches each outbound request. The plug-in has a list of "known ad servers" and requests to those servers are intercepted. The request is never made, and therefore the ad is not retrieved or displayed.
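Stripped of the browser plumbing, the core of such a plug-in is a simple lookup. A minimal sketch, with made-up host names:

// The heart of an ad blocker, reduced to its essentials: look up each outbound
// request's host in a list of known ad servers and drop the request on a match.
#include <iostream>
#include <set>
#include <string>

bool should_block(const std::set<std::string>& ad_servers, const std::string& host)
{
    return ad_servers.count(host) > 0;
}

int main()
{
    // Invented blocklist; real blockers ship lists with many thousands of entries.
    const std::set<std::string> ad_servers = {"ads.example.net", "tracker.example.com"};

    for (const std::string& host : {std::string("ads.example.net"),
                                    std::string("news.example.org")}) {
        std::cout << (should_block(ad_servers, host) ? "blocked: " : "allowed: ")
                  << host << '\n';
    }
}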

Browser-based ad blockers can be configured to allow some web pages to display ads. The user can establish an "allowed" list of web sites; these web sites are allowed to display ads and the plug-in will let requests from those web pages (or rather web pages loaded from those web sites) through to their destination servers.

The other form of ad blocker is a separate device, one that sits on the network as its own server. It does not live in a browser (useful for browsers that don't allow ad-blocking plug-ins) or in other devices such as tablets and phones. The ad block server, like a browser plug-in, monitors outbound requests and intercepts those going to known ad servers.

The advantage of the ad block server is that it blocks ads to all devices on your local network. (Note that when you use a device on a different network, like at a coffee shop, the ad block server does not intercept requests and you will see advertisements.) An ad block server can be configured to allow requests to certain destinations, which is not quite the same as allowing requests for pages loaded from certain web sites. The ad block server knows only the destination address, not the web page that made the request, or even the application that made it.

Both types of ad blockers can let some advertisements through, even unwanted advertisements. Ad blockers use a list of "known ad servers"; requests to destinations on the list are intercepted. A new ad server, at a new address, won't be blocked -- at least not until the list of ad servers is updated. The provider of the ad blocker usually is the one to make that update.

Ad blockers don't block ads from the host web site. A web page can request additional text and images from its origin web site (and they usually do). Those requests can bring text and images useful to the viewer, or they can bring text and images that are advertisements. The ad blocker cannot examine the additional text and images and decide which is useful and which is advertisement. Therefore, all requests to the origin site are allowed.

So ad blockers work, but they are not perfect.

Now we get to the interesting question: Are ad blockers ethical? Are they moral? Is it wrong to use an ad blocker?

Some thoughts:

One argument is that they are ethical. The web browser lives on my computer (so the argument goes) and I have the right to choose what is and what is not displayed on my computer.

The counter argument is that advertisers pay for the display of ads, and if ads are not displayed (in significant numbers), then the advertisers will stop paying and the web site will lose revenue and possibly go out of business.

(A counter-counter argument runs along the lines of: Display ads all you want; I don't look at them and I never click on them, so the web page is merely wasting its time and effort by displaying ads. By using an ad blocker, I am saving the web site the effort of sending the advertisement to me. The argument is tenuous at best.)

Where do we go from here?

I think YouTube is showing us the future. In its fight against ad blockers, YouTube acts in two ways. First, it limits the number of videos that any one person (using an ad blocker) may view. (Some have accused YouTube of deliberately slowing videos or reducing resolution, but these effects may have been accidental. Let's ignore them.)

YouTube also encourages viewers to subscribe to its premium level, which removes the ads. Some videos on YouTube have built-in advertisements much like the native ads of web pages. YouTube does not remove those, and ad blockers cannot detect and disable them. Overall, the number of ads is greatly reduced.

Web sites cost money to run, and that money must come from somewhere. If viewers block ads, then money will not come from ads, and the money must come from another source. Web sites without revenue from advertising must use a different revenue model.

Future historians may point to YouTube switching from advertisement revenue to subscription revenue and say "Here. Here is where the change from advertising to subscription revenue started. YouTube was the first."

Thursday, February 8, 2024

Spatial computing

There's been a lot of talk about Apple's new Vision Pro headset. I'm not sure that we in the general public (meaning those of us outside of Apple) know what to make of it. What can we do with it? How will it help us? I don't have answers to those questions.

I'm not sure that the folks at Apple have the answer to those questions, despite their impressive demo for the Vision Pro.

Apple uses the term "Spacial Computing" to separate the Vision Pro headset from other devices. This makes sense. A headset is different from a laptop or a smartphone. Not only is the hardware different, but the methods of interaction are different.

If we look at the history of computing, we can see that the different generations of computers used different methods of interaction.

Mainframe computers (IBM's 1401 and System/360) made computation commercially possible -- for large, well-funded organizations such as governments and national corporations. Interaction with mainframes was with punch cards and printouts. Programs read and processed data in batches, at scheduled times.

Minicomputers were a variation of mainframes. They made computing available to smaller (or at least less large) organizations. Yet they also changed computing. Interaction with minicomputers was through terminals, with either paper or screens similar to today's displays. People could interact with programs, instead of supplying all of the data up front.

Personal computers were a variation of minicomputers. They made computation possible for individuals (upper middle class individuals). Interaction with personal computers was not with terminals but with keyboards and built-in displays. Most displays had graphics. People could see charts and graphs, and later pictures.

(Even the IBM PC 5150 with the monochrome display adapter had character graphics. And that card was quickly replaced by the Hercules monochrome graphics adapter with full graphics.)

Laptop personal computers were a variation of personal computers. The software was usually the same as personal computers (but not always; some had custom software) and their displays were often smaller than those on "real" personal computers. Interaction was mostly the same (keyboard and display). They made computing portable. They also made networking common.

The connection between laptops and networking is muddled. Networking was available for desktop computers, usually as an option. And laptop computers existed before networks for desktop computers, but they were rarely used. The combination of laptop and network was powerful, and arrived with a more capable set of hardware for laptop computers. The trio of portable, network, and hardware forged a useful tool.

Smart phones were a combination of cell phone and laptop computer. (Or improved versions of personal digital assistants such as the Palm Pilot.) They made computing not merely portable but mobile -- an individual could do things while moving. They (specifically the iPhone with iTunes) eased the consumption of music and movies and games. Computing became entwined with entertainment. Interaction changed from keyboard and display to touchscreen and display, and sometimes voice.

Each generation of computing changed the nature of computing.

So where does that put Apple's Vision Pro?

I think that we can agree that the interaction with a headset will be different from the interaction with a smart phone. The screen is present but not touched. I expect that, initially, a set of gestures borrowed from smart phones will be used, and later new gestures will be invented for headsets. A smart phone with its small screen can display one app at a time.  The headset screen occupies a much larger section of our vision, so we can expect more things to happen at the same time.

I expect headsets to be used for entertainment and the consumption of information. Expanding today's video games to the larger screen of the Vision Pro seems a natural (although perhaps non-trivial) move. Beyond games, the analysis of data could use the "larger" screen to display more information in graphical form. Music is probably not a strong point for headsets, but music with video is likely to be popular.

Each generation of computing saw new applications, applications that made little sense for the previous generation. I think this will continue, and we will see new applications specific to the Vision Pro (or headsets in general). And just as the old generations of computing styles are still with us, they will continue to stay with us. The computing that they perform is useful -- and not always appropriate on later platforms. Some things work better on new systems, and some things work better on old systems.

I will be watching the innovations for headset computing. But just as I did not immediately run out and buy a laptop computer when they first came out, or a smart phone when they first came out, I won't be buying a Vision Pro -- at least not for a while.