Tuesday, April 23, 2024

Apple is ready for AI

I have been critical of Apple, and more specifically of its designs for the M-series processors. My complaint has been that the processors are too powerful: even the simplest M1 processor is more than capable of handling the tasks of an average user. (That is, someone who browses the web, reads and sends e-mail, and pays bills.)

The arrival of "AI" has changed my opinion. The engines that we call "artificial intelligence" require a great deal of processing, memory, and storage, which is just what the M-series processors have. Apple is ready to deploy AI on its next round of computers, powered by M4 processors. Those processors, merely speculative today, will most likely arrive in 2025 with companion hardware and software that includes AI-driven features.

Apple is well positioned for this. Its philosophy is to run everything locally. Applications run on the Mac, not in the cloud. Apps run on iPhones and iPads, not in the cloud. Apple can sell the benefits of AI combined with the benefits of privacy, as nothing travels across the internet.

This is different from the Windows world, which has seen applications and apps come to rely on resources in the cloud. Microsoft Office has been morphing, slowly, into a set of cloud-based applications. (There is a version one can install on a local PC, but I suspect that even that version uses some cloud-based resources.)

I'm not sure how Microsoft and other application vendors will respond. Will they shift back to local processing? (Such a move would require a significant increase in processing power on the PC.) Will they continue to move to the cloud? (That will probably require additional security, and marketing, to convince users that their data is safe.)

Microsoft's response may be driven by the marketing offered by Apple. If Apple stresses privacy, Microsoft will (probably) counter with security for cloud-based applications. If Apple stresses performance, Microsoft may counter with cloud-based data centers and distributed processing.

In any case, it will be interesting to see the strategies that both companies use.

Tuesday, April 2, 2024

WFH and the real estate crisis

Over the past decades, companies (that is, employers) have shifted responsibilities (and risk) to their employees.

Employer companies replaced company-funded (and company-managed) pension plans with employee-funded (and employee-managed) 401(k) retirement plans.

Employer companies have shifted the cost of medical insurance to employees. The company-run (and company-funded) benefit plan is a thing of the past. Today, the hiring process includes a form for the employee to select insurance options and authorize payment via payroll deduction.

Some companies have shifted other forms of risk to employees. Restaurants and fast-food companies, subject to large swings in demand during the day, have shifted staffing risk to employees. Instead of a steady forty-hour workweek with eight-hour shifts, employers now schedule employees with "just in time" methods, informing employees of their schedule one day in advance. Employees cannot plan for many activities, as they may (or may not) be scheduled to work on any future day.

In all of these changes, the employer shifted the risk to the employees.

Now we come to another form of risk: real estate. It may seem strange that real estate could be a risk. Strictly speaking, it isn't; the risk lies in the loans companies take out to buy real estate.

Many companies cannot afford the loans on their buildings. Here's why: a sizable number of companies have allowed employees to work from home (or from locations of their own choosing), away from the office. As a result, those companies need less office space than they needed in the past, so they rent less space -- and the building owners collect less rent.

It's not the tenant companies that carry the risk of real estate loans -- it's the landlord companies. They took out the loans and purchased the buildings.

But risk is risk, and it won't take long for landlord companies to shift this risk away from themselves. But this shift won't be easy, and it won't be like the previous shifts.

A building involves two (or perhaps more) companies: one that owns the building (the landlord company), and a second (the tenant company) that leases space within it. (The owner could be a group of companies in a joint effort. And a large building could have more than one tenant.)

But notice that this risk involves two levels of companies: the landlord company and the tenant company. The landlord company has employees, but they are not the employees who work in the building. Shifting the risk to them makes no sense.

The employees who work in the building are employees of the tenant company, and they have no direct connection to the landlord company. The landlord company cannot shift the risk to them, either.

Thus, the shift of risk (if a shift does occur) must move between the two companies. For the risk of real estate, the landlord company must shift the risk to its tenant companies.

That shift is difficult. It occurs not between an employer and employee, but between two companies. Shifting risk from employer to employee is relatively easy, due to the imbalance of power between the two. Shifting risk between companies is difficult: the tenant company can hire lawyers and fight the change.

If the landlord company is successful, and does manage to shift the risk to its tenant company, then one might assume that the tenant company would want to shift the risk to its employees. That shift is also difficult, because there is little to change in the employment arrangement. Medical benefits and pension plans were easy to change, because employees were receiving something. With the risk of building ownership (or, more specifically, the risk of a lower value for the building), the employee is currently receiving... nothing. They have no share in the building, no part of the revenue, no interest in the transaction.

Savvy readers will have already thought of other ways of hedging the risk of real estate loans (or the risk of reduced demand for real estate). There are other ways; most involve some form of insurance. With these, the landlord company purchases a policy or some other instrument, and the risk is shifted to a third company (the insurer) in exchange for payments.

I expect that the insurance option will be the one adopted by most companies. It works, it follows existing patterns of business, and it offers predictable payments to mitigate risk.

Sometimes you can shift risk to employees. Sometimes you can't.

Thursday, March 7, 2024

The doom of C++

The C++ programming language is successful. It has been popular for decades. It has been used in large, important systems. It has grown over the years, adding features that keep it competitive with other languages.

And it may be on its way out.

Other languages have challenged C++. Java was an early challenger, back in the 1990s. The latest challenger is Rust, and there is a good case for it. But I think the demise of C++ will not be caused by another language, but by the C++ language itself.

Or more specifically, changes to the language.

Consider the variadic parameters ("parameter packs") that C++11 added to template functions, and the concept constraints that more recent revisions of the standard layer on top of them.

I understand the need for variable -- excuse me, variadic -- parameters.

And I understand the desire by the C++ committee to minimize changes to the C++ syntax and grammar rules.

But the resulting code looks like this:

#include <cctype>
#include <concepts>
#include <tuple>

// Accepts any number of arguments, of any types, and ignores them all.
void dummy(auto&&...) {}

// Accepts any number of arguments, each constrained to be exactly 'char'.
template<std::same_as<char> ...C>
void expand(C... c)
{
  std::tuple<C...> tpl(c...);                           // collect the pack into a tuple

  const char msg[] = { C(std::toupper(c))..., '\0' };   // upper-case copy as a C string
  dummy(msg, c...);
}

// e.g. expand('d', 'o', 'o', 'm');

This code is not easy to read. In fact, I find it a candidate for the long-ago obfuscation contest.

This enhancement is not alone. Most changes to the C++ specification over the past two decades have made C++ harder to read. The result is that we have lost the readability of the original C and of early C++ syntax. This is a problem.

While it was possible to write obscure and difficult-to-read C code (and C++ code), it wasn't inevitable. It was, with care, possible to write code that was readable. Not merely readable by those who knew C or C++, but by almost anyone familiar with a programming language or the concepts of programming. (Although the concept of pointers was necessary.)
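
By way of contrast, here is a small sketch of my own (not taken from any existing codebase) in that older style, doing roughly the same job as the example above -- building an upper-case copy of some text -- using nothing more exotic than pointers and a loop:

#include <cctype>

// Copy a string, converting each character to upper case.
void copy_upper(const char *src, char *dst)
{
  while (*src != '\0')
  {
    *dst = char(std::toupper(*src));
    ++src;
    ++dst;
  }
  *dst = '\0';
}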

The changes to the C++ standard have resulted in code that is, in a word, ugly. This ugliness now is inevitable -- one cannot avoid it.

Programmers dislike two types of programming languages: wordy (think COBOL) and ugly (what C++ is becoming).

(Programmers also dislike programming languages that use the ':=' operator. That may fall under the category of 'ugly'.)

The demise of C++ won't be due to some new language, or government dictates to use memory-safe programming languages, or problems in applications.

Instead, C++ will be abandoned because it will be considered "uncool". As the C++ standards committee adds new features, it maintains compatibility at the cost of increasing ugliness of code. It is a trade-off that has long-term costs. Those costs may result in the demise of C++.

Thursday, February 15, 2024

Ad blockers and the future of the internet

YouTube has gotten a lot of notice recently. They have set procedures to ... discourage ... the use of ad blockers. Viewers with ad blockers are limited to three videos. The internet is up in arms, of course, because their favorite free video site is no longer free. (Viewers who subscribe to YouTube Premium and pay a monthly fee are not shown ads and are not limited to three videos.)

But I think YouTube may be showing us the future.

Some thoughts on advertisements:

There are two reasons to use ads: one is to present an image to the viewer, the other is to get the viewer to click (or tap) on the link and go to the sponsor's web site. (I will ignore the installation of malware.)

Displaying the image is desired. Image-only advertisements are similar to ads in newspapers and magazines (the old, paper format of newspapers and magazines).

Users clicking on ads is also desired, as it presents products or services or subscriptions to the viewer, with the ability to complete the sale.

Advertisements are useful to the web site because they provide revenue. The advertiser pays the hosting web site a small fee for each display of the ad (each "impression"), or each click-through, or each click-through that results in a sale. (Or some combination of those.)
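
As a rough sketch of that arithmetic -- with rates that are purely made up for illustration, since actual fees vary widely by advertiser and network -- the revenue calculation looks something like this:

#include <iostream>

int main()
{
  // Illustrative, assumed rates -- real rates vary widely.
  const double fee_per_impression = 0.002;  // $2 per thousand impressions
  const double fee_per_click      = 0.25;   // per click-through
  const double fee_per_sale       = 5.00;   // per click-through that ends in a sale

  const long impressions = 100000;
  const long clicks      = 500;
  const long sales       = 10;

  const double revenue = impressions * fee_per_impression
                       + clicks * fee_per_click
                       + sales * fee_per_sale;

  std::cout << "Estimated revenue: $" << revenue << '\n';  // $375 for these numbers
  return 0;
}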

I understand the use of advertisements. They provide revenue to the host web site, and the company running that web site needs to pay for the operation and upkeep of the site. Even a small, simple, static web site must be hosted on a server somewhere, and that server costs money to buy (or rent) and needs electricity and a network connection.

There are, of course, other ways to pay for a web site. One can limit content to subscribers who pay a monthly or annual fee. (The subscription model.) One can get a grant from a patron. One can ask viewers for contributions. (The public broadcast model.) One can run the web site as a hobby, paying for it oneself. Those methods are all valid but not always feasible.

A description of ad blockers: 

Ad blockers work in one of two configurations: as a plug-in to a browser, or as a process on a separate device that monitors network traffic and drops requests to web sites deemed to be "ad servers".

The browser plug-in is the most common. It sits in the browser and watches each outbound request. The plug-in has a list of "known ad servers" and requests to those servers are intercepted. The request is never made, and therefore the ad is not retrieved or displayed.

Browser-based ad blockers can be configured to allow some web pages to display ads. The user can establish an "allowed" list of web sites; these web sites are allowed to display ads and the plug-in will let requests from those web pages (or rather web pages loaded from those web sites) through to their destination servers.
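
To make that concrete, here is a minimal sketch of my own (the host names and lists are hypothetical; a real blocker ships with thousands of entries and updates them regularly) of the decision a browser plug-in makes for each outbound request:

#include <string>
#include <unordered_set>

// Hypothetical lists, for illustration only.
const std::unordered_set<std::string> known_ad_servers = {
  "ads.example.net", "tracker.example.com"
};
const std::unordered_set<std::string> allowed_pages = {
  "news.example.org"  // sites on the user's "allowed" list
};

// request_host: the server the request is going to
// page_host:    the web site of the page that made the request
//               (a browser plug-in knows this; a network-level blocker does not)
bool allow_request(const std::string& request_host, const std::string& page_host)
{
  if (allowed_pages.count(page_host) != 0)
    return true;   // the user allows ads on pages from this site
  if (known_ad_servers.count(request_host) != 0)
    return false;  // intercept: the request is never made
  return true;     // everything else passes through
}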

The other form of ad blocker is a separate device, one that sits on the local network as its own server. Because it does not live in a browser, it works with browsers that don't allow ad-blocking plug-ins and with other devices such as tablets and phones. The ad block server, like a browser plug-in, monitors outbound requests and intercepts those going to known ad servers.

The advantage of the ad block server is that it blocks ads to all devices on your local network. (Note that when you use a device on a different network, like at a coffee shop, the ad block server does not intercept requests and you will see advertisements.) An ad block server can be configured to allow requests to certain destinations, which is not quite the same as allowing requests for pages loaded from certain web sites. The ad block server knows only the destination address, not the web page that made the request, or even the application that made it.
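
For comparison, a sketch (again hypothetical) of the same decision in an ad block server, which sees only the destination of each request -- not the page, or even the application, that made it:

#include <string>
#include <unordered_set>

// Hypothetical lists, for illustration only.
const std::unordered_set<std::string> known_ad_servers = {
  "ads.example.net", "tracker.example.com"
};
const std::unordered_set<std::string> allowed_destinations = {
  "ads.example.net"  // a destination the user has chosen to let through
};

bool allow_destination(const std::string& destination_host)
{
  if (allowed_destinations.count(destination_host) != 0)
    return true;   // allowed by destination, regardless of which page asked
  return known_ad_servers.count(destination_host) == 0;
}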

Both types of ad blockers can let some advertisements through, even unwanted advertisements. Ad blockers use a list of "known ad servers"; requests to destinations on the list are intercepted. A new ad server, at a new address, won't be blocked -- at least not until the list of ad servers is updated. The provider of the ad blocker usually is the one to make that update.

Ad blockers don't block ads from the host web site. A web page can request additional text and images from its origin web site (and most pages do). Those requests can bring text and images useful to the viewer, or they can bring text and images that are advertisements. The ad blocker cannot examine the additional text and images and decide which is useful and which is advertisement. Therefore, all requests to the origin site are allowed.

So ad blockers work, but they are not perfect.

Now we get to the interesting question: Are ad blockers ethical? Are they moral? Is it wrong to use an ad blocker?

Some thoughts:

One argument is that they are ethical. The web browser lives on my computer (so the argument goes) and I have the right to choose what is and what is not displayed on my computer.

The counter argument is that advertisers pay for the display of ads, and if ads are not displayed (in significant numbers), then the advertisers will stop paying and the web site will lose revenue and possibly go out of business.

(A counter-counter argument runs along the lines of: Display ads all you want; I don't look at them and I never click on them, so the web page is merely wasting its time and effort by displaying ads. By using an ad blocker, I am saving the web site the effort of sending the advertisement to me. This argument is tenuous at best.)

Where do we go from here?

I think YouTube is showing us the future. In its fight against ad blockers, YouTube acts in two ways. First, it limits the number of videos that any one person (using an ad blocker) may view. (Some have accused YouTube of deliberately slowing videos or reducing resolution, but these effects may have been accidental. Let's ignore them.)

YouTube also encourages viewers to subscribe to its premium level, which removes the ads. Some videos on YouTube have built-in advertisements much like the native ads of web pages. YouTube does not remove those, and ad blockers cannot detect and disable them. Overall, the number of ads is greatly reduced.

Web sites cost money to run, and that money must come from somewhere. If viewers block ads, then money will not come from ads, and the money must come from another source. Web sites without revenue from advertising must use a different revenue model.

Future historians may point to YouTube switching from advertisement revenue to subscription revenue and say "Here. Here is where the change from advertising to subscription revenue started. YouTube was the first."

Thursday, February 8, 2024

Spatial computing

There's been a lot of talk about Apple's new Vision Pro headset. I'm not sure that we in the general public (meaning those of us outside of Apple) know what to make of it. What can we do with it? How will it help us? I don't have answers to those questions.

I'm not sure that the folks at Apple have the answer to those questions, despite their impressive demo for the Vision Pro.

Apple uses the term "spatial computing" to separate the Vision Pro headset from other devices. This makes sense. A headset is different from a laptop or a smartphone. Not only is the hardware different, but the methods of interaction are different.

If we look at the history of computing, we can see that the different generations of computers used different methods of interaction.

Mainframe computers (IBM's 1401 and System/360) made computation commercially possible -- for large, well-funded organizations such as governments and national corporations. Interaction with mainframes was with punch cards and printouts. Programs read and processed data in batches, at scheduled times.

Minicomputers were a variation of mainframes. They made computing available to smaller (or at least less large) organizations. Yet they also changed computing. Interaction with minicomputers was through terminals, with either paper or screens similar to today's displays. People could interact with programs, instead of supplying all of the data up front.

Personal computers were a variation of minicomputers. They made computation possible for individuals (upper middle class individuals). Interaction with personal computers was not with terminals but with keyboards and built-in displays. Most displays had graphics. People could see charts and graphs, and later pictures.

(Even the IBM PC 5150 with the Monochrome Display Adapter had character graphics. And that card was quickly displaced by the Hercules Graphics Card, which offered full graphics.)

Laptop personal computers were a variation of personal computers. The software was usually the same as on desktop personal computers (but not always; some had custom software), and their displays were often smaller than those on "real" personal computers. Interaction was mostly the same (keyboard and display). They made computing portable. They also made networking common.

The connection between laptops and networking is muddled. Networking was available for desktop computers, usually as an option, and laptop computers existed before desktop networks were common -- but the two were rarely used together. The combination of laptop and network was powerful, and it arrived along with more capable laptop hardware. The trio of portability, networking, and hardware forged a useful tool.

Smart phones were a combination of cell phone and laptop computer. (Or improved versions of personal digital assistants such as the Palm Pilot.) They made computing not merely portable but mobile -- an individual could do things while moving. They (specifically the iPhone with iTunes) eased the consumption of music and movies and games. Computing became entwined with entertainment. Interaction changed from keyboard and display to touchscreen and display, and sometimes voice.

Each generation of computing changed the nature of computing.

So where does that put Apple's Vision Pro?

I think that we can agree that the interaction with a headset will be different from the interaction with a smart phone. The screen is present but not touched. I expect that, initially, a set of gestures borrowed from smart phones will be used, and later new gestures will be invented for headsets. A smart phone with its small screen can display one app at a time.  The headset screen occupies a much larger section of our vision, so we can expect more things to happen at the same time.

I expect headsets to be used for entertainment and the consumption of information. Expanding today's video games to the larger screen of the Vision Pro seems a natural (although perhaps non-trivial) move. Beyond games, the analysis of data could use the "larger" screen to display more information in graphical form. Music is probably not a strong point for headsets, but music with video is likely to be popular.

Each generation of computing saw new applications, applications that made little sense for the previous generation. I think this will continue, and we will see new applications specific to the Vision Pro (or headsets in general). And just as the old generations of computing styles are still with us, they will continue to stay with us. The computing that they perform is useful -- and not always appropriate on later platforms. Some things work better on new systems, and some things work better on old systems.

I will be watching the innovations for headset computing. But just as I did not immediately run out and buy a laptop computer when they first came out, or a smart phone when they first came out, I won't be buying a Vision Pro -- at least not for a while.