Wednesday, February 22, 2023

Paying for social media

Twitter has implemented a monthly charge for its "Twitter Blue" service. Facebook (or Meta) has announced something similar.

Apple introduced its "App Tracking Transparency" initiative (which allows users to disable tracking by apps), and that changed the market. Advertisers are apparently paying less to Facebook (and possibly Twitter) because of this change.

It was perhaps inevitable that Twitter and Facebook would look to replace that lost revenue. And where to look? To the users, of course! Thus the subscription fees were invented.

Twitter's fee is $8 per month, and Meta's is $12 per month. (Both are higher when purchased on an Apple device.)

Meta's price seems high. I suspect that Meta will introduce a second tier, with fewer features, and with a lower monthly rate.

Meta and Twitter must be careful. They may think that they are competing with streaming services and newspaper subscriptions. Streaming services have a range of prices, from ad-supported services that charge nothing to Netflix and HBOmax, which charge $15 per month (or thereabouts).

But newspapers and streaming services are different from social media. Netflix, HBOmax, and the other streaming services create content (or buy the rights to content) and provide it to viewers. Newspapers create content (or buy the rights) and present it to readers. For both, the flow of information is one-way: from the service to the user.

Social media operates differently. Users create the content, with posts and updates. That information is of interest to family, friends, and colleagues. The value to users is not merely in content, but in the network of connections. A social media site with lots of your friends is interesting to you; a site with only a few is less interesting, and a site with no friends is of no interest.

Meta and Twitter face a different challenge than Netflix and HBOmax. If streaming services raise prices or do other things to drive away customers, the value for the remaining customers remains the same. But if Facebook or Twitter drive away users, then they are reducing the value of the service to the remaining users. Meta and Twitter (and any other social media site) must act carefully when introducing changes.

I tend to think that these new fees are the result of necessity, and not of simple greed. That is, Twitter and Facebook need the revenue. If that is the case, then we users of web sites and social media may be in for more fees. It seems that simple, non-targeted advertising doesn't work for web sites, and targeted advertising (with no data sent to advertisers) doesn't work either.

Advertisements coupled with detailed user information did work, in that it provided enough revenue to web sites. That arrangement was ended by Apple's "App Tracking Transparency" initiative.

We're now in a "next phase" of social media, one in which users will pay for the service. (Or some users will pay, and other users will pay higher amounts for additional services, and some users may pay nothing.)

Thursday, February 16, 2023

Unstoppable cannon balls, immovable posts, and Apple

In the mid 20th Century, Martin Gardner wrote a series of articles for Scientific American. His column was called "Mathematical Games"; the content was less math and more puzzles, riddles, and brain teasers. One such brain teaser went something like this:

"Assume that there are unstoppable cannon balls. These cannon balls are different from the normal variety in that once shot from a cannon, they do not stop. They push aside any object in their way. Also assume that there are immovable posts. These posts are different from the normal variety in that they do not move, for any reason. Now, what happens when an unstoppable cannon ball strikes an immovable post?"

Readers of Mr. Gardner's columns had to wait for answers, which appeared in the magazine's next issue. I won't make you, dear readers, wait that long. The answer to the riddle of the unstoppable cannon ball and the immovable post is simple: they cannot exist together. If one has an unstoppable cannon ball, then by definition the universe cannot have an immovable post. Or, if one has an immovable post, then again by definition one cannot have an unstoppable cannon ball.

While that answer may be disappointing, it has a certain wisdom. That wisdom may help Apple.

With the introduction of the M1 and M2 processor lines, Apple has entered into the realm of brain teasers. They don't have an unstoppable cannon ball or an immovable post, but they have built similar things in their product line.

The problem for Apple is the Mac Pro computer. The Mac Pro is Apple's premium computer; it sports the best processor, the fastest disks, the speediest memory, and -- of course -- the highest price tag. But it has one thing that other computers in Apple's product line no longer have: the ability to replace components. The Mac Pro is the only computer that lets the user replace memory, add disks, and add GPU cards. In the past, Apple computers (not phones or tablets) allowed for upgrades. My vintage Apple Powerbook G4 allows one to replace memory, disk drives, and battery. The original Macbook allowed for the same.

Over the years, Apple changed their products and gradually removed the ability to change components. Today's Macbook laptops and non-Pro Mac computers are all fully encased; there is no way to open them and swap components. (At least not for the average user.)

The M1 and M2 system-on-chip processors make upgrades or changes impossible. Everything is on the chip: CPU, GPU, memory, storage, and more.

The benefit of the everything-on-one-chip design is performance. When components are housed in separate chips (such as the CPU in one, memory in another, and the GPU in yet another), one must provide connecting wires. These wires (or traces on the system board) run from one component to the next. Driving the signals across these wires requires extra circuitry -- dedicated transistors to raise the voltage of signals from the on-chip levels to the levels for the system board. Corresponding receiver circuits adapt the signals from board-level voltages back to on-chip voltages. Each of those drivers and receivers slows the signal. (It's not much, but at the frequencies of today's computers, those small delays add up to significant delays.)

The distance from one component to another also causes delays. Again, each delay is small, but over the billions of operations, they add up.
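A back-of-envelope calculation makes the point. The numbers below are illustrative assumptions (an assumed trace length, signal speed, and clock rate), not measurements of any Apple design:

```python
# Back-of-envelope sketch: why off-chip distances cost clock cycles.
# All numbers are illustrative assumptions, not measured values.

SPEED_IN_TRACE_M_PER_S = 1.5e8   # roughly half the speed of light in a board trace
TRACE_LENGTH_M = 0.10            # assumed 10 cm trace between CPU and memory
CLOCK_HZ = 3.0e9                 # assumed 3 GHz clock

one_way_delay_s = TRACE_LENGTH_M / SPEED_IN_TRACE_M_PER_S
clock_period_s = 1.0 / CLOCK_HZ
cycles_lost_round_trip = 2 * one_way_delay_s / clock_period_s

print(f"one-way trace delay: {one_way_delay_s * 1e9:.2f} ns")
print(f"clock period:        {clock_period_s * 1e9:.2f} ns")
print(f"round trip costs about {cycles_lost_round_trip:.0f} clock cycles")
```

Even before counting the driver and receiver circuits, a round trip over a 10 cm trace costs a few clock cycles under these assumptions -- and that penalty is paid on every off-chip access.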

Which brings us back to the unstoppable cannon ball and the immovable post.

The older designs with discrete components are the immovable post. By itself, this is not a problem.

With the system-on-chip designs of M1 and M2, Apple has built, essentially, an unstoppable cannon ball. They have left the universe of swappable components and entered the universe of system-on-chip.

You cannot have both. You cannot have a computer that has all components on a single chip, and still allows for pieces to be upgraded.

Now, you can have some computers in your line with swappable components, and others with system-on-chip designs. In that sense, you can have both.

But you cannot have a single computer with both. A computer is either totally integrated or it has replaceable components. Keep in mind that the total integration design has the much better performance.

Apple wants its Mac Pro to have replaceable components and to sit at the top of the product line, with the best performance (and the priciest of price tags). I don't see a way to make this happen.

The performance of Apple's M2 Ultra processor is good. Really good. Better than the old, Intel-based, swappable component Mac Pro. A new, Intel-based, swappable component Mac Pro (using the latest processors and memory chips) could be faster than the old one, but not by much. It *may* be a little faster than the M2 Ultra, but it won't be *much* faster. It certainly won't be the flagship product that Apple wants.

Apple can build computers based on the M1 or M2 processor, and they will have top performance, but they won't have replaceable components. (The unstoppable cannon ball.) Apple can build computers with replaceable components (either Intel or AMD processors, or discrete processors based on the M1/M2 CPU) but they won't have the performance. (The immovable post.)

The idea of a top-tier computer system with replaceable parts is now a thing of the past. It probably has always been a thing of the past, as high performance computers have always integrated as much as possible. The notion of replaceable parts came from the hobbyist market and the original IBM PC, which wisely traded performance for flexibility. In the 1980s, when we had a poor understanding of what we wanted from computers, flexibility was the better choice.

Today, we have very definite ideas about our computers. We don't need to experiment with different video cards and memory configurations. We don't need to add network cards to some but not all computers. (Our manufacturers also have much better processes, and computer components are much more reliable. Computers run, and we have little need to replace a failed component.)

Apple could offer a Mac computer that has replaceable parts. It would be a low-end computer, not the high-end Mac Pro. I suspect that Apple will not make such a computer. It would be more expensive to produce, have a larger support effort (customers making mistakes and asking questions), and have limited appeal in the Apple fan base.

But Apple cannot build a high-end Mac Pro with replaceable components. It won't have the performance, and the Mac Pro is all about performance.

I think that Apple will build a Mac Pro, but with the "Extreme" variant of an M2 (or possibly M3) processor. The Mac Pro will be the only computer in Apple's line with the "Extreme" variant; other computers will use the plain, "Pro", "Max", or "Ultra" version of its processors. The new Mac Pro won't have replaceable parts, but it will have superior performance. People may be surprised, but I won't be one of them.

Thursday, February 9, 2023

AI answers may improve traditional search

Isaac Asimov, the writer of science and science fiction, described his experience with publishing houses as a writer. People had warned him to stay away from the publishing world, telling him that it was full of unscrupulous opportunists who would take advantage of him. Yet his experience was a good one; the publishers, editors, and others he worked with were (for the most part) honest, hard-working, and ethical.

Asimov had a conjecture about this. He surmised that for some time prior to his arrival as a writer, the publishing industry did have a large number of unscrupulous opportunists, and they gave the industry a bad reputation. He further theorized that when he started as an author, those individuals had moved on to a different industry. Not because of his arrival, but because there was a newer, larger, and more lucrative industry to take advantage of individuals. It was the movie industry that provided a better "home" for those individuals. Once they saw that movies were the richer target, they abandoned the publishing industry, and left the ethical people (who really wanted to work in publishing) behind.

I don't recall that Asimov proved his conjecture, but it has a good feel to it.

What does this have to do with software? Well, not much for the programming world, but maybe a lot for the online search world.

Search engines (Google, Bing, DuckDuckGo, and others) make a valiant attempt to provide good results, but web sites use tricks to raise their rankings in the search engines. The result is that today, in 2023, many searches work poorly. Searches to purchase something work fairly well, and some searches for answers ("when does the Super Bowl start") tend to be relevant, but many queries return results that are not helpful.

As I see it, web site operators, in their efforts to increase sales, have hired specialists to optimize their rankings in search engines, leading to an endless race to outdo the competition. The result is that search engines provide little in the way of "organic" results and too many "sponsored" or "optimized" responses.

The situation with search engines is, perhaps, similar to the pre-Asimov era of publishing: full of bad operators that distort the product.

So what happens with the new AI-driven answer engines?

If people switch from the old search engines to the new answer engines, we can assume that the money will follow. That is, the answer engines will be popular, and lead to lots of ad revenue. When the revenue shifts from search engines to answer engines, the optimizations will also shift to answer engines. Which means that the efforts to game search engines will stop, and search engines can drift back to organic results.

This change occurs only if the majority of users switch to the answer engines. If a sizable number of people stay on the older search engines, then the gains from optimizing results will remain, and the optimization games will continue.

I'm hoping that most people do switch to the new answer engines, and a small number of people -- just enough for search engines to remain in business -- keep using the older engines.

Wednesday, February 1, 2023

To build and to maintain

I had the opportunity to visit another country recently (which one doesn't matter) and I enjoyed the warmer climate and the food. I also had the opportunity to observe another country's practices for building and maintaining houses, office buildings, roads, bridges, and other things.

The United States is pretty good at building things (roads, bridges, buildings, and such) and also good at maintaining them. The quality of construction and the practices for maintenance vary, of course, and overall governments and large corporations are better at them than small companies or individuals.

In the country I visited, the level of maintenance was lower. The culture of the country is such that people are good at building things, but less concerned with maintaining them. This was apparent in things like signs in public parks: once installed, they were left exposed to the elements, where they faded and broke in the sun and wind.

My point is not to criticize the country or its culture, but to observe that maintaining something is quite different from building it.

That difference also applies to software. The practices of maintaining software are different from the practices of constructing software.

Software does not wear or erode like physical objects. Buildings expand and contract, develop leaks, and suffer damage. Software, stored in bits, does not expand and contract. It does not develop leaks (memory leaks aside). It is impervious to wind, rain, and fire. So why do I say that software needs maintenance?

I can make two arguments for maintenance of software. The first argument is a cyber-world analog of damage: The technology platform changes, and sometimes the software must change to adapt. A Windows application, for example, may have been designed for one version of Windows. Windows, though, is not a permanent platform; Microsoft releases new versions with new capabilities and other changes. While Microsoft makes a considerable effort to maintain compatibility, there are times when changes are necessary. Thus, maintenance is required.

The second argument is less direct, but perhaps more persuasive. The purpose of maintenance (for software) is to ensure that the software continues to run, possibly with other enhancements or changes. Yet software, when initially built, can be assembled via shortcuts and poor implementations -- what we commonly call "technical debt". Often, those choices were made to allow for rapid delivery.

Once the software is "complete" -- or at least functional -- maintenance can be the act of reducing technical debt, with the goal of allowing future changes to be made quickly and reliably. This is not the traditional meaning of maintenance for software, yet it seems to correspond well with the maintenance of "real world" objects such as automobiles and houses. Maintenance is work performed to keep the object running.

If we accept this definition of maintenance for software, then we have a closer alignment of software with real-world objects. It also provides a purpose for maintenance: to ensure the long-term viability of the software.

Let's go back to the notions of building and maintaining. They are very different, as anyone who has maintained software (or a house, or an automobile) can attest.

Building a thing (software or otherwise) requires a certain set of skills and experience.

Maintaining that thing requires a different set of skills and experience. Which probably means that the work for maintenance needs a different set of management techniques, and a different set of measurements.

And building a thing in such a way that it can be maintained requires yet another set of skills and experience. And that implies yet another set of management techniques and measurements.

All of this may be intuitively obvious (like solutions to certain mathematics problems were intuitively obvious to my professors). Or perhaps not obvious (like solutions to certain math problems were to me). In either case, I think it is worth considering.

Monday, January 16, 2023

The end of more

From the very beginning, PC users wanted more. More pixels and more colors on the screen. More memory. Faster processors. More floppy disks. More data on floppy disks. (Later, it would be more data on hard disks.)

When IBM announced the PC/XT, we all longed for the space (and convenience) of its built-in hard drive. When IBM announced the PC/AT we envied those with the more powerful 80286 processor (faster! more memory! protected mode!). When IBM announced the EGA (Enhanced Graphics Adapter) we all longed for the higher-resolution graphics. With the PS/2, we wanted the reliability of 3.5" floppy disks and the millions of colors on a VGA display.

The desire for more didn't stop in the 1980s. We wanted the 80386 processor, and networks, and more memory, and faster printers, and multitasking. More programs! More data!

But maybe -- just maybe -- we have reached a point that we don't need (or want) more.

To quote a recent article in MacWorld:

"Ever since Apple announced its Apple silicon chip transition, the Mac Pro is the one Mac that everyone has anxiously been awaiting. Not because we’re all going to buy one–most of the people reading this (not to mention me, my editor, and other co-workers) won’t even consider the Mac Pro. It’s a pricey machine and the work that we do is handled just as well by any Mac in the current lineup".

Here's the part I find interesting:

"the work that we do is handled just as well by any Mac in the current lineup"

Let that sink in a minute.

The work done in the offices of MacWorld (which I assume is typical office work) can be handled by any of Apple's Mac computers. That means that the lowliest Apple computer can handle the work. Therefore, Macworld, being a commercial enterprise and wanting to reduce expenses, should be equipping its staff with the low-end MacBook Air or Mac mini PCs. To do otherwise would be wasteful.

It is not just the Apple computers that have outpaced computing needs. Low end Windows PCs also handle most office work. (I myself am typing this on a Dell desktop that was made in 2007.)

The move from 32-bit processing to 64-bit processing had a negligible effect on many computing tasks. Microsoft Word, for example, ran just as well in 32-bit Windows as it did in 64-bit Windows. The move to 64-bit processing did not improve word processing.

There are some who do still want more. People who play games want the best performance from not only video cards but also central processors and memory. Folks who edit video want performance and high-resolution displays.

But the folks who need, really need, high performance are a small part of the PC landscape. Many of the demanding tasks in computation can be handled better by cloud-based systems. It is only a few tasks that require local, high-performance processing.

The majority of PC users can get by with a low-end PC, and most of them are content. One may look at a new PC with more memory or more pixels, but the envy has dissipated. We have enough colors, enough pixels, and enough storage.

If we have reached "peak more" in PCs, what does that mean for the future of PCs?

An obvious change is that people will buy PCs less frequently. With no urge to upgrade, people will keep their existing equipment longer. Corporations that buy PCs for employees may continue on a "replace every three years" schedule, but that was driven by depreciation rules and tax laws. Small mom-and-pop businesses will probably keep computers until a replacement is necessary (I suspect that they have been doing that all along). Some larger corporations may choose to defer PC replacements, noting that cash outlays for new equipment are still cash outlays, and should be minimized.

PC manufacturers will probably focus on other aspects of their wares. PC makers will strive for better battery life, durability, or ergonomic design. They may even offer Linux as an alternative to Windows.

It may be that our ideas about computing are changing. It may be that instead of local PCs that do everything, we are now looking at cloud computing (and perhaps older web applications) and seeing a larger expanse of computing. Maybe, instead of wanting faster PCs, we will shift our desires to faster cloud-based systems.

If that is true, then the emphasis will be on features of cloud platforms. They won't compete on pixels or colors, but they may compete on virtual processors, administration services, availability, and supported languages and databases. Maybe we won't be envious of new video cards and local memory, but envious instead of uptime and automated replication. 

Monday, January 9, 2023

After the GUI

Some time ago (perhaps five or six years ago) I watched a demo of a new version of Microsoft's Visual Studio. That version introduced a new feature: the command search box, which allowed the user to search for a command in Visual Studio. Visual Studio, like any Windows program, used menus and icons to activate commands. The problem was that Visual Studio was complex and had a lot of commands -- so many that the menu structure to hold them all was enormous, and searching for a command was difficult. Many times, users failed to find the command.

The command search box solved that problem. Instead of searching through menus, one could type the name of the command and Visual Studio would execute it (or maybe tell you the path to the command).

I also remember, at the time, thinking that this was not a good idea. I had the distinct impression that the command search box showed that the GUI paradigm had failed, that it worked up to a point of complexity but not beyond that point.

In one sense, I was right. The GUI paradigm does fail after a certain level of complexity.

But in another sense, I was wrong. Microsoft was right to introduce the command search box.

Microsoft has added the command search box to the online versions of Word and Excel. These command boxes work well, once you get acquainted with them. And you must get acquainted with them; some commands are available only through the command search box, and not through the traditional GUI.

Looking back, I can see the benefit of changing the user interface, and changing it in such a way as to make a new type of user interface.

The first user interface for personal computers was the command line. In the days of PC-DOS and CP/M-86, users had to type commands to invoke actions. There were some systems (such as the UCSD p-System) that used full-screen text displays as their interface, but these were rare. Most systems required the user to learn the commands and type them.

Apple's Macintosh and Microsoft's Windows used a GUI (Graphical User Interface) which provided the possible commands on the screen. Users could click on an icon to open a file, another icon to save the file, and a third icon to print the file. The icons were visible, and more importantly, they were the same across all programs. Rarely used commands were listed in menus, and one could quickly look through the menu to find a command.

Graphical User Interfaces with icons and buttons and menus worked, until they didn't. They were adequate for simple programs such as the early versions of Word and Excel, but they were difficult to use on complex programs that offered dozens (hundreds?) of commands.

The command search box addresses that problem. A program that uses the command search box, instead of displaying all possible commands in icons and buttons and menus, shows the commonly-used commands in the GUI and hides the less-used commands in the search box.

The search box is also rather intelligent. Enter a word or a phrase and the application shows a list of commands that are either what you want or close to it. It is, in a sense, a small search engine tuned to the commands for the application. As such, you don't have to remember the exact command.
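The idea can be sketched in a few lines. This is a toy illustration, not how Word or Excel actually implements it; the command names are invented, and the fuzzy matching simply uses Python's standard library:

```python
# Toy sketch of a command search box: match a typed phrase against
# command names without requiring exact recall. The command list is
# invented for illustration, not any real application's command set.
import difflib

COMMANDS = [
    "Save As",
    "Insert Table",
    "Freeze Panes",
    "Conditional Formatting",
    "Print Preview",
]

def search_commands(query, commands=COMMANDS, limit=3):
    """Return the commands whose names are closest to the query."""
    lowered = {c.lower(): c for c in commands}
    matches = difflib.get_close_matches(query.lower(), lowered.keys(),
                                        n=limit, cutoff=0.4)
    return [lowered[m] for m in matches]

print(search_commands("freeze pane"))           # finds "Freeze Panes"
print(search_commands("condtional formating"))  # typos still match
```

A real command search box would also match command descriptions and synonyms, but even this sketch shows the key property: the user does not need to remember the exact name.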

This is a departure from the original concept of "show all possible actions". Some may consider it a refinement of the GUI; I think of it as a separate form of user interface.

I think that it is a separate form of interface because this concept could be applied to the traditional command line. (Command line interfaces are still around. Ask any user of Linux, or any admin of a server.) Today's command line interfaces are pretty much the same as the original ones from the 1970s, in that you must type the command from memory.

Some command shell programs now prompt you with suggestions to auto-complete a command. That's a nice enhancement. I think another enhancement could be something similar to the command search box of Microsoft Excel: a command that takes a phrase and reports matches. Such an option does not require graphics, so I think that this search-based interface is not tied to a GUI.

Command search boxes are the next step in the user interface. They follow the first two designs: the command line (where you must memorize commands and type them exactly) and the GUI (where you can see all of the commands in icons and menus). Command search boxes don't require every command to be visible (as a GUI does), and they don't require the user to recall each command exactly (as a command line does). They really are something new.

Now all we need is a name that is better than "command search box".

Monday, January 2, 2023

Southwest airlines and computers

Southwest Airlines garnered a lot of attention last week. A large winter storm delayed a large number of flights, a problem with which all of the airlines had to cope. But Southwest had a more difficult time of it, and people are now jumping to conclusions about Southwest and its IT systems.

Before I comment on the conclusions to which people are jumping, let me explain what I know about the problem.

The problem in Southwest's IT systems, from what I can tell, has little to do with the age of their programs or the programming languages that they chose. Instead, the problem is caused by a mix of automated and manual processes.

Southwest, like all airlines, must manage its aircraft and crews. For a large airline, this is a daunting task. Airplanes fly across the country, starting at one point and ending at a second point. Many times (especially for Southwest) the planes stop at intermediate points. Not only do airplanes make these transits, but crews do as well. The pilots and cabin attendants go along for the ride, so to speak.

Southwest, or any airline, cannot simply assign planes and crews at random. They must take into account various constraints. Flight crews, for example, can work for so many hours and then they must rest. Aircraft must be serviced at regular intervals. The distribution of planes (and crews) must be balanced -- an airline cannot end its business day with all of its aircraft and crews on the west coast, for example. The day must end with planes and crews positioned to start the next day.

For a very small airline (say one with two planes) this scheduling can be done by hand. For an airline with hundreds of planes, thousands of employees, and thousands of flights each day, the task is complex. It is no surprise that airlines use computers to plan the assignment of planes and crews. Computers can track all of the movements and ensure that constraints are respected by the plan.
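To make the constraints concrete, here is a toy sketch of two of the checks a scheduler must perform. The duty limit, city codes, and flight data are all invented for illustration; the real rules (duty-time regulations, maintenance intervals, and many more) are far more complex:

```python
# Toy sketch of airline scheduling constraints. The duty limit and the
# flight data are invented; real regulations are far more involved.

MAX_DUTY_HOURS = 9  # assumed daily duty limit for a crew

def crew_within_duty_limit(flight_hours):
    """A crew's assigned flight legs must fit within the daily duty limit."""
    return sum(flight_hours) <= MAX_DUTY_HOURS

def fleet_is_balanced(end_positions, required):
    """The day must end with enough planes in each city to start the next day."""
    counts = {}
    for city in end_positions:
        counts[city] = counts.get(city, 0) + 1
    return all(counts.get(city, 0) >= n for city, n in required.items())

# A crew flying three legs totalling 8 hours is within the limit.
print(crew_within_duty_limit([3, 2.5, 2.5]))          # True
# Both planes end on the west coast; tomorrow's Baltimore departure fails.
print(fleet_is_balanced(["LAX", "SFO"], {"BWI": 1}))  # False
```

Checking one schedule is easy; the hard part is searching the enormous space of possible assignments for one that satisfies every such constraint, which is why airlines use computers for it.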

But the task does not end with the creation of a set of flight assignments. During each day, random events can happen that delay a flight. Delays can be caused by headwinds, inclement weather, or sick passengers. (I guess crew members, being people, can get sick, too.)

Delays in one flight may mean delays in subsequent flights. Airlines may swap crews or planes from one planned flight to another, or they may simply wait for the late equipment. Whatever the reason, and whatever the change, the flight assignments have to be recalculated. (Much like a GPS system in your car recalculates the route when you miss an exit or a turn, except on a much larger scale.)

Southwest's system has two main components: an automated system and a manual process. The automated system handles the scheduling of aircraft and crews. The manual process handles the delays, and provides information to the automated system.

During the large winter storm, a large number of flights were delayed. So many flights were delayed that the manual process for updating information was overwhelmed -- people could not track and input the information fast enough to keep the automated system up to date.

A second problem happened on the automated side. So many people visited the web site (to check the status of flights) that it, too, could not handle all of the requests.

This is what I think happened. (At least, this makes sense to me.)

A number of people have jumped to the conclusion that Southwest's IT systems were antiquated and outdated, and that this led to the breakdown. Some have jumped further and concluded that Southwest's management actively prevented maintenance and enhancements of their IT systems to increase profits and dividend payouts.

I'm not willing to blame Southwest's management, at least not without evidence. (And I have seen none.)

I will share these thoughts:

1. Southwest's IT systems -- even if they are outdated -- worked for years (decades?) prior to this failure.

2. All systems fail, given the right conditions.

One can argue that Southwest's system, a combination of automated and manual processes, could be redesigned to have more work handled by the automated side. It would require some way to track flights and record crews and planes arriving at a destination. Such changes are not trivial, and should be made with care.

One can argue that Southwest's IT systems use old programming techniques (and maybe even old programming languages), and Southwest should modernize their code. I find this argument unpersuasive, as newer programming languages and code written in those languages is not necessarily better (or more reliable) than the old code.

One can argue that Southwest's IT system could not scale up to handle the additional demand, and that Southwest should use cloud technologies to better meet variable demand. That is also a weak argument; moving to cloud technologies will not automatically make a system scalable.

Clearly this event was an embarrassment for Southwest, as well as a loss of some customer goodwill. (Not to mention the expense of refunds.) Given that a large winter storm could happen again (if not this year then possibly next year), Southwest may want to make adjustments to its scheduling systems and processes. But I would caution them against a large-scale re-write of their entire system. Such large projects tend to fail. Instead, I would recommend small, incremental improvements to their databases, their web sites, and their scheduling systems.

Whatever course Southwest chooses, I hope that it is executed with care, and with respect for the risks involved.