Wednesday, December 29, 2021

Something is rotten in the state of Apple

Apple recently announced a set of (rather large) retention payments for some engineers. (I assume that these are senior engineers.) The payments are in the form of restricted stock units which vest over a four-year period. Apple is making these payments to prevent employees from leaving for other companies. It's not clear, but these appear to be one-time payments. (That is, this is not a new, recurring payment to employees.)

This is not a good idea.

If employees are leaving Apple, that is a problem. And for the purposes of this post, the problem has two possible causes: employees are unhappy with wages, or employees are unhappy with something else.

If employees are unhappy with wages, it is quite likely that wages are not in line with other companies. If that is the problem, then the correct action is to adjust wages to competitive levels. A one-time payment (even spread over years) will not fix the problem.

I tend to think that Apple's wages are competitive.

If employees are unhappy with something other than wages, then a monetary payment does not fix the problem. Employees could be unhappy about many things: working conditions, opportunities to work on fun projects, opportunities for professional growth, political involvement of the company, or projects deemed unethical by employees. (And more!)

Monetary compensation does not fix any of those problems. The problems remain, and the unhappiness of employees remains.

I suspect that in the middle of the COVID-19 pandemic, Apple requested (demanded?) that employees return to the office, and a number of employees -- senior-level employees -- refused. Not only did they refuse, they refused with "and if you make me return to the office, I will leave Apple and work for someone else". (This is all speculation. I am not employed by Apple, nor have I spoken with any Apple employees.)

Apple has not one but two problems.

The first problem is that employees are leaving, and Apple has addressed that problem with bonus payments.

The second problem is that Apple thinks this tactic is a good one. (They must think it is a good idea. Otherwise they would not have done it.)

Payments of this type must have been discussed at the highest levels of Apple. I suspect that even members of the board of directors were consulted. Did anyone object to these payments?

In the end, Apple decided to pay employees to address issues that are not related to compensation. It is the wrong solution to the problem, and it will probably make the situation worse. Bonus payments were offered to some employees. The other employees will resent that. Bonus payments vest over four years. After the last payment, employees who were given these payments will see their compensation drop significantly. They may resent that.

Apple has a problem. Retention payments don't address the underlying issues. Is Apple addressing the underlying issues? Possibly. (Apple would not advertise that.)

Let's see what happens at Apple over the next year. I expect to hear about changes in the organization, with the resignations of several senior executives.

I also expect that Apple will continue on its current course for its products and services. Those are sound, and have a good reputation with customers. I see no need to change them, other than the typical improvements Apple makes each year.

Tuesday, December 21, 2021

Moving fast and going far are not the same thing

There is an old saying: If you want to go fast, go alone; if you want to go far, go in a group.

One significant difference between Apple and Microsoft is that Apple manages product lines while Microsoft manages an ecosystem. That difference shapes how each company can move. Apple is, essentially, moving alone. It can (now) design its own hardware and software. Apple does still need raw materials, fabrication for its chips, manufacture of its cases and boxes, and assembly of components into finished goods. But Apple deals with two types of entities: suppliers (the companies that supply raw materials, chips, etc.) and customers (the people and companies that purchase computers and services).

Microsoft, in contrast, lives in an ecosystem that includes suppliers, PC manufacturers, developers, and customers (both individual and organizational). While Microsoft does design its Surface tablets and laptops, those tablets and laptops are a small part of the larger market. The laptops and desktops made by Dell, Lenovo, HP, and others are a large portion of the market.

Apple can move quickly, changing its processors from Intel to Apple-designed ARM in less than two years. Microsoft, on the other hand, must move more cautiously. It cannot dictate that Windows will shift from Intel to ARM because Microsoft does not control the manufacturers of PCs.

If Microsoft wants to shift personal computers from the current designs of discrete components to system-on-chip designs (and I believe that they do) then Microsoft must persuade the rest of the ecosystem to move in that direction. Such persuasion is not easy -- PC makers have lots invested in the current designs, and are familiar with gradual changes to improve PCs. For the past three decades, Microsoft has guided PC design through specifications that allow PCs to run Windows, and those specifications have changed gradually: faster processors here, faster bus connections there, faster memory at some times, better interfaces to graphics displays at other times. The evolution of personal computers has been a slow, predictable process, with changes that can be absorbed into the manufacturing processes of the PC makers.

The Microsoft "empire" of PC design has been, for all intents and purposes, successful. For thirty years we have benefitted from computers in the office and in the home, and those computers have (for the most part) been usable and reliable.

Apple benefitted from that PC design too. The Intel-based Mac and MacBook computers were designed in the gravity field of Windows. Those Mac computers were Windows PCs, capable of running Windows (and Linux) because they used the same processors, video chips, and bus interfaces as Windows PCs. They had to use those chips; custom chips would be too expensive and risky to make.

Apple has now left that empire. It is free of the "center of gravity" that Windows provides in the market. Apple can now design its own processor, its own video chips, its own memory, its own storage. Apple is free! Free to move in any direction it likes, free to design any computer it wants.

I predict that Apple computers will move in their own direction, away from the standard design for Windows PCs. Each new generation of Apple computers will be less and less "Windows compatible". It will be harder and harder to run Windows (or Linux) on Apple hardware.

Microsoft has a new challenge now. They must answer Apple's latest M1 (and M2) system-on-chip designs. But they cannot upend the ecosystem. Nor can they abandon Intel and shift everything to ARM designs. Apple has leveraged its experience with its 'A' series chips in phones to build the 'M' series chips for computers. Microsoft doesn't have that experience, but it has something Apple doesn't: an ecosystem.

I predict that Microsoft will form alliances with other companies to build system-on-chip designs. Probably with IBM, to leverage virtual machine technology (and patents) and possibly Intel to leverage chip fabrication. (Intel recently announced that it was open to sharing its fabrication plants for non-Intel designs.)

[I hold stock in both Microsoft and IBM. That probably biases my view.]

Microsoft needs to build experience with system-on-chip designs, and alliances can provide that experience. But alliances require time, so I'm not expecting an announcement from Microsoft right away. The first system-on-chip designs may be tablets and simple laptops, possibly competing with Chromebooks. Those first simple laptops may take two years of negotiation, experimentation, design, assembly, and testing before anything is ready for market. (And even then, they may have a few problems.)

I think Microsoft can achieve the goal of system-on-chip designs. I think that they will do it with the combined effort of multiple companies. I think it will take time, and the very first products may be disappointing. But in the long run, I think Microsoft can succeed.

If you want to move fast, go alone; if you want to go far, go in a group.


Wednesday, December 15, 2021

Everyone who is not Apple

Apple has direct control over the design of their hardware and software, a situation that has not been seen in the history of personal computers. I expect that they will enjoy success -- at least for a while -- with new, powerful designs.

But what about everyone else? What about Microsoft, the maker of Windows, Office, Azure services, Surface tablets and laptops, and other things? What about Dell and Lenovo and Toshiba and HP, the makers of personal computers? What about Google, the maker of Chromebooks and cloud services?

That's a big question, and it has a number of answers.

Microsoft has a number of paths forward, and will probably pursue several of them. For its Surface devices, it can design systems on a chip that correspond to Apple's M1 chips. Microsoft could use ARM CPUs; it has already ported Windows to ARM and offers the "Surface Pro X" with an ARM processor. Microsoft could design a system-on-a-chip that uses Intel CPUs; such a design would provide binary compatibility with current Windows applications. Intel chips generate more heat, but Microsoft has had success with Intel chips in most of its Surface line, so a system-on-a-chip with Intel could be possible. These paths mirror the path that Apple has taken.

Microsoft, unlike Apple, has another possible way forward: cloud services. Microsoft could design efficient processors for the computers that run data centers, the computers that host virtual instances of Windows and Linux. Such a move would ease the shift of processing from laptops and desktop computers to the cloud. (Such a shift is possible today; system-on-chip designs make it more efficient.) Microsoft may work with Intel, or AMD, or even IBM to design and build efficient hardware for cloud data centers.

Manufacturers of personal computers may design their own system-on-chip answers to the M1 processor. Or they may form a consortium and design a common chip that can be used by all (still allowing for custom system-on-chip designs and the current discrete component designs). Microsoft has, for a long time, provided a reference document for the requirements of Windows, and system-on-chip designs would follow that set of requirements just as laptops and desktops today follow those requirements.

PC manufacturers do lose some control when they adopt a common design. A common design would be available to all manufacturers, which prevents any one manufacturer from enhancing the design by selecting better components. Rather than shift their entire product line to system-on-chip design, manufacturers will probably use the system-on-chip design for only some of their offerings, keeping some products with discrete designs (and enhancements to distinguish them from the competition).

Google does not have to follow the requirements for Windows; it has its own requirements for Chromebooks. System-on-chip design is a good fit for Chromebooks, which already use both Intel and ARM chips (and few users can see the difference). The performance improvement of system-on-chip design fits in nicely with Google's plan for games on Chromebooks. The increase in power allows for an increase in the sophistication of web-based apps.

I am willing to wait for Microsoft's response and for Google's response. I think we will see innovative designs and improvements to the computing experience. I expect Microsoft to push in two directions: system-on-chip designs for their Surface tablets, and cloud-based applications running on enhanced hardware. Google will follow a similar strategy, enhancing cloud hardware and improving the capabilities of Chromebooks.

Sunday, December 5, 2021

Apple all by themselves

Apple is in a unique position. At least for personal computers.

For the first time in the history of personal computers, Apple is designing the entire device, from hardware to software.

The earliest personal computers (the TRS-80, the Commodore PET, and even the Apple I and Apple II) were built with components made by other manufacturers. Processors, memory, displays, disk drives, and even power supplies were assembled into personal computers. The Apple II used the 6502 processor made by MOS Technology (and second-sourced by others). The TRS-80 used Zilog's Z-80 processor. The original IBM PC used the 8088 processor from Intel, standard memory chips, and floppy disk drives (when it had them) from Tandon.

The manufacture of personal computers was always about using "off the shelf" components to build a PC. It made sense, as it allowed a small company to sell computers and let other companies specialize in components.

Now, Apple is building their own system-on-a-chip that contains processor, memory, graphics processor, and storage. Apple designs all of the electronics, the case, the power supply, and the display screen. Apple writes their own operating system and application programs.

Such a thing has never been seen with personal computers.

Which is not to say that such a thing has never been seen in computing. It has.

Prior to personal computers, the "we build everything" model was prevalent in computing. IBM used it for their mainframes. DEC used it for their minicomputers.

We cannot make comparisons between the mainframe age and the current age of computing. The markets are too different, the customers are too different, and the computers are too different. The fact that Apple designs everything about its computers is a happy correspondence to that earlier age.

But perhaps we can make some guesses about what Apple will do next.

For starters, they have a lot of control over the design of new products. In the old system, Apple had to use whatever processors or GPUs were available. The designs offered by "specialists" acted as a center of gravity for the market -- anyone using AMD's GPUs was working in a similar space.

Apple is no longer dependent on the designs of others. They do not have to use processors from Intel, or GPUs from AMD, or memory from another maker. Thus, Apple is able to implement its own designs. Those designs, we can reasonably expect, will deviate from the "center of gravity" of the old system. Apple can chart its own path.

Secondly, we can expect that Apple will design its products to help Apple. In particular, Apple will probably add diagnostics to its products, diagnostics that only Apple software will be able to access. The recent M1 MacBook Pro computers are suffering from memory allocation problems; we can expect that Apple will resolve this, at first with software, and later with enhancements to hardware to assist the software in identifying and reporting such issues.

Third, Apple can introduce new products on their schedule. Rather than wait for Intel or AMD to introduce new hardware, Apple can work on new hardware and release products when they are ready. I expect that Apple will release new Mac and MacBook computers every year -- although not every model. Perhaps Apple will use a three-year cycle, with updates to MacBook Air and iMac in one year, MacBook Pro and iMac Pro in a second year, and Mac Mini and Mac Pro in the third year. (In the fourth year, Apple repeats with updates to MacBook Air and iMac.)

We should note that Apple is not completely independent. While they design the hardware, they do not manufacture it. Apple relies on outside companies to fabricate chips, build components, and assemble those components into complete units. Will Apple build its own fabrication plants? Or buy existing ones? I suspect not. Said plants are large and expensive (even for Apple's budget) and come with lots of environmental issues. I expect Apple to leave that work to specialists.

All in all, Apple is in a good position to design its computers and sell them on its schedule.

The important question is: Will those computers be useful to their customers?

That's an idea for another time.

Sunday, November 21, 2021

CPU power and developer productivity

Some companies have issued M1-based Macbooks to developers. Their reason? To improve productivity.

Replacing "regular" PCs with M1-based Macbooks is nice, and it certainly provides developers with more CPU power, but does it really increase the productivity of developers?

In a very simple way, yes, it does. The more powerful Macbooks will let developers compile faster, build deployment packages faster, and run tests faster.

But the greater leaps in developer productivity have not come from performing the same steps faster.

Increases in developer productivity come not from raw CPU power, or more memory, or faster network connections, or higher-resolution displays. Meaningful increases come from better tools, which in turn are built on CPU power, memory, network connections, and displays.

The tools that have helped developers become productive are many. They include programming languages, compilers and interpreters, editors, debuggers, version control systems, automated test systems, and communication systems (such as e-mail or chat or streamed messages like Slack).

We build better tools when we have the computing power to support them. In the early days of computing, prior to the invention of the IDE (integrated development environment), the steps of editing and compiling were distinct and handled by separate programs.

Early editors could not hold the entire text of a long program in memory at once, so they had special commands to "page" through the file. (One opened the editor and started with the first "page" of text, made changes and got things right, and then moved to the next "page". It sounds like the common "page down" operation in modern editors, except that a "page" in the old editors was longer than the screen, and -- note this -- there was no "page up" operation. Paging was a one-way street. If you wanted to go back, you had to page to the end of the file, close the file, and then run the editor again.)

Increased memory ended the need for "page" operations.

The first integrated environment may have been the UCSD P-System, which offered editing, compiling, and running of programs. The availability of 64K of memory made this system possible. Unfortunately, the CPUs at the time could not run the virtual processor (much like the later JVM for Java) at a useful speed, and the p-System never achieved popularity.

The IBM PC, with a faster processor and more memory, made the IDE practical. In addition to editing and  compiling, there were add-ons for debugging.

The 80386 processor, along with network cards and cables, made Windows practical, which made network applications possible. That platform allowed for e-mail (at least within an organization) and that gave a boost to lots of employees, developers included. Networks and shared file servers also allowed for repositories of source code and version control systems. Version control systems were (and still are) an immense aid to programming.

Increases in computing power (CPU, memory, network, or whatever) let us build tools to be more effective as developers. An increase of raw power, by itself, is nice, but the payoff is in the better tools.

What will happen as a result of Apple's success with its M1 systems?

First, people will adopt the new, faster M1-based computers. This is already happening.

Second, competitors will adopt the system-on-a-chip design. I'm confident that Microsoft is already working on a design (probably multiple designs) for its Surface computers, and as a reference for other manufacturers. Google is probably working on designs for Chromebooks.

Third, once the system-on-a-chip design has been accepted in the Windows environment, people will develop new tools to assist programmers. (It won't be Apple, and I think it won't be Google either.)

Whatever the source, what kinds of tools can we expect? That's an interesting question, and the answer is not obvious. But let us consider a few things:

First, the increase in power is in CPU and GPU capacity. We're not seeing an increase in memory or storage, or in network capacity. We can assume that innovations will be built on processing, not communication.

Second, innovations will probably help developers in their day-to-day jobs. Developers perform many tasks. Which are the tasks that need help? (The answer is probably not faster compiles.)

I have a few ideas that might help programmers.

One idea is to analyze the run-time performance of programs and identify "hot spots" -- areas in the code that take a lot of time. We already have tools to do this, but they are difficult to use and run the system in a slow mode, such that a simple execution can take minutes instead of seconds. (A more complex task can run for an hour under "analyze mode".) A faster CPU can help with this analysis.
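
As a concrete (if minimal) sketch of what hot-spot analysis looks like today, here is Python's built-in cProfile pointed at a toy workload; the workload functions are invented for illustration. The profiling overhead is exactly the kind of cost that a faster CPU would shrink.

    # A minimal sketch of hot-spot analysis using Python's built-in profiler.
    # The work() function is an invented stand-in for a real program.
    import cProfile
    import io
    import pstats

    def slow_step(n):
        # Deliberately wasteful loop: this is the "hot spot".
        total = 0
        for i in range(n):
            total += i * i
        return total

    def fast_step(n):
        return sum(range(n))

    def work():
        for _ in range(50):
            slow_step(20_000)
            fast_step(20_000)

    profiler = cProfile.Profile()
    profiler.enable()
    work()
    profiler.disable()

    # Report the functions that consumed the most cumulative time.
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    print(stream.getvalue())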

Another idea is for debugging. The typical debugger allows a programmer to step through the code one line at a time, and to set breakpoints and run quickly to those points in the code. What most debuggers don't allow is the ability to go backwards. Often, a programmer stepping through the code gets to a point that is known to be a problem, and needs to identify the steps that got the program to that point. The ability to "run in reverse" would let a programmer "back up" to an earlier point, where he (or she) could see the decisions and the data that led to the problem point. Computationally, this is a difficult task, but we're looking at a significant increase in computational power, so why not?
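
GDB's "record" mode already offers a limited form of this, at a heavy recording cost. Below is a toy Python sketch of the underlying idea, recording a snapshot of local variables at every traced line so a programmer can look backwards after reaching the problem point. It is only an illustration of why reverse execution is expensive, not a real debugger.

    # Toy sketch of "running in reverse": record the local variables at every
    # line of a traced function, then walk the history backwards afterwards.
    # Real reverse debuggers record far more efficiently; the point is only
    # that recording history is costly, which is where extra CPU power helps.
    import sys

    history = []  # list of (line number, snapshot of local variables)

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "buggy":
            history.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    def buggy(values):
        total = 0
        for v in values:
            total += v
        average = total / len(values)   # the "problem point" if values is empty
        return average

    sys.settrace(tracer)
    try:
        buggy([3, 5, 7])
    finally:
        sys.settrace(None)

    # "Back up" through the recorded states, most recent first.
    for lineno, snapshot in reversed(history):
        print(lineno, snapshot)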

A third possibility is the analysis of source code. Modern IDEs perform some analysis, often marking code that is syntactically incorrect. With additional CPU power, could we identify code that is open to other errors? We have tools to identify SQL injection attacks, and memory errors, and other poor practices. These tools could be added to the IDE as a standard feature.
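
As a sketch of the kind of check an IDE could run continuously, here is a small analysis built on Python's ast module that flags SQL statements assembled by string concatenation or f-strings. The "execute" method name and the sample source are assumptions for illustration, not a complete rule set.

    # Minimal sketch of a static check an IDE could run as you type:
    # flag calls to a method named "execute" whose first argument is built
    # by string concatenation or an f-string (a common SQL-injection pattern).
    import ast

    SOURCE = '''
    def find_user(cursor, name):
        cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
        cursor.execute("SELECT * FROM users WHERE name = %s", (name,))
    '''

    class SqlConcatChecker(ast.NodeVisitor):
        def __init__(self):
            self.warnings = []

        def visit_Call(self, node):
            is_execute = isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
            if is_execute and node.args and isinstance(node.args[0], (ast.BinOp, ast.JoinedStr)):
                self.warnings.append(f"line {node.lineno}: SQL built from a dynamic string")
            self.generic_visit(node)

    checker = SqlConcatChecker()
    checker.visit(ast.parse(SOURCE))
    print("\n".join(checker.warnings) or "no warnings")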

In the longer term, we may see new programming languages emerge. Just as Java (and later, C#) took advantage of faster CPUs to execute byte-code (the UCSD p-System was a good idea, merely too early) new programming languages may do more with code. Perhaps we will see a shift to interpreted languages (Python and Ruby, or their successors). Or maybe we will see a combination of compiled and interpreted code, with some code compiled and other code interpreted at run-time.

More powerful computers let us do things faster, but more importantly, they let us do things differently. They expand the world of computation. Let's see how we use the new world given to us with system-on-a-chip designs.

Thursday, November 11, 2021

M1 is not the only option for productivity

A number of companies have announced that they are equipping their developers with M1 MacBooks, to improve performance of tasks such as builds and, I presume, tests.

The thinking runs along these lines: the tasks developers perform are important, some of these tasks take a long time, the new M1 MacBooks perform these tasks quickly, therefore providing developers with M1 MacBooks is an investment that improves productivity. (And an increase in productivity is a good thing.)

The supporting arguments often use the time to run a build, or the time to perform automated tests. The M1 MacBooks, according to the argument, can perform these tasks much faster than the current equipment. The implied benefits are often described as a reduction in expenses, which I believe is an incorrect result. (The company will continue to pay its developers, so their annual expenses will not change as a result of the new MacBooks -- except for the cost of the MacBooks.)

But there is another aspect to this "rush to faster computers" that I think has been overlooked. That aspect is cloud computing.

If one is using laptops or desktops with Windows, one can move that work to virtual instances of Windows in the cloud. Microsoft's "Windows 365" service offers Windows in the cloud, with different options for processor power, memory, and storage. One can rent a fast processor and get the same improvement in computing.

Let's look at some numbers. A new M1 MacBook Pro with a 14-inch screen costs $2000 and with a 16-inch screen costs $2500. (There are multiple configurations; these are the lowest prices.)

If those MacBooks last 3 years (a reasonable assumption) then the amortized costs are $56 per month or $69 per month.

Now let's consider an alternative: Windows virtual machines in the cloud. Microsoft's "Windows 365" offers different configurations for prices ranging from $31 per month to $66 per month.

Of course, one still needs a local PC to access the cloud-based Windows, so let's add that cost, too. But we don't need a high-end laptop: The local PC is simply a fancy terminal: a device to accept keystrokes and mouse clicks and send them to the cloud-based PC, and accept screen updates and display them to the user. We don't need a lot of processing power for that.

One can get a decent 14-inch laptop for $600 (less if you hunt for bargains) and a decent 15.6-inch laptop for about the same. Assuming a purchase cost of $600, the monthly addition is about $17, which puts the cloud-based configuration somewhere between $48 and $83 per month, depending on the Windows 365 tier. The high end is a bit more than the cost of the local MacBook, but not that much more. And keep in mind that with Windows 365, Microsoft handles some tasks for you, such as updates.
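
Here is the arithmetic, spelled out as a small script; the prices and the three-year life are the assumptions stated above.

    # The amortization arithmetic from above, spelled out.
    # Prices and the three-year life are the assumptions stated in the post.
    MONTHS = 3 * 12

    macbook_14 = 2000 / MONTHS      # about $56 per month
    macbook_16 = 2500 / MONTHS      # about $69 per month

    cheap_laptop = 600 / MONTHS     # about $17 per month, used as a terminal
    windows_365_low = 31            # lowest Windows 365 tier, per month
    windows_365_high = 66           # highest tier quoted above, per month

    cloud_low = windows_365_low + cheap_laptop
    cloud_high = windows_365_high + cheap_laptop

    print(f"MacBook Pro 14-inch: ${macbook_14:.0f}/month, 16-inch: ${macbook_16:.0f}/month")
    print(f"Cloud configuration: ${cloud_low:.0f}/month to ${cloud_high:.0f}/month")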

I don't consider a cloud-based solution for MacBooks, because cloud-based MacBooks are different from cloud-based Windows PCs. Windows PCs are often virtualized instances running on high-end hardware; MacBooks in the cloud are Mac computers in a datacenter -- not virtualized instances. A MacBook in the cloud is really just a MacBook at a remote location.

My point is not that cloud-based Windows PCs are better than MacBooks, or that local MacBooks are better than local Windows PCs.

My point is that one has different options for computing. Local MacBooks are one option. Local Windows PCs are another option. Cloud-based Windows PCs are an option. (And if you insist, cloud-based Macs are an option.)

Some companies are pursuing a strategy of local MacBooks. That strategy may be good for them. It does not automatically follow that the same strategy is good for everyone. (Nor does it follow that the strategy is good for them; time will tell.)

My advice is to consider the different options for computing, review your needs and your finances, and select a strategy that works for you.  

Thursday, October 28, 2021

System-on-chips for everyone!

Apple has demonstrated that the system-on-chip design (seen in their new MacBooks, iMacs, and Mac Minis) is popular.

What does system-on-chip design mean for other forms of computing? Will other manufacturers adopt that design?

An obvious market for system-on-chip design is Chromebooks. (If they are not using it already.) Many Chromebooks already use ARM processors (others use Intel) and moving the ARM-based Chromebooks to ARM-based system-on-chip design is fairly straightforward. Chromebooks also have a narrow design specification, controlled by Google, which makes a system-on-chip design feasible. Google limits the variation of Chromebooks, so it may be that the entire Chromebook market could be served with three (or possibly four) distinct designs.

Chromebooks would benefit from system-on-chip designs in two ways: lower cost and higher performance. One may think performance is unimportant to Chromebooks because Chromebooks are merely hosts for the Chrome browser, but that is not true. The Chrome browser (indeed, any modern browser) must do a lot, from rendering HTML to running JavaScript to playing audio and video. They must also handle keystrokes and focus, tasks normally associated with an operating system's window manager. In addition, browsers must now execute web-assembly (WASM) for some applications. Browsers are complex critters.

Google also has their eyes on games, and improved performance will allow more Chromebooks to run advanced games.

I think we can safely assume that Chromebooks will move to system-on-chip designs.

What about Windows PCs? Will they change to system-on-chip designs? Here I think the answer is not so obvious.

Microsoft sets hardware specifications for Windows. If you want to build a PC that runs Windows, you have to conform to those specifications. It is quite possible that Microsoft will design their own system-on-chip for PCs and use them in Microsoft's own Surface tablets and laptops. It is possible that they will make the design available to other manufacturers (Dell, Lenovo, etc.). Such a move would make it easier to build PCs that conform to Microsoft's specifications.

A system-on-chip design would possibly split designs for PCs into two groups: system-on-chip in one group and traditional discrete components in the other. System-on-chip designs work poorly with expansion slots, so PCs that use such a design would probably have no expansion slots -- not even one for a GPU. But many folks want GPUs, so they will prefer traditional designs. We may see a split market for Windows PCs, with customizable PCs using discrete components and non-upgradable PCs (similar to Chromebooks and Macbooks) using system-on-chip designs.

Such a split has already occurred in the Windows PC market. Laptop PCs tend to have limited options for upgrades (if any). Small desktop PCs also have limited options. Large desktops are the computers that still have expansion slots; these are the computers that let the owner replace components such as RAM and storage.

I think system-on-chip designs are the way of the future for most of our computers (laptops, desktops, phones, etc.). I think we'll see better performance, lower cost, and improved reliability. It's a move in a good direction.

Monday, October 18, 2021

What do we do about Mac Pro?

In all the excitement about Apple's new MacBook Pro computers, we have forgotten an obscure member of the Macintosh family: The Mac Pro.

I haven't looked carefully at the specifications, but it seems that the new MacBook Pro, equipped with the M1 Max processor and lots of memory, surpasses the performance of the Mac Pro computer.

And thinking about the Mac Pro, it seems that Apple has created a bit of a problem for itself. Or at least for the Mac Pro computer.

The Mac Pro is the one computer in Apple's product line that offers expansion slots. The other computers (MacBook, Mac Mini, and iMac) are slotless. The M1 versions of those computers are 'fixed' in that they cannot be upgraded with new hardware. What you buy is what you get... for the life of the computer.

Apple's system-on-a-chip design of the M1, M1 Pro, and M1 Max integrates everything into that chip. There are no off-chip functions. It contains CPU, GPU, memory, disk, and a handful of extra things. Computers based on M1 designs have no expansion slots.

But the Mac Pro still has expansion slots, slots that don't play well with system-on-chip designs.

What does Apple do about the Mac Pro?

I see three possible futures:

First, Apple may design an M1 system chip that supports expansion slots. I see little point to this, as the primary use of expansion slots is for GPUs, and Apple's integrated GPU provides superior performance. One would do better to purchase a MacBook Pro with the M1 Max chip.

Second, Apple may keep the Mac Pro with its current arrangement, either with an Intel processor or an ARM processor, and the expansion slots for external GPU cards. But I see little point to this, for the same reason as above: the MacBook Pro with the M1 Max processor is the better deal.

A third possibility is that Apple designs a new system-chip for the Mac Pro, one that doesn't support expansion slots but provides performance that surpasses the M1 Max. Such a design would be quite similar to the Mac Mini: a small, non-expandable block that requires keyboard, mouse, and display. (The Mac Mini Pro? The Mac Mini Max?)

That last option is equivalent to Apple discontinuing the Mac Pro. I think that is a definite possibility. The new M1 Max system-chip may be enough for even the hungriest of power users. The Mac Mini may be the new Mac Pro.


Sunday, October 17, 2021

The Next New Thing: Resilient Supply Chains

Lots of news stories talk about problems with "supply chains". Various events have disrupted supply chains in multiple industries. (Automobile manufacturers cannot get all the computer chips they need for their cars, limiting sales.)

It seems to me that the next phase is to increase the resiliency of supply chains. Manufacturers will want to ensure reliability of deliveries, to ensure the reliability of their sales.

But resiliency in a supply chain is not easy.

Let's take the example of laptop PCs. PCs have a number of components, and a vendor buys these components and assembles them into laptops. Those components include memory, disk drives (perhaps SSDs), keyboards, power adapters, and displays (and many more components, but you get the idea).

A naive approach to a resilient supply chain would be to find two (or more) sources for all of the components. Two makers of memory, two makers of disk drives, two makers of keyboards, etc. With multiple suppliers, you feel greater confidence in your supply. A disruption to one supplier might not affect the second, so if supplier A cannot meet your needs, you can call on supplier B.

This assumes that supplier A and supplier B have independent sources for their products. But that may not be true. If they supply you with disk drives, for example, they must buy various pieces to make a single drive: platters, read/write heads, arms to move the heads, actuator motors, etc. Suppose both suppliers get their disk platters from the same source? A disruption to that source would affect both suppliers, and (indirectly) the laptop vendor.

Resiliency -- true resiliency -- requires multiple sources all the way down the chain. Managing the multiple sources (and ensuring that the different sources are unique) requires information that is not generally available in today's contracts. Obtaining that information may not be easy, as suppliers may be unwilling to reveal their sources.
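
One way to see why this is hard: the supply chain is a graph, and "independent sources" means the suppliers' upstream trees must not intersect. A small sketch, with invented supplier names:

    # Sketch of the "independent sources" question as a graph problem.
    # Supplier names are invented; the point is that two nominally independent
    # suppliers can converge on the same upstream source.
    upstream = {
        "DriveCo A": ["PlatterWorks", "HeadTech"],
        "DriveCo B": ["PlatterWorks", "ArmMotors"],   # shares PlatterWorks with A
        "PlatterWorks": [],
        "HeadTech": [],
        "ArmMotors": [],
    }

    def all_sources(supplier, graph):
        # Return every direct and indirect upstream source of a supplier.
        seen = set()
        stack = list(graph.get(supplier, []))
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(graph.get(s, []))
        return seen

    shared = all_sources("DriveCo A", upstream) & all_sources("DriveCo B", upstream)
    print("Shared upstream sources:", shared or "none")
    # A disruption at PlatterWorks hits both "independent" disk drive suppliers.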

Convincing suppliers to coordinate their activities may require time and effort (and may run afoul of anti-trust and anti-competition laws) and may increase the cost of supplies. A large purchase with a single vendor may see a pricing discount based on volume; a purchase split across multiple suppliers may see lower discounts, and some suppliers may want higher prices. 

I expect manufacturers will work on resiliency, but say little about it. Resiliency doesn't play well in advertisements the way "green", environmentally safe processes or fair treatment of workers do.

In the end, resiliency is a trade-off of cost and effort for reliability.

Thursday, October 14, 2021

What Apple's new products tell us about Apple

Next Monday, Apple will make an announcement. Many expect a new version of the MacBook Pro, one based on a new M1X system-on-a-chip.

I have some ideas too.

I expect a new Macbook Pro, based on the M1X. I also expect that the announcement will focus on graphics: pictures, videos, and especially games.

I do expect some discussion of performance. Will Apple compare the new Macbook to its previous model? Or will it compare the new Macbook Pro to a Windows PC?

If the latter, will Apple compare it to top-of-the-line PCs? Or to comparably priced PCs? Or to "the most popular PCs on the market"? Each is a different comparison. Which Apple picks will tell us how Apple views competition to the Macbook. If they compare to top-of-the-line PCs, they signal performance. If they compare to comparably priced PCs, they signal value. If Apple compares its top laptop to "the most popular PCs", Apple signals... that it is selling on emotion and not reason.

Let's see what Apple does, and says, in this announcement.

Monday, October 11, 2021

More pixels is not always better


Apple, some time ago, introduced their new iMac computer. They made a point of showing five video streams open at the same time (for editing).

That seems a pretty specialized application.

Most folks probably want to read email, write a few documents, and work with a few spreadsheets. And maybe use a few web sites. Some folks will edit sound, some will edit video. A few of those may edit two video streams at the same time.

The number of people editing five video streams at the same time is a tiny fraction of the total number of users (of any brand of computer).

We should keep in mind that pixels are not free. They exist. They have to be manipulated. Our computers need the circuitry to drive the pixels, and the RAM to hold values for the pixels. That means more memory circuits, and power to drive those circuits. That power (in a laptop) comes from the battery, and more pixels and more circuits drawing current means a shorter battery life.

(The extra power may be small -- very small -- and it may be that we can save power by reducing screen brightness.)
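
A rough back-of-the-envelope calculation shows the framebuffer memory alone, assuming 4 bytes per pixel; it ignores the GPU's working buffers and power draw, so it understates the real cost.

    # Back-of-the-envelope: framebuffer memory alone, at 4 bytes per pixel.
    # This ignores GPU working buffers and power draw, so it understates the cost.
    def framebuffer_mb(width, height, bytes_per_pixel=4):
        return width * height * bytes_per_pixel / (1024 * 1024)

    for name, (w, h) in {
        "1080p panel": (1920, 1080),
        "4K panel": (3840, 2160),
        "5K panel": (5120, 2880),
    }.items():
        print(f"{name}: {framebuffer_mb(w, h):.0f} MB per frame")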

The design of a computer is a series of trade-offs between capabilities, size, power, and cost. A computer with a very small physical size (such as the Apple Watch) can perform a number of tasks, but it will have a small display and a limited amount of memory. A capable computer with a powerful processor and lots of memory will require more space and a higher cost.

When Apple or any manufacturer designs a computer, they decide on various factors and make various trade-offs. Apple, with their iMac, has selected certain points on the cost/capability curve. Those decisions affect every purchaser of an iMac. A user who wants to edit five video streams at once may appreciate the large number of pixels and the powerful graphics processor. A different user, whose needs are more mundane, won't use the video capabilities to their fullest extent, but has to pay for them anyway.

Of course, this logic applies to any consumer good, from computers to televisions, from washing machines to automobiles.

Apple knows this. Apple knows that the higher cost of the iMac will convince some people to buy other computers (some Apple, some other brands). Therefore, Apple is willing to give up those customers, in exchange for the bragging that was in their advertisement. They must think that the advertisement was the better deal.

I'm not convinced. While I have several Apple computers, I'm not looking to buy a new iMac (or a new Macbook, or a new Mac) in the near future. I am instead considering Microsoft's new Surface, either the tablet or the laptop version. Because more pixels is not always better.

Sunday, October 3, 2021

Windows and Linux are not the same

We like to think that operating systems are commodities, that Windows performs just as well as macOS, and they both perform as well as Linux. I'm not sure about macOS, but I can think of one significant difference between Windows and Linux, and that difference may affect the lifespan of the hardware.

Specifically, the difference between Windows and Linux may affect the hard disk drive, when it is an SSD (solid state disk). SSDs have a limited lifespan, measured in write (program/erase) cycles. This is important because Windows and Linux show different behavior with disk activity.

My experience is that Linux has minimal disk activity. Linux loads, creates a login session, does a few more things (I suspect that it runs 'apt' for an update) and then sits and waits. No disk activity.

Windows is quite different. It loads and creates a login session (just like Linux). But then it keeps doing things. Computers with disk activity lights show this activity. (Is Windows downloading updates from Microsoft servers? Checking for malware? I don't know. But it's doing something.) And it does this for at least 30 minutes.

That's before I log in to Windows, and before I launch any applications, or check my e-mail, or visit web sites.

After I log in, Windows does more. One can see the disk activity (on PCs that have status lights). When I check the CPU usage (as shown by Task Manager), I see lots of different tasks, many with vague names such as "Local Service".

Not all of this is caused by Microsoft. My work client has supplied a laptop that runs Splunk, McAfee, and a few other third-party applications (all installed by my client) and they wake up and do things every few minutes or so. All day long.

The immediate thought from this disk activity is: this cannot be good for SSDs. Each write operation chips away at the lifespan of the SSD. (Old-style spinning hard disks are much less susceptible to this effect.)

The constant activity in Windows means that Windows will "consume" an SSD much quicker than Linux.
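
On Linux, one can check the claim directly by sampling the "sectors written" counter in /proc/diskstats; a rough sketch follows. The "sda" device name is an assumption, and on Windows the equivalent numbers come from Performance Monitor's disk counters.

    # Rough Linux-side check of write volume: sample the "sectors written"
    # column of /proc/diskstats twice and report how much was written in between.
    import time

    def sectors_written(device="sda"):
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    return int(fields[9])   # field 10: sectors written
        raise ValueError(f"device {device!r} not found")

    before = sectors_written()
    time.sleep(60)
    after = sectors_written()

    # /proc/diskstats counts in 512-byte sectors.
    print(f"Wrote about {(after - before) * 512 / 1_000_000:.1f} MB in one minute")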

I'm certain that Microsoft is aware of this issue. I'm guessing that there is not much that they can do about it. Windows was designed to run lots of tasks on start-up, and throughout the day. (Also, it's not really Microsoft's problem. The fact that Windows "burns out" SSDs means that people will replace the disks, or possibly replace the whole PC. People will view this problem as a problem of hardware, not a problem with Windows.)

I tend to keep computers for a long time. For computers that run Windows, I look for systems that use the older hard disks and not SSDs. That's my strategy. Let's see how it works!

Tuesday, September 28, 2021

Chromebooks

Google has the Chromebook, a lightweight laptop that runs Chrome (and nothing else).

Why is Google the only supplier of Chromebooks? Or more specifically, why is it that only Chrome has this arrangement? Why is there no lightweight laptop for Windows that runs only Edge (and perhaps Remote Desktop and Powershell and nothing else)? Why is there no lightweight laptop that runs Apple's Safari browser (and nothing else)? Why is there no lightweight laptop that runs Firefox (and nothing else)? I recognize that hardware manufacturers, in coordination with Google, provide the Chromebook. Therefore, technically, Lenovo and Dell and Samsung (and others) are suppliers of the Chromebook. But you know what I mean.

Competitors to the Chromebook need three things: the browser, the operating system, and the hardware. None are trivial. All must work together.

Google has succeeded in building the complete stack (hardware, operating system, and browser) and also  provides web-based applications. Users of Chromebooks have the tools and they have destinations.

Can Microsoft build an equivalent stack? The apparent answer is "no". Microsoft tried first with the original Surface tablet (the "Surface RT"), second with "Windows S mode", and third with "Windows 10X". (None of these were quite equivalent to the Chromebook, as they ran more than just the browser, but they did run a subset of Windows applications.) The first two were rejected by customers; the last was killed before it was released. Windows 11, with its requirements for powerful processors, will not be available for an inexpensive, lightweight, browser-centric experience. I doubt that Microsoft will introduce a new operating system (or maintain a slimmed version of Windows 10) for a low-margin market.

Can Apple build a lightweight laptop that runs only a browser? I think the strict answer is "yes". Apple has the technical talent to build such a stack. I'm not sure that Apple could convince users to switch from their current hardware to a lightweight laptop (a "SafariBook"?). I am confident that Apple makes more money by selling the current hardware (and apps to run on that hardware), so they have no incentive to switch customers to a Chromebook-like laptop. So while Apple could build and sell a "SafariBook", they won't. There is more profit in heavyweight laptops.

Mozilla is in a poor position to design and sell a browser-oriented laptop (a "MozillaBook"?). They have the browser, but not the operating system or the hardware. They need manufacturers such as Dell or Samsung to build the hardware, and those manufacturers may decline, fearing Google's wrath. They may be able to leverage Linux for an operating system, much as Google did, but it would be a significant (read that as "expensive") effort.

The makers of other browsers face a harder challenge than Mozilla faces. Not only do they need the operating system and the hardware, their browsers have tiny market share. Assuming that the expected customer base for a boutique-browser laptop would be their current user base, the development costs for a laptop would be difficult to amortize over the units sold. Laptops with the Opera browser, for example, would be more expensive than the typical Chromebook.

Amazon has done impressive things with its Kindle book readers and Fire tablets, but has not introduced a Chromebook-like laptop. Probably because its book readers and tablets lock users into the Amazon system for purchases of books and music, and that is not possible with a browser.

So my conclusion is that we're stuck with Google Chromebooks, with no hope for a competing product. Our choice is a Chromebook or a laptop with a full-sized operating system. My depressing forecast is that we will never see a competitor to the Chromebook.

I would be happy to be proven wrong.


Tuesday, September 21, 2021

Functional programming requires different discipline

I am worried about the programming industry. Specifically, I fear that we may have trapped ourselves in the object-oriented programming paradigm. I can explain, but it will take some time. Please bear with me.

Paul Graham has written several articles, most of them (if not all of them) thoughtful and thought-provoking. Some of them discuss programming languages. One of them (I cannot find it now) discussed the idea of programming languages and how different languages hold different constructs. If I remember correctly, Graham posited that older programming languages were simpler and modern languages had constructs that were not available in older languages.

If you "stacked" the languages from oldest (on the bottom) to newest (on the top) then programmers could pick their favorite language and look up (at newer languages) and down (at older languages). Graham commented that from such a view, the older languages made sense, because one's favorite language (from which one looked down) contained the things in the older languages. Older languages had constructs that were identical or similar to your language. They didn't have all of the constructs, but the ones that they did have were recognizable.

Conversely (Graham continued), looking up the stack at newer languages was less clear. Those newer languages had constructs and ideas that weren't in your favorite language, and weren't known to you. They looked weird.

I'm not sure I agree completely with this idea. If we start with a simple stack of assembly language, FORTRAN, COBOL, BASIC, Pascal, C, C++, and Java, we can see that yes, later languages have features that are not available in earlier languages. Pascal lets you define records and sets, which are not available in BASIC or FORTRAN. C++ and Java have classes and allow overloading of functions. But the view from new to old is not always clear. BASIC had GOTO, which has all but been removed from modern languages. COBOL had level-88 values, which do not exist in newer languages. FORTRAN-66 had the arithmetic IF statement with its three-way branching, which looks weird from any other language, old or new. The simple rule "old is familiar and newer is strange" doesn't hold.

Instead of programming languages, perhaps it is better to think about programming paradigms. These are broader and not tied to a single language. I can see four major programming paradigms: simple procedural programming, structured programming, object-oriented programming, and functional programming. We developed these paradigms in that sequence, with procedural programming in the 1960s (FORTRAN, COBOL, BASIC), structured programming in the 1970s (Pascal, C), object-oriented programming in the 1990s (C++, Java, C#), and functional programming in the 2000s (Haskell, Erlang). [The lists of languages are not complete, and the years reflect the popularity of the languages, not the initial design.]

My personal history with programming had all of these paradigms. Over time, I moved from one paradigm to another. I started with simple procedural programming (BASIC, FORTRAN) and later moved to structured programming (Pascal, C). That transition was fairly easy. I understood the concepts and had only to learn discipline to apply them.

Later, I moved to object-oriented programming (C++, Java, C#). Changing from structured programming to object-oriented programming was more difficult. That may have been due to age (I was fifteen years older) or perhaps the learning method (I learned structured programming in a classroom; object-oriented programming was on my own). It may be that object-oriented programming is that much harder than structured programming. But learn it I did.

Recently I have looked into functional programming, but only out of curiosity. I'm not using functional programming for any serious projects. (At least not yet.)

Notice that my progress was moving up the stack of programming paradigms. I started with the simplest paradigm (procedural) and moved to structured programming and then to object-oriented programming.

Functional programming is different from the other programming paradigms. Procedural programming is the base, upon which is built structured programming. Object-oriented programming is built on top of structured programming. The three of them make a nice little tower.

But functional programming is not an extension of object-oriented programming. It doesn't include the concepts of OOP (data-hiding and polymorphism). You don't build functional programs out of classes. (I recognize that there are programming languages that combine functional programming and object-oriented programming, but they are hybrids. More on them later.)

Functional programming, in my view, is an alternative to structured programming. It sits by the side of our tower of paradigms, not on top. Just as structured programming imposed discipline on procedural programming, functional programming imposes discipline on procedural programming -- but a different set of rules.
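
A small illustration of that different set of rules, in Python: the same task written once in the structured style (a loop that mutates an accumulator) and once in a functional style (pure functions, no reassignment). Neither is how a Haskell programmer would write it; it only shows the shift in habits.

    # The same task under two disciplines: structured style with mutation,
    # and a functional style built from pure functions with no reassignment.
    from functools import reduce

    prices = [19.99, 5.00, 42.50, 3.25]

    # Structured style: a loop that mutates an accumulator in place.
    total_with_tax = 0.0
    for p in prices:
        total_with_tax += p * 1.06

    # Functional style: no mutation, just composed pure functions.
    add_tax = lambda p: p * 1.06
    functional_total = reduce(lambda acc, p: acc + p, map(add_tax, prices), 0.0)

    assert abs(total_with_tax - functional_total) < 1e-9
    print(functional_total)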

To transition from object-oriented programming to functional programming, one has to forget object-oriented programming, and then forget the discipline of structured programming, and then -- and only then -- can one learn the discipline of functional programming. (The hybrid of OOP and FP is another level on top of functional programming, much as OOP is another level on top of structured programming. We're building another tower of programming paradigms, with procedural at the bottom, then functional programming, and then object-oriented/functional programming on top.)

Let's get back to the transition from object-oriented programming to functional programming, by forgetting object-oriented programming and structured programming and building functional programming on top of procedural programming.

That transition is difficult, but not impossible. At least, not for me, because of the sequence in which I learned programming. I started with procedural programming, and then added structured programming and then object-oriented programming. In my brain, I can ignore the later additions and recall pure procedural programming. (I can still write programs in BASIC.)

But that is me. I suspect it holds for programmers who followed that same path.

What about other programmers?

For the past two decades, programmers have been taught object-oriented programming, and nothing else. They were not taught procedural programming, followed by structured programming, followed by object-oriented programming. No indeed. The primary programming language for learning was C++, later replaced by Java, later replaced by C#, and today Python (and possibly JavaScript).

We have been teaching people how to write object-oriented programs, and nothing else. That's going to make the transition to functional programming extremely difficult. Modern-day programmers, who learned only object-oriented programming, cannot simply forget "the new bits" and revert to the old style of procedural programming.

One cannot simply jump from an object-oriented programming language into a functional programming language. For programmers who know object-oriented programming and nothing else, they cannot "forget the new bits" and fall back on the old style, because everything, to them, is connected. Switching to functional programming is a complete reset. They have to forget everything (well, almost everything) and start over.

That's a big task.

I'm not saying that functional programming is wrong, or that we shouldn't use it. Nor am I saying that only the really smart programmers should attempt to move from object-oriented programming to functional programming.

My point is that the transition requires one to unlearn a lot of techniques, techniques that have become habits. The transition is a large change, and will require time, effort, and patience.

Friday, September 17, 2021

An iPad is not a Chromebook

Apple announced some new products this week. With the announcements, Apple made some claims about performance.

Apple compared the iPad to the Chromebook. Specifically, Apple claimed that the performance of the iPad was superior to the Chromebook.

The comparison is a misdirection. It doesn't make sense to compare an iPad to a Chromebook. And with changes in the Windows world, it doesn't make sense to compare MacBooks to Windows laptops, either.

Here's why: Apple tablets and Chromebooks use two different models of computing. Apple designs its applications to run on the local device. (Apple raised this point in their presentation, to emphasize privacy.) Chromebooks are designed to run apps on the web. For iPhones and iPads, the capability of the local processor is important. Apple needs hefty processors in its phones and tablets. For Chromebooks, it's more important to look at the cloud servers and the network connection. Google needs hefty processors, but in the servers. The processor in the Chromebook need only be powerful enough to send and receive data, and to render the screen image.

Google isn't alone in moving processing to web servers (or cloud servers). Microsoft is doing the same thing with its Office products and applications such as Teams. The computing model for Windows started as local processing, back in the 1990s. Today, some processing occurs on the local system and some processing occurs in the cloud. 

More and more, comparing Apple laptops to Windows laptops and comparing Apple phones to Android phones is comparing apples to oranges. (If you can forgive the pun.)

The difference in computing models guides the design for hardware. Apple has to develop fast processors for its tablets and laptops -- all of the processing occurs there. Microsoft and Google don't have the same pressure, because they can shift heavy processing to cloud servers. Google shifts almost all processing to servers, and Microsoft is gradually redesigning its applications to take advantage of cloud servers. The result is that Microsoft and Google don't need superfast processors in laptops and tablets. (Some buyers of Windows PCs, especially gamers, may seek the fastest processor, but for the average tasks, the processor is unimportant.)

I'm a little confused by Apple's comparison of its new processors to Chromebooks. Apple made a big point about it, but it doesn't make sense. A better comparison would be Apple comparing the new phone and tablet to previous generations of Apple's iPhone and iPad.

Unless -- Perhaps the new processors in Apple's latest tablet and phone are not that much faster than the previous processors. Perhaps the new A15 processor is mildly faster than the A14. In that case, the comparison between A14 and A15 would be... unimpressive. It might be that comparing the new iPad to a Chromebook makes for better marketing. Apple can throw out impressive-sounding factoids such as "30% faster than the most popular Chromebook".

I'm sure that journalists and Apple enthusiasts will compare the new iPhones and iPads to their predecessors and report the results. We should see those results shortly after the new iPhones and iPads become available.

What if the third-party, apples-to-apples comparisons show that the new phones and tablets are only slightly faster than the previous generation? Should we abandon Apple equipment? Not at all. Apple equipment is still well-designed and well-engineered. But we should take a more skeptical view of the information that Apple provides in its hour-long "event" advertisements.

Looking ahead, we can expect a similar hour-long event (advertisement) for the next generation of Macbook laptops and Mac desktops. As with phones and tablets, comparing the performance of a Macbook to a Windows laptop is not always meaningful. (Especially if the Windows laptop uses a lot of server-based apps, or virtual workstations.) If Apple touts the performance of the new Macbook against a Windows laptop (or worse, a Chromebook) then I expect the performance improvement of the new Macs will be... unimpressive. On the other hand, if Apple compares apples to apples, and provides impressive performance comparisons of new Macbooks against older Macbooks, then we can be reasonably certain that the new Macs perform much better.

Let's see what happens at the next Apple advertising event.

Thursday, September 9, 2021

Remote work and employer credibility

Companies (many of them) want employees to return to the office. Employees (many of them) want to continue working from home. There is a danger here, because managers and workers are looking at different things, and that difference can lead to a loss of confidence and even a loss of credibility.

Managers see little reason to delay returning to the office. (This was before the latest wave of Covid from the "delta" variant.) A common reason given is to "improve productivity". There may be some gains with people working in an office, but the argument is lacking. I suspect senior managers, knowing that the accountants are looking at expenses, know that they risk losing buildings and office space. (If the office is empty, why continue to pay for it?) But let's go with the "improve productivity" argument.

Employees feel differently. They think that they are more productive working from home. Not having commutes in the morning and evening helps this perception. Shifting back to the office means that employees will have to wake earlier, drive (or take the bus) to the office, possibly pay tolls or parking, and then make the same trip in the evening. They lose perhaps two hours each day to the commute. Their productivity drops, as they would be doing the same work but in ten hours, not eight.

So when managers say "everyone must come back to the office" and employees ask "why" and managers say "for productivity" there is a definite discrepancy. Employers may see a modest gain in productivity (and a possible reduction in expense as they cancel contracts for conferencing software) but employees see a large decrease in productivity (due to commute times) and an increase in expenses (gasoline, transit fare, tolls, lunches). Employees also have to wear nicer clothes -- no more t-shirts and torn jeans!

I suspect that few senior managers are considering the change from an employee's point of view. Most probably think that "going back to work-in-the-office" is easy, as employees were doing this prior to the Covid-19 pandemic.

I also suspect that few employees are thinking in terms of economics, and instead simply have the feeling that work is better and more productive with "work-from-home".

The discrepancy about productivity remains, whether we analyze it via numbers or via emotions. And that discrepancy is a problem. Employers claim that "work in the office" gives better productivity, and employees think "work from home" gives better productivity. When senior managers call workers back to the office and claim "higher productivity in the office", employees don't believe them.

The worst position a management team could take is probably the position of "we can't operate with employees in remote locations". That is patently false, as the company has been doing just that for more than a year.

But the softer "improved productivity" argument also has problems. Employees don't see it that way, and once employees don't believe one thing senior managers claim, they start to question everything that senior managers claim.

I think that managers can avoid this loss of credibility. I think it is managers that must take steps to avoid the problem, not employees.

First, managers must recognize that they are asking employees to shoulder the costs of commuting to the office. They must also recognize that while employees were willing to bear these costs prior to the pandemic, they did not have to pay them while working from home, and asking them to come to the office means paying those costs again. The change from work-from-home to work-in-the-office is not cost-free to employees.

Second, managers should recognize that some employees are more productive while working from home, and working from home can be an effective way to contribute to the company. Some employees may welcome a return to the office, and may be more productive there. That does not necessarily hold for all employees. If managers care about productivity, they will work with employees and craft policies that maximize productivity.

Third, managers must recognize that business relationships, including employer-employee relationships, are built on trust. They must act in ways that establish and nurture that trust. They must maintain credibility. If they don't, they face a significant loss of productivity, either through attrition or through apathy or even hostility.

It's nice to think that we can simply go back to the way things were before Covid-19. The world has changed, and we cannot go back. We must move forward, with the knowledge that work-from-home has given us.

Monday, September 6, 2021

Employee productivity and remote work

The first question is: how does one know that employees, working remotely, are in fact working?

It's not a trivial question, and the answer is far from simple. Too many managers, I suspect, think that their remote employees are "goofing off" during the time they claim to be working. Some managers may believe that employees are working only part time (yet getting paid for a full time job). A few managers may suspect their employees of working multiple jobs at the same time.

It is probable that all of the above scenarios are true.

The second and third questions are: Do managers care? And should managers care?

Managers probably do care. One can hear the complaints: "Employees are goofing off!" or "Employees are not working when they should be!". Or the shrill: "How dare my employees work for someone else!"

All of these complaints condense into the statement "That's not fair!"

Managers and companies have, naturally, installed mechanisms to prevent such abuse of remote work. They monitor employees via webcam, or -- in a particularly Orwellian turn -- use AI to monitor employees via webcam.

It strikes me that these measures assume that managers can obtain good results (employees working) by eliminating bad behaviors (employees walking the dog or making a cup of coffee). They subtract the bad and assume that what remains is good. Theoretically, that adds up.

But do managers want merely an employee sitting at a computer for an eight-hour shift? (And doing nothing else?) If so, I am willing to work for such companies. I am quite capable of sitting at a computer for hours. I can even appear to be busy, typing and clicking.

Managers will answer that they want more than that. Managers will (rightly) say that they need results, actual work from the employee that provides value to the company.

Can managers quantify these results? Can they provide hard numbers of what they expect employees to do? For some jobs, yes. (Call centers rate employees by number of calls taken and resolved, for example.) Other jobs are less easy to measure. (Programming, for example. Does one measure lines of code? Number of stories? Number of stories weighted by size? Does quality of the code count? Number of defects found during testing?)
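
To make the idea of quantifying results concrete, here is a small sketch in Python of one possible measure. The story weights and the defect penalty are numbers I invented for illustration; they are not a standard metric, and I am not recommending them.

# Hypothetical sketch: one way to quantify "results" for programmers.
# The weights and the penalty are assumptions for illustration only.

STORY_WEIGHTS = {"small": 1, "medium": 3, "large": 8}

def weighted_throughput(completed_stories, defects_found, hours_worked):
    """Return weighted story points delivered per hour, minus a defect penalty."""
    points = sum(STORY_WEIGHTS[size] for size in completed_stories)
    penalty = 0.5 * defects_found          # arbitrary penalty per defect
    return max(points - penalty, 0) / hours_worked

# Example: six stories finished in a 40-hour week, two defects found in testing
print(weighted_throughput(["small", "medium", "medium", "large", "small", "small"], 2, 40))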

It's easy to take the approach of "remove the bad behaviors and hope for the best". One can purchase tools to monitor employees. (It's probably easy to justify such purchases to budget committees.)

But perhaps some of the effort and expense of monitoring bad behavior could be redirected to measuring good results. Business is, after all, about results. If Programmer A produces the same results as Programmer B, but in 75 percent of the time, why not let Programmer A have some time to make coffee?

Another thought:

Lots of employees in the tech world are paid as salaried employees. Compensation by salary (as opposed to hourly wages) implies that the employee has some discretion over their work.

If employers push the monitoring tools, employees may decide that their compensation should be hourly, instead of salary. Companies won't like that arrangement, as they have been using salaried compensation to extract unpaid overtime from workers. But workers have some say in the issue. If employers focus on hours worked, then employees will focus on hours worked.

Monday, August 30, 2021

Apple employees and iCloud

Recent stories about Apple employees have exposed some of the conditions that Apple requires for employment. One of those conditions is to link one's personal iCloud account to the Apple-assigned employee iCloud account.

Why do this? It seems an odd request, to merge one's personal iCloud account with the corporate iCloud account. Apple has not explained the reason for this policy (I did not ask Apple for comment) and Apple employees have not stated a reason either.

I have an idea. It's not that Apple is being evil. Instead, Apple is trying to keep secrets, secret.

Apple, more than most companies, is concerned with corporate secrets. It carefully guards its designs for new products. It limits communication between employees to only those groups that are part of the work assignment.

Apple also requires employees to agree to the "Apple can search you or your equipment whenever on Apple property" clause in its hiring contracts.

Apple does not want secrets to leak out, and it takes steps to keep secrets, well, secret.

I expect that Apple requires employees to use the corporate e-mail system and not personal e-mail accounts such as Yahoo, GMail, or Microsoft Outlook. Apple can scan messages in its internal e-mail system and identify leaked information, but it cannot scan external services. It probably blocks those web sites on its internal network.

Apple probably also blocks social media services such as Facebook and Twitter. When you're at work for Apple, you're at work for Apple, and not allowed to use such sites. Information could be shared via those sites, something Apple wants to prevent.

File sharing sites such as DropBox, BOX, and Microsoft OneDrive are other services that could allow for the sharing of information. Apple probably blocks those sites, too.

Apple may go as far as preventing people from attaching USB drives to their work computers.

So Apple has built an electronic fence around employees, to keep corporate data inside and not allow designs for new products to escape. That all makes sense.

The one problem in this scheme is iCloud.

Apple cannot block iCloud. They use it to share data and collaborate on projects. Blocking iCloud would isolate employees and limit the sharing of information to e-mail, which is far less effective for sharing data. iCloud can let multiple people work on the same file, at the same time, and always with the latest version of the file. Forcing people to share data via e-mail would eliminate real-time collaboration and allow for older versions of files -- and the confusion that would result.

So Apple must use iCloud. But iCloud allows people to switch from one iCloud account to another, and employees could switch from the corporate iCloud account to their personal account, send files out (just what Apple does not want), and then switch back to the corporate iCloud account. This is a "hole" in the security "fence".

The answer to this last problem is simple: require employees to link their personal iCloud to the corporate iCloud account. I imagine that their personal account, once linked, becomes part of the monitoring process for the corporate iCloud account, and closes the "hole" in the "fence".

This is why (I think) Apple requires employees to link their personal iCloud account to their corporate iCloud account. Linking accounts is a hack to close a hole in Apple's security.


Friday, August 27, 2021

Replacing a language

Do programming languages ever get replaced? Languages are introduced, and some rise and fall in popularity (and some rise after a fall), but very few ever disappear completely.

There are some languages that have disappeared. Two are FLOW-MATIC and NELIAC, languages from the 1950s.

Some languages have fallen to very low levels of usage. These include BASIC, PL/I, PL/M, and DIBOL.

Some languages never die (or at least haven't died yet). The go-to examples are COBOL and FORTRAN. Organizations continue to use them. Fortran is used for some parts of the R implementation, so anyone using R is in a sense using Fortran. We can include SQL in this list of "immortal" programming languages.

Some languages are replaced with new versions: C# version 6 replaced a very similar C# version 5, for example. VB.NET replaced VB6. A few people still use the old Visual Basic 6, but they are a very small number.

The replacement of a language isn't so much about the language as it is about popularity. Several groups measure the popularity of languages: Tiobe, Stack Overflow, the Popularity of Programming Languages (PYPL)... They all rank the languages and show pretty tables and charts.

The mechanics of popularity are such that the "winners" (the most popular languages) are shown, and the "losers" (the less popular languages) are omitted from the table, or shown in a small note at the bottom. (Not unlike the social scene in US high schools.)

If we use this method, then any language that enters the "top 10" or "top 40" (or "top whatever") chart is popular, and any language that is not in the list is not popular. A language that never enters the "top" list is never really replaced, because it never "made it".

A language that does enter the "top" list and then falls out of it has been replaced. That language may still be used, perhaps by thousands, yet it is now considered a "loser".

By this measure (referring to the Tiobe index), languages that have been replaced include Objective-C, COBOL, Prolog, and Ada. They were popular, once. But now they are not as popular as other languages. (I can almost hear the languages protesting a la Norma Desmond: "We are big! It's the programs that got small!")

Sometimes we can state that one specific language replaced another. VB.NET replaced VB6, because Microsoft engineered and advertised VB.NET to replace VB6. The same holds for Apple and Swift replacing Objective-C. But can we identify a specific language that replaced COBOL? Or a single language that replaced FORTRAN? Did C++ replace C? Is Rust replacing C++?

We can certainly say that, for a specific project, a new language replaces an older one. Perhaps we can say the same for an organization, if they embark on a project to re-write their code in a new language. I'm not sure that we can make the assertion for the entire IT industry. IT is too large, with too many companies and too many individuals, to verify such a claim in absolute terms. All we can rely on is popularity.

But popularity is not the measure of a language. It doesn't measure the language's capabilities, or its reliability, or its ability to run on multiple platforms.

We don't care about popularity for the technical aspects of the language. We care about the popularity of a language for us, for ourselves.

Product managers care about the popularity of a language because of staffing. A popular language will have lots of people who know it, and therefore the manager will have a (relatively) easy time of finding candidates to hire. An obscure language will have few candidates, and they may demand a higher wage.

Individuals care about the popularity of a language because it means that there will be lots of companies hiring for that skill. Few companies will hire for an obscure programming language.

This essay has meandered from replacing a language to popularity of languages to the concerns of hiring managers and candidates. That's probably not a coincidence. Economic activity drives a lot of behavior; I see no reason that programming should be exempt. When thinking about a programming language, think about the economics, because that will contribute a lot to the language's life.

Tuesday, August 24, 2021

After the PC

Is the M1 Mac a PC? That is, is the new Macbook a personal computer?

To answer that question, let's take a brief trip through the history of computing devices.

Mainframes were the first commercially viable electronic computing devices. They were large and consisted of a processing unit and a few peripherals. The CPU was the most important feature.

Minicomputers followed mainframes, and had an important difference: They had terminals as peripherals. The most important feature was not the CPU but the number of terminals.

PCs were different from minicomputers in that they had integrated video and an integrated keyboard. They did not have terminals. In this sense, M1 Macs are PCs.

Yet in another sense, M1 Macs are something different: They cannot be modified.

IBM PCs were designed for modification. The IBM PC had slots for accessory cards. The information for those slots was available. Other manufacturers could design and sell accessory cards. Consumers could open the computer and add hardware. They could pick their operating system (IBM offered DOS, CP/M-86, and UCSD p-System).

The openness of the IBM PC was somewhat unusual. Apple II (Apple ][) computers had expansion slots, but Macs were not openable (you needed a special tool). Other computers of the era were typically built in a way that discouraged modifications.

The IBM PC set the standard for personal computers, and that standard was "able to be opened and modified".

The M1 Macs are systems on a chip. The hardware cannot be modified. The CPU, memory, disk (well, let's call it "storage", to use the old mainframe-era term) are all fixed. Nothing can be replaced or upgraded.

In this sense, the M1 Mac is not a PC. (Of course, if it is not a PC, then what do we call it? We need a name. "System-on-a-chip" is too long, "SoC" sounds like "sock" and that won't do, either.)

I suspect that the folks at Apple will be happy to refer to their products with a term other than "PC". Apple fans, too. But I don't have a suitable, generic, term. Apple folks might suggest the term "Mac", as in "I have a Mac in my backpack", but the term "Mac" is not generic. (Unless Apple is willing to let the term be generic. If so, when I carry my SoC Chromebook, I can still say "I have a Mac in my backpack." I doubt Apple would be happy with that.)

Perhaps the closest thing to the new Apple M1 Macs is something so old that it predates the IBM PC: The electronic calculator.

Available in the mid-1970s, electronic calculators were quite popular. Small and yet capable of numeric computations, they were useful for any number of people. Like the Apple M1 Mac, they were designed to remain unopened by the user (except perhaps to replace batteries) and they were not modifiable.

So perhaps the Apple M1 Macbooks are descendants of those disco-era calculators.

* * * * *

I am somewhat saddened by the idea that personal computers have evolved into calculators, that PCs are not modifiable. I gained a lot of experience with computers by modifying them: adding memory, changing disk drives, or installing new operating systems.

A parallel change occurred in the automobile industry. In the 1950s, lots of people bought cars and tinkered with them. They replaced tires and shock absorbers, adjusted carburetors, installed superchargers and turbochargers, and replaced exhaust pipes. But over time, automobiles became more complex and more computerized, and now very few people get involved with their cars. (There are some enthusiastic car-hackers, but they are few in number.)

We lost something with that change. We lost the camaraderie of learning together, of car clubs and amateur competitions.

We lost the same thing with the change in PCs, from open, modifiable systems to closed, optimized boxes.

Wednesday, August 18, 2021

Apple's trouble with CSAM is from its approach to new features

Apple's method, its modus operandi, is to design a product or process and present it to its customers. There is no discussion (at least none outside of Apple), there is no gathering of opinion, there is no debate. Apple decides, implements, and sells. Apple dictates, customers follow.

We can see this behavior in Apple's hardware.

Apple removed the floppy disk from the Macintosh. Apple removed the CD drive from the iMac. Apple removed the audio port on the iPhone.

We can also see this behavior in Apple's software.

Apple designs the UIs for iOS and iPadOS and has very definite ideas about how things ought to be.

This method has served Apple well. Apple designs products, and customers buy them.

Yet this method failed with CSAM.

Apparently, some customers don't like being dictated to.

One aspect of the CSAM change is that it is a change to an existing service. If Apple changes the design of the iPhone (removing the audio port, for example), customers still have their old audio-port-equipped iPhone. They can choose to replace that iPhone with a later model -- or not.

If Apple had introduced a new file storage service, and made CSAM scanning part of that service, customers could choose to use that new service -- or not.

But Apple made a change to an existing service (scanning photos in iCloud), and to existing iPhones (if Apple scans the photos on the phone). That presents a different calculation to customers. Instead of choosing to use a new service or not, customers must now choose to stop using a service, or continue to use the modified service. Changing from one cloud storage service to another is a larger task. (And it may not even be possible with iPhones and iPads.)

Customers don't have the option of "buying in" to the new service. The changes for CSAM are not something that one can put off for a few months. One cannot look at the experience of other customers and then decide to participate.

No, this change is Apple flexing its muscles, altering the deal that was already established. ("Pray that I do not alter it further," quipped Darth Vader in a movie from a long time ago.)

I think that the negative reaction to Apple's CSAM strategy is not so much the scanning of photos, but the altering of the existing service. I think that people now realize that Apple can, and will, alter services. Without the consent, or even the advice, of the customers. I think that is what is bothering people.

Sunday, August 15, 2021

COBOL and Elixir

Someone has created a project to transpile (their word) COBOL to Elixir. I have some thoughts on this idea. But first, let's look at the example they provide.

A sample COBOL program:

      >>SOURCE FORMAT FREE
IDENTIFICATION DIVISION.
PROGRAM-ID. Test1.
AUTHOR. Mike Binns.
DATE-WRITTEN. July 25th 2021
DATA DIVISION.
WORKING-STORAGE SECTION.
01 Name     PIC X(4) VALUE "Mike".
PROCEDURE DIVISION.

DISPLAY "Hello " Name

STOP RUN.

This is "hello, world" in COBOL. Note that it is quite longer than equivalent programs in most languages. Also note that while long, it is still readable. Even a person who does not know COBOL can make some sense of it.

Now let's look at the same program, transpiled to Elixir:

defmodule ElixirFromCobol.Test1 do
  @moduledoc """
  author: Mike Binns
  date written: July 25th 2021
  """

  def main do
    try do
      do_main()
    catch
      :stop_run -> :stop_run
    end
  end 

  def do_main do
    # pic: XXXX
    var_Name = "Mike"
    pics = %{"Name" => {:str, "XXXX", 4}}
    IO.puts "Hello " <> var_Name
    throw :stop_run
  end
end

That is ... a lot of code. More than the code for the COBOL version! Some of that is due to the exception used to implement "STOP RUN", which in this small example seems excessive. Why wrap the core function inside a main() that exists simply to trap the exception? (There is a reason. More on that later.)

I'm unsure of the reason for this project. If it is a side project made on a whim, and used for the entertainment (or education) of the author, then it makes sense.

But I cannot see this as a serious project, for a couple of reasons.

First, the produced Elixir code is longer, and in my opinion less readable, than the original COBOL code. I may be biased here, as I am somewhat familiar with COBOL and not at all familiar with Elixir, so I can look at COBOL code and say "of course it does that" but when I look at Elixir code I can only guess and think "well, maybe it does that". Elixir seems to follow the syntax for modern scripting languages such as Python and Ruby, with some unusual operators.

Second, the generated Elixir code provides some constructs which are not used. This is, perhaps, an artifact of generated code. Code generators are good, up to a point. They tend to be non-thinking; they read input, apply some rules, and produce output. They don't see the bigger picture. In the example, the transpiler has produced code that contains the variable "pics", which holds information about the COBOL program's PICTURE clauses, but this "pics" variable is never used.

The "pics" variable hints at a larger problem, which is that the transpiled code is not running the equivalent program but is instead interpreting data to achieve the same output. The Elixir program is, in fact, a tuned interpreter for a specific COBOL program. As an interpreter, its performance will be less than that of a compiled program. Where COBOL can compile code to handle the PICTURE clauses, the Elixir code must look up the PICTURE clause at runtime, decode it, and then take action.

My final concern is the most significant. The Elixir programming language is not a good match for the COBOL language. Theoretically, any program written in a Turing-complete language can be re-written in a different Turing-complete language. That's a nice theory, but in practice converting from one language to another can be easy or can be difficult. Modern languages like Elixir have object-oriented and structured programming constructs. COBOL predates those constructs and has procedural code and global variables.

We can see the impedance mismatch in the Elixir code to catch the "stop run" exception. A COBOL program may contain "STOP RUN" anywhere in the code. The Elixir transpiler project has to build extra code to duplicate this capability. I'm not sure how the transpiler will handle global variables, but it will probably be a method that is equally tortured. Converting code from a non-structured language to a structured programming language is difficult at best and results in odd-looking code.

My point here is not to insult or to shout down the transpiler project. It will probably be an educational experience, teaching the author about Elixir and probably more about COBOL.

My first point is that programs are designed to match the programming language. Programs written in object-oriented languages have object-oriented designs. Programs written in functional languages have functional designs. Programs written in non-structured languages have... non-structured designs. The designs from one type of programming language do not translate readily to a programming language of a different type.

My second point is that we assume that modern languages are better than older languages. We assume that object-oriented languages like C++, C#, and Java are better than (non-OO) structured languages like Pascal and Fortran-77. Some of us assume that functional languages are better than object-oriented languages.

I'm not so sure about those assumptions. I think that object-oriented languages are better at some tasks than mere structured languages, and older structured-only languages are better at other tasks. Object-oriented languages are useful for large systems; they let us organize code into classes and functions, and even larger constructs through inheritance and templates. Dynamic languages like Python and Ruby are good for some tasks but not others.

And I must conclude that even older, non-functional, non-dynamic, non-object-oriented, non-structured programming languages are good for some tasks.

One analogy for programming languages is the carpenter's toolbox: full of various tools for different purposes. COBOL, one of the oldest languages, might be considered the hammer, one of the oldest tools. Hammers cannot do what saws, drills, tape measures, or levels can do, but carpenters still use them, when the task calls for a hammer.

Perhaps we can learn a thing or two from carpenters.

Sunday, August 8, 2021

Apple and the photographs

There has been a lot of discussion about Apple's plan to identify photographs that contain illegal material. Various comments have been made on the "one in one trillion" estimate for false positives. Others have commented on the ethics of such a search.

One idea I have not seen discussed is the reason for Apple to do this. Why would Apple choose to identify these images? Why now?

It doesn't help sell Apple products.

It doesn't help sell Apple services.

It doesn't improve Apple's reputation.

Yet Apple made the effort to identify illegal photographs, which included requirements, design, coding, and testing. It cost Apple money to do this.

If Apple gains no revenue, gains no reputation, has no material gain at all, then why should they do it? Why make the effort and why expend the money?

Perhaps there is a gain, but one that Apple has not revealed to us. What unrevealed reason could Apple have to examine and identify photographs on Apple equipment? (Or more specifically, in Apple iCloud?)

The one thought that comes to mind is an agreement with law enforcement agencies. In such an agreement, Apple scans photographs and reports illegal material to law enforcement agencies. In exchange, Apple gets... what? Something from the government? Payment? Or perhaps what Apple gets is the absence of something unpleasant from the government, such as a lawsuit or regulatory interference.

I'm speculating here. I have no knowledge of Apple's motivations. Nor do I have any knowledge of such a deal between government and Apple -- or any company. (And keep in mind that there are governments other than the US government that may make such requests.)

But if there is a deal, then perhaps Apple is not the last company to perform such action. We may see other companies announce similar efforts to identify illegal material. Worse, we may learn that some companies have implemented such programs without informing their customers.

Thursday, August 5, 2021

The roaring success of Windows 365

Microsoft announced Windows 365, its "Windows as a Service" offering that lets one (if one is a business) create and use virtual Windows desktops. And just as quickly, Microsoft announced that it was suspending new accounts, because too many had signed up for the service.

A few thoughts:

First (and obvious) is that Microsoft underestimated the demand for Windows 365. Microsoft hosts the service on its Azure framework, and if the demand for Windows 365 outstrips the available servers, then it is popular indeed.

Second, (and perhaps less obvious) is that Microsoft is (probably) kicking themselves for pricing the service as they did. With strong demand, Microsoft apparently "left money on the table" and could have charged higher rates.

Third, (and also not so obvious) is that Microsoft's business customers (that is, most businesses) really want to move from their current arrangement for PC hardware to a different one. They either want to move to Windows 365 or they want to try it -- which indicates that they are not happy with their current PC platform. (That current platform might be physical, real-world PCs on employee desks, or it might be virtual PCs accessed by Remote Desktop or Citrix or some other mechanism.)

The desire for customers to try a different solution is, in my mind, a warning for Microsoft. It means that customers are not happy with the current state of Windows and its support -- new versions, upgrades, security patches, and administration. It could mean that customers will entertain other solutions, including Linux and Apple.

A few other thoughts:

With demand from the business community much stronger than expected, Microsoft will probably focus on business customers. In other words, I expect Microsoft to offer no comparable service for individuals or families -- at least not for the next two years. I expect the cost for a consumer product is higher than the cost for a commercial product, and the revenue for a consumer product is lower than the revenue for a business product.

Microsoft may leverage demand for the Windows 365 service to spur Windows 11 sales. They have announced Windows 365 with access from various devices, and the opportunity is to provide additional services for Windows 11 clients. (Better security, better integration between access computer and virtual desktop, and possibly lower costs.)

Finally, there is the outside chance that Microsoft will provide a special edition of Windows 11, one that is stripped down and usable only to access Windows 365. (Much like ChromeOS is suitable only to run Chrome.) Microsoft may even design and sell hardware to support a Windows 11 "C" mode ("C" for "Connectivity"?).

The strong demand for Windows 365 shows that lots of people are interested in it. Microsoft won't ignore that. Be prepared for more announcements for this new service.

Wednesday, July 28, 2021

Linux is a parasite, and it may be our future

Linux is a parasite.

So is Unix.

The first Unix ran on a DEC PDP-7. But DEC did not sell PDP-7s to run Unix; it sold them to run its own operating system called "DECsys".

Later Unix versions ran on PDP-11s. But DEC did not sell PDP-11s to run Unix; it sold them to run later operating systems called RSX-11, TSX-11, CTS-11, and RSTS.

DEC's minicomputers were simple, compared to today's PCs. They would load and run just about any program. On many models, the loader program (what we would call the bootstrap code) was entered by hand on a front panel.

There was no trusted platform, no TPM, no signed code. It was easy to load Unix onto a DEC minicomputer. The success of Unix was due, in part, to the openness of those minicomputers.

But to be honest, Unix was a parasite. It took advantage of the hardware that was available.

Linux is in the same way a parasite on PCs. PCs are sold to run Windows. (Yes, a few are sold with Linux. But PCs are designed to run Windows, and the vast majority are sold with Windows.)

PC hardware has been, from the original IBM PC, open and well-documented. Linux took advantage of that openness, and has enjoyed a modicum of success.

Linux is a parasite on Apple PCs too, taking advantage of the hardware that Apple designed.

But the life of a parasite is not easy.

As Apple changes its hardware and bolsters security, it becomes harder to run Linux on an Apple PC. It is possible to run Linux on an M1 MacBook. I expect that the effort will increase over the next few years, as Apple introduces more changes to defend against malware.

Microsoft is making similar changes to Windows and the PC platform. Microsoft designs and builds a small number of PCs, and issues a specification for the hardware to run Windows. That specification is changing to defend against malware. Those changes also make it harder to install Linux.

Will we see a day when it is impossible to install Linux on a PC? Or on a Macbook? I think we will, probably with Apple equipment first. Devices such as the iPhone and Apple Time Capsule require signed code to boot an operating system, and Apple is not divulging the signing keys. It is not possible to install Linux on them. I think a similar fate awaits Apple's Macbooks and iMac lines. Once that happens, Linux will be locked out of Apple hardware.

Chromebooks look for code signed by Google, although in developer mode they can boot code that has been signed by others. (The Chromebook boot code looks for a signed kernel, but it doesn't care who signed it.)

Microsoft is moving towards signed code. Windows version 11 will require signed code and a TPM (Trusted Platform Module) in the PC. There are ways to load Linux on these PCs, so Linux has not yet been locked out.

I think Microsoft recognizes the contributions that Linux makes to the ecosystem, and is taking steps to ensure that Linux will be available on future PCs. Apple, I think, sees no benefit from Linux and is willing to lock Linux out of Apple devices. Microsoft sees value in letting Linux run on PCs; Apple doesn't.

It might be that Microsoft is preparing a radical change. It may be that Microsoft is getting ready to limit Windows to virtual systems, and drop support for "real" PCs. The new "Windows 365" product (virtual computers running Windows accessible from a browser) could be the future of Windows.

In this fantasy world I am constructing, Microsoft provides Windows on virtual hardware and not anywhere else. Access to Windows is available via browser, but one must acquire the hardware and operating system to run the browser. That could be an old PC running an old (in the future) version of Windows 10 or Windows 11, or it could mean a Chromebook running ChromeOS, or it could mean a desktop PC running Linux.

This would be a big change -- and I'm not saying that it will happen, only that it may happen -- and it would have profound effects on the IT world. Some thoughts come to mind:

First, performance becomes less important for the physical PC running the browser. The heavy CPU work is on the server side. The PC hosting the browser is a fancy terminal, displaying the results of the computation but not performing the computation. The race for speed shifts to the servers hosting the virtual instances of Windows. (And there is less pressure to update local PCs every three years.)

Second, the effort to develop and support Windows drops significantly. A lot of work for Microsoft is maintaining compatibility with hardware. Windows works with just about every piece of hardware going back decades: printers, video cards, disk drives, camera, phones, ... you name it, Windows supports it. If Microsoft shifts to a virtual-server-only version of Windows, a lot of that work disappears from Microsoft's queue. The work doesn't vanish; it shifts to the people building the non-virtual PCs that run the browsers. But the work (and the expense) does vanish from Microsoft's accounts.

Third, this change is one that Apple cannot follow. Apple has built its strategy of privacy on top of a system of local processing -- a secure box. Apple doesn't send data to remote servers -- doing so would let your personal data escape the secure box. Apple has no way to offer virtual instances of macOS that correspond to Windows 365 without breaking that secure box. (And just as Windows 365 allows for longer lifespans of local PCs, virtual macOS would allow for longer lifespans of Macs and Macbooks -- something that Apple would prefer not to see, as it relies on consumers replacing their equipment every so often.)

If Microsoft does make this change, the prospects for Linux improve. If Microsoft pulls Windows off of the market, then PC manufacturers must offer something to run on their hardware. That something cannot be macOS, and it certainly won't be FreeDOS. (As good as it is, FreeDOS is not what we need.)

The operating system that comes with PCs may be Linux, or a variant of Linux built for laptop makers. There could be two versions: a lightweight version that is close to ChromeOS (just enough to run a browser) and a heavier version that is close to today's Linux distros.

If Microsoft makes this change -- and again, I'm not sure that they will -- then we really could see "the year of the Linux desktop". Oh, and it would mean that Linux would no longer be a parasite.

Tuesday, July 20, 2021

Debugging

Development consists of several tasks: analysis, design, coding, testing, and deployment are the typical tasks listed for development. There is one more: debugging, and that is the task I want to talk about.

First, let me observe that programmers, as a group, like to improve their processes. Programmers write the compilers and editors and operating systems, and they build tools to make tasks easier.

Over the years, programmers have built tools to assist in the tasks of development. Programmers were unhappy with machine coding, so they wrote assemblers which converted text codes to numeric codes. They were unhappy with those early assemblers because they still had to compute locations for jump targets, so they wrote symbolic assemblers that did that work.

Programmers wrote compilers for higher-level languages, starting with FORTRAN and FLOW-MATIC and COBOL. We've created lots of languages, and lots of compilers, since.

Programmers created editors to allow for creation and modification of source code. Programmers have created lots of editors, from simple text editors that can run on a paper-printing terminal to the sophisticated editors in today's IDEs.

Oh, yes, programmers created IDEs (integrated development environments) too.

And tools for automated testing.

And tools to simplify deployment.

Programmers have made lots of tools to make the job easier, for every aspect of development.

Except debugging. Debugging has not changed in decades.

There are three techniques for debugging, and they have not changed in decades.

Desk check: Not used today. Used in the days of mainframe and batch processing, prior to interactive programming. To "desk check" a program, one looks at the source code (usually on paper) and checks it for errors.

This technique was replaced by tools such as lint and techniques such as code reviews and pair programming.

Logging: Modify the code to print information to a file for later examination. Also known as "debug by printf()".

This technique is in use today.
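
A minimal example of the technique, in Python (any language with a print statement or a logging library works the same way):

# "Debug by printf()": sprinkle the code with statements that record state.
import logging

logging.basicConfig(filename="debug.log", level=logging.DEBUG)

def average(values):
    logging.debug("average() called with %r", values)   # record the input
    total = sum(values)
    logging.debug("total=%s count=%s", total, len(values))
    return total / len(values)

print(average([2, 4, 6]))   # examine debug.log afterwards to trace the run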

Interactive debugging: This technique has been around since the early days of Unix. It was available in 8-bit operating systems like CP/M (the DDT program). The basic idea: Run the program under the debugger, pausing execution at some point. The debugger keeps the program loaded in memory, and one can examine or modify data. Some debuggers allow you to modify the code (typically with interpreted languages).

This technique is in use today. Modern IDEs such as Visual Studio and PyCharm provide interactive debuggers.
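
For example, Python includes pdb, a command-line interactive debugger. A minimal sketch of its use:

# Interactive debugging with Python's built-in pdb debugger.
import pdb

def total_price(prices, tax_rate):
    pdb.set_trace()           # pause execution here (Python 3.7+ also has breakpoint())
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

print(total_price([10.0, 2.5], 0.06))
# At the (Pdb) prompt: "p prices" examines a variable, "n" steps to the
# next line, "c" continues -- the examine-and-step workflow described below.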

Those are the three techniques. They are fairly low-level technologies, and require the programmer to keep a lot of knowledge in his or her head.

These techniques gave us Kernighan's quote:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

— The Elements of Programming Style, 2nd edition, chapter 2

These debugging techniques are the equivalent of assemblers. They allow programmers to do the job, but put a lot of work on the programmers. They assist with the mechanical aspect of the task, but not the functional aspect. A programmer, working on a defect and using a debugger, usually follows this procedure:

- understand the defect
- load the program in the debugger
- place some breakpoints in the source code, to pause execution at points that seem close to the error
- start the program running, wait for a breakpoint
- examine the state of the program (variables and their contents)
- step through the program, one line at a time, to see which decisions are made ('if' statements)

This process requires the programmer to keep a model of the program inside his or her head. It requires concentration, and interruptions or distractions can destroy that model, requiring the programmer to start again.

I think that we are ready for a breakthrough in debugging. A new approach that will make it easier for the programmer.

That new approach, I think, will be innovative. It will not be an incremental improvement on the interactive debuggers of today. (Those debuggers are the result of 30-odd years of incremental improvements, and they still require lots of concentration.)

The new debugger may be something completely new, such as running two (slightly different) versions of the same program and identifying the points in the code where execution varies.
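
As a rough sketch of that idea (my own sketch, not an existing tool), one could record the lines each version executes and then compare the traces. In Python, using sys.settrace, it might look something like this:

# Hypothetical sketch: run two versions of a function, record the line offsets
# each one executes, and report where the execution paths diverge.
import sys

def trace_lines(func, *args):
    """Run func and record the line offsets (relative to its first line) it executes."""
    executed = []
    def tracer(frame, event, arg):
        if event == "line":
            executed.append(frame.f_lineno - frame.f_code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def version_a(x):
    if x > 0:
        return "positive"
    return "other"

def version_b(x):
    if x >= 0:               # the only difference between the two versions
        return "positive"
    return "other"

trace_a = trace_lines(version_a, 0)
trace_b = trace_lines(version_b, 0)
# The first position where the traces differ is where the behavior diverges.
divergence = next((i for i, (a, b) in enumerate(zip(trace_a, trace_b)) if a != b), None)
print(trace_a, trace_b, divergence)   # [1, 3] [1, 2] 1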

Or possibly new techniques for visualizing the data of the program. Today's debuggers show us everything, with only limited ways to specify the items of interest (and to hide the items we don't care about and don't want to see).

Or possibly visualization of the program's state, which would be a combination of variables and executed statements.

I will admit that the effort to create a debugger (especially a new-style debugger) is hard. I have written two debuggers in my career: one for 8080 assembly language and another for an interpreter for BASIC. Both were challenges, and I was not happy with the results for either of them. I suspect that to write a debugger, one must be twice as clever as when writing the compiler or interpreter.

Yet I am hopeful that we will see a new kind of debugger. It may start as a tool specific to one language. It may be for an established language, but I suspect it will be for a newer one. Possibly a brand-new language with a brand-new debugger. (I suspect that it will be an interpreted language.) Once people see the advantages of it, the idea will be adopted by other language teams.

The new technique may be so different that we don't call it a debugger. We may give it a new name. So it may be that the new debugger is not a debugger at all.