Wednesday, December 29, 2021

Something is rotten in the state of Apple

Apple recently announced a set of (rather large) retention payments for some engineers. (I assume that these are senior engineers.) The payments are in the form of restricted stock units which vest over a four-year period. Apple is making these payments to prevent employees from leaving for other companies. It's not clear, but these appear to be one-time payments. (That is, this is not a new, recurring payment to employees.)

This is not a good idea.

If employees are leaving Apple, that is a problem. And the problem has two possible causes (at least for this post): employees are unhappy with wages, or employees are unhappy with something else.

If employees are unhappy with wages, it is quite likely that wages are not in line with other companies. If that is the problem, then the correct action is to adjust wages to competitive levels. A one-time payment (even spread over years) will not fix the problem.

I tend to think that Apple's wages are competitive.

If employees are unhappy with something other than wages, then a monetary payment does not fix the problem. Employees could be unhappy about many things: working conditions, opportunities to work on fun projects, opportunities for professional growth, political involvement of the company, or projects deemed unethical by employees. (And more!)

Monetary compensation does not fix any of those problems. The problems remain, and the unhappiness of employees remains.

I suspect that in the middle of the COVID-19 pandemic, Apple requested (demanded?) that employees return to the office, and a number of employees -- senior-level employees -- refused. Not only did they refuse, they refused with "and if you make me return to the office, I will leave Apple and work for someone else". (This is all speculation. I am not employed by Apple, nor have I spoken with any Apple employees.)

Apple has not one but two problems.

The first problem is that employees are leaving, and Apple has addressed that problem with bonus payments.

The second problem is that Apple thinks this tactic is a good one. (They must think it is a good idea. Otherwise they would not have done it.)

Payments of this type must have been discussed at the highest levels of Apple. I suspect that even members of the board of directors were consulted. Did anyone object to these payments?

In the end, Apple decided to pay employees to address issues that are not related to compensation. It is the wrong solution to the problem, and it will probably make the situation worse. Bonus payments were offered to some employees. The other employees will resent that. Bonus payments vest over four years. After the last payment, employees who were given these payments will see their compensation drop significantly. They may resent that.

Apple has a problem. Retention payments don't address the underlying issues. Is Apple addressing the underlying issues? Possibly. (Apple would not advertise that.)

Let's see what happens at Apple over the next year. I expect to hear about changes in the organization, with the resignations of several senior executives.

I also expect that Apple will continue on its current course for its products and services. Those are sound, and have a good reputation with customers. I see no need to change them, other than the typical improvements Apple makes each year.

Tuesday, December 21, 2021

Moving fast and going far are not the same thing

There is an old saying: If you want to go fast, go alone; if you want to go far, go in a group.

One significant difference between Apple and Microsoft is that Apple manages product lines and Microsoft manages an ecosystem. That difference matters. Apple is, essentially, moving alone. It can (now) design its own hardware and software. Apple does still need raw materials, fabrication for its chips, manufacture of its cases and boxes, and assembly of components into finished goods. But Apple deals with two types of entities: suppliers (the companies that supply raw materials, chips, etc.) and customers (the people and companies that purchase computers and services).

Microsoft, in contrast, lives in an ecosystem that includes suppliers, PC manufacturers, developers, and customers (both individual and organizational). While Microsoft does design its Surface tablets and laptops, those tablets and laptops are a small part of the larger market. The laptops and desktops made by Dell, Lenovo, HP, and others are a large portion of the market.

Apple can move quickly, changing its processors from Intel to Apple-designed ARM in less than two years. Microsoft, on the other hand, must move more cautiously. It cannot dictate that Windows will shift from Intel to ARM because Microsoft does not control the manufacturers of PCs.

If Microsoft wants to shift personal computers from the current designs of discrete components to system-on-chip designs (and I believe that they do) then Microsoft must persuade the rest of the ecosystem to move in that direction. Such persuasion is not easy -- PC makers have a lot invested in the current designs, and are familiar with gradual changes to improve PCs. For the past three decades, Microsoft has guided PC design through specifications that allow PCs to run Windows, and those specifications have changed gradually: faster processors here, faster bus connections there, faster memory at some times, better interfaces to graphics displays at other times. The evolution of personal computers has been a slow, predictable process, with changes that can be absorbed into the manufacturing processes of the PC makers.

The Microsoft "empire" of PC design has been, for all intents and purposes, successful. For thirty years we have benefitted from computers in the office and in the home, and those computers have (for the most part) been usable and reliable.

Apple benefitted from that PC design too. The Intel-based Mac and MacBook computers were designed in the gravity field of Windows. Those Mac computers were Windows PCs, capable of running Windows (and Linux) because they used the same processors, video chips, and bus interfaces as Windows PCs. They had to use those chips; custom chips would be too expensive and risky to make.

Apple has now left that empire. It is free of the "center of gravity" that Windows provides in the market. Apple can now design its own processor, its own video chips, its own memory, its own storage. Apple is free! Free to move in any direction it likes, free to design any computer it wants.

I predict that Apple computers will move in their own direction, away from the standard design for Windows PCs. Each new generation of Apple computers will be less and less "Windows compatible". It will be harder and harder to run Windows (or Linux) on Apple hardware.

Microsoft has a new challenge now. They must answer Apple's latest M1 (and M2) system-on-chip designs. But they cannot upend the ecosystem. Nor can they abandon Intel and shift everything to ARM designs. Apple has leveraged its experience with its 'A' series chips in phones to build the 'M' series chips for computers. Microsoft doesn't have that experience, but it has something Apple doesn't: an ecosystem.

I predict that Microsoft will form alliances with other companies to build system-on-chip designs. Probably with IBM, to leverage virtual machine technology (and patents) and possibly Intel to leverage chip fabrication. (Intel recently announced that it was open to sharing its fabrication plants for non-Intel designs.)

[I hold stock in both Microsoft and IBM. That probably biases my view.]

Microsoft needs to build experience with system-on-chip designs, and alliances can provide that experience. But alliances require time, so I'm not expecting an announcement from Microsoft right away. The first system-on-chip designs may be tablets and simple laptops, possibly competing with Chromebooks. Those first simple laptops may take two years of negotiation, experimentation, design, assembly, and testing before anything is ready for market. (And even then, they may have a few problems.)

I think Microsoft can achieve the goal of system-on-chip designs. I think that they will do it with the combined effort of multiple companies. I think it will take time, and the very first products may be disappointing. But in the long run, I think Microsoft can succeed.

If you want to go fast, go alone; if you want to go far, go in a group.


Wednesday, December 15, 2021

Everyone who is not Apple

Apple has direct control over the design of their hardware and software, a situation that has not been seen in the history of personal computers. I expect that they will enjoy success -- at least for a while -- with new, powerful designs.

But what about everyone else? What about Microsoft, the maker of Windows, Office, Azure services, Surface tablets and laptops, and other things? What about Dell and Lenovo and Toshiba and HP, the makers of personal computers? What about Google, the maker of Chromebooks and cloud services?

That's a big question, and it has a number of answers.

Microsoft has a number of paths forward, and will probably pursue several of them. For its Surface devices, it can design systems on a chip that correspond to Apple's M1 chips. Microsoft could use ARM CPUs; it has already ported Windows to ARM and offers the "Surface Pro X" with ARM. Microsoft could design a system-on-a-chip that uses Intel CPUs; that would provide binary compatibility with current Windows applications. Intel chips generate more heat, but Microsoft has had success with Intel chips in most of its Surface line, so a system-on-a-chip with Intel could be possible. These paths mirror the path that Apple has taken.

Microsoft, unlike Apple, has another possible way forward: cloud services. Microsoft could design efficient processors for the computers that run data centers, the computers that host virtual instances of Windows and Linux. Such a move would ease the shift of processing from laptops and desktop computers to the cloud. (Such a shift is possible today; system-on-chip designs make it more efficient.) Microsoft may work with Intel, or AMD, or even IBM to design and build efficient hardware for cloud data centers.

Manufacturers of personal computers may design their own system-on-chip answers to the M1 processor. Or they may form a consortium and design a common chip that can be used by all (still allowing for custom system-on-chip designs and the current discrete component designs). Microsoft has, for a long time, provided a reference document for the requirements of Windows, and system-on-chip designs would follow that set of requirements just as laptops and desktops today follow those requirements.

PC manufacturers do lose some control when they adopt a common design. A common design would, by definition, be available to all manufacturers, and it prevents a manufacturer from enhancing the design by selecting better components. Rather than shift their entire product line to system-on-chip design, manufacturers will probably use the system-on-chip design for only some of their offerings, keeping some products with discrete designs (and enhancements to distinguish them from the competition).

Google does not have to follow the requirements for Windows; it has its own requirements for Chromebooks. System-on-chip design is a good fit for Chromebooks, which already use both Intel and ARM chips (and few users can see the difference). The performance improvement of system-on-chip design fits in nicely with Google's plan for games on Chromebooks. The increase in power allows for an increase in the sophistication of web-based apps.

I am willing to wait for Microsoft's response and for Google's response. I think we will see innovative designs and improvements to the computing experience. I expect Microsoft to push in two directions: system-on-chip designs for their Surface tablets, and cloud-based applications running on enhanced hardware. Google will follow a similar strategy, enhancing cloud hardware and improving the capabilities of Chromebooks.

Sunday, December 5, 2021

Apple all by themselves

Apple is in a unique position. At least for personal computers.

For the first time in the history of personal computers, Apple is designing the entire device, from hardware to software.

The earliest personal computers (the TRS-80, the Commodore PET, and even the Apple I and Apple II) were built with components made by other manufacturers. Processors, memory, displays, disk drives, and even power supplies were assembled into personal computers. The Apple II used the 6502 processor made by MOS Technology (and second-sourced by others). The TRS-80 used Zilog's Z-80 processor. The original IBM PC used the 8088 processor from Intel, standard memory chips, and floppy disks (when it had them) from Tandon.

The manufacture of personal computers was always about using "off the shelf" components to build a PC. It made sense, as it allowed a small company to sell computers and let other companies specialize in components.

Now, Apple is building their own system-on-a-chip that contains processor, memory, graphics processor, and storage. Apple designs all of the electronics, the case, the power supply, and the display screen. Apple writes their own operating system and application programs.

Such a thing has never been seen with personal computers.

Which is not to say that such a thing has never been seen in computing. It has.

Prior to personal computers, the "we build everything" model was prevalent in computing. IBM used it for their mainframes. DEC used it for their minicomputers.

We cannot make comparisons between the mainframe age and the current age of computing. The markets are too different, the customers are too different, and the computers are too different. The fact that Apple designs everything about its computers is a happy correspondence with that earlier age.

But perhaps we can make some guesses about what Apple will do next.

For starters, they have a lot of control over the design of new products. In the old system, Apple had to use whatever processors or GPUs were available. The designs offered by "specialists" acted as a center of gravity for the market -- anyone using AMD's GPUs was working in a similar space.

Apple is no longer dependent on the designs of others. They do not have to use processors from Intel, or GPUs from AMD, or memory from another maker. Thus, Apple is able to implement its own designs. Those designs, we can reasonably expect, will deviate from the "center of gravity" of the old system. Apple can chart its own path.

Secondly, we can expect that Apple will design its products to help Apple. In particular, Apple will probably add diagnostics to its products, diagnostics that only Apple software will be able to access. The recent M1 MacBook Pro computers are suffering from memory allocation problems; we can expect that Apple will resolve this, at first with software, and later with enhancements to hardware to assist the software in identifying and reporting such issues.

Third, Apple can introduce new products on their schedule. Rather than wait for Intel or AMD to introduce new hardware, Apple can work on new hardware and release products when they are ready. I expect that Apple will release new Mac and MacBook computers every year -- although not every model. Perhaps Apple will use a three-year cycle, with updates to MacBook Air and iMac in one year, MacBook Pro and iMac Pro in a second year, and Mac Mini and Mac Pro in the third year. (In the fourth year, Apple repeats with updates to MacBook Air and iMac.)

We should note that Apple is not completely independent. While they design the hardware, they do not manufacture it. Apple relies on outside companies to fabricate chips, build components, and assemble those components into complete units. Will Apple build its own fabrication plants? Or buy existing ones? I suspect not. Said plants are large and expensive (even for Apple's budget) and come with lots of environmental issues. I expect Apple to leave that work to specialists.

All in all, Apple is in a good position to design its computers and sell them on its schedule.

The important question is: Will those computers be useful to their customers?

That's an idea for another time.

Sunday, November 21, 2021

CPU power and developer productivity

Some companies have issued M1-based MacBooks to developers. Their reason? To improve productivity.

Replacing "regular" PCs with M1-based MacBooks is nice, and it certainly provides developers with more CPU power, but does it really increase the productivity of developers?

In a very simple way, yes, it does. The more powerful MacBooks will let developers compile faster, build deployment packages faster, and run tests faster.

But the greater leaps in developer productivity have not come from performing existing steps faster.

Increases in developer productivity come not from raw CPU power, or more memory, or faster network connections, or higher-resolution displays. Meaningful increases come from better tools, which in turn are built on CPU power, memory, network connections, and displays.

The tools that have helped developers become productive are many. They include programming languages, compilers and interpreters, editors, debuggers, version control systems, automated test systems, and communication systems (such as e-mail or chat or streamed messages like Slack).

We build better tools when we have the computing power to support them. In the early days of computing, prior to the invention of the IDE (integrated development environment), the steps of editing and compiling were distinct and handled by separate programs.

Early editors could not hold the entire text of a long program in memory at once, so they had special commands to "page" through the file. (One opened the editor and started with the first "page" of text, made changes and got things right, and then moved to the next "page". It sounds like the common "page down" operation in modern editors, except that a "page" in the old editors was longer than the screen, and -- note this -- there was no "page up" operation. Paging was a one-way street. If you wanted to go back, you had to page to the end of the file, close the file, and then run the editor again.)
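The one-way paging described above can be sketched in a few lines. (This is a hypothetical reconstruction for illustration, not any particular editor's actual commands.)

```python
class OneWayPager:
    """A minimal sketch of an old one-way editor pager.

    Lines are consumed one page at a time; there is no page_up().
    To revisit an earlier page, you must create a new pager --
    just as old editors made you close and reopen the file.
    """

    def __init__(self, lines, page_size=4):
        self.lines = list(lines)
        self.page_size = page_size
        self.position = 0  # index of the first line of the next page

    def page_down(self):
        """Return the next page of lines, advancing the position."""
        page = self.lines[self.position:self.position + self.page_size]
        self.position += len(page)
        return page

    def at_end(self):
        return self.position >= len(self.lines)


# "Editing" a 10-line file four lines at a time:
pager = OneWayPager([f"line {n}" for n in range(10)], page_size=4)
first = pager.page_down()   # lines 0 through 3
second = pager.page_down()  # lines 4 through 7
# There is no way back to "line 0" except a fresh OneWayPager.
```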

Increased memory ended the need for "page" operations.

The first integrated environment may have been the UCSD p-System, which offered editing, compiling, and running of programs. The availability of 64K of memory made this system possible. Unfortunately, the CPUs at the time could not support the virtual processor (much like the later JVM for Java) and the p-System never achieved popularity.

The IBM PC, with a faster processor and more memory, made the IDE practical. In addition to editing and compiling, there were add-ons for debugging.

The 80386 processor, along with network cards and cables, made Windows practical, which made network applications possible. That platform allowed for e-mail (at least within an organization) and that gave a boost to lots of employees, developers included. Networks and shared file servers also allowed for repositories of source code and version control systems. Version control systems were (and still are) an immense aid to programming.

Increases in computing power (CPU, memory, network, or whatever) let us build tools to be more effective as developers. An increase of raw power, by itself, is nice, but the payoff is in the better tools.

What will happen as a result of Apple's success with its M1 systems?

First, people will adopt the new, faster M1-based computers. This is already happening.

Second, competitors will adopt the system-on-a-chip design. I'm confident that Microsoft is already working on a design (probably multiple designs) for its Surface computers, and as a reference for other manufacturers. Google is probably working on designs for Chromebooks.

Third, once the system-on-a-chip design has been accepted in the Windows environment, people will develop new tools to assist programmers. (It won't be Apple, and I think it won't be Google either.)

Whatever the source, what kinds of tools can we expect? That's an interesting question, and the answer is not obvious. But let us consider a few things:

First, the increase in power is in CPU and GPU capacity. We're not seeing an increase in memory or storage, or in network capacity. We can assume that innovations will be built on processing, not communication.

Second, innovations will probably help developers in their day-to-day jobs. Developers perform many tasks. Which are the tasks that need help? (The answer is probably not faster compiles.)

I have a few ideas that might help programmers.

One idea is to analyze the run-time performance of programs and identify "hot spots" -- areas in the code that take a lot of time. We already have tools to do this, but they are difficult to use and run the system in a slow mode, such that a simple execution can take minutes instead of seconds. (A more complex task can run for an hour under "analyze mode".) A faster CPU can help with this analysis.
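Python's standard cProfile module is one example of the kind of hot-spot analysis described above: it instruments every function call, which is exactly why profiled runs are slow, and why a faster CPU makes the analysis less painful. A minimal sketch (the function names here are invented for the example):

```python
import cProfile
import io
import pstats


def slow_sum(n):
    # A deliberately naive loop -- the "hot spot" we expect to find.
    total = 0
    for i in range(n):
        total += i * i
    return total


def work():
    # The overall workload: calls slow_sum many times.
    return sum(slow_sum(20_000) for _ in range(50))


# Run the workload under the profiler.
profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Report the functions that consumed the most cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
# slow_sum dominates the report -- that is the hot spot.
```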

Another idea is for debugging. The typical debugger allows a programmer to step through the code one line at a time, and to set breakpoints and run quickly to those points in the code. What most debuggers don't allow is the ability to go backwards. Often, a programmer stepping through the code gets to a point that is known to be a problem, and needs to identify the steps that got the program to that point. The ability to "run in reverse" would let a programmer "back up" to an earlier point, where he (or she) could see the decisions and the data that led to the problem point. Computationally, this is a difficult task, but we're looking at a significant increase in computational power, so why not?
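A crude sketch of the idea: record a snapshot of program state after every step, and "running in reverse" becomes a matter of indexing earlier snapshots. (This is a toy model; a real reverse debugger must be far more clever about memory and side effects, which is where the extra CPU power would go.)

```python
import copy


def run_with_history(steps, state):
    """Execute steps against state, snapshotting after each one.

    Returns the list of snapshots: history[0] is the initial state,
    history[i] is the state after step i. "Stepping backwards" is
    simply looking at an earlier snapshot.
    """
    history = [copy.deepcopy(state)]
    for step in steps:
        step(state)
        history.append(copy.deepcopy(state))
    return history


# A tiny "program" of three steps that mutate a variable x:
steps = [
    lambda s: s.update(x=s["x"] + 1),
    lambda s: s.update(x=s["x"] * 10),
    lambda s: s.update(x=s["x"] - 3),
]
history = run_with_history(steps, {"x": 1})
# history[3] is the final (problem) state; history[1] lets us
# "back up" to just after the first step and inspect the data.
```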

A third possibility is the analysis of source code. Modern IDEs perform some analysis, often marking code that is syntactically incorrect. With additional CPU power, could we identify code that is open to other errors? We have tools to identify SQL injection attacks, and memory errors, and other poor practices. These tools could be added to the IDE as a standard feature.
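A toy version of such a check, using Python's standard ast module to flag queries built by string concatenation. (The source snippet and the rule are both invented for the example; real linters such as the SQL-injection checks mentioned above are far more thorough.)

```python
import ast

# A hypothetical source file to analyze: line 2 builds a query by
# concatenation (risky); line 4 uses a parameterized query (clean).
SOURCE = '''
query = "SELECT * FROM users WHERE name = '" + user_name + "'"
cursor.execute(query)
cursor.execute("SELECT * FROM users WHERE id = ?", (user_id,))
'''


def find_string_concat(source):
    """Return line numbers where '+' is applied to a string literal.

    A naive hint that a SQL query (or other sensitive string) is
    being built by concatenation rather than parameterization.
    """
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.BinOp)
                and isinstance(node.op, ast.Add)
                and any(isinstance(side, ast.Constant)
                        and isinstance(side.value, str)
                        for side in (node.left, node.right))):
            warnings.append(node.lineno)
    return warnings


flagged = find_string_concat(SOURCE)
# Only the concatenated query on line 2 is flagged.
```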

In the longer term, we may see new programming languages emerge. Just as Java (and later, C#) took advantage of faster CPUs to execute byte-code (the UCSD p-System was a good idea, merely too early) new programming languages may do more with code. Perhaps we will see a shift to interpreted languages (Python and Ruby, or their successors). Or maybe we will see a combination of compiled and interpreted code, with some code compiled and other code interpreted at run-time.

More powerful computers let us do things faster, but more importantly, they let us do things differently. They expand the world of computation. Let's see how we use the new world given to us with system-on-a-chip designs.

Thursday, November 11, 2021

M1 is not the only option for productivity

A number of companies have announced that they are equipping their developers with M1 MacBooks, to improve performance of tasks such as builds and, I presume, tests.

The thinking runs along these lines: the tasks developers perform are important, some of these tasks take a long time, the new M1 MacBooks perform these tasks quickly, therefore providing developers with M1 MacBooks is an investment that improves productivity. (And an increase in productivity is a good thing.)

The supporting arguments often use the time to run a build, or the time to perform automated tests. The M1 MacBooks, according to the argument, can perform these tasks much faster than the current equipment. The implied benefits are often described as a reduction in expenses, which I believe is an incorrect conclusion. (The company will continue to pay its developers, so their annual expenses will not change as a result of the new MacBooks -- except for the cost of the MacBooks.)

But there is another aspect to this "rush to faster computers" that I think has been overlooked. That aspect is cloud computing.

If one is using laptops or desktops with Windows, one can move that work to virtual instances of Windows in the cloud. Microsoft's "Windows 365" service offers Windows in the cloud, with different options for processor power, memory, and storage. One can rent a fast processor and get the same improvement in computing.

Let's look at some numbers. A new M1 MacBook Pro with a 14-inch screen costs $2000 and with a 16-inch screen costs $2500. (There are multiple configurations; these are the lowest prices.)

If those MacBooks last 3 years (a reasonable assumption) then the amortized costs are $56 per month or $69 per month.
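The amortization arithmetic above, as a quick sketch (three years, rounded to whole dollars per month):

```python
def amortized_monthly(price, years=3):
    """Spread a purchase price over its useful life, in dollars per month."""
    return round(price / (years * 12))


macbook_14 = amortized_monthly(2000)  # $56 per month
macbook_16 = amortized_monthly(2500)  # $69 per month
terminal = amortized_monthly(600)     # $17 per month for a basic laptop
```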

Now let's consider an alternative: Windows virtual machines in the cloud. Microsoft's "Windows 365" offers different configurations for prices ranging from $31 per month to $66 per month.

Of course, one still needs a local PC to access the cloud-based Windows, so let's add that cost, too. But we don't need a high-end laptop: The local PC is simply a fancy terminal: a device to accept keystrokes and mouse clicks and send them to the cloud-based PC, and accept screen updates and display them to the user. We don't need a lot of processing power for that.

One can get a decent 14-inch laptop for $600 (less if you hunt for bargains) and a decent 15.6-inch laptop for about the same. Assuming a purchase cost of $600, the monthly addition is $17, which pushes the monthly costs for the cloud-based configuration to $73 or $86. That's a bit higher than the cost of the local MacBook, but not that much higher. And keep in mind that with Windows 365, Microsoft handles some tasks for you, such as updates.

I don't consider a cloud-based solution for MacBooks, because cloud-based MacBooks are different from cloud-based Windows PCs. Windows PCs are often virtualized instances running on high-end hardware; MacBooks in the cloud are Mac computers in a datacenter -- not virtualized instances. A MacBook in the cloud is really just a MacBook at a remote location.

My point is not that cloud-based Windows PCs are better than MacBooks, or that local MacBooks are better than local Windows PCs.

My point is that one has different options for computing. Local MacBooks are one option. Local Windows PCs are another option. Cloud-based Windows PCs are an option. (And if you insist, cloud-based Macs are an option.)

Some companies are pursuing a strategy of local MacBooks. That strategy may be good for them. It does not automatically follow that the same strategy is good for everyone. (Nor does it follow that the strategy is good for them; time will tell.)

My advice is to consider the different options for computing, review your needs and your finances, and select a strategy that works for you.  

Thursday, October 28, 2021

System-on-chips for everyone!

Apple has demonstrated that the system-on-chip design (seen in their new MacBooks, iMacs, and Mac Minis) is popular.

What does system-on-chip design mean for other forms of computing? Will other manufacturers adopt that design?

An obvious market for system-on-chip design is Chromebooks. (If they are not using it already.) Many Chromebooks already use ARM processors (others use Intel) and moving the ARM-based Chromebooks to ARM-based system-on-chip design is fairly straightforward. Chromebooks also have a narrow design specification, controlled by Google, which makes a system-on-chip design feasible. Google limits the variation of Chromebooks, so it may be that the entire Chromebook market could be served with three (or possibly four) distinct designs.

Chromebooks would benefit from system-on-chip designs in two ways: lower cost and higher performance. One may think performance is unimportant to Chromebooks because Chromebooks are merely hosts for the Chrome browser, but that is not true. The Chrome browser (indeed, any modern browser) must do a lot, from rendering HTML to running JavaScript to playing audio and video. They must also handle keystrokes and focus, tasks normally associated with an operating system's window manager. In addition, browsers must now execute web-assembly (WASM) for some applications. Browsers are complex critters.

Google also has their eyes on games, and improved performance will allow more Chromebooks to run advanced games.

I think we can safely assume that Chromebooks will move to system-on-chip designs.

What about Windows PCs? Will they change to system-on-chip designs? Here I think the answer is not so obvious.

Microsoft sets hardware specifications for Windows. If you want to build a PC that runs Windows, you have to conform to those specifications. It is quite possible that Microsoft will design their own system-on-chip for PCs and use them in Microsoft's own Surface tablets and laptops. It is possible that they will make the design available to other manufacturers (Dell, Lenovo, etc.). Such a move would make it easier to build PCs that conform to Microsoft's specifications.

A system-on-chip design would possibly split designs for PCs into two groups: system-on-chip in one group and traditional discrete components in the other. System-on-chip designs work poorly with expansion slots, so PCs that use such a design would probably have no expansion slots -- not even one for a GPU. But many folks want GPUs, so they will prefer traditional designs. We may see a split market for Windows PCs, with customizable PCs using discrete components and non-upgradable PCs (similar to Chromebooks and MacBooks) using system-on-chip designs.

Such a split has already occurred in the Windows PC market. Laptop PCs tend to have limited options for upgrades (if any). Small desktop PCs also have limited options. Large desktops are the computers that still have expansion slots; these are the computers that let the owner replace components such as RAM and storage.

I think system-on-chip designs are the way of the future for most of our computers (laptops, desktops, phones, etc.). I think we'll see better performance, lower cost, and improved reliability. It's a move in a good direction.