Tuesday, March 22, 2022

Apple has a software problem

Apple has a problem. Specifically, Apple has a software problem. More specifically, Apple has a problem between its hardware and its software. The problem is that Apple's hardware is getting bigger and more powerful at a much faster rate than its software is improving.

Why is better hardware a problem? By itself, better hardware isn't a problem. But hardware doesn't exist by itself -- we use hardware with software. Faster, more powerful hardware requires bigger, more capable software.

One might think that fast hardware is a good thing, regardless of software. Faster hardware runs software faster, right? And faster is always a good thing, right? Well, not always.

For software that operates on a large set of data, such that the user is waiting for the result, yes, faster hardware is better. But faster hardware is only better if the user is waiting. Software that operates in "batch mode" or with limited interaction with the user is improved with better hardware. But software that interacts with the user, software that must wait for the user to do something, isn't necessarily improved.

Consider the word processor, a venerable tool that has been with us since the introduction of the personal computer. Word processors spend most of their time waiting for the user to press a key. This was true even on computers that predate the IBM PC. In the forty-odd years since, hardware has gotten much, much faster, but word processors have not gotten much more complicated. (There was a significant increase in complexity when we shifted from DOS and its character-based display and printing to Windows and graphics-based display and printing, but very little otherwise.)

The user experience for word processors has changed little in that time. Faster hardware has not improved the experience. If we limited computers to word processors, there would be no need for a processor more powerful than the Intel 80386. By extension, if you needed a computer for word processing (and nothing else), today's bottom-of-the-line, cheap, minimal computer would be more than enough for you. There is no point in spending on a premium computer (or even a mediocre one) because the minimal computer can do the job adequately.

The same logic applies to spreadsheets. And e-mail. And web browsers. Computers have gotten faster more quickly than these programs (and their data) have grown.

The computers I use for my day-to-day and even unusual tasks are old PCs, ranging from five to twenty years in age. All of them are fast enough for what I need to do. An aged Dell Inspiron N5010 runs Windows 10 and lets me use Zoom and Teams to join virtual meetings. I could replace it with a modern laptop, but the experience would be the same! Why should I bother?

A premium computer is needed only for those tasks that perform complex operations on large sets of data. And this is where Apple fails to provide the tools to justify its powerful (and expensive) hardware.

Apple is focussed on hardware, and it does a terrific job of designing and manufacturing powerful computers. But software is another story. Apple develops applications and then seems to lose interest in them. It built Pages, Numbers, and Keynote, and has made precious few improvements -- other than recompiling for ARM processors, or adding support for things like AirPlay. It hasn't added features.

The same goes for applications such as GarageBand, iTunes, and FaceTime. Even Xcode.

Apple has even let utilities such as grep and sed in MacOS (excuse me, "mac os". Or is it "macos"?) age with no updates. The corresponding GNU utilities have been modified and improved in various ways, to the point that developers now recommend installing the GNU versions on Apple computers.

Apple may be waiting for others to build the applications that will take advantage of the latest Mac computers. I'm not sure that many will want to do that.

Building applications to leverage the new Apple processors may seem a no-brainer. But there are a number of disincentives.

First, Apple may build its own version of the application and compete with the vendor. Independent vendors may be reluctant to enter a market when Apple is a possible competitor.

Second, developing applications to take advantage of the M1 architecture requires a lot of time and effort. The application must be multithreaded -- single-threaded applications cannot fully leverage the multiple cores on the M1. Designing, coding, and testing such an application is a lot of work. (A sketch of what that restructuring looks like follows below.)

Third, the market is limited. Applications developed to take advantage of the M1 processor are, well, developed for the M1 processor. They won't run on Windows PCs, and cross-building for Windows doesn't help, because PC processors are much slower than the Apple processors and cannot deliver the performance the application was designed around. The set of potential customers is limited to those who have the upper-end Apple computers. That's a small market (compared to Windows), and the potential revenue may not cover the development costs.

That is an intimidating set of disincentives.
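To give a sense of the second disincentive, here is a minimal sketch in Swift of the kind of restructuring involved. The workload and the function names are hypothetical; the point is only that the parallel version has to split the work into independent chunks, keep the chunks from colliding, and be tested separately from the serial path.

    import Foundation

    // Hypothetical per-element workload, standing in for real work
    // (encoding a frame, filtering a sample, and so on).
    func transform(_ value: Double) -> Double {
        return sqrt(abs(sin(value) * cos(value)))
    }

    // Single-threaded version: simple, but it uses only one core.
    func processSerial(_ input: [Double]) -> [Double] {
        return input.map(transform)
    }

    // Multithreaded version: splits the work into one chunk per core.
    // Each chunk writes to its own slice of the output, so no locking
    // is needed, but the chunking and bounds checks are extra code that
    // also has to be tested.
    func processParallel(_ input: [Double]) -> [Double] {
        let chunkCount = ProcessInfo.processInfo.activeProcessorCount
        let chunkSize = (input.count + chunkCount - 1) / chunkCount
        var output = [Double](repeating: 0, count: input.count)

        output.withUnsafeMutableBufferPointer { buffer in
            DispatchQueue.concurrentPerform(iterations: chunkCount) { chunk in
                let start = chunk * chunkSize
                guard start < input.count else { return }
                let end = min(start + chunkSize, input.count)
                for i in start..<end {
                    buffer[i] = transform(input[i])
                }
            }
        }
        return output
    }

Even in this toy form, the parallel path is several times the code of the serial one, and a real application also has to handle shared state, ordering, and cancellation -- which is where the time and effort goes.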

So if Apple isn't building applications to leverage the upper-end Macs, and third-party developers aren't building applications to leverage upper-end Macs, then ... no one is building them! The result is that there are no applications (or very few) to take advantage of the higher-end processors.

I expect sales of the M1-based Macs to be robust, for a short time. But as people realize that their experience has not improved (except, perhaps, for "buttery smooth" graphics), they will hesitate at the next round of new Macs (and iPads, and iPhones). As customers weigh the benefits and costs of new hardware, some will decide that their current hardware is good enough. (Just as my 15-year-old PCs are good enough for my simple word processing and spreadsheet needs.) If Apple introduces newer systems with even faster processors, more people will look at the price tags and decide to wait before upgrading.

Apple is set to learn an important lesson: high-performance hardware is not enough. One needs software that offers solutions to customers. Apple must work on its software.


Monday, March 14, 2022

Developer productivity is not linear

The internet has recently castigated an anonymous CEO who wanted his developers to work extra hours (without an increase in pay, presumably). Most of the reactions to the post have focussed on the issue of fairness. (And a not insignificant number of responses have been... less than polite.)

I want to look at a few other aspects of this issue.

I suspect that the CEO thinks he wants employees to have the same enthusiasm for the company that he himself has. He may want employees to put in extra hours and constantly think about improvements and new ideas, just as he does. But while the CEO (who is probably the founder) may be intrinsically motivated to work hard, employees may not share that same intrinsic motivation. And the CEO may be financially motivated to make the company as successful as possible (due to his compensation and stock holdings), but employees are typically compensated through wages and nothing else. (Some companies do offer bonuses; we have no information about compensation in this internet meme.) A salary alone is not enough to make employees work longer hours.

But it's not really motivation, or enthusiasm, or a positive attitude that the CEO wants.

What the CEO wants, I think, is an increase in productivity. Making developers work extra, uncompensated hours is, in one sense, an increase in productivity. The developers will contribute more to the project, at no additional cost to the company. Thus, the company sees an increase in productivity (an increase in output with no increase in inputs).

Economists will point out that the extra work does have a cost, and that cost is borne by the employees. Thus, while the company doesn't pay for it, the cost does exist. It is an "externality" in the language of economists.

But these are small mistakes. The CEO is making bigger mistakes.

One big mistake is thinking that developer productivity is linear. That is, thinking that if a developer can do so much (let's call it N) in one hour, then that same developer can do 2N in 2 hours, 4N in 4 hours, and so forth. This leads to the conclusion that by working an extra 2 hours each day (from 8 hours to 10 hours), a developer provides 10N units of productivity instead of 8N units. This is not true; developers get tired like anyone else, and their productivity wanes as they tire. Even if all of the hours are paid, developer productivity can vary.
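As a purely illustrative sketch (the ten-percent-per-hour fatigue factor is an assumption, not a measurement), the gap between the linear model and a diminishing-returns model looks something like this:

    // Linear model: every hour produces the same N units of output.
    func linearOutput(hours: Int, unitsPerHour n: Double) -> Double {
        return Double(hours) * n
    }

    // Diminishing-returns model: each successive hour is a little less
    // productive than the one before it. The 10% per-hour decay is an
    // illustrative guess, not a measured figure.
    func fatiguedOutput(hours: Int, unitsPerHour n: Double,
                        decayPerHour: Double = 0.10) -> Double {
        var total = 0.0
        var rate = n
        for _ in 0..<hours {
            total += rate
            rate *= (1.0 - decayPerHour)
        }
        return total
    }

    // With N = 1, the linear model says 10 hours yield 10 units of work,
    // but the fatigued model yields only about 6.5 -- and the two extra
    // hours add roughly 0.8 units between them.
    print(linearOutput(hours: 10, unitsPerHour: 1.0))   // 10.0
    print(fatiguedOutput(hours: 8, unitsPerHour: 1.0))  // ~5.7
    print(fatiguedOutput(hours: 10, unitsPerHour: 1.0)) // ~6.5

The exact numbers don't matter; the point is that the tenth hour is worth far less than the first.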

Beyond simply tiring, the productivity of a developer will vary throughout a normal workday, and good developers recognize the times that are best for the different tasks of analysis, coding, and debugging.

A second big mistake is thinking that the quality of a developer's work is constant, and unaffected by the number of hours worked. This is similar to the first mistake, but somewhat different. The first mistake is about quantity; this mistake is about quality. The act of writing (or fixing) a computer program is a creative one, requiring a programmer to draw on knowledge and experience to devise correct, workable, and efficient solutions. Programmers who are tired make mistakes. Programmers who are under stress from personal issues make mistakes. (Programmers who are under stress from company-generated issues also make mistakes.) Asking programmers to work longer hours increases mistakes and reduces quality.

If one wants to increase the productivity of developers, there are multiple ways to do it. Big gains in productivity can come from automated tests, good tools (including compilers, version control, and debuggers), clear and testable requirements, and respect in the office. But these changes require time to implement, some require expenditures, and most have delayed returns on the investments.

Which brings us to the third big mistake: That gains in productivity can occur rapidly and for little to no cost. The techniques and tools that are effective at improving productivity require time and sometimes money.

Asking programmers to work longer hours is easy to implement, can be implemented immediately, and requires no expenditure (assuming you do not pay for the extra time). It requires no additional tools, no change to process, and no investment. On the surface, it is free. But it does have costs -- it costs you the goodwill of your team, it can cause an increase in errors in the code, and it can encourage employees to seek employment elsewhere.

There is little to be gained, I think, by saying impolite things to the CEO who wants longer hours from his employees. Better to look to our own houses, and our own practices and procedures. Are we using good tools? How can we improve productivity? What changes will it require?

Improving our productivity is a big task -- as big as we choose to make it. Let's use proven techniques and respect our employees and colleagues. I think that will give us better results.

Tuesday, March 8, 2022

About those new Macs

Apple's presentation today talks about three new products: the new iPhone SE, the new iPad Air, and the new Mac Studio desktop (with a new display, so maybe that is four new products).

The iPhone SE is pretty much what you would expect in a low-end Apple phone. It still uses the A-series chips.

The iPad Air uses an M-series chip, and that is interesting. Using the M-series chip brings the iPad Air closer to the Mac line of computers. I expect that, in the somewhat-distant future, Apple will replace the iMac with the iPad line. Apple already lets one run iPhone and iPad apps on MacBooks; some day Apple may let an M-series iPad run apps built for an M-series Mac.

The new M1 Ultra chip and the new Mac Studio desktop and its display received the most time in the presentation.

The M1 Ultra chip is a pairing of two M1 Max chips. Instead of simply placing two M1 Max chips on a board, Apple has connected them at the chip level. The connection allows for rapid transfer of data, and results in a powerful processor.

Interestingly, the M1 Ultra design does not follow the pattern of the M1, M1 Pro, and M1 Max chips, which are expansions of the base model. Apple may have run into difficulties in building a single-chip successor to the M1 Max. The pairing of two M1 Max chips feels like a compromise: a way to get a faster processor on an aggressive schedule.

The Mac Studio is a computer that I was expecting: an expanded version of the Mac Mini. The Mac Studio has a beefier processor, more memory, and more ports than the Mac Mini, but the same basic design. The Mac Studio is simply a big brother to the Mac Mini, and nothing like the Mac Pro.

Apple hinted at a replacement for the Mac Pro, and my prediction for its successor stands. That is, I expect the replacement for the Mac Pro to be a more powerful Mac Mini (or a more powerful Mac Studio). It will not be like the current Mac Pro with slots for GPU cards. (There is no point, as the built-in GPU of the M1 processor provides more computational power than an external card.) It will most certainly have a processor more powerful than the M1 Ultra. That processor may be a double M1 Ultra (or four M1 Max processors ganged together); what Apple calls it is anyone's guess. ("M1 Double Ultra"? "M1 Ultra Max"? "M1 Ultravox"?)

The new Mac Studio is a computer for specific purposes. Apple's presentation focussed on video applications, which are of interest to a limited market. For typical PC users who work in the pedestrian world of documents and spreadsheets, the new Mac Studio offers little -- the low-end Mac systems are capable, and additional processing power is simply wasted as the computer waits for the user.

Apple's presentation, and its concentration on video applications, shows a blind spot in their thinking. Apple made no announcements about changes to operating systems or application software. Another company, when announcing more powerful hardware, would also introduce more powerful software that takes advantage of that hardware. Apple could have introduced improved versions of applications such as Pages and Numbers, or features to share processing among Apple devices, or AI-like capabilities for improved security and privacy... but they did not. Perhaps they have those changes lined up for a future presentation, but I suspect that they simply don't have them. Their thinking seems to be to wow their customers with hardware alone. That, I think, may be a mistake.

Tuesday, March 1, 2022

With Chrome OS Flex, Look Before You Leap

Google made news with its "Chrome OS Flex" offering, which turns a PC into a Chromebook.

Some like the idea, seeing a way to reduce licensing costs. Others like the idea because it offers simpler administration. Yet others see it as a way of using older PCs that cannot migrate to Windows 11. 

Before committing to a conversion, consider:

Chrome OS Flex may not run on your PCs

Chrome OS Flex works on some PCs, but not all of them. Google has a list of supported PCs, and the list is rather thin. Google rates target PCs with one of three classifications: "Certified", "Expect minor issues", and "Expect major issues". Google does not explain the difference between major and minor, but let's assume that major issues make the Chrome experience poor enough to hinder productivity.

Microsoft has a large knowledge base of hardware and device drivers. Google may be building such a knowledge base, but its current set of knowledge is much smaller than Microsoft's. The result is that Chrome OS Flex can run on a limited number of PC models.

Your employees may dislike the idea

The introduction of new technology is tricky from a management perspective. Some employees will welcome Chrome OS Flex, and others will want to remain on the old, familiar system. If your roll-out is limited, some of the employees in the "stay on the old OS" group will feel relieved, and others may feel left out.

My recommendation is to communicate your plans well in advance, and focus on the ideas of efficiency and reduced costs. Avoid the notion of Chrome OS as a reward or a perk, and talk about it as simply another tool for the office.

Google is not Microsoft

Switching from Windows to Chrome OS Flex means changing a core relationship from Microsoft to Google. Microsoft has a long history of supporting technologies and products; Google has the opposite. (There are web sites dedicated to the "Google graveyard".)

Google may drop the Chrome OS Flex offering at any time, and not provide a successor product. (If they do, your best path forward may be to replace the PCs running Chrome OS Flex with Chromebooks, which should provide the same capabilities as the PCs.)

Look before you leap

My point is not to dissuade you from Google's Chrome OS Flex offering. Rather, I suggest that you consider carefully the benefits and risks of such a move. As part of your evaluation, I suggest a pilot project, moving some PCs (and employees) to the new OS. I also suggest that you compile an inventory of applications that run locally -- that is, on your PCs, not on the web or in the cloud. Those applications cannot run on Chrome OS Flex, or on regular Chromebooks.

It may be possible to replace local applications with web-based applications, or cloud-based applications, but such replacements are projects themselves. You may want to start with a pilot project for Chrome OS Flex, and then migrate PC-based applications to the web or cloud, and then migrate other employees to Chrome OS. Or not -- a hybrid solution with some PCs running Windows (or mac os) and other PCs running Chrome OS Flex is possible.

Whatever you choose to do, I suggest that you think, communicate, evaluate, and then decide.