Tuesday, March 22, 2022

Apple has a software problem

Apple has a problem. Specifically, Apple has a software problem. More specifically, Apple has a problem between its hardware and its software. The problem is that Apple's hardware is growing more powerful at a much faster rate than its software is growing to use it.

Why is better hardware a problem? By itself, better hardware isn't a problem. But hardware doesn't exist by itself -- we use hardware with software. Faster, more powerful hardware requires bigger, more capable software.

One might think that fast hardware is a good thing, regardless of software. Faster hardware runs software faster, right? And faster is always a good thing, right? Well, not always.

For software that operates on a large set of data, such that the user is waiting for the result, yes, faster hardware is better. But faster hardware is only better if the user is waiting. Software that operates in "batch mode" or with limited interaction with the user is improved with better hardware. But software that interacts with the user, software that must wait for the user to do something, isn't necessarily improved.

Consider the word processor, a venerable tool that has been with us since the introduction of the personal computer. Word processors spend most of their time waiting for the user to press a key. This was true even on computers that predate the IBM PC. In the forty-odd years since, hardware has gotten much, much faster, but word processors have not gotten much more complicated. (There was a significant increase in complexity when we shifted from DOS, with its character-based display and printing, to Windows, with graphics-based display and printing, but very little otherwise.)

The user experience for word processors has changed little in that time. Faster hardware has not improved the experience. If we limited computers to word processors, there would be no need for a processor more powerful than the Intel 80386. By extension, if you needed a computer for word processing (and nothing else) today's bottom-of-the-line, cheap, minimal computer would be more than enough for you. There is no point in spending on a premium computer (or even a mediocre one) because the minimal computer can do the job adequately.

The same logic applies to spreadsheets. And e-mail. And web browsers. Computers have gotten faster more quickly than these programs (and their data) have grown.

The computers I use for my day-to-day and even unusual tasks are old PCs, ranging from five to twenty years in age. All of them are fast enough for what I need to do. An aged Dell Inspiron N5010 runs Windows 10 and lets me use Zoom and Teams to join virtual meetings. I could replace it with a modern laptop, but the experience would be the same! Why should I bother?

A premium computer is needed only for those tasks that perform complex operations on large sets of data. And this is where Apple fails to provide the tools to justify its powerful (and expensive) hardware.

Apple is focussed on hardware, and it does a terrific job of designing and manufacturing powerful computers. But software is another story. Apple develops applications and then seems to lose interest in them. It built Pages, Numbers, and Keynote, and has made precious few improvements -- other than recompiling for ARM processors, or adding support for things like AirPlay. It hasn't added features.

The same goes for applications such as GarageBand, iTunes, and FaceTime. Even Xcode.

Apple has even let utilities such as grep and sed in MacOS (excuse me, "mac os". Or is it "macos"?) age with no updates. The corresponding GNU utilities have been modified and improved in various ways, to the point that developers now recommend installing the GNU utilities on Apple computers.
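One well-known example of the drift: the BSD sed that ships with macOS and GNU sed disagree on in-place editing, which is exactly the sort of incompatibility that drives developers to install the GNU tools. A minimal sketch, assuming only a POSIX shell and sed, shows the portable workaround of redirecting output instead of using -i:

```shell
# BSD sed (macOS) requires a backup-suffix argument: sed -i '' 's/old/new/' file
# GNU sed accepts a bare -i:                         sed -i 's/old/new/' file
# A portable script avoids -i entirely and redirects instead:
printf 'colour\n' > /tmp/demo.txt
sed 's/colour/color/' /tmp/demo.txt > /tmp/demo.out
cat /tmp/demo.out    # prints "color"
```

The redirect form works identically under both implementations, at the cost of a temporary file.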

Apple may be waiting for others to build the applications that will take advantage of the latest Mac computers. I'm not sure that many will want to do that.

Building applications to leverage the new Apple processors may seem a no-brainer. But there are a number of disincentives.

First, Apple may build its own version of the application and compete with the vendor. Independent vendors may be reluctant to enter a market in which Apple is a possible competitor.

Second, developing applications to take advantage of the M1 architecture requires a lot of time and effort. The application must be multithreaded -- single-threaded applications cannot fully leverage the multiple cores on the M1. Designing, coding, and testing such an application is a lot of work.
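The core-utilization point can be illustrated even at the shell level. This is a hedged sketch, not an M1-specific technique: a plain loop runs its jobs one at a time on one core, while xargs with the (widely supported, non-POSIX) -P flag fans the same jobs out across several cores at once.

```shell
# Sequential: one job at a time, one core busy.
for f in a b c d; do echo "process $f"; done

# Parallel: up to 4 jobs at once, several cores busy.
# (-P is an extension supported by both GNU and BSD xargs.)
printf '%s\n' a b c d | xargs -P 4 -I {} echo "process {}"
```

The same principle applies inside an application: work must be explicitly divided into independent units before extra cores can help, and that division is where much of the design and testing effort goes.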

Third, the market is limited. Applications developed to take advantage of the M1 processor are, well, developed for the M1 processor. They won't run on Windows PCs. Cross-building for Windows doesn't help, because an application designed around the M1's performance would run poorly on slower PC processors. The set of potential customers is limited to those who have the upper-end Apple computers. That's a small market (compared to Windows), and the potential revenue may not cover the development costs.

That is an intimidating set of disincentives.

So if Apple isn't building applications to leverage the upper-end Macs, and third-party developers aren't building applications to leverage upper-end Macs, then ... no one is building them! The result being that there are no applications (or very few) to take advantage of the higher-end processors.

I expect sales of the M1-based Macs to be robust, for a short time. But as people realize that their experience has not improved (except, perhaps, for "buttery smooth" graphics), they will hesitate at the next round of new Macs (and iPads, and iPhones). As customers weigh the benefits and costs of new hardware, some will decide that their current hardware is good enough. (Just as my 15-year-old PCs are good enough for my simple word processing and spreadsheet needs.) If Apple introduces newer systems with even faster processors, more people will look at the price tags and decide to wait before upgrading.

Apple is set up to learn an important lesson: High-performance hardware is not enough. One needs the software to offer solutions to customers. Apple must work on its software.


Monday, March 14, 2022

Developer productivity is not linear

The internet has recently castigated an anonymous CEO who wanted his developers to work extra hours (without an increase in pay, presumably). Most of the reactions to the post have focussed on the issue of fairness. (And a not insignificant number of responses have been... less than polite.)

I want to look at a few other aspects of this issue.

I suspect that the CEO thinks he wants employees to have the same enthusiasm for the company that he himself has. He may want employees to put in extra hours and constantly think about improvements and new ideas, just as he does. But while the CEO (who is probably the founder) may be intrinsically motivated to work hard, employees may not share that same intrinsic motivation. And the CEO may be financially motivated to make the company as successful as possible (due to his compensation and stock holdings), but employees are typically compensated through wages and nothing else. (Some companies do offer bonuses; we have no information about compensation in this internet meme.) A salary alone is not enough to make employees work longer hours.

But it's not really motivation, or enthusiasm, or a positive attitude that the CEO wants.

What the CEO wants, I think, is an increase in productivity. Making developers work extra, uncompensated hours is, in one sense, an increase in productivity. The developers will contribute more to the project, at no additional cost to the company. Thus, the company sees an increase in productivity (an increase in output with no increase in inputs).

Economists will point out that the extra work does have a cost, and that cost is borne by the employees. Thus, while the company doesn't pay for it, the cost does exist. It is an "externality" in the language of economists.

But these are small mistakes. The CEO is making bigger mistakes.

One big mistake is thinking that developer productivity is linear. That is, thinking that if a developer can do so much (let's call it N) in one hour, then that same developer can do 2N in 2 hours, 4N in 4 hours, and so forth. This leads to the conclusion that by working an extra 2 hours each day (from 8 hours to 10 hours), the developer provides 10N units of productivity instead of 8N units. This is not true; developers get tired like anyone else, and their productivity wanes as they tire. Even if all of the hours are paid, developer productivity can vary.

Beyond simply tiring, the productivity of a developer will vary throughout a normal workday, and good developers recognize the times that are best for the different tasks of analysis, coding, and debugging.

A second big mistake is thinking that the quality of a developer's work is constant, and unaffected by the number of hours worked. This is similar to the first mistake, but somewhat different. The first mistake is about quantity; this mistake is about quality. The act of writing (or fixing) a computer program is a creative one, requiring a programmer to draw on knowledge and experience to devise correct, workable, and efficient solutions. Programmers who are tired make mistakes. Programmers who are under stress from personal issues make mistakes. (Programmers who are under stress from company-generated issues also make mistakes.) Asking programmers to work longer hours increases mistakes and reduces quality.

If one wants to increase the productivity of developers, there are multiple ways to do it. Big gains in productivity can come from automated tests, good tools (including compilers, version control, and debuggers), clear and testable requirements, and respect in the office. But these changes require time to implement, some require expenditures, and most have delayed returns on the investments.

Which brings us to the third big mistake: That gains in productivity can occur rapidly and for little to no cost. The techniques and tools that are effective at improving productivity require time and sometimes money.

Asking programmers to work longer hours is easy to implement, can be implemented immediately, and requires no expenditure (assuming you do not pay for the extra time). It requires no additional tools, no change to process, and no investment. On the surface, it is free. But it does have costs -- it costs you the goodwill of your team, it can cause an increase in errors in the code, and it can encourage employees to seek employment elsewhere.

There is little to be gained, I think, by saying impolite things to the CEO who wants longer hours from his employees. Better to look to our own houses, and our own practices and procedures. Are we using good tools? How can we improve productivity? What changes will it require?

Improving our productivity is a big task -- as big as we choose to make it. Let's use proven techniques and respect our employees and colleagues. I think that will give us better results.

Tuesday, March 8, 2022

About those new Macs

Apple's presentation today introduced three new products: the new iPhone SE, the new iPad Air, and the new Mac Studio desktop (with a new display, so maybe that makes four new products).

The iPhone SE is pretty much what you would expect in a low-end Apple phone. It still uses the A-series chips.

The iPad Air uses an M-series chip, and that is interesting. Using the M-series chip brings the iPad Air closer to the Mac line of computers. I expect that, in the somewhat-distant future, Apple will replace the iMac with the iPad line. Apple already lets one run iPhone and iPad apps on MacBooks; some day Apple may let an M-series Mac run apps for an M-series iPad.

The new M1 Ultra chip and the new Apple Studio desktop and display received the most time in the presentation.

The M1 Ultra chip is a pairing of two M1 Max chips. Instead of simply placing two M1 Max chips on a board, Apple has connected them at the chip level. The connection allows for rapid transfer of data, and results in a powerful processor.

Interestingly, the M1 Ultra design does not follow the pattern of the M1, M1 Pro, and M1 Max chips, which are expansions of the base model. Apple may have run into difficulties in building a single-chip successor to the M1 Max. The pairing of two M1 Max chips feels like a compromise, getting a faster processor on an aggressive schedule.

The Mac Studio is a computer that I was expecting: an expanded version of the Mac Mini. The Mac Studio has a beefier processor, more memory, and more ports than the Mac Mini, but the same basic design. The Mac Studio is simply a big brother to the Mac Mini, and nothing like the Mac Pro.

Apple hinted at a replacement for the Mac Pro, and my prediction for its successor stands. That is, I expect the replacement for the Mac Pro to be a more powerful Mac Mini (or a more powerful Mac Studio). It will not be like the current Mac Pro with slots for GPU cards. (There is no point, as the built-in GPU of the M1 processor provides more computational power than an external card.) It will most certainly have a processor more powerful than the M1 Ultra. That processor may be a double M1 Ultra (or four M1 Max processors ganged together); what Apple calls it is anyone's guess. ("M1 Double Ultra"? "M1 Ultra Max"? "M1 Ultravox"?)

The new Mac Studio is a computer for specific purposes. Apple's presentation focussed on video applications, which are of interest to a limited market. For typical PC users who work in the pedestrian world of documents and spreadsheets, the new Mac Studio offers little -- the low-end Mac systems are capable, and additional processing power is simply wasted as the computer waits for the user.

Apple's presentation, and its concentration on video applications, shows a blind spot in their thinking. Apple made no announcements about changes to operating systems or application software. Another company, when announcing more powerful hardware, would also introduce more powerful software that takes advantage of that hardware. Apple could have introduced improved versions of applications such as Pages and Numbers, or features to share processing among Apple devices, or AI-like capabilities for improved security and privacy... but they did not. Perhaps they have those changes lined up for a future presentation, but I suspect that they simply don't have them. Their thinking seems to be to wow their customers with hardware alone. That, I think, may be a mistake.

Tuesday, March 1, 2022

With Chrome OS Flex, Look Before You Leap

Google made news with its "Chrome OS Flex" offering, which turns a PC into a Chromebook.

Some like the idea, seeing a way to reduce licensing costs. Others like the idea because it offers simpler administration. Yet others see it as a way of using older PCs that cannot migrate to Windows 11. 

Before committing to a conversion, consider:

Chrome OS Flex may not run on your PCs

Chrome OS Flex works on some PCs, but not all PCs. Google has a list of supported PCs, and the list is rather thin. Google rates target PCs with one of three classifications: "Certified", "Expect minor issues", and "Expect major issues". Google does not explain the difference between major and minor, but let's assume that major issues would make the Chrome experience poor and unproductive.

Microsoft has a large knowledge base of hardware and device drivers. Google may be building such a knowledge base, but its current set of knowledge is much smaller than Microsoft's. The result is that Chrome OS Flex can run on a limited number of PC models.

Your employees may dislike the idea

The introduction of new technology is tricky from a management perspective. Some employees will welcome Chrome OS Flex, and others will want to remain on the old, familiar system. If your roll-out is limited, some of the employees in the "stay on the old OS" group will feel relieved, and others may feel left out.

My recommendation is to communicate your plans well in advance, and focus on the ideas of efficiency and reduced costs. Avoid the notion of Chrome OS as a reward or a perk, and talk about it as simply another tool for the office.

Google is not Microsoft

Switching from Windows to Chrome OS Flex means changing a core relationship from Microsoft to Google. Microsoft has a long history of supporting technologies and products; Google has the opposite. (There are web sites dedicated to the "Google graveyard".)

Google may drop the Chrome OS Flex offering at any time, and not provide a successor product. (If they do, your best path forward may be to replace the PCs running Chrome OS Flex with Chromebooks, which should provide the same capabilities as the PCs.)

Look before you leap

My point is not to dissuade you from Google's Chrome OS Flex offering. Rather, I suggest that you consider carefully the benefits and risks of such a move. As part of your evaluation, I suggest a pilot project, moving some PCs (and employees) to the new OS. I also suggest that you compile an inventory of applications that run locally -- that is, on your PCs, not on the web or in the cloud. Those applications cannot run on Chrome OS Flex, or on regular Chromebooks.

It may be possible to replace local applications with web-based applications, or cloud-based applications, but such replacements are projects themselves. You may want to start with a pilot project for Chrome OS Flex, then migrate PC-based applications to the web or cloud, and then migrate other employees to Chrome OS. Or not -- a hybrid solution with some PCs running Windows (or macOS) and other PCs running Chrome OS Flex is possible.

Whatever you choose to do, I suggest that you think, communicate, evaluate, and then decide.

Thursday, February 17, 2022

My guesses about the Metaverse

Facebook is committed to the Metaverse. They are so committed that they changed the name of the company from "Facebook" to "Meta".

But what, exactly, is the metaverse? Facebook -- excuse me, Meta -- has provided only vague descriptions.

I have a few ideas. I start with some assumptions:

First, the metaverse, for Meta, will be a source of income. Meta will make money -- somehow -- with the metaverse offering.

Second, that income will probably come from advertising. Advertising is what Meta knows. I expect them to use that expertise.

Third, the metaverse will run on Meta devices, and not Apple phones (or Android phones). Meta will do this to avoid the tax Apple collects on transactions, and to collect its own data on its users. Apple's recent moves to increase privacy on its phones will provide an incentive for Meta to build its own platform.

Fourth, the platform will be an "immersive" (we're going to see that word a lot, I fear) one that uses an over-the-eyes display, headphones, and a microphone. There may be a few other pieces, but the display, headphones, and microphone are the important parts.

Given those assumptions, what will we see in the metaverse?

To sell advertising, Meta needs users. It needs users who spend a lot of time on the platform. The more time a user spends on metaverse, the more opportunities Meta has to show them advertisements. Therefore, the content on metaverse will be designed to attract and retain attention.

I expect that the metaverse will be closer to a video game than a web page. Instead of text and photographs, the metaverse will rely on animation and sound.

But the metaverse won't look and feel like a typical video game. Video games require too much attention, and if one is concentrating on the game then one is not paying attention to advertisements. (Also, not everyone wants to play action-packed video games.)

I think metaverse will have a mix of fast-paced and slow-paced attractions. It may have video games (especially multi-player video games), and it may have pastoral activities such as a walk in a virtual park. (A walk which you can take with friends, and in which you can meet people.) It can have real locations and fictional locations. One could visit the Eiffel Tower in Paris, for example, or the Grand Canyon. Or maybe one could visit a completely imaginary place such as Middle Earth and Hobbiton.

Metaverse may even have group sessions for things like virtual yoga classes or virtual bird watching.

How will Meta build all of these virtual locations? My guess is that they will build some, and rely on others to build more. They may ask game companies, which have experience with virtual locations, to build games for the metaverse, or to assist others in building virtual-world counterparts to real-world locations such as museums and tourist destinations, or to fantasy worlds.

Advertisements will be video, too. Instead of static text that pops up, and instead of simple photographs, advertisements will be video, and interactive, and fit into the current virtual world. They may be delivered by avatars. When walking through a virtual park, one may encounter a talking squirrel that mentions a movie, or a book, or a restaurant.

That's my vision of the metaverse. Meta has some challenges for this effort.

One challenge is getting other content providers on board. The creation of a virtual world is a significant effort, much greater than building a web page.

Another is interaction. The equipment needed to access the metaverse (over-the-eyes display, headphones, and microphone in my guess) allows for limited input. Voice recognition seems the least clumsy approach, although I'm not sure that the technology is quite ready. (Also, a room full of people all on the metaverse and speaking into their microphones will be ... noisy.) Another approach is gestures, but over-the-eyes displays can detect little more than head turns, and perhaps nods and shakes. For complex input, something more is needed. I'm not sure what that will be.

The biggest challenge may be non-technical. Facebook was successful because of the network effect. Once a person joined, they sent e-mails to their friends, asking them to join. Facebook got big from this effect. So big that it surpassed its predecessor, MySpace (which surpassed its predecessor, Friendster).

Facebook, at the moment, has a lot of users but little in the way of goodwill. It will take a lot of convincing to get people to join this new metaverse. Meta will handle this with... advertising.

So those are my guesses. The metaverse will be a platform for delivering advertisements, attracting users with interactive video content. Meta will use their own platform, bypassing Apple and Google and their taxes, restrictions, and rules.

Let's see what Meta delivers!

Sunday, January 23, 2022

Will Microsoft Change Windows to Linux?

People, from time to time, ask about Microsoft changing from Windows to Linux. When they do, lots of people respond. The responses fall into two general categories: Microsoft will switch to Linux because it is the superior operating system, and Microsoft will stick with Windows because it is the superior operating system.

The rebuttals are always -- always -- in the technical realm. Linux is better at this, and Windows is better at that.

I have a different response.

Microsoft will switch from Windows to Linux if, and when, it is in Microsoft's interest to switch.

In the 1990s and 2000s, Windows was a key part of Microsoft's strategy. Microsoft sold software (or licenses for software, which amounts to the same thing), and it used Windows as a base for its other products. Office ran on Windows (there were versions for Mac OS, but those were a special case). SQL Server ran on Windows. Internet Explorer ran on Windows. Outlook ran on Windows, and talked to Exchange, which also ran on Windows. Visual Studio ran on Windows. SourceSafe ran on Windows (and Unix, because it had been developed by an independent company and later sold to Microsoft).

During that period, Microsoft would never consider switching from Windows to Linux. Such a move would destroy Microsoft's strategy of "everything on Windows".

Today, Microsoft offers services that extend beyond Windows, and some of them use Linux. Azure provides cloud services. One can provision Linux servers as well as Windows servers (and pay Microsoft for both). Microsoft has less incentive to force customers to use Windows.

In addition, Microsoft is moving its apps into the cloud and onto the web. One can open and edit Word documents and Excel spreadsheets in a browser. (The online versions of Word and Excel are limited compared to the locally-installed versions. I expect the online versions to improve over time.) Microsoft has also created a cloud-based, web version of Visual Studio Code, which lets programmers collaborate across multiple operating systems.

Microsoft has dropped the "everything on Windows" strategy in favor of a "sell services and subscriptions" strategy. It doesn't require Windows to be at the center of the customer experience.

Will Microsoft replace Windows with Linux? The proper way to look at the question is not in the technical realm, but in the financial realm. If Microsoft can make more money with Linux than Windows, it should (and probably will) offer Linux.

Windows provides an income stream, in the form of licenses. Microsoft is moving from a "buy once until you upgrade" approach to an annual subscription. The latter is more predictable, for both Microsoft and customers, and seems to provide higher revenue to Microsoft. But the point is that Windows provides income to Microsoft.

Windows is also an expense for Microsoft. The development, maintenance, and support for Windows requires time and effort in significant quantities.

The question then becomes: which is the higher number? Does revenue cover expenses (and then some)? Or does Windows cost more to maintain than it brings in revenue?

The current capabilities of Microsoft's cloud-based web applications are such that locally-installed applications still provide more to the customer. Some day that may change. Until it does, that advantage translates into an incentive to support Windows.

Technical arguments can be fun. They can also be heated. But they are not the way to convince Microsoft to switch to Linux. Or to stay with Windows. The decision is a financial one, not a technical one.


Wednesday, January 12, 2022

Successful programming languages

The IT world has seen a number of programming languages. Some became popular and some did not.

With more than half a century of experience, we can see some patterns in languages that became popular. First, let's review some of the popular programming languages.

FORTRAN and COBOL were successful because they met a business need (or two, perhaps). The primary need was for a programming language that was easier to understand than assembly language. The secondary need was for a programming language that could be used across computers of different manufacturers, allowing companies to move programs from one vendor's hardware to another. Both FORTRAN and COBOL met those needs.

BASIC was modestly successful on minicomputers and wildly successful on personal computers. It was possible to run BASIC on the small PCs (sometimes with as little as 4K of memory!). It was easy to use, and amateur programmers could write and run programs relatively easily. It filled the technical space of timesharing on minicomputers, and the technical space of personal computers.

C became popular in the Unix world because it was included in Unix distributions. If you ran Unix, you most likely programmed in C. One could say that it was pushed upon the world by AT&T, the owners of Unix.

SQL became successful in the 1980s, just as databases became available and popular. Prior to databases, computer systems offered "file managers" and "file access libraries", which allowed basic operations on records but not on tables. Each library had its own set of capabilities and its own API. SQL provided a common set of operations and a common API. It allowed businesses to move easily from one database to another, guaranteed a core set of operations, and permitted a broad set of programmers to work on multiple systems.

C++ became popular because it solved a problem with C programs, namely the organization of large programs. C++ offered object-oriented concepts of classes, inheritance, encapsulation, and polymorphism, each of which helps organize code.

Visual Basic became popular because it provided an easy way to write programs for Windows. Microsoft's earlier Visual C++ required knowledge of the Windows API and lots of discipline. Visual Basic required neither, hiding the Windows API from the programmer and providing safety in the programming language.

Objective-C became popular because Apple used it for programming applications for Macintosh computers. (Later, when Apple switched to the Swift programming language, interest in Objective-C plummeted.)

Java became popular because it promised that people could write programs once and then run them anywhere. It did a good job of delivering on that promise, too.

C# is Microsoft's version of Java (its second version, after Visual J++) and is popular only in that Microsoft pushes it. If Microsoft were to disappear overnight, interest in C# would drop dramatically.

Swift is Apple's language for development of applications for the iPhone, iPad, and other Apple products. It is successful because Apple pushes it upon the world.

JavaScript became popular because it was ubiquitous. Much like BASIC on all PCs, JavaScript was in all browsers, and the combination of HTML, the DOM, and JavaScript allowed for web applications with powerful processing in the browser.

Python became popular because it was a better Perl. Python had a simpler syntax and also had object-oriented programming built in.

Notice that, with the possible exception of Rust replacing C or C++, these languages became popular in new spaces. They don't replace an existing popular language. BASIC didn't replace COBOL or FORTRAN; it became popular in the new spaces of timesharing and personal computers. C# didn't replace Java; it joined Visual Basic in the Microsoft space and slowly gained in popularity as Microsoft supported it more than Visual Basic.

So we can see that there are a few reasons that languages become popular:

  • A vendor pushes it
  • It solves a commonly-recognized business problem
  • It fills a technical space

If we accept these as the reasons that languages become popular, we can make some predictions about new languages that become popular. We can say that, if a major vendor pushed a language for its projects (a vendor such as Amazon, for example) then that language would become popular.

Or, we could say that a new technical space would allow a new language to become popular.

Or, if there is a commonly recognized business problem with our current set of programming languages, a new language that solves that problem would become popular.

So what do we see?

If we accept that C and C++ have problems (memory management, buffer overflows), and we accept that those problems are commonly recognized, then we can see the replacement of C and C++ with a different language, one that addresses those problems. The prime contender for that is Rust. We may see a gradual shift from C and C++ programming to Rust, as more and more people develop confidence in Rust and grow wary of memory-management and buffer-overrun issues.

One technical space that could provide an opportunity for a new programming language is the Internet of Things (IoT). Appliances and devices must communicate with each other and with servers. I suspect that the IoT space is in need more of protocols than of programming languages, but perhaps there is room for a programming language that works with new protocols to establish trusted connections and, more importantly, trusted updates.

A third area is a teaching language. BASIC was designed, initially, for people not familiar with programming. Pascal was designed to teach structured programming. Do we have a language designed to teach object-oriented programming? To teach functional programming? Do we even have a language to teach basic programming? BASIC and Pascal have long been discarded by the community, and introductory courses now often use Java, JavaScript, or Python which are all rather complex languages. A new, simple language could gain popularity.

As we develop new devices for virtual reality, augmented reality, and other aids, we may need new programming languages.

Those are the areas that I think will see new programming languages.