Wednesday, July 28, 2021

Linux is a parasite, and it may be our future

Linux is a parasite.

So is Unix.

The first Unix ran on a DEC PDP-7. But DEC did not sell PDP-7s to run Unix; it sold them to run its own operating system called "DECsys".

Later Unix versions ran on PDP-11s. But DEC did not sell PDP-11s to run Unix; it sold them to run its own operating systems: RT-11, RSX-11, and RSTS/E.

DEC's minicomputers were simple, compared to today's PCs. They would load and run just about any program. On many models, the loader program (what we would call the bootstrap code) was entered by hand on a front panel.

There was no trusted platform, no TPM, no signed code. It was easy to load Unix onto a DEC minicomputer. The success of Unix was due, in part, to the openness of those minicomputers.

But to be honest, Unix was a parasite. It took advantage of the hardware that was available.

Linux is in the same way a parasite on PCs. PCs are sold to run Windows. (Yes, a few are sold with Linux. But PCs are designed to run Windows, and the vast majority are sold with Windows.)

PC hardware has been, from the original IBM PC, open and well-documented. Linux took advantage of that openness, and has enjoyed a modicum of success.

Linux is a parasite on Apple PCs too, taking advantage of the hardware that Apple designed.

But the life of a parasite is not easy.

As Apple changes its hardware and bolsters security, it becomes harder to run Linux on an Apple PC. It is possible to run Linux on an M1 MacBook. I expect that the effort will increase over the next few years, as Apple introduces more changes to defend against malware.

Microsoft is making similar changes to Windows and the PC platform. Microsoft designs and builds a small number of PCs, and issues a specification for the hardware to run Windows. That specification is changing to defend against malware. Those changes also make it harder to install Linux.

Will we see a day when it is impossible to install Linux on a PC? Or on a MacBook? I think we will, probably with Apple equipment first. Devices such as the iPhone and Apple Time Capsule require signed code to boot an operating system, and Apple is not divulging the signing keys. It is not possible to install Linux on them. I think a similar fate awaits Apple's MacBook and iMac lines. Once that happens, Linux will be locked out of Apple hardware.

Chromebooks look for code signed by Google, although in developer mode they can boot code that has been signed by others. (In developer mode, the Chromebook boot code still looks for a signed kernel, but it doesn't care who signed it.)

Microsoft is moving towards signed code. Windows version 11 will require signed code and a TPM (Trusted Platform Module) in the PC. There are ways to load Linux on these PCs, so Linux has not yet been locked out.
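
A side note for the curious: the "signed code" gate on these PCs is UEFI Secure Boot, and from a running Linux system you can see whether the firmware is enforcing it by reading the SecureBoot variable that efivarfs exposes. Here is a rough sketch in Python -- assuming a UEFI system with efivarfs mounted at its usual location; the long GUID is the standard identifier for that variable:

    from pathlib import Path

    # The SecureBoot variable exposed by efivarfs on Linux. The first four
    # bytes are attribute flags; the last byte is 1 when Secure Boot is on.
    SECUREBOOT_VAR = Path(
        "/sys/firmware/efi/efivars/"
        "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
    )

    def secure_boot_enabled():
        try:
            data = SECUREBOOT_VAR.read_bytes()
        except OSError:
            return None  # BIOS boot, or efivarfs not mounted -- can't tell
        return bool(data[-1])

    print("Secure Boot enforced:", secure_boot_enabled())

If that prints True, the firmware will boot only signed code; Linux still gets in today through a signed shim bootloader and signed distribution kernels, which is why it has not yet been locked out.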

I think Microsoft recognizes the contributions that Linux makes to the ecosystem, and is taking steps to ensure that Linux will be available on future PCs. Apple, I think, sees no benefit from Linux and is willing to lock Linux out of Apple devices. Microsoft sees value in letting Linux run on PCs; Apple doesn't.

It might be that Microsoft is preparing a radical change. It may be that Microsoft is getting ready to limit Windows to virtual systems, and drop support for "real" PCs. The new "Windows 365" product (virtual computers running Windows accessible from a browser) could be the future of Windows.

In this fantasy world I am constructing, Microsoft provides Windows on virtual hardware and not anywhere else. Access to Windows is available via browser, but one must acquire the hardware and operating system to run the browser. That could be an old PC running an old (in the future) version of Windows 10 or Windows 11, or it could mean a Chromebook running ChromeOS, or it could mean a desktop PC running Linux.

This would be a big change -- and I'm not saying that it will happen, only that it may happen -- and it would have profound effects on the IT world. Some thoughts come to mind:

First, performance becomes less important for the physical PC running the browser. The heavy CPU work is on the server side. The PC hosting the browser is a fancy terminal, displaying the results of the computation but not performing the computation. The race for speed shifts to the servers hosting the virtual instances of Windows. (And there is less pressure to update local PCs every three years.)

Second, the effort to develop and support Windows drops significantly. A lot of Microsoft's work is maintaining compatibility with hardware. Windows works with just about every piece of hardware going back decades: printers, video cards, disk drives, cameras, phones, ... you name it, Windows supports it. If Microsoft shifts to a virtual-server-only version of Windows, a lot of that work disappears from Microsoft's queue. The work doesn't vanish; it shifts to the people building the non-virtual PCs that run the browsers. But the work (and the expense) does vanish from Microsoft's accounts.

Third, this change is one that Apple cannot follow. Apple has built its strategy of privacy on top of a system of local processing -- a secure box. Apple doesn't send data to remote servers -- doing so would let your personal data escape the secure box. So Apple has no way to offer virtual instances of macOS that correspond to Windows 365 without breaking that secure box. (And just as Windows 365 allows for longer lifespans of local PCs, a virtual macOS would allow for longer lifespans of Macs and MacBooks -- something that Apple would prefer not to see, as it relies on consumers replacing their equipment every so often.)

If Microsoft does make this change, the prospects for Linux improve. If Microsoft pulls Windows off of the market, then PC manufacturers must offer something to run on their hardware. That something cannot be macOS, and it certainly won't be FreeDOS. (As good as it is, FreeDOS is not what we need.)

The operating system that comes with PCs may be Linux, or a variant of Linux built for laptop makers. There could be two versions: a lightweight version that is close to ChromeOS (just enough to run a browser) and a heavier version that is close to today's Linux distros.

If Microsoft makes this change -- and again, I'm not sure that they will -- then we really could see "the year of the Linux desktop". Oh, and it would mean that Linux would no longer be a parasite.

Tuesday, July 20, 2021

Debugging

Development consists of several tasks; analysis, design, coding, testing, and deployment are the ones typically listed. There is one more: debugging, and that is the task I want to talk about.

First, let me observe that programmers, as a group, like to improve their processes. Programmers write the compilers and editors and operating systems, and they build tools to make tasks easier.

Over the years, programmers have built tools to assist in the tasks of development. Programmers were unhappy with machine coding, so they wrote assemblers which converted text codes to numeric codes. They were unhappy with those early assemblers because they still had to compute locations for jump targets, so they wrote symbolic assemblers that did that work.

Programmers wrote compilers for higher-level languages, starting with FORTRAN and FLOW-MATIC and COBOL. We've created lots of languages, and lots of compilers, since.

Programmers created editors to allow for creation and modification of source code. Programmers have created lots of editors, from simple text editors that can run on a paper-printing terminal to the sophisticated editors in today's IDEs.

Oh, yes, programmers created IDEs (integrated development environments) too.

And tools for automated testing.

And tools to simplify deployment.

Programmers have made lots of tools to make the job easier, for every aspect of development.

Except debugging. Debugging has not changed in decades.

There are three techniques for debugging, and they have not changed in decades.

Desk check: Not used today. Used in the days of mainframe and batch processing, prior to interactive programming. To "desk check" a program, one looks at the source code (usually on paper) and checks it for errors.

This technique was replaced by tools such as lint and techniques such as code reviews and pair programming.

Logging: Modify the code to print information to a file for later examination. Also known as "debug by printf()".

This technique is in use today.
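
A rough sketch of the technique in Python, using the standard logging module (the function and its numbers are invented for illustration):

    import logging

    # "Debug by printf", with a little structure: write debug output to a
    # file for later examination instead of pausing the program.
    logging.basicConfig(filename="debug.log", level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")

    def compute_total(prices, discount):
        logging.debug("compute_total: %d prices, discount=%s",
                      len(prices), discount)
        total = sum(prices) * (1 - discount)
        logging.debug("compute_total: result=%s", total)
        return total

    compute_total([10.0, 20.0, 12.5], 0.10)

Run the program, then read debug.log afterwards -- no debugger, no pausing, just a trail of breadcrumbs.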

Interactive debugging: This technique has been around since the early days of Unix. It was available in 8-bit operating systems like CP/M (the DDT program). The basic idea: Run the program under the debugger, pausing its execution at some point. The debugger keeps the program loaded in memory, and one can examine or modify data. Some debuggers allow you to modify the code (typically with interpreted languages).

This technique is in use today. Modern IDEs such as Visual Studio and PyCharm provide interactive debuggers.

Those are the three techniques. They are fairly low-level technologies, and require the programmer to keep a lot of knowledge in his or her head.

These techniques gave us Kernighan's quote:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

— The Elements of Programming Style, 2nd edition, chapter 2

These debugging techniques are the equivalent of assemblers. They allow programmers to do the job, but put a lot of work on the programmers. They assist with the mechanical aspect of the task, but not the functional aspect. A programmer working on a defect with a debugger usually follows a procedure like this (a rough sketch with Python's pdb follows the list):

- understand the defect
- load the program in the debugger
- place some breakpoints in the source code, to pause execution at points that seem close to the error
- start the program running, wait for a breakpoint
- examine the state of the program (variables and their contents)
- step through the program, one line at a time, to see which decisions are made ('if' statements)
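
Here is that procedure sketched with Python's built-in pdb, on an invented function (the names and numbers are made up):

    # An invented function with a suspected defect.
    def apply_discount(prices, discount):
        total = sum(prices)
        # Pause here: the debugger keeps the program in memory, and we can
        # examine or modify `total` and `discount` before stepping onward.
        breakpoint()                     # drops into pdb (Python 3.7+)
        return total * (1 - discount)

    apply_discount([10.0, 20.0, 12.5], 0.10)

    # At the (Pdb) prompt, for example:
    #   p total, discount    -- examine the state of the program
    #   n                    -- step one line at a time
    #   total = 99.0         -- modify data, then 'c' to continue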

This process requires the programmer to keep a model of the program inside his or her head. It requires concentration, and interruptions or distractions can destroy that model, requiring the programmer to start again.

I think that we are ready for a breakthrough in debugging. A new approach that will make it easier for the programmer.

That new approach, I think, will be innovative. It will not be an incremental improvement on the interactive debuggers of today. (Those debuggers are the result of 30-odd years of incremental improvements, and they still require lots of concentration.)

The new debugger may be something completely new, such as running two (slightly different) versions of the same program and identifying the points in the code where execution varies.
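
To make that idea a little more concrete, here is a toy sketch in Python -- not a real tool, just an illustration. It records which lines two versions of a function execute (as offsets from the start of each function) and reports where the two runs first diverge:

    import sys

    def trace_offsets(func, *args):
        """Run func(*args), recording each executed line as an offset from
        the start of its function, so two versions can be compared."""
        trace = []

        def tracer(frame, event, arg):
            if event == "line":
                trace.append(frame.f_lineno - frame.f_code.co_firstlineno)
            return tracer

        sys.settrace(tracer)
        try:
            func(*args)
        finally:
            sys.settrace(None)
        return trace

    def first_divergence(trace_a, trace_b):
        """Return the step at which two traces first differ, or None."""
        for i, (a, b) in enumerate(zip(trace_a, trace_b)):
            if a != b:
                return i
        if len(trace_a) != len(trace_b):
            return min(len(trace_a), len(trace_b))
        return None

    # Two slightly different versions of the same function.
    def version_a(x):
        if x > 10:            # original comparison
            return x * 2
        return x + 1

    def version_b(x):
        if x >= 10:           # the "slightly different" version
            return x * 2
        return x + 1

    print("first divergence at step:",
          first_divergence(trace_offsets(version_a, 10),
                           trace_offsets(version_b, 10)))

A real program would need far more than this -- matching up the two versions' code, handling loops and threads -- which is exactly why such a tool would be a breakthrough rather than a weekend project.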

Or possibly new techniques for visualizing the data of the program. Today's debuggers show us everything, with limited ways to specify the items of interest (and to hide the items we don't care about and don't want to see).

Or possibly visualization of the program's state, which would be a combination of variables and executed statements.

I will admit that the effort to create a debugger (especially a new-style debugger) is hard. I have written two debuggers in my career: one for 8080 assembly language and another for an interpreter for BASIC. Both were challenges, and I was not happy with the results for either of them. I suspect that to write a debugger, one must be twice as clever as when writing the compiler or interpreter.

Yet I am hopeful that we will see a new kind of debugger. It may start as a tool specific to one language. It may be for an established language, but I suspect it will be for a newer one. Possibly a brand-new language with a brand-new debugger. (I suspect that it will be an interpreted language.) Once people see the advantages of it, the idea will be adopted by other language teams.

The new technique may be so different that we don't call it a debugger. We may give it a new name. So it may be that the new debugger is not a debugger at all.

Wednesday, July 14, 2021

Windows 11 is change, which is not new

Microsoft announced Windows 11, and with it a set of requirements for the hardware that will run Windows 11. This is not new; every version of Windows has had a list of "minimum required hardware". Yet some folks are quite upset about the requirements. Why are they so upset?

Looking back over the history of PCs (and going back to the first IBM PC, before the days of Windows), we can see a steady pattern of improvements to hardware and operating systems that took advantage of those improvements. New versions often required better hardware.

The first IBM PCs came without hard disks, and floppy disks were an option. DOS, the PC operating system before Windows, required floppy disks. IBM's PC XT included a hard disk, and DOS version 2 took advantage of the hard disk. (And DOS 2 was required in order to use the hard disk.) One could run DOS 2 on a floppy-only PC -- if it had enough memory -- but it provided little advantage. Systems with insufficient memory were not supported.

Windows 3.0, the first version of Windows to achieve popularity, would run on a PC with an 8088 processor, but it required a hard drive, and the multimedia operations required an 80286 processor and a CD drive. Here we see that older, less capable systems are not supported.

Windows NT and each of its successors have set requirements for processor, memory, graphics, and disk space. Windows 2000, Windows XP, Windows Vista, Windows 8, and Windows 10 all have requirements for hardware.

So we should be used to the idea that new operating systems will not support older systems.

But I keep coming back to the question: why are people so upset about this version of Windows? What is it with Windows 11 that makes people complain?

I can think of several reasons:

First, this announcement was a surprise. Microsoft has, for the past several years, released Windows 10 and kept the hardware requirements unchanged. Those requirements allowed for a broad swath of PCs to run Windows 10. (I myself have PCs from 2007 and 2012 that are running Windows 10.) There had been nothing in the messages from Microsoft indicating that Windows 10 would be replaced, or that the hardware requirements would change. Until now.

Second, the new requirements have dropped support for a lot of PCs, and perhaps folks are still using these older PCs. By raising the hardware bar for Windows, Microsoft has declared some (okay, lots of) PCs are "unworthy". If a person happens to have one of those PCs, they may consider this an insult.

But the reason I truly suspect is a different one.

Past updates and changes to hardware requirements have had clear benefits. When Windows/386 wanted a VGA card, we understood that the graphics capabilities of earlier video cards were not sufficient for the desired experience. When an operating system required a 16-bit card for the network interface, we understood that the transfer speeds of the older 8-bit cards were not sufficient. When Windows NT required an Intel 386 processor, we understood that the older 8088 and 80286 processors were not sufficient to provide multitasking the way we wanted it.

With past upgrades, we understood the reasons for the required hardware. That's not true with Windows 11.

Windows 11 needs a certain amount of memory and disk space; that's understood. It also needs a TPM 2.0 chip; we understand that. But Windows 11 has requirements for a certain, not-well-understood subset of Intel processors. (It's not clear that Microsoft understands the subset, either.)

Part of the problem is Intel's product line. Intel has gobs of processor models. It has so many that the old names of "8088" and "80286" or "Pentium 1" and "Pentium 3" don't work. Instead, Intel uses letters and numbers, something like i7-6550 and i5-5204. (Those aren't real models; I made them up. Or maybe they are real, maybe I hit on actual product numbers. But you get the idea.)

Intel has shipped, over the past decade, possibly thousands of different processor models, each with different features. Most people don't care about most of the differences. The typical person looks at the processor clock speed and the number of cores, and little else. Hardware enthusiasts and game players may look at socket type and cache size.

Only the folks who write operating systems and low-level drivers go beyond those to look at the arcane aspects of the different processors. Those aspects can include the handling of interrupts, privileged execution of certain instructions, fixes to errors in the instruction set, virtual memory, and virtual machines.

It is these differences that are important to Microsoft. Windows has to work with all of those processors. It has to handle the quirks of each processor. It has to "know" that it can trust an instruction on some processors and not trust it on others. All of those quirks add up, and they can interact in strange and subtle ways.

On top of that, Microsoft has to test each of those configurations (preferably on real processors, not simulations). That means that Microsoft has to maintain a large collection of hardware.

By limiting the processors to those designed and shipped in the past three years, Microsoft eliminates the older processors and in so doing reduces the variation that they cause. The reduced set of processors allows for (relatively) simpler code for Windows, and a simpler test process.

But none of this is obvious. Microsoft has not said "we're limiting the supported processors to those we can test on", nor have they said "we're limiting the supported processors to those that have these (insert arcane aspect) features".

All we have is a vague announcement. (And I will say that the whole "Windows 11" announcement seems rushed. It doesn't have the depth and details of previous announcements from Microsoft. But that's another topic.)

That vague announcement does not give us understanding. And because we don't understand the reasons, we resent the change. That's basic psychology.

I will close with a few thoughts:

- Microsoft, I think, has thought about Windows 11 and its requirements, and has made a good decision.
- The reasoning behind that decision is not available to us, so we see the change as arbitrary.
- It is easy to resent what we do not understand.
- Microsoft was probably surprised by the reaction to the announcement, and may be working on more announcements.
- While I don't understand Microsoft's decision, I have faith that they have a good process.

A poor message can hide a good process; let's wait for more information.

Also - Microsoft is not alone in changing hardware requirements. Apple has done so with every new version of macOS (I think). Even Linux drops support for older systems. I have an old 32-bit MacBook running Ubuntu 16.04 with no way to upgrade because Ubuntu now requires 64-bit processors.

Monday, July 5, 2021

Returning to the office, or not

As the COVID-19 pandemic wanes, companies must consider their future. Do they require all employees to return to the office? Do they continue a "work from home" policy?

Some companies have announced "work from home forever" policies. 

Some companies have required employees to return to the office. The response has often been one of protest, with some workers quitting their jobs instead of returning to the office.

Other companies have announced hybrid schedules. Apple is probably the largest company to announce such a schedule. Its employees protested such a plan.

Why such hostility to returning to normal? I have a few ideas.

Work-life balance: The shift to work-from-home has given workers experience with a different work-life balance. Parents have been able to watch their children. Workers have been able to not only get a cup of coffee but also do the laundry while working.

Companies, for a long time, have asked their employees to think about work-life balance, probably so the company can boast high scores for employee engagement and quality of workplace. Companies want to rank high on the "best places to work" surveys. With work-from-home experience, employees now have a new understanding of "work-life balance". It is now more than an item on an annual survey. Employees have more control and a better work-life balance when working from home, and they like it.

The commute: Employees have found that a "commute" from the bedroom to the kitchen table (or wherever they set up their workspace) is much more pleasant than the daily drive to the office. (Or the bus ride. Or the carpool.) To companies, this is an externality; employees bear the entire cost and effort of the commute, and must allow for delays and side trips. Asking people to switch from a 20-second walk to a 30-minute drive (and pay for gasoline and tolls) is imposing a cost, and employees are aware of the cost. It's easy to understand the advantages of the work-from-home commute.

Privacy: The modern office, with either an open-floor plan or even cubicles, provides little in the way of privacy. Interruptions can be frequent. Distractions can be many. Working at home provides interruptions and distractions too, but they can be moderated. Some people have even found ways to work in a spare room, with a door that can be closed.

I suspect that most employers are asking their workers to return to the (unmodified) offices with open-floor plans. Employees know that this will mean more noise, more distractions, and less privacy.

Lighter supervision: Employees working at home have zero chance of a manager walking past their workspace. Managers must make an extra effort to contact an employee. This means that employees can spend more time working, which can mean higher productivity. (Or at least longer periods of focus on work and less on management.)

If companies want employees to return to the office, they can mandate it (and workers will grudgingly return but they won't be happy) or they can request it (but they must make a good case to persuade workers to return).

A winning presentation to workers will have to address all of the above ideas. Saying "we'll all be in one place" won't cut it, nor will "we work better when we're together". Workers have had some degree of independence, and they (for the most part) like it.