Tuesday, August 24, 2021

After the PC

Is the M1 Mac a PC? That is, is the new Macbook a personal computer?

To answer that question, let's take a brief trip through the history of computing devices.

Mainframes were the first commercially viable electronic computing devices. They were large and consisted of a processing unit and a few peripherals. The CPU was the most important feature.

Minicomputers followed mainframes, and had an important difference: They had terminals as peripherals. The most important feature was not the CPU but the number of terminals.

PCs were different from minicomputers in that they had integrated video and an integrated keyboard. They did not have terminals. In this sense, M1 Macs are PCs.

Yet in another sense, M1 Macs are something different: They cannot be modified.

IBM PCs were designed for modification. The IBM PC had slots for accessory cards. The information for those slots was available. Other manufacturers could design and sell accessory cards. Consumers could open the computer and add hardware. They could pick their operating system (IBM offered DOS, CP/M-86, and UCSD p-System).

The openness of the IBM PC was somewhat unusual, though not unique: Apple II (Apple ][) computers had a lift-off lid and expansion slots. But Macs were not openable (you needed a special tool), and other computers of the era were typically built in a way that discouraged modifications.

The IBM PC set the standard for personal computers, and that standard was "able to be opened and modified".

The M1 Macs are systems on a chip. The hardware cannot be modified. The CPU, memory, disk (well, let's call it "storage", to use the old mainframe-era term) are all fixed. Nothing can be replaced or upgraded.

In this sense, the M1 Mac is not a PC. (Of course, if it is not a PC, then what do we call it? We need a name. "System-on-a-chip" is too long, "SoC" sounds like "sock" and that won't do, either.)

I suspect that the folks at Apple will be happy to refer to their products with a term other than "PC". Apple fans, too. But I don't have a suitable, generic, term. Apple folks might suggest the term "Mac", as in "I have a Mac in my backpack", but the term "Mac" is not generic. (Unless Apple is willing to let the term be generic. If so, when I carry my SoC Chromebook, I can still say "I have a Mac in my backpack." I doubt Apple would be happy with that.)

Perhaps the closest thing to the new Apple M1 Macs is something so old that it predates the IBM PC: The electronic calculator.

Available in the mid-1970s, electronic calculators were quite popular. Small and yet capable of numeric computations, they were useful for any number of people. Like the Apple M1 Mac, they were designed to remain unopened by the user (except perhaps to replace batteries) and they were not modifiable.

So perhaps the Apple M1 Macbooks are descendants of those disco-era calculators.

* * * * *

I am somewhat saddened by the idea that personal computers have evolved into calculators, that PCs are not modifiable. I gained a lot of experience with computers by modifying them: adding memory, changing disk drives, or installing new operating systems.

A parallel change occurred in the automobile industry. In the 1950s, lots of people bought cars and tinkered with them. They replaced tires and shock absorbers, adjusted carburetors, installed superchargers and turbochargers, and replaced exhaust pipes. But over time, automobiles became more complex and more computerized, and now very few people get involved with their cars. (There are some enthusiastic car-hackers, but they are few in number.)

We lost something with that change. We lost the camaraderie of learning together, of car clubs and amateur competitions.

We lost the same thing with the change in PCs, from open, modifiable systems to closed, optimized boxes.

Wednesday, August 18, 2021

Apple's trouble with CSAM is from its approach to new features

Apple's method, its modus operandi, is to design a product or process and present it to its customers. There is no discussion (at least none outside of Apple), there is no gathering of opinion, there is no debate. Apple decides, implements, and sells. Apple dictates, customers follow.

We can see this behavior in Apple's hardware.

Apple removed the floppy disk from the Macintosh. Apple removed the CD drive from the iMac. Apple removed the audio port on the iPhone.

We can also see this behavior in Apple's software.

Apple designs the UIs for iOS and iPadOS and has very definite ideas about how things ought to be.

This method has served Apple well. Apple designs products, and customers buy them.

Yet this method failed with CSAM.

Apparently, some customers don't like being dictated to.

One aspect of the CSAM change is that it is a change to an existing service. If Apple changes the design of the iPhone (removing the audio port, for example), customers still have their old audio-port-equipped iPhone. They can choose to replace that iPhone with a later model -- or not.

If Apple had introduced a new file storage service, and made CSAM scanning part of that service, customers could choose to use that new service -- or not.

But Apple made a change to an existing service (scanning photos in iCloud), and to existing iPhones (if Apple scans the photos on the phone). That presents a different calculation to customers. Instead of choosing to use a new service or not, customers must now choose either to stop using a service or to continue using the modified service. Changing from one cloud storage service to another is a larger task. (And it may not even be possible with iPhones and iPads.)

Customers don't have the option of "buying in" to the new service. The changes for CSAM are not something that one can put off for a few months. One cannot look at the experience of other customers and then decide to participate.

No, this change is Apple flexing its muscles, altering the deal that was already established. ("Pray that I do not alter it further," quipped Darth Vader in a movie from a long time ago.)

I think that the negative reaction to Apple's CSAM strategy is not so much the scanning of photos, but the altering of the existing service. I think that people now realize that Apple can, and will, alter services. Without the consent, or even the advice, of the customers. I think that is what is bothering people.

Sunday, August 15, 2021

COBOL and Elixir

Someone has created a project to transpile (their word) COBOL to Elixir. I have some thoughts on this idea. But first, let's look at the example they provide.

A sample COBOL program:

      >>SOURCE FORMAT FREE
IDENTIFICATION DIVISION.
PROGRAM-ID. Test1.
AUTHOR. Mike Binns.
DATE-WRITTEN. July 25th 2021.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 Name     PIC X(4) VALUE "Mike".
PROCEDURE DIVISION.

DISPLAY "Hello " Name

STOP RUN.

This is "hello, world" in COBOL. Note that it is quite a bit longer than the equivalent program in most languages. Also note that while long, it is still readable. Even a person who does not know COBOL can make some sense of it.

Now let's look at the same program, transpiled to Elixir:

defmodule ElixirFromCobol.Test1 do
  @moduledoc """
  author: Mike Binns
  date written: July 25th 2021
  """

  def main do
    try do
      do_main()
    catch
      :stop_run -> :stop_run
    end
  end 

  def do_main do
    # pic: XXXX
    var_Name = "Mike"
    pics = %{"Name" => {:str, "XXXX", 4}}
    IO.puts "Hello " <> var_Name
    throw :stop_run
  end
end

That is ... a lot of code. More than the code for the COBOL version! Some of that is due to the exception used for "STOP RUN", which in this small example seems excessive. Why wrap the core function inside a main() that exists simply to trap the exception? (There is a reason. More on that later.)

I'm unsure of the reason for this project. If it is a side project made on a whim, and used for the entertainment (or education) of the author, then it makes sense.

But I cannot see this as a serious project, for a couple of reasons.

First, the produced Elixir code is longer, and in my opinion less readable, than the original COBOL code. I may be biased here, as I am somewhat familiar with COBOL and not at all familiar with Elixir, so I can look at COBOL code and say "of course it does that" but when I look at Elixir code I can only guess and think "well, maybe it does that". Elixir seems to follow the syntax for modern scripting languages such as Python and Ruby, with some unusual operators.

Second, the generated Elixir code provides some constructs which are not used. This is, perhaps, an artifact of generated code. Code generators are good, up to a point. They tend to be non-thinking; they read input, apply some rules, and produce output. They don't see the bigger picture. In the example, the transpiler has produced code that contains the variable "pics", which holds information about the COBOL program's PICTURE clauses, but this "pics" variable is never used.

The "pics" variable hints at a larger problem, which is that the transpiled code is not running the equivalent program but is instead interpreting data to achieve the same output. The Elixir program is, in fact, a tuned interpreter for a specific COBOL program. As an interpreter, its performance will be less than that of a compiled program. Where COBOL can compile code to handle the PICTURE clauses, the Elixir code must look up the PICTURE clause at runtime, decode it, and then take action.
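The cost of that runtime interpretation is easy to see in a sketch. Below is a minimal illustration in Python (used here only because the idea is language-independent); the pic_format helper is hypothetical, not part of the transpiler project:

```python
# A minimal sketch, in Python for illustration, of what "interpreting a
# PICTURE clause at runtime" means. A COBOL compiler fixes this
# formatting at compile time; the transpiled program must decode the
# clause on every access. The pic_format function is hypothetical.

def pic_format(value, pic):
    """Format a value according to a (much simplified) PICTURE clause."""
    if pic.startswith("X"):              # alphanumeric: pad or truncate
        width = len(pic)                 # e.g. "XXXX" has width 4
        return str(value)[:width].ljust(width)
    if pic.startswith("9"):              # numeric: zero-fill to width
        width = len(pic)
        return str(int(value)).rjust(width, "0")
    raise ValueError("unsupported PICTURE: " + pic)

# Runtime metadata, mirroring the 'pics' map in the transpiled code.
pics = {"Name": ("str", "XXXX", 4)}
print("Hello " + pic_format("Mike", pics["Name"][1]))   # prints "Hello Mike"
```

Every DISPLAY or MOVE pays this decoding cost at runtime, where compiled COBOL paid it once, at compile time.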

My final concern is the most significant. The Elixir programming language is not a good match for the COBOL language. Theoretically, any program written in a Turing-complete language can be re-written in a different Turing-complete language. That's a nice theory, but in practice converting from one language to another can be easy or can be difficult. Modern languages like Elixir have functional and structured programming constructs. COBOL predates those constructs and has procedural code and global variables.

We can see the impedance mismatch in the Elixir code to catch the "stop run" exception. A COBOL program may contain "STOP RUN" anywhere in the code. The Elixir transpiler project has to build extra code to duplicate this capability. I'm not sure how the transpiler will handle global variables, but it will probably be a method that is equally tortured. Converting code from a non-structured language to a structured programming language is difficult at best and results in odd-looking code.
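The shape of that extra code is easy to reproduce. Here is the same "STOP RUN anywhere" trick sketched in Python (the paragraph names are invented for illustration): a custom exception stands in for COBOL's STOP RUN, and a wrapper catches it, just as the transpiled Elixir code throws and catches :stop_run.

```python
# A Python sketch of the "STOP RUN anywhere" trick. The paragraph names
# are invented for illustration. A custom exception stands in for
# COBOL's STOP RUN, and a wrapper catches it -- just as the transpiled
# Elixir code throws and catches :stop_run.

class StopRun(Exception):
    """Raised wherever the COBOL source said STOP RUN."""

def para_compute(results):
    results.append("computing")
    raise StopRun()              # STOP RUN in the middle of a paragraph

def para_report(results):
    results.append("reporting")  # never reached

def main():
    results = []
    try:
        para_compute(results)
        para_report(results)
    except StopRun:
        pass                     # normal COBOL termination
    return results

print(main())   # prints ['computing']
```

The wrapper exists only to give a mid-program STOP RUN somewhere to land; it carries no logic of its own.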

My point here is not to insult or to shout down the transpiler project. It will probably be an educational experience, teaching the author about Elixir and probably more about COBOL.

My first point is that programs are designed to match the programming language. Programs written in object-oriented languages have object-oriented designs. Programs written in functional languages have functional designs. Programs written in non-structured languages have... non-structured designs. The designs from one type of programming language do not translate readily to a programming language of a different type.

My second point is that we assume that modern languages are better than older languages. We assume that object-oriented languages like C++, C#, and Java are better than (non-OO) structured languages like Pascal and Fortran-77. Some of us assume that functional languages are better than object-oriented languages.

I'm not so sure about those assumptions. I think that object-oriented languages are better at some tasks than mere structured languages, and older structured-only languages are better at other tasks. Object-oriented languages are useful for large systems; they let us organize code into classes and functions, and even larger constructs through inheritance and templates. Dynamic languages like Python and Ruby are good for some tasks but not others.

And I must conclude that even older, non-functional, non-dynamic, non-object-oriented, non-structured programming languages are good for some tasks.

One analogy of programming languages is that of a carpenter's toolbox: full of various tools for different purposes. COBOL, one of the oldest languages, might be considered the hammer, one of the oldest tools. Hammers do not have the ability of saws, drills, tape measures, or levels, but carpenters still use them, when the task is appropriate for a hammer.

Perhaps we can learn a thing or two from carpenters.

Sunday, August 8, 2021

Apple and the photographs

There has been a lot of discussion about Apple's plan to identify photographs that contain illegal material. Various comments have been made on the "one in one trillion" estimate for false positives. Others have commented on the ethics of such a search.

One idea I have not seen discussed is the reason for Apple to do this. Why would Apple choose to identify these images? Why now?

It doesn't help sell Apple products.

It doesn't help sell Apple services.

It doesn't improve Apple's reputation.

Yet Apple made the effort to identify illegal photographs, which included requirements, design, coding, and testing. It cost Apple money to do this.

If Apple gains no revenue, gains no reputation, has no material gain at all, then why should they do it? Why make the effort and why expend the money?

Perhaps there is a gain, but one that Apple has not revealed to us. What unrevealed reason could Apple have to examine and identify photographs on Apple equipment? (Or more specifically, in Apple iCloud?)

The one thought that comes to mind is an agreement with law enforcement agencies. In such an agreement, Apple scans photographs and reports illegal material to law enforcement agencies. In exchange, Apple gets... what? Something from the government? Payment? Or perhaps they don't get something from the government, such as a lawsuit or regulatory interference.

I'm speculating here. I have no knowledge of Apple's motivations. Nor do I have any knowledge of such a deal between government and Apple -- or any company. (And keep in mind that there are governments other than the US government that may make such requests.)

But if there is a deal, then perhaps Apple is not the last company to perform such action. We may see other companies announce similar efforts to identify illegal material. Worse, we may learn that some companies have implemented such programs without informing their customers.

Thursday, August 5, 2021

The roaring success of Windows 365

Microsoft announced Windows 365, its "Windows as a Service" offering that lets one (if one is a business) create and use virtual Windows desktops. And just as quickly, Microsoft announced that it was suspending new accounts, because too many had signed up for the service.

A few thoughts:

First (and obvious) is that Microsoft underestimated the demand for Windows 365. Microsoft hosts the service on its Azure framework, and if the demand for Windows 365 outstrips the available servers, then it is popular indeed.

Second, (and perhaps less obvious) is that Microsoft is (probably) kicking themselves for pricing the service as they did. With strong demand, Microsoft apparently "left money on the table" and could have charged higher rates.

Third, (and also not so obvious) is that Microsoft's business customers (that is, most businesses) really want to move from their current arrangement for PC hardware to a different one. They either want to move to Windows 365 or they want to try it -- which indicates that they are not happy with their current PC platform. (That current platform might be physical, real-world PCs on employee desks, or it might be virtual PCs accessed by Remote Desktop or Citrix or some other mechanism.)

The desire for customers to try a different solution is, in my mind, a warning for Microsoft. It means that customers are not happy with the current state of Windows and its support -- new versions, upgrades, security patches, and administration. It could mean that customers will entertain other solutions, including Linux and Apple.

A few other thoughts:

With demand from the business community much stronger than expected, Microsoft will probably focus on business customers. In other words, I expect Microsoft to offer no comparable service for individuals or families -- at least not for the next two years. I expect the cost for a consumer product is higher than the cost for a commercial product, and the revenue for a consumer product is lower than the revenue for a business product.

Microsoft may leverage demand for the Windows 365 service to spur Windows 11 sales. They have announced Windows 365 with access from various devices, and the opportunity is to provide additional services for Windows 11 clients. (Better security, better integration between access computer and virtual desktop, and possibly lower costs.)

Finally, there is the outside chance that Microsoft will provide a special edition of Windows 11, one that is stripped down and usable only to access Windows 365. (Much like ChromeOS is suitable only to run Chrome.) Microsoft may even design and sell hardware to support a Windows 11 "C" mode ("C" for "Connectivity"?).

The strong demand for Windows 365 shows that lots of people are interested in it. Microsoft won't ignore that. Be prepared for more announcements for this new service.

Wednesday, July 28, 2021

Linux is a parasite, and it may be our future

Linux is a parasite.

So is Unix.

The first Unix ran on a DEC PDP-7. But DEC did not sell PDP-7s to run Unix; it sold them to run its own operating system called "DECsys".

Later Unix versions ran on PDP-11s. But DEC did not sell PDP-11s to run Unix; it sold them to run its own operating systems, such as RT-11, RSX-11, and RSTS/E.

DEC's minicomputers were simple, compared to today's PCs. They would load and run just about any program. On many models, the loader program (what we would call the bootstrap code) was entered by hand on a front panel.

There was no trusted platform, no TPM, no signed code. It was easy to load Unix onto a DEC minicomputer. The success of Unix was due, in part, to the openness of those minicomputers.

But to be honest, Unix was a parasite. It took advantage of the hardware that was available.

Linux is in the same way a parasite on PCs. PCs are sold to run Windows. (Yes, a few are sold with Linux. But PCs are designed to run Windows, and the vast majority are sold with Windows.)

PC hardware has been, from the original IBM PC, open and well-documented. Linux took advantage of that openness, and has enjoyed a modicum of success.

Linux is a parasite on Apple PCs too, taking advantage of the hardware that Apple designed.

But the life of a parasite is not easy.

As Apple changes its hardware and bolsters security, it becomes harder to run Linux on an Apple PC. It is possible to run Linux on an M1 MacBook. I expect that the effort will increase over the next few years, as Apple introduces more changes to defend against malware.

Microsoft is making similar changes to Windows and the PC platform. Microsoft designs and builds a small number of PCs, and issues a specification for the hardware to run Windows. That specification is changing to defend against malware. Those changes also make it harder to install Linux.

Will we see a day when it is impossible to install Linux on a PC? Or on a Macbook? I think we will, probably with Apple equipment first. Devices such as the iPhone and Apple Time Capsule require signed code to boot an operating system, and Apple is not divulging the signing keys. It is not possible to install Linux on them. I think a similar fate awaits Apple's Macbooks and iMac lines. Once that happens, Linux will be locked out of Apple hardware.

Chromebooks look for code signed by Google, although in developer mode they can boot code that has been signed by others. (The Chromebook boot code looks for a signed kernel, but it doesn't care who signed it.)

Microsoft is moving towards signed code. Windows version 11 will require signed code and a TPM (Trusted Platform Module) in the PC. There are ways to load Linux on these PCs, so Linux has not yet been locked out.

I think Microsoft recognizes the contributions that Linux makes to the ecosystem, and is taking steps to ensure that Linux will be available on future PCs. Apple, I think, sees no benefit from Linux and is willing to lock Linux out of Apple devices. Microsoft sees value in letting Linux run on PCs; Apple doesn't.

It might be that Microsoft is preparing a radical change. It may be that Microsoft is getting ready to limit Windows to virtual systems, and drop support for "real" PCs. The new "Windows 365" product (virtual computers running Windows accessible from a browser) could be the future of Windows.

In this fantasy world I am constructing, Microsoft provides Windows on virtual hardware and not anywhere else. Access to Windows is available via browser, but one must acquire the hardware and operating system to run the browser. That could be an old PC running an old (in the future) version of Windows 10 or Windows 11, or it could mean a Chromebook running ChromeOS, or it could mean a desktop PC running Linux.

This would be a big change -- and I'm not saying that it will happen, only that it may happen -- and it would have profound effects on the IT world. There are some thoughts that come to mind:

First, performance becomes less important for the physical PC running the browser. The heavy CPU work is on the server side. The PC hosting the browser is a fancy terminal, displaying the results of the computation but not performing the computation. The race for speed shifts to the servers hosting the virtual instances of Windows. (And there is less pressure to update local PCs every three years.)

Second, the effort to develop and support Windows drops significantly. A lot of work for Microsoft is maintaining compatibility with hardware. Windows works with just about every piece of hardware going back decades: printers, video cards, disk drives, cameras, phones, ... you name it, Windows supports it. If Microsoft shifts to a virtual-server-only version of Windows, a lot of that work disappears from Microsoft's queue. The work doesn't vanish; it shifts to the people building the non-virtual PCs that run the browsers. But the work (and the expense) does vanish from Microsoft's accounts.

Third, this change is one that Apple cannot follow. Apple has built its strategy of privacy on top of a system of local processing -- a secure box. Apple doesn't send data to remote servers -- doing so would let your personal data escape the secure box. Apple has no way to offer virtual instances of macOS that correspond to Windows 365 without breaking that secure box. (And just as Windows 365 allows for longer lifespans of local PCs, virtual macOS would allow for longer lifespans of Macs and Macbooks -- something that Apple would prefer not to see, as it relies on consumers replacing their equipment every so often.)

If Microsoft does make this change, the prospects for Linux improve. If Microsoft pulls Windows off of the market, then PC manufacturers must offer something to run on their hardware. That something cannot be macOS, and it certainly won't be FreeDOS. (As good as it is, FreeDOS is not what we need.)

The operating system that comes with PCs may be Linux, or a variant of Linux built for laptop makers. There could be two versions: a lightweight version that is close to ChromeOS (just enough to run a browser) and a heavier version that is close to today's Linux distros.

If Microsoft makes this change -- and again, I'm not sure that they will -- then we really could see "the year of the Linux desktop". Oh, and it would mean that Linux would no longer be a parasite.

Tuesday, July 20, 2021

Debugging

Development consists of several tasks: analysis, design, coding, testing, and deployment are the typical tasks listed for development. There is one more: debugging, and that is the task I want to talk about.

First, let me observe that programmers, as a group, like to improve their processes. Programmers write the compilers and editors and operating systems, and they build tools to make tasks easier.

Over the years, programmers have built tools to assist in the tasks of development. Programmers were unhappy with machine coding, so they wrote assemblers which converted text codes to numeric codes. They were unhappy with those early assemblers because they still had to compute locations for jump targets, so they wrote symbolic assemblers that did that work.

Programmers wrote compilers for higher-level languages, starting with FORTRAN and FLOW-MATIC and COBOL. We've created lots of languages, and lots of compilers, since.

Programmers created editors to allow for creation and modification of source code. Programmers have created lots of editors, from simple text editors that can run on a paper-printing terminal to the sophisticated editors in today's IDEs.

Oh, yes, programmers created IDEs (integrated development environments) too.

And tools for automated testing.

And tools to simplify deployment.

Programmers have made lots of tools to make the job easier, for every aspect of development.

Except debugging. Debugging has not changed in decades.

There are three techniques for debugging, and they have not changed in decades.

Desk check: Not used today. Used in the days of mainframe and batch processing, prior to interactive programming. To "desk check" a program, one looks at the source code (usually on paper) and checks it for errors.

This technique was replaced by tools such as lint and techniques such as code reviews and pair programming.

Logging: Modify the code to print information to a file for later examination. Also known as "debug by printf()".

This technique is in use today.
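In Python, for instance, the technique usually takes the form of the standard logging module rather than bare print() calls, so the debug output can be switched on and off without editing the code. A small illustrative sketch (the average() function is invented for the example):

```python
# "Debug by printf()" in its modern form: Python's standard logging
# module. The debug statements stay in the code permanently; a single
# level setting turns the output on or off. The average() function is
# just an invented example.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("demo")

def average(values):
    log.debug("average() called with %r", values)
    total = sum(values)
    log.debug("total=%s count=%s", total, len(values))
    return total / len(values)

print(average([2, 4, 6]))   # prints 4.0, with DEBUG lines on stderr
```

Change level=logging.DEBUG to logging.WARNING and the debug lines fall silent, without touching the function.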

Interactive debugging: This technique has been around since the early days of Unix. It was available in 8-bit operating systems like CP/M (the DDT program). The basic idea: Run the program under the debugger, pausing its execution at some point. The debugger keeps the program loaded in memory, and one can examine or modify data. Some debuggers allow you to modify the code (typically with interpreted languages).

This technique is in use today. Modern IDEs such as Visual Studio and PyCharm provide interactive debuggers.

Those are the three techniques. They are fairly low-level technologies, and require the programmer to keep a lot of knowledge in his or her head.

These techniques gave us Kernighan's quote:

"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

— The Elements of Programming Style, 2nd edition, chapter 2

These debugging techniques are the equivalent of assemblers. They allow programmers to do the job, but put a lot of work on the programmers. They assist with the mechanical aspects of the task, but not the functional aspects. A programmer, working on a defect and using a debugger, usually follows a procedure like this:

- understand the defect
- load the program in the debugger
- place some breakpoints in the source code, to pause execution at points that seem close to the error
- start the program running, wait for a breakpoint
- examine the state of the program (variables and their contents)
- step through the program, one line at a time, to see which decisions are made ('if' statements)

This process requires the programmer to keep a model of the program inside his or her head. It requires concentration, and interruptions or distractions can destroy that model, requiring the programmer to start again.

I think that we are ready for a breakthrough in debugging. A new approach that will make it easier for the programmer.

That new approach, I think, will be innovative. It will not be an incremental improvement on the interactive debuggers of today. (Those debuggers are the result of 30-odd years of incremental improvements, and they still require lots of concentration.)

The new debugger may be something completely new, such as running two (slightly different) versions of the same program and identifying the points in the code where execution varies.
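That idea can be prototyped today, at least crudely. The sketch below (Python, with invented example functions) uses sys.settrace to record the sequence of executed line offsets in two variants of a function, then reports the first step at which the traces diverge:

```python
# A toy version of "run two variants and find where execution diverges".
# sys.settrace records the line offsets each function executes; the
# offsets are relative to the start of the function, so two variants
# defined at different places in the file can be compared.
import itertools
import sys

def trace_lines(func, *args):
    """Run func(*args) and return the executed line offsets."""
    offsets = []
    def tracer(frame, event, arg):
        if event == "line":
            offsets.append(frame.f_lineno - frame.f_code.co_firstlineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return offsets

def version_a(n):
    if n > 10:
        return "big"
    return "small"

def version_b(n):
    if n >= 10:          # the only difference: >= instead of >
        return "big"
    return "small"

def first_divergence(trace1, trace2):
    """Index of the first differing step, or None if identical."""
    for i, (a, b) in enumerate(itertools.zip_longest(trace1, trace2)):
        if a != b:
            return i
    return None

# n=10 takes different branches in the two versions.
print(first_divergence(trace_lines(version_a, 10),
                       trace_lines(version_b, 10)))   # prints 1
```

A real tool would need to align traces across renamed or moved code, which is the hard part; this sketch only shows that the raw trace data is cheap to collect.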

Or possibly new techniques for visualizing the data of the program. Today's debuggers show us everything, with limited ways to specify items of interest (and other items that we don't care about and don't want to see).

Or possibly visualization of the program's state, which would be a combination of variables and executed statements.

I will admit that the effort to create a debugger (especially a new-style debugger) is hard. I have written two debuggers in my career: one for 8080 assembly language and another for an interpreter for BASIC. Both were challenges, and I was not happy with the results for either of them. I suspect that to write a debugger, one must be twice as clever as when writing the compiler or interpreter.

Yet I am hopeful that we will see a new kind of debugger. It may start as a tool specific to one language. It may be for an established language, but I suspect it will be for a newer one. Possibly a brand-new language with a brand-new debugger. (I suspect that it will be an interpreted language.) Once people see the advantages of it, the idea will be adopted by other language teams.

The new technique may be so different that we don't call it a debugger. We may give it a new name. So it may be that the new debugger is not a debugger at all.