Tuesday, July 26, 2022

Mental models of computers

In the old movie "The Matrix", there is a scene in which the character Cypher is looking at code and another character, Neo, asks why he looks at the code and not at the presentation-level view. Cypher explains that the code is better, because the presentation level is designed to fool us humans. It is at this moment that Neo re-thinks his view of computers.

That scene got me thinking (some years after the movie's debut) about my own view of computers.

My mental model of computers is based on text. That is, I think of a computer as a device that processes text and talks to other devices that process text. The CPU processes text, terminals display text to users and accept text via the keyboard, printers print text, and disks store text. (Disks also store data in binary form, but for me that is simply a strange form of text.)

This model is perhaps not surprising, as my early experiences with computers were with text-oriented devices and text-oriented programs. Those computers included a DEC PDP-8 running BASIC; a DECsystem-10 running TOPS-10 with FORTRAN, Pascal, and a few other text-oriented languages; and a Heathkit H-89 running HDOS (an operating system much like DEC's RT-11) with BASIC, assembly language, FORTRAN, C, and Pascal.

The devices I used to interact with computers were text terminals. The PDP-8 used Teletype ASR-33s, which had large mechanical keyboards (way more mechanical than today's mechanical keyboards) and printed text on a long continuous roll of paper. The DECsystem-10 and the H-89 both used CRT terminals (no paper) that displayed mostly text, with a few special graphics characters.

In those formative years, all of my experience with computers was for programming. That is, the primary purpose of a computer was to learn programming and to do programming. Keep in mind that this was before much of the technical world we have today. There was no Google, no Netflix, no Windows. Spreadsheets were the new thing, and even they were text-oriented. The few graphs that existed in computing were made on special (read that as "expensive and rare") equipment that few people had.

In my mind, back then, computers were for programming, programming was a process that used text, and computers processed text, so the two were a good match.

Programming today is still a text-oriented process. The "source code" of a program, the version that we humans write and that computers either compile or interpret into executable form, is text. One can write programs in the Windows "Notepad" program. (One must save the file to disk and then tell the compiler to convert that saved file, but that is simply the process of getting a program to run.)
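To make the point concrete, here is a complete "hello, world" program in C (one of the languages from my early years). The file name and the gcc command below are just one possible arrangement; any C compiler would do. The program is nothing more than a few lines of plain text:

    #include <stdio.h>                    /* standard text output library */

    int main(void)
    {
        printf("hello, world\n");         /* write one line of text */
        return 0;
    }

Save that text in a file (hello.c, to pick a name), compile it with something like "gcc hello.c -o hello", and the result is a program whose entire output is a single line of text.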

So what does this have to do with "The Matrix" and specifically why is that one scene important?

It strikes me that while my experience with computers started with programming and text-oriented devices, not everyone (especially nowadays) has that same experience. Today, people are introduced to computers with cell phones, or possibly tablets. A few may get their first experience on a laptop running Windows or macOS.

All of these are different from my text-based introduction. And all of these are graphics-based. People today, when they first encounter computers, encounter graphical interfaces, and they use computers for many things other than programming. People today must have a very different mental model of computers. I saw computers as boxes that processed text. Today, most people see computers as boxes that process graphics, and sound, and voice.

What a shock it must be for someone today to start to learn programming. They are taken out of their comfortable mental model and forced to use text. Some classes begin with simple "hello, world" programs that not only use text source code but also produce text output. How primitive this must seem to people familiar with graphical interfaces! (Some classes begin with simple programs that present web pages, which is a bit better in that the output is familiar, but the source code is still text.)

But this different mental model may be a problem for people entering the programming world. They are moving from a graphical world to a text-based world, and that transition can be difficult. Modern IDE programs ease the transition by allowing many operations in a graphical environment, but the source code remains text.

Do people revolt? Do they reject the text-oriented approach to source code? I imagine that some find the change in mental models difficult, perhaps too difficult, and they abandon programming.

A better question is: Why has no one created a graphics-oriented programming language? Not just a programming language hosted in an IDE -- we already have those. I'm thinking of a new approach to programming, something very different from the text approach of today.

It might be that programming has formed a self-reinforcing loop. Only programmers can create new programming languages and programming environments, and these programmers (obviously) are comfortable with the text paradigm. Perhaps they see no need to make such a large change.

Or it might be that the text model is the best model for programming. Programming is the organization of ideas into clearly specified collections and operations, and text handles that task better than graphics. Visual representations of collections and operations can be clear, but they can also be ambiguous. (Then again, text representations can also be ambiguous, so I'm not sure that there is a clear advantage for text.)

Or possibly we simply have not seen the right person to come along, with the right mix of technical skills, graphics abilities, and desire for a visual programming language. It may be that graphical programming languages are possible, and that we just haven't invented them.

I want to think it is the last of these reasons, because that means there is a lot more for us to learn about programming. The introduction of a visual programming language will open new vistas for programming, and applications, and computing.

I want to think that there will always be something new for the programmers.

Wednesday, July 20, 2022

The Macbook camera may always be second-rate

A recent article on Macworld complained about Apple's "solution" to the webcam problem on MacBooks, namely using the superior camera in the iPhone.

It is true that iPhones have better cameras than MacBooks. But why?

I can think of no technical reason.

It's not that the iPhone camera won't fit in the MacBook. The MacBook has plenty of space. It is the iPhone where space is at a premium.

It's not that the iPhone camera won't work with a MacBook processor. The iPhone camera works in the iPhone with its A12 (or is it A14?) processor. The MacBook has an M1 or an M2 processor, built on a very similar design. Getting the iPhone camera to work with an M1 processor should be relatively easy.

It's not a matter of power. The MacBook has plenty of electricity. It is the iPhone that must be careful with power consumption.

It's not that the MacBook developers don't know how to properly configure the iPhone camera and get data from it. (The iPhone developers certainly know, and they are just down the hall.)

It's not a supply issue. iPhone sales dwarf MacBook sales (in units, as well as dollars). Diverting cameras from iPhones to MacBooks would probably not even show in inventory reports.

So let's say that the reason is not technical.

Then the reason must be non-technical. (Obviously.)

It could be that the MacBook project lead wants to use a specific camera for non-technical reasons. Pride, perhaps, or ego. Maybe the technical lead, on a former project, designed the camera that is used in the MacBook, and doesn't want to switch to someone else's camera. (I'm not convinced of this.)

Maybe Apple has a lot of already-purchased cameras and wants to use them, rather than discarding them. (I'm not believing this, either.)

I think the reason may be something else: marketing.

When Apple sells a MacBook with an inferior camera, and it provides the "Continuity Camera" service to allow an iPhone to be used as a camera for that MacBook, Apple has now given the customer a reason to purchase an iPhone. Or if the customer already has an iPhone, a reason to stay with the iPhone and not switch to a different brand.

It's not a nice idea. In fact, it's rather cynical: Apple deliberately providing a lesser experience in MacBooks for the purpose of selling more iPhones.

But it's the only explanation that fits.

Maybe I'm wrong. Maybe Apple has a good technical reason for supplying inferior cameras in MacBooks.

I hope that I am. Because I want Apple to be a company that provides quality products, not inferior products carefully crafted to increase sales of other Apple products.

Thursday, July 14, 2022

Two CPUs

Looking through some old computer magazines from the 1980s, I was surprised at the number of advertisements for dual-CPU boards.

We don't see dual-CPU configurations now. I'm not talking about dual cores (or multiple cores) but dual CPUs. Two actual, and different, CPUs. In the 1980s, the common mix was a Z-80 and an 8086 on the same board. Sometimes it was an 8085 and an 8086.

Dual-CPU boards were popular as the industry transitioned from 8-bit processors (8080, Z-80, 6502, and 6800) to 16-bit processors (8088 and 8086, mainly). A dual-CPU configuration allowed one to test the new CPU while still keeping the old software (and data) available.

Dual-CPU configurations did not use two CPUs at the same time. They allowed for one or the other CPU to be active, to run an 8-bit operating system or a 16-bit operating system, much like today's dual-boot configuration for multiple operating systems on a single PC.

Computers at the time were more expensive and more modular than they are today. Today's computers have a motherboard with CPU (often with integrated graphics), slots for memory, slots for GPU, SATA ports for disks, and a collection of ports (video, USB, ethernet, sound, and sometimes PS/2 keyboard and mouse). All of those items are part of the motherboard.

In contrast, computers in the 1980s (especially those not IBM-compatible) used a simple backplane with a bus and separate cards for CPU, memory, floppy disk interface, hard disk interface, serial and parallel ports, real-time clock, and video display. It was much easier to replace the CPU and keep the rest of the computer.

So why don't we see dual-CPU configurations today? Why don't we see (for example) a PC with an Intel processor and an ARM processor?

There are a number of factors.

One is the cost of hardware. Computers today are much less expensive than computers in the 1980s, especially after accounting for inflation. Today, one can get a basic laptop for $1000, and a nice one for $2000. In the 1980s, $1000 would get a disk subsystem (interface card and drives), but the cost of the entire computer was upwards of $2000 in 1980 dollars (roughly $7000 today).

Another factor is the connection between hardware and operating system. Today, operating systems are tightly bound to hardware. A few people use GRUB or Boot Camp to run different operating systems on the same computer, but they are a small minority. For most people, the hardware and the operating system are a set, both coming in the same box.

Booting from a floppy disk (instead of an internal fixed disk), as computers of the 1980s did, put a degree of separation between hardware and operating system. One could easily insert a different disk to boot a different operating system.

Computers today are small. Laptops and micro-size desktops are convenient to plunk down almost anywhere. It is easy enough to find space for a second computer. That was not the case in the 1980s, when the CPU was the size of today's large tower unit, and one needed a large terminal with CRT and keyboard. Space for a second computer was a luxury.

Finally, today's processors are sophisticated, tightly integrated with the other electronics on the motherboard. The processors of the 1980s were quite isolationist in their approach to other circuitry, and it was (relatively) easy to design circuits that allowed for two CPUs and enabled one but not the other. The connections for today's Intel processors and today's ARM processors are far more complex, and the differences between Intel and ARM are pronounced. A two-CPU system today would require much more than the simple enable/disable circuits of the past.

All of those factors (the expense of hardware, the ease of replacing a single-CPU card with a dual-CPU card, limited space, the ease of changing operating systems, and the ability to link two unlike CPUs on a single card) meant that the dual-CPU configuration was the better choice in the 1980s. But all of those factors have changed, so in 2022 the better choice is to use two different computers.

I suppose one could use the GPU slot of a PC and install a special card with an ARM processor, but I'm not sure that a card in the GPU slot can operate as a bus master and control the ports and memory of the motherboard. In any case, such a board would have limited appeal, and would probably not be a viable product.

I think we won't see a dual-CPU system. We won't see a PC with an Intel processor and an ARM processor, with boot options to use one or the other. But maybe we don't need dual-CPU PCs. We can still explore new processors (by purchasing relatively inexpensive second PCs) and we can transition from older CPUs to newer ones (by purchasing relatively inexpensive replacement PCs).