Friday, May 27, 2022

The promise of Windows

Windows made a promise: to run on varied hardware and to support different hardware platforms. This was a welcome promise, especially for those of us who liked computers other than the IBM PC.

At the time Windows was introduced, the IBM PC design was popular, but not universal. Some manufacturers had their own designs for PCs, different from the IBM PC. Those other PCs ran some software that was designed for IBM PCs, but not all software. The Victor 9000 and the Zenith Z-100 were PCs that saw modest popularity, running MS-DOS but with different specifications for keyboards, video, and input-output ports.

Some software was released in multiple versions, or included configuration programs, to match the different hardware. Lotus 1-2-3 had packages specific to the Z-100; WordStar came with a setup program to define screen and keyboard functions.

Buying hardware and software took real effort. One had to ensure that the software ran on the hardware (or could be configured for it) and that the hardware met the software's requirements.

Windows promised to simplify that effort. Windows would act as an intermediary, allowing any software (if it ran on Windows) to use any hardware (if Windows ran on it). Microsoft released Windows for different hardware platforms (including the Zenith Z-100). The implications were clear: Windows could "level the playing field" and make those other PCs (the ones not compatible with the IBM PC) useful and competitive.

That promise was not fulfilled. Windows ran on various computing hardware, but buyers had been trained to look for IBM PCs or compatibles, and they stayed with IBM PCs and compatibles. It didn't matter that Windows ran on different computers; people wanted IBM PCs, and they bought IBM PCs. The computers that were different were ignored by buyers and eventually discontinued by their manufacturers.

And yet, Windows did keep its promise of separating software from hardware and allowing programs to run on different hardware. We can look at the history of Windows and see its growth over time, and the different hardware that it supported.

When USB was introduced, Windows supported it. (The implementation was rough at first, but Microsoft improved it.)

As displays and display adapters improved, Windows supported them. One can attach almost any display and almost any display adapter to a PC, and Windows can use them.

Printers and scanners have the same story. Windows supported lots of printers, from laser printers to inkjets to dot-matrix printers.

Much of this success is due to Microsoft and its clear specifications for adapters, displays, printers, and scanners. Yet those specifications still allowed for growth and innovation.

Microsoft supported different processors, too. Windows ran on Intel's Itanium processors and DEC's Alpha processors. Even now, Microsoft supports ARM processors.

Windows did keep its promise, albeit in a way that we were not expecting.


Thursday, May 19, 2022

More than an FPU, less than a GPU

I think that there is an opportunity to enhance, or augment, the processing units in our current PCs.

Augmenting processors is not a new idea. Intel supplied numeric coprocessors (the 8087, 80287, and 80387) for its 8086, 80286, and 80386 processors. These coprocessors performed floating-point computations in dedicated hardware. The main processor could perform the same calculations in software, but the coprocessors were designed for numeric work and performed it much faster.

(Intel did not invent this idea. Coprocessors were available on minicomputers before PCs existed, and on mainframes before minicomputers existed.)

A common augmentation to processors is the GPU. Today's GPUs are a combination of video adapter and numeric processor. They drive a video display, and they also perform graphics-oriented processing. They often use more power than the CPU, and perform more computations than the CPU, so one could argue that the GPU is the main processor and the CPU is an auxiliary processor that handles input and output for the GPU.

Let's consider a different augmentation to our PC. We can keep the CPU, memory, and input-output devices, but let's replace the GPU with a different processor. Instead of a graphics-oriented processor that drives a video display, let's imagine a numeric-oriented processor.

This numeric-oriented processor (let's call it a NOP*) is different from the regular GPU. A GPU is designed, in part, to display images, and our NOP does not need to do that. The NOP waits for a request from the CPU, acts on it, and provides a result. No display is involved. The request that we send to the NOP is a program, but to avoid confusion with our main program, let's call it a 'subprogram'.

A GPU does the same, but it also drives a display, and therefore has to operate in real time. We can therefore relax the constraints on our NOP: it has to be fast (faster at computations than the CPU), but it does not have to be as fast as a GPU.

And as a NOP does not connect to a display, it does not need a display port, nor does it need to sit in a PC slot with external access. It can live entirely inside the PC, attached to the motherboard much like memory modules.

Also, our NOP can contain multiple processors. A GPU contains a single processor with multiple cores. Our NOP could have a single processor with multiple cores, or it could have multiple processors, each with multiple cores.

A program with multiple threads runs on a single processor with multiple cores, so a NOP with one processor can run one 'subprogram' that we assign to it. A NOP with multiple processors (each with multiple cores) could run multiple 'subprograms' at a time. A simple NOP could have one processor, and a high-end NOP could have multiple processors.

Such a device is quite possible with our current technology. The question is, what do we do with it?

The use of a GPU is obvious: driving a video display.

The use of a NOP is... less obvious.

It's less obvious because a NOP is a form of computing that is different from our usual form of computing. Our typical program is a linear thing, a collection of instructions that are processed in sequence, starting with the first and ending with the last. (Our programs can have loops and conditional branches, but the idea is still linear.)

Our NOP is capable of performing multiple calculations at the same time (multiple cores) and performing those calculations quickly. To use a NOP, one must have a set of calculations that can run in parallel (not dependent on each other) and that operate on a large set of data. That's a different type of computing, and we're not accustomed to thinking in those terms. That's why the use of a NOP is not obvious.
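
To make that concrete, here is a minimal sketch in C (my own illustration, not a real NOP interface): a calculation applied to every element of a large array, where each result depends only on its own input. Because no iteration depends on another, the whole loop could be handed to a many-core device and run all at once.

#include <math.h>
#include <stddef.h>

/* Each output element depends only on its own input element,
   so the iterations are independent and could run in any order,
   or all at once on a many-core device. */
void scale_and_root(const double *in, double *out, size_t count)
{
    for (size_t i = 0; i < count; i++)
        out[i] = 2.5 * sqrt(in[i]);    /* no use of out[i - 1] */
}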

There are several specialized applications that could use a NOP. We could analyze new designs for automobiles or airplanes, simulate protein folding, explore new drugs, and analyze buildings for structural integrity. But these are all specialized applications.

To justify a NOP as part of a regular PC, we need an application for the average person. Just as the spreadsheet was the compelling application for early PCs, and GPS with online maps was the compelling application for cell phones, we need a compelling app for a NOP.

I don't know what such an application would be, but I know something of what it would look like. A NOP, like a GPU, has a processor with many cores. Today's GPUs can have in excess of 6000 cores (far exceeding today's CPUs, which have a puny dozen or two). But cores don't automatically make your program run faster. Cores allow a program with multiple threads to run faster.

Therefore, a program that takes advantage of a NOP would use multiple threads -- lots of them. It would perform numerical processing (of course) and it would operate on lots of data. The CPU would act as a dispatcher, sending "jobs" to the NOP and coordinating the results.
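
Here is a sketch of that shape of program, using ordinary CPU threads as a stand-in for a NOP (the job structure and worker function are hypothetical, purely for illustration): the main thread fills a list of jobs, worker threads pull jobs and compute results, and the main thread collects the answers.

#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_JOBS    16

/* A "job" is a block of input data plus room for a result. */
struct job {
    double input;
    double result;
};

static struct job jobs[NUM_JOBS];
static int next_job = 0;
static pthread_mutex_t job_lock = PTHREAD_MUTEX_INITIALIZER;

/* Worker: repeatedly claim a job, compute, store the result. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&job_lock);
        int mine = (next_job < NUM_JOBS) ? next_job++ : -1;
        pthread_mutex_unlock(&job_lock);
        if (mine < 0)
            break;
        /* stand-in for a heavy numeric calculation */
        jobs[mine].result = jobs[mine].input * jobs[mine].input;
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];

    for (int i = 0; i < NUM_JOBS; i++)
        jobs[i].input = i;

    /* The CPU as dispatcher: start the workers, then collect results. */
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);

    for (int i = 0; i < NUM_JOBS; i++)
        printf("job %d -> %.1f\n", i, jobs[i].result);
    return 0;
}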

If our PCs had NOPs built in, then creative users would make programs to use them. But our PCs don't have NOPs built in, and NOPs cannot be added, either. The best we can do is use a GPU to perform some calculations. Many PCs do have GPUs, and some are used for numeric-oriented programming, but the compelling application has not emerged.

The problem is not one of technology but one of vision and imagination.

- - - - -

* I am quite aware that the term NOP is used, in assembly language, for the "no operation" instruction, an instruction that literally does nothing.

Thursday, May 5, 2022

C but for Intel X86

Let's step away from current events in technology, and indulge in a small reverie. Let's play a game of "what if" with past technology. Specifically, the Intel 8086 processor.

I will start with the C programming language. C was designed for the DEC PDP-7 and PDP-11 processors. Those processors had some interesting features, and the C language reflects that. One example is the increment and decrement operators (the '++' and '--' operators), which are often said to map closely to the autoincrement and autodecrement addressing modes on the DEC processors.
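
For example (my own snippet, not from Kernighan and Ritchie), the classic C string-copy idiom leans on those operators; each *dst++ = *src++ step is the sort of operation the PDP-11 could express with its autoincrement addressing.

/* Copy a null-terminated string using the post-increment operator
   on both pointers; the loop body is empty because the copy and
   the pointer updates all happen inside the condition. */
void copy_string(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;
}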

Suppose someone had developed a programming language for the Intel 8086, just as Kernighan and Ritchie developed C for the PDP-11. What would it look like?

Let's also suppose that this programming language was developed just as the 8086 was designed, or shortly thereafter. (The 8086 was released in 1978.) We're looking at the 8086 (or the 8088, which has the identical instruction set) and not the later processors.

The computing world in 1978 was quite different from today. Personal computers were just entering the market. Apple and Commodore had computers that used the 6502 processor; Radio Shack had computers that used the Z-80 (its 68000-based machines came a few years later).

Today's popular programming languages didn't exist. There was no Python, no Ruby, no Perl, no C#, and no Java. The common languages were COBOL, FORTRAN, BASIC, and assembler. Pascal was available for some systems. Those would be the reference points for a new programming language. (C did exist, but only in the limited Unix community.)

Just as C leveraged features of the PDP-7 and PDP-11 processors, our hypothetical language would leverage features of the 8086. What are those features?

One feature that jumps out is text strings. The 8086 has dedicated string instructions for moving, comparing, and scanning text. It seems reasonable that an 8086-centric language would support strings; they might even be a built-in type for the language. (Strings do require memory management, but that is a feature of the run-time library, not the programming language itself.)
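
To illustrate that last point, here is a sketch (my own, in C rather than in the hypothetical language) of the kind of structure a run-time library might keep behind a built-in STRING type: the program declares a string, and the library quietly tracks its length and storage.

#include <stdlib.h>
#include <string.h>

/* What the run-time library might keep behind a built-in STRING:
   the program never sees this structure, only the string itself. */
struct rt_string {
    size_t length;     /* characters currently stored  */
    size_t capacity;   /* bytes allocated for the data */
    char  *data;
};

/* Create a string from a C-style literal; all allocation decisions
   live here, in the library, not in the language. */
struct rt_string *rt_string_from(const char *text)
{
    struct rt_string *s = malloc(sizeof *s);
    if (s == NULL)
        return NULL;
    s->length = strlen(text);
    s->capacity = s->length + 1;
    s->data = malloc(s->capacity);
    if (s->data == NULL) {
        free(s);
        return NULL;
    }
    memcpy(s->data, text, s->capacity);
    return s;
}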

The 8086 supports BCD (binary-coded decimal) arithmetic. BCD math is rare today, but it was common on IBM mainframes and a convenient way to encode data for exchange with other computers.
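
For readers who have not met BCD: each decimal digit occupies four bits, so the value 1234 is stored as the two bytes 0x12 0x34. A minimal sketch in C (my own; real mainframe formats add sign nibbles and other details):

/* Pack a non-negative integer into packed BCD, two decimal digits
   per byte, most-significant digits first.
   Example: 1234 with nbytes == 2 produces { 0x12, 0x34 }. */
void to_packed_bcd(unsigned value, unsigned char *out, int nbytes)
{
    for (int i = nbytes - 1; i >= 0; i--) {
        unsigned char low  = value % 10;  value /= 10;
        unsigned char high = value % 10;  value /= 10;
        out[i] = (unsigned char)((high << 4) | low);
    }
}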

The 8086 has a segmented architecture, with four different segments (code, data, stack, and "extra"). Those four segments map well to Pascal's organization of code, static data, stack, and heap. (C and its derivatives use the same organization.) A language could support dynamic allocation of memory and recursive functions (two things that were not available in COBOL, FORTRAN, or BASIC at the time). And it could also support a "flat" organization like those used in COBOL and FORTRAN, in which all variables and functions are laid out at link time and fixed in position.

There would be no increment or decrement operators.

Who would build such a language? I suppose a natural choice would be Intel, as they knew the processor best. But then, maybe not, as they were busy with hardware design, and had no operating system on which to run a compiler or interpreter.

The two big software houses for small systems (at the time) were Microsoft and Digital Research. Both had experience with programming languages. Microsoft provided BASIC for many different computer systems, and also had FORTRAN and COBOL. Digital Research provided CBASIC (a compiled BASIC) and PL/M (a derivative of IBM's PL/I).

IBM would probably not have created a language for the 8086. It had no offering that used that processor. The IBM PC would arrive in 1981, and IBM didn't consider it a serious computer -- at least not until people started buying it in large quantities.

DEC, at the time successful with minicomputers, also had no offering that used the 8086. DEC offered many languages, but only for its own processors.

Our language might have been developed by a hardware vendor, such as Apple or Radio Shack, but they, like Intel, were busy with hardware and did very little in terms of software.

So it may have been either Microsoft or Digital Research. Both companies were oriented toward business, so a language developed by either of them would be oriented toward business. A new language for business might be modelled on COBOL, but COBOL didn't allow for variable-length strings. FORTRAN was oriented toward numeric processing, and it didn't handle strings either. Even Pascal had difficulty with variable-length strings.

My guess is that our new language would mix elements of each of the popular languages. It would be close to Pascal, but with more flexibility for text strings. It would support BCD numeric values, not only in calculations but also in input-output operations. And it would show the influence of COBOL's verbose style.

We might see something like this:

PROGRAM EXAMPLE
DECLARE
  FILE DIRECT CUSTFILE = "CUST.DAT";
  RECORD CUSTREC
    BCD ACCTNUM PIC 9999,
    BCD BALANCE PIC 9999.99,
    STRING NAME PIC X(20),
    STRING ADDRESS PIC X(20);
  FILE SEQUENTIAL TRANSFILE = "TRANS.DAT";
  RECORD TRANSREC
    BCD ACCTNUM PIC 9999,
    BCD AMOUNT PIC 999.99;
  STRING NAME;
  BCD PAYMENT, TAXRATE;
PROCEDURE INIT
  OPEN CUSTFILE, TRANSFILE;
PROCEDURE MAINLINE
  WHILE NOT END(TRANSFILE) BEGIN
    READ TRANSFILE TO TRANSREC;
  END;
CALL INIT;
CALL MAINLINE;

and so forth. This borrows heavily from COBOL; it could equally borrow from Pascal.

It might have been a popular language. Less verbose than COBOL, but still able to process transactions efficiently. Structured programming from Pascal, but with better input-output. BCD data for efficient storage and data transfer to other systems.

It could have been a contender. In an alternate world, we could be using programming languages derived not from C but from this hybrid language. That might solve some problems (such as buffer overflows) but give us others. (What problems, you ask? I don't know. I just now invented the language; I'll need some time to find the problems.)


Sunday, May 1, 2022

Apple wants only professional developers

Apple got itself into the news this past week: it sent intimidating letters to the developers of older applications. Specifically, Apple threatened to expel old apps -- apps that had not been updated in three years -- from the iTunes App Store. (Or is it the Apple App Store?)

On the surface, this seems a reasonable approach. Apple has received complaints about the organization of the App Store, and the ability (or lack thereof) to find specific apps. By eliminating older apps, Apple can reduce the number of apps in the store and improve the user experience.

But Apple showed a bit of its thinking when it sent out the notice. It specified a grace period of 30 days. If the developers of an app submitted a new version, the app could remain in the App Store.

The 30-day period caught my attention. I think it shows a lack of understanding on Apple's part. It shows that Apple thinks anyone can rebuild and resubmit their app in less than one month.

For professional teams, this seems a reasonable limit. Companies that develop apps should be familiar with the latest app requirements, have the latest tools, and have processes to release a new version of their app. Building apps is a full-time job, and they should be ready to go.

Individuals who build apps "for fun" are in a different situation. For them, building apps is not a full-time job. They probably have a different full-time job, and building apps is a side project. They don't spend all day working on app development, and they probably don't have the latest tools. (They may not even have the equipment needed to run the compilers and packaging tools that Apple requires.) For them, the 30-day period is an impossible constraint.

Apple, in specifying that limit, showed that it does not understand the situation of casual developers. (Or, more ominously, deliberately chose to make life difficult for casual developers. I see no evidence of hostility, so I will credit this limit to ignorance.)

In either case (ignorance or malice) Apple thinks -- apparently -- that developers can spin out new versions of their apps quickly. In the long term, this will become true. Casual developers will give up on Apple and stop developing their apps. (They may also drop Apple's products. Why buy a phone that won't run their app?)

As casual developers leave the field, only the serious developers will remain. Those will be the commercial developers, or the professional developers who are paid by corporations.

That sets the Apple environment as a commercial platform, one that serves commercially-developed apps for commercial purposes. The Apple platform will have online banking, email, commercial games, video streaming, location tracking for auto insurance, and other apps in the realm of commerce. But it will have very few fun apps, very few apps built by individuals for enjoyment. Every app will have a purpose, and that purpose will be to make money, either directly or indirectly.

Yet it doesn't have to be this way. Apple has an alternative, if they want it.

Right now, the Apple App Store is a single store. Apple could change that, splitting the App Store into multiple stores, or a single store with sections. One section could be for the commercial apps and another section for the amateur apps. The commercial section will have the tighter restrictions that Apple wants to ensure a good experience for users (updated apps, recent APIs, etc.) and the amateur section will hold the casually-built apps. But the two sections are not equal: apps in the commercial section are allowed to use API calls for payments, and apps in the amateur section are not. (Apple could limit other APIs too, such as biometrics or advertising.)

A two-tiered approach to the App Store gives the developers of iOS apps a choice: play in the pro league, or play in the amateur league. It may be an approach worth considering.