Thursday, March 19, 2020

The Lesson from BASIC, Visual Basic, and VB.NET

This week Microsoft announced that VB.NET would receive no enhancements. In effect, Microsoft has announced the death of VB.NET. And while some developers may grieve over the loss of their favorite language, we should look at the lesson of VB.NET.

But first, let's review the history of BASIC, the predecessor of VB.NET.

BASIC has a long history, and Microsoft was there for most of it. Invented in the mid-1960s, BASIC was a simple interpreted language, designed for timesharing systems and for people who were not programmers. The major competing languages, the ones a programmer could use instead of BASIC, were FORTRAN and COBOL. BASIC, while less powerful, was much easier to use than either of the alternatives.

Small home computers were a natural for BASIC. Microsoft saw an opportunity and built BASIC for the Apple II, Commodore's PET and CBM, Radio Shack's TRS-80, and many others. Wherever you turned, BASIC was available. It was the lingua franca of programming, which made it valuable.

BASIC was popular, but its roots in timesharing made it a text-oriented language. (To be fair, all other languages were text-oriented, too.) As computers became more popular, programmers had to manipulate hardware directly to use special effects such as colors and graphics. Microsoft helped, by enhancing the language with commands for graphics and other hardware such as sound. BASIC remained the premier language for programming because it was powerful enough (or good enough) to get the job done.

Microsoft's Windows posed a challenge to BASIC. Even with its enhancements for graphics, BASIC was not compatible with the event-driven model of Windows. Microsoft's answer was Visual Basic, a completely different language that shared some keywords with BASIC but little else, and one more powerful than the biggest "Disk BASIC" Microsoft ever released. Microsoft's other language for Windows, Visual C++, was powerful but hard to use. Visual Basic was less powerful but much easier to use, and it had better support for COM. The ease of use and COM support provided value to developers.

Microsoft's .NET posed a second challenge to Visual Basic, which was not compatible with the new architecture of the .NET framework. Microsoft's answer was VB.NET, a redesign that looked a lot like C# but retained some keywords from Visual Basic.

For the past two decades (almost), VB.NET has been a supported language in Microsoft's world, living beside C#. That coexistence now comes to an end, with C# getting upgrades and VB.NET getting... very little.

The problem with VB.NET (as I see it) is that VB.NET was too close, too similar, to C#. VB.NET offered little that was different from (or better than) C#. Thus, when picking a language for a new project, one could pick either C# or VB.NET and be assured that it would work.

But being similar, for programming languages, is not a good thing. Different languages should be different. They should offer different programming constructs and different capabilities, because the differences can provide value.

C++ is different from C, and while C++ can compile and run every C program, the differences between C++ and C are enough that both languages offer value.

Python and Ruby are different enough that both can exist. Both offer value to the programmer.

C# and Java are close cousins, and one could argue that they, too, are too similar to co-exist. In this case, it may be that the difference in sponsoring companies (Microsoft for C#, Oracle for Java) is enough. For these languages, the relationship with the sponsoring company is the value.

But VB.NET was too close to C#. Anything you could do in VB.NET you could do in C#, and usually with no additional effort. VB.NET offered nothing of distinct value.

We should note that the end of VB.NET does not mean the end of BASIC. There are other versions of BASIC, each quite different from VB.NET and C#. These different versions may continue to thrive and provide value to programmers.

Sometimes, being different is important.

Wednesday, March 11, 2020

Holding back time

In an old episode of the science-fiction series "Doctor Who", the villain roams the galaxy and captures entire planets, all to power "time dams" used to prevent his tribal matriarch from dying. The effort was in vain, because while one can delay changes, one cannot hold them back indefinitely.

A lot of effort in IT is also spent on "keeping time still" or preventing changes.

Projects using a "waterfall" process prevent changes by agreeing early on to the requirements, and then "freezing" them. A year-long project can start with a two-month phase to gather, review, and finalize requirements; the remainder of the year is devoted to implementing those requirements, exactly as agreed, with no changes or additions. The result is often disappointing. Delivered systems are incorrect (because the requirements, despite review, were incorrect) or incomplete (for the same reason), and even if neither of those is true, the requirements are a year out of date. Time has progressed, and changes have occurred, outside of the project "bubble".

Some waterfall-managed projects allow for changes, usually with an onerous "change control" process that requires a description and justification of the change and agreement (again) among all of the concerned parties. This allows for changes, but puts a "brake" on them, limiting the number and scope of changes.

But project management methodologies are not the only way we try to hold back time. Other areas that we try to prevent changes include:

Python's "requirements.txt" file, which lists the required packages. When used responsibly, it lists each required package and a minimum version for it. (A good idea, as one does need to know the packages and the versions, and this is a consistent method.) Some projects try to hold back changes by specifying an exact version of a package (such as "must be version 1.4 and no other") in fear that a later version may break something.
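The difference shows in the file itself. A hypothetical requirements.txt entry ("somepackage" is a made-up name; a real file would contain one style or the other):

```
# Responsible: name the package and a minimum version;
# later releases are accepted.
somepackage>=1.4

# Holding back time: version 1.4 and no other.
somepackage==1.4
```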

Locking components to specific versions will eventually fail: a component will not be available, or the specified version will not work on a new operating system or in a new version of the interpreter. (Perhaps even the Python interpreter itself, if held back in this manner, will fail.)

Containers, which hold the "things that an application needs". Many "containerized" applications contain a database and the database software, but they can also include other utilities. The container holds a frozen set of applications and libraries, installed each time the container is deployed. While they can be updated, that doesn't mean they are updated.

Those utilities and libraries that are "frozen in time" will eventually cause problems. They are not stand-alone; they often rely on other utilities and libraries, which may not be present in the container. At some point, the "outside" libraries will not work for the "inside" applications.
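Containers make the freezing explicit. A hypothetical Dockerfile (the image tag and package version are illustrative only):

```
# The interpreter is pinned to one exact release.
FROM python:3.8.1

# The library is pinned to one exact version, frozen at build time.
RUN pip install somepackage==1.4

# Every deployment gets exactly these versions, until someone
# rebuilds the image.
COPY . /app
CMD ["python", "/app/main.py"]
```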

Virtual machines that run old versions of operating systems, kept to run old versions of applications that work only on those operating systems. Virtual machines can be used for other purposes, but this use is yet another form of "holding back time".

Virtual machines with old versions of operating systems, running old versions of applications, also have problems. Their ability to communicate with other systems on the network will (probably) break, due to expired certificates or a change in a protocol.

All of these techniques pretend to solve a problem. But they are not really solutions -- they simply delay the problem. Eventually, you will have an incompatibility, somewhere. But that isn't the biggest problem.

The biggest problem may be in thinking that you don't have a problem.

Thursday, March 5, 2020

Programming languages differ

A developer recently ranted about the Go language. The rant was less about the language and more about run-time libraries and interfaces to underlying operating systems and their file systems. The gist of his rant is that Go is not a suitable programming language for performing certain operations on certain operating systems, and therefore he is "done" with Go.

The first part of his rant is correct. I disagree with the conclusion.

We have the idea that programming languages are general-purpose, that any language can be used for any problem. But programming didn't start that way. Programming languages started unequal, with different languages designed for different types of computing. And while languages have become less specific and more general-purpose, they are still not equal.

I think our idea about a general-purpose programming language (or perhaps an all-purpose programming language) started with the IBM System/360 in the 1960s and the PL/1 programming language.

Prior to the System/360, computers had specific purposes. Some computers were designed for numeric processing, and others were designed for transaction processing. Not only were computers specific to a purpose, but programming languages were, too. FORTRAN was for numeric processing, and used on computers built for numeric processing. COBOL was for transaction processing, and used on computers built for commercial processing.

After IBM introduced the System/360, a general-purpose computer suitable for both numeric and commercial processing, it introduced PL/1, a general-purpose programming language suitable for numeric and commercial processing. (A very neat symmetry, with general-purpose hardware using a general-purpose programming language.)

PL/1 saw little popularity, but the notion of a general-purpose programming language did gain popularity, and it still dominates our mindset. We view programming languages as rough equals, and the choice of language can be made based on factors such as popularity (a proxy for availability of talent) and tool support (such as IDEs and debuggers).

There are incentives to reinforce the notion that a programming language can do all things. One comes from vendors, another comes from managers, and the third comes from programmers.

Vendors have an incentive to push the notion that a language can do everything -- or at least everything the client needs. Systems from a vendor come with some languages but not all languages. Explaining that your languages can solve problems is good marketing. Explaining that they cannot solve every problem is not.

The managers who purchased computers (which were expensive in the early days) wanted validation of their selection. They wanted to hear that their computer could solve the problems of the business. That meant believing in the flexibility and power of the hardware, and of the programming languages.

The third group of believers is programmers. Learning a programming language takes time. The process is an investment. We programmers want to think that we made a good investment. Admitting that a programming language is not suitable for some tasks means that one may have to learn a different programming language. That's another investment of time. It's easier to convince oneself that the current programming language is capable of everything that is needed.

But different programming languages have different strengths -- and different weaknesses. Programming languages are not the same, and they are not interchangeable.

COBOL is good for transaction processing, especially with flat files. But I would not use it for word processing or games.

FORTRAN is good for numeric processing. But I would not use it for word processing, nor for transaction processing.

Object oriented languages such as C++, Java, and C# are good for large applications that require structure and behavior that can be defined and verified by the compiler. (Static types and type checking.)

BASIC and Pascal are good for learning the concepts of programming. Both have been expanded in many ways, and have been used for serious development.

R is good for statistics and rapid analysis of numeric data, and the visualization of data. But I would not use it for machine learning.

Perl, Python, and Ruby are good for prototyping and for small- to medium-size applications. I would not use them for large-scale systems.

We should not assume that every language is good for every task, or every purpose. Languages (and their run-time libraries) are complex. They can be measured in multiple dimensions: complexity of the language, memory management and garbage collection, type safety, support tools, library support, connections to databases and data sources, vendor support, community support, and more. Each language has strengths and weaknesses.

The developer who ranted against the Go language had criticisms about Go's handling of filesystems on Windows. His complaint is not unfounded; Go has a library that expects a Unix-like filesystem, and it works poorly with Windows. But that doesn't mean that the language is useless!

An old joke tells of a man who consults a doctor. The man lifts his arm and says "Doc, it hurts when I do this." The doctor replies, "Well, don't do that!" While it gets laughs, there is some wisdom in it. Go is a poor language for handling the Windows filesystem; don't use it for that.

A more general lesson is this: know your task, and know your programming language. Understand what you want to accomplish, at a fairly detailed level. Learn more than one programming language and recognize that some are better (at certain things) than others. When selecting a programming language, make an informed decision. Don't write off a language forever because it cannot do a specific task, or works poorly for some projects.

Tuesday, February 25, 2020

The language treadmill

Technology is not a platform, but a treadmill. Readers of a certain age may remember the closing credits of "The Jetsons", in which George Jetson walks the family dog, Astro, on a space-age treadmill that runs a bit faster than he would like. The image is not too far off from the technology treadmill.

We're accustomed to thinking of hardware as changing. This year's laptop computer is better (faster, more powerful, more memory) than last year's laptop computer. We consistently improve processors, memory, storage, displays, ... just about everything. Yet the hardware also changes in form; today's laptop PCs are a far cry from the original IBM PC. Even desktop PCs are different from the original IBM PC. So different, in fact, that they would not be considered "PC compatible" at the time. (Today's USB-C-attached displays would not connect, today's USB keyboards would not connect, nor would USB thumb drives. Nothing from today's PCs would connect to an IBM PC model 5150, nor would anything from 1982's IBM PC connect to today's PCs.)

The treadmill includes more than hardware. It also includes software. Operating systems are the most visible software that changes; we deal with them every day. Microsoft has made improvements to each version of Windows, and Windows 10 is different from Windows 3.11, and especially from Windows 1.0. Apple changes OS X, too; the current version is quite different from the original OS X, and much different from the earlier Apple operating systems.

The treadmill extends beyond hardware and operating systems. It includes applications (Microsoft Word, for example) and it also includes programming languages.

Programming languages, whether governed by committee (C, C++) or individual (Perl, Python) change over time. Many times, a new version is "backwards compatible", meaning that your old code will work with the new compiler or interpreter. But not always.

Some years ago, I worked on a project that built and maintained a large-ish C++ application which was deployed on Windows and Solaris. The C++ language was chosen because both platforms supported it; compilers were available for Windows and for Solaris. But the compilers were upgraded on different schedules: Microsoft had their schedule for upgrades to their compiler, and Sun had their (different) schedule for upgrades to theirs.

Different upgrade schedules could have been a problem, but they weren't. (Until we made them one. More on that later.) Most updates to the compilers were backwards-compatible, so old code could be moved to the new compiler, recompiled, and the resulting program run immediately.

Most updates worked that way.

One update did not. It was an update to the C++ language, one that changed the way C++ worked and just so happened to break existing code. (The C++ governing committee was reluctant to make the change, but it was necessary and I agree with the reasoning.) So the upgrade required changes to the code. But we couldn't make the changes immediately; we had to wait for both compilers (Microsoft and Solaris) to support the new version of C++.

It turns out that we also had to wait for our managers to allocate time to make the code changes for the new compilers. Our time was allocated for new features to the large-ish application, not to technical improvements that provided no new business features. Thus, our code stayed with the old (soon to be outdated) structures.

Eventually, a chain of events forced us to update our code. Sun released a new version of the Solaris operating system, and we had to install it to stay current with our licenses. Once it was installed, we learned that the new operating system supported only the new version of the C++ compiler, and our code immediately broke. We could not compile and build our application on Solaris, nor release it to our production servers (which had been updated to the new Solaris).

This caused a scramble in the development teams. Our managers (who had prevented us from modifying the C++ code and moving to the new compiler) were now anxious for us to make the changes, run tests, and deploy our application to our servers. We did make the changes, but doing so required us to stop our current work, and the effort delayed other work on new features.

This is a cautionary tale, illustrating the need to stay up to date with programming tools. It also shows that programming languages change. Our example was with C++, but other languages change, too. Python has had two version "tracks" for some time (version 2 and version 3), and development and support of version 2 has come to an end. The future of Python is version 3. (Do you use Python? Are you using version 3?)
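One concrete example of a non-backwards-compatible change between the two Python tracks is the division operator, which quietly changes the result of existing code:

```python
# Python 2: "/" on two integers performed integer (floor) division,
# so 3 / 2 evaluated to 1.
# Python 3: "/" always performs true division, and "//" is the
# explicit floor-division operator.
quotient = 3 / 2         # 1.5 under Python 3 (was 1 under Python 2)
floor_quotient = 3 // 2  # 1 under both versions

print(quotient)
print(floor_quotient)
```

Code that relied on the old behavior compiles and runs under Python 3, but produces different answers -- the kind of change that forces a code review, not just a recompile.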

Oracle has released Java version 14. Many businesses still use version 8.

There have been changes to JavaScript, TypeScript, C# (and .NET, on which C# rests), and even SQL. Most of these changes are backwards-compatible, so no code changes are required to move to the new version.

You will eventually move to the new operating system, or compiler, or interpreter. Your code may have to change. The changes can be on your schedule, or on someone else's. My advice: be aware of updates, and migrate your programming tools on your own schedule.

Wednesday, February 19, 2020

A server that is not a PC

Today, servers are PCs. They have the same architecture as PCs. They run PC operating systems. But do they have to be PCs? Is there another approach? There might be.

First, let's consider PCs. PCs have lots of parts, from processor to memory to storage, but the one thing that makes a PC a PC is the video. PCs use memory-mapped video. They dedicate a portion of memory to video display. (Today, the dedicated memory is a "window" into the much larger memory on the video card.)

Which is a waste, as servers do not display video. (Virtual machines on servers do display video, but it is all a game of re-assigned memory. If you attach a display to a server, it does not show the virtual desktop.)

Suppose we made a server that did not dedicate this memory to video. Suppose we created a new architecture for servers, an architecture that is exactly like the servers today, but with no memory reserved for video and no video card.

Such a change creates two challenges: installing an operating system and the requirements of the operating system.

First, we need a way to install an operating system (or a hypervisor that will run guest operating systems). Today, the process is simple: attach a keyboard and display to the server, plug in a bootable USB memory stick, and install the operating system. The boot ROM and the installer program both use the keyboard and display to communicate with the user.

In our new design, they cannot use a keyboard and display. (The keyboard would be possible, but the server has no video circuitry.)

My first thought was to use a terminal and attach it to a USB port. A terminal contains the circuitry for a keyboard and display; it has its own video board. But no such devices exist nowadays (outside of museums and basements), and asking someone to manufacture them would be a big ask. I suppose one could use a tiny computer such as a Raspberry Pi, running a terminal emulator program. But that solution is merely a throwback to the pre-PC days.

A second idea is to change the server boot ROM. Instead of presenting messages on a video display, and accepting input from a keyboard, the server could run a small web server and accept requests from the network port. (A server is guaranteed to have a network port.)

The boot program could run a web server, just as network routers allow configuration through built-in web servers. When installing a new server, one could simply attach it to the network and then connect to it with a web browser.
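To make the idea concrete, here is a sketch of such a router-style configuration service, written in Python purely for illustration. (The real thing would live in the boot ROM's firmware; the class and function names here are my own invention.)

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ConfigHandler(BaseHTTPRequestHandler):
    """Answers every GET request with a minimal setup page,
    the way a home router's built-in web server does."""

    def do_GET(self):
        body = b"<html><body><h1>Server setup</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        # A boot ROM has no console to log to; stay quiet.
        pass

def make_config_server(host="0.0.0.0", port=8080):
    """Build (but do not start) the configuration server."""
    return HTTPServer((host, port), ConfigHandler)
```

An administrator would attach the new server to the network, browse to its address, and complete the installation from there; calling `make_config_server().serve_forever()` starts the request loop.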

Which brings us to the next challenge: an operating system. (Or a hypervisor.)

Today, servers run PC operating systems. Hypervisors (such as Microsoft's Hyper-V) are nothing more than standard PC operating systems that have been tweaked to support guest operating systems. As such, they expect to find a video card.

Since our server does not have a video card, these hypervisors will not work properly (if at all). They will have to be tweaked again to run without a video card. (Which should be somewhat easy, as hypervisors do not use the video card for their normal operation.)

Guest operating systems may be standard, unmodified PC operating systems. They want to see a video card, but the hypervisor provides virtualized video cards, one for each instance of a guest operating system. The guest operating systems never see the real video card, and don't need to know that one is present -- or not.

What's the benefit? A simpler architecture. Servers don't need video cards, and in my opinion, shouldn't have them.

Which would, according to my "a PC must have memory-mapped video" rule, make servers different from PCs. Which I think we want.

The video-less server is an idea, and I suspect that it will remain an idea. Implementing it requires special hardware (a PC minus the video circuitry that it normally has), special software (an operating system that doesn't demand a video card), and a special boot ROM (one that serves its configuration over the network). As long as PCs are dominant, our servers will simply be PCs with no display attached.

But if the market changes, and PCs lose their dominance, then perhaps one day we will see servers without video cards.

Wednesday, February 12, 2020

Advances in platforms and in programming languages

The history of computing can be described as a series of developments, alternating between computing platforms and programming languages. The predominant pattern is one in which hardware advances first, and programming languages follow. Occasionally, hardware and programming languages advance together, but that is less common. (Hardware and system software -- not programming languages -- do advance together.)

The early mainframe computers were single-purpose devices. In the early 21st century, we think of computers as general-purpose devices, handling financial transactions, personal communication, navigation, and games, because our computing devices perform all of those tasks. But in the early days of electronic computing, devices were not so flexible. Mainframe computers were designed for a single purpose; either commercial (financial) processing, or scientific computation. The distinction was visible through all aspects of the computer system, from the processor and representations for numeric values to input-output devices and the characters available.

Once we had those computers, for commercial and for scientific computation, we built languages. COBOL for commercial processing; FORTRAN for scientific processing.

And thus began the cycle of alternating developments: computing platforms and programming languages. The programming languages follow the platforms.

The next advance in hardware was the general-purpose mainframe. The IBM System/360 was designed for both types of computing, and it used COBOL and FORTRAN. But we also continued the cycle of "platform and then language" with the invention of a general-purpose programming language: PL/1.

PL/1 was the intended successor to COBOL and to FORTRAN. It improved the syntax of both languages and was supposed to replace them. It did not. But it was the language we invented after general-purpose hardware, and it fits in the general pattern of advances in platforms alternating with advances in languages.

The next advance was timesharing. This advance in hardware and in system software let people use computers interactively. It was a big change from the older style of scheduled jobs that ran on batches of data.

The language we invented for this platform? It was BASIC. BASIC was designed for interactive use, and also designed to avoid requests of system operators to load disks or tapes. A BASIC program could contain its code and its data, all in one. Such a thing was not possible in earlier languages.

The next advance was minicomputers. The minicomputer revolution (DEC's PDP-8 and PDP-11, and systems from other vendors) used BASIC (adopted from timesharing) and FORTRAN. Once again, a new platform initially used the languages from the previous platform.

We also invented languages for minicomputers. DEC invented FOCAL (a lightweight FORTRAN) and DIBOL (a lightweight COBOL). Neither replaced its corresponding "heavyweight" language, but invent them we did.

The PC revolution followed minicomputers. PCs were small computers that could be purchased and used by individuals. Initially, PCs used BASIC. It was a good choice: small enough to fit into the small computers, and simple enough that individuals could quickly understand it.

The PC revolution invented its own languages: CBASIC (a compiled form of BASIC), dBase (later named "xbase"), and most importantly, spreadsheets. While not a programming language, a spreadsheet is a form of programming. It organizes data and specifies calculations. I count it as a programming platform.

The next computing platform was GUI programming, made possible by both the Apple Macintosh and Microsoft Windows. These "operating environments" (as they were called) changed programming from text-oriented to graphics-oriented, and required more powerful hardware -- and software. But the Macintosh first used Pascal, and Windows used C, two languages that were already available.

Later, Microsoft invented Visual Basic and provided Visual C++ (a concoction of C++ and macros to handle the needs of GUI programming), which became the dominant languages of Windows. Apple switched from Pascal to Objective-C, which it enhanced for programming the Mac.

The web was another computing advance, bringing two distinct platforms: the server and the browser. At first, servers used Perl and C (or possibly C++); browsers were without a language and had to use plug-ins such as Flash. We quickly invented Java and (somewhat less quickly) adopted it for servers. We also invented JavaScript, and today all browsers provide JavaScript for web pages.

Mobile computing (phones and tablets) started with Objective-C (Apple) and Java (Android), two languages that were convenient for those devices. Apple later invented Swift, to fix problems with the syntax of Objective-C and to provide a better experience for its users. Google invented Go and made it available for Android development, but it has seen limited adoption.

Looking back, we can see a clear pattern. A new computing platform emerges. At first, it uses existing languages. Shortly after the arrival of the platform, we invent new languages for that platform. Sometimes these languages are adopted, sometimes not. Sometimes a language gains popularity much later than expected, as in the case of BASIC, invented for timesharing but used for minicomputers and PCs.

It is a consistent pattern.

Consistent that is, until we get to cloud computing.

Cloud computing is a new platform, much like the web was a new platform, and PCs were a new platform, and general-purpose mainframes were a new platform. And while each of those platforms saw the development of new languages to take advantage of new features, the cloud computing platform has seen... nothing.

Well, "nothing" is a bit harsh and not quite true.

True to the pattern, cloud computing uses existing languages. Cloud applications can be built in Java, JavaScript, Python, C#, C++, and probably FORTRAN and COBOL. (And there are probably cloud applications that use those languages.)

And we have invented Node.js, a JavaScript runtime that is useful for building cloud services.

But there is no native language for cloud computing. No language that has been designed specifically for cloud computing. (No language of which I am aware. Perhaps there is, lurking in the dark corners of the internet that I have yet to visit.)

Why no language for the cloud platform? I can think of a few reasons:

First, it may be that our current languages are suitable for the development of cloud applications. Languages such as Java and C# may have the overhead of object-oriented design, but that overhead is minimal with careful design. Languages such as Python and JavaScript are interpreted, but that may not be a problem with the scale of cloud processing. Maybe the pressure to design a new language is low.

Second, it may be that developers, managers, and everyone else connected with projects for cloud applications are too busy learning the platform. Cloud platforms (AWS, Azure, GCP, etc.) are complex beasts, and there is a lot to learn. It is possible that we are still learning about cloud platforms, and not yet ready to develop a cloud-specific language.

Third, it may be too complex to develop a cloud-specific programming language. The complexity may reside in separating cloud operations from programming, and we need to understand the cloud before we can understand its limits and the boundaries for a programming language.

I suspect that we will eventually see one or more programming languages for cloud platforms. The new languages may come from the big cloud providers (Amazon, Microsoft, Google) or smaller providers (Dell, Oracle, IBM) or possibly even someone else. Programming languages from the big providers will be applicable for their respective platforms (of course). A programming language from an independent party may work across all cloud platforms -- or may work on only one or a few.

We will have to wait this one out. But keep your eyes open. Programming languages designed for cloud applications will offer exciting advances for programming.

Wednesday, February 5, 2020

Windows succeeded because of laser printers

It is easy to survey the realm of computing and see that Windows is dominant (at least on desktop computers, and on lots of laptops in offices). But Windows did not always have dominance; it had to fight its way to the top. Windows had to replace PC-DOS/MS-DOS, it had to fight off OS/2, and it had to beat a number of other (smaller) contenders.

Much has been written about the transition from PC-DOS to Windows and the competition between Windows and OS/2. There is one factor, I think, that has received little attention. This one factor, by itself, may not have made the decision, but it was a factor that favored Windows.

That factor was the laser printer. (Specifically the Hewlett-Packard LaserJet printer.)

Laser printers were desired. They were expensive, which dampened their acceptance, but people wanted them. They were quieter, they were faster, and they produced better quality output. They could provide different typefaces and they could print graphics. And people wanted quiet, fast, high-quality output, especially with graphics.

One could use a laser printer with programs in PC-DOS. It was not always easy, and it was not always possible. PC-DOS provided few services for devices; just enough to send data to a parallel port or a serial port. Applications that wanted to use sophisticated devices (such as laser printers) had to build their own drivers. (The same issue was present for video cards, too.) Thus, when purchasing software, the first question was "Will it support a laser printer?". Some software did, some did not, and some supported laser printers poorly.

Windows supported graphics, video cards, and laser printers from the start. Windows was built around graphics; the first release of Windows was a graphics program. Windows also handled device drivers, allowing a device to have a single driver for all applications. If a program ran in Windows, it could print on all of the printers supported by Windows. Windows was graphics.

In contrast, the first version of OS/2 worked only in text mode. OS/2 users had to wait for version 1.1 (and its Presentation Manager) to have graphics. Microsoft and IBM (developing OS/2 jointly) focused on multitasking, memory, and security.

The difference between Windows and OS/2 was that orientation. Windows was an operating system for a PC; that is, an operating system for a video display board that had a processor, memory, and storage attached. OS/2 was an operating system for a minicomputer: very good at multitasking for a user who communicated through a character interface. Even though PCs at the time had video boards, OS/2 pretended that the user was sitting at a terminal.

But people wanted graphics. They wanted graphics because they could see the print-outs from laser printers. They were willing to pay lots of money for laser printers, to impress their co-workers and their bosses.

Windows had graphics. OS/2 did not.

I cannot help but think that laser printers helped Windows win over OS/2.

(I do recognize that other factors contributed to the success of Windows. Those factors include licensing arrangements, marketing, and compatibility with PC-DOS applications. I think laser printers are another -- unrecognized -- factor.)

Today, we casually accept that just about every device works with Windows, and that we can print from any application to any device (laser printer, ink-jet printer, and even PDF file), and that it all works. The computing world of 2020 is very different from the world of 1985.

But maybe we should be looking forward instead of backward. Windows won over OS/2 because it met the demand of the market. It provided graphics on screen and on printouts. It gave people what they wanted.

Today, in 2020, what do people want? And which companies are providing it?