Thursday, December 6, 2018

Rebels need the Empire

The PC world is facing a crisis. It is a silent crisis, one that few people understand.

That crisis is the evil empire, or more specifically, the lack of an evil empire.

For the entire age of personal computers, we have had an evil empire. The empire changed over time, but there was always one. And that empire was the unifying force for the rebellion.

The first empire was IBM. Microcomputer enthusiasts were fighting this empire of large, expensive mainframe computers. We fought it with small, inexpensive (compared to mainframes) computers. We offered small, interactive, "friendly" programs written in BASIC in opposition to batch mainframe systems written in COBOL. The rebellion used Apple II, TRS-80, and other small systems to unite and fight for liberty. This rebellion was successful. So successful that IBM decided to get in on the personal computer action.

The second empire was also IBM. The IBM PC became the standard for computing, and the diverse set of computers prior to the IBM model 5150 was wiped out. Rebels refused to use IBM PCs and attempted to keep non-PC-compatible computers financially viable. That struggle was lost, and the IBM design became the standard design. Once Compaq introduced a PC-compatible (and didn't get sued) other manufacturers introduced their own PC compatibles. The one remnant of this rebellion was Apple, who made non-compatible computers for quite some time.

The third empire was Microsoft. The makers of IBM-compatible PCs needed an operating system, and Microsoft was happy to sell them MS-DOS. IBM challenged Microsoft with OS/2 (itself initially a joint project with Microsoft), but Microsoft introduced Windows and made it successful. Microsoft was so successful that its empire was, at times, considered larger and grander than the IBM mainframe empire. The rebellion against Microsoft took some time to form, but it did arise, as the "open source" movement.

But Microsoft has fallen from its position as evil empire. It still holds a majority of desktop computer operating systems, but the world of computing has expanded to web servers, smartphones, and cloud systems, and these are outside of Microsoft's control.

In tandem with Microsoft's decline, open source has become accepted as the norm. As such, it is no longer the rebellion. The desktop operating system market is now split three ways: Windows, macOS, and Linux. Each is an acceptable solution.

Those two changes -- Microsoft no longer the evil empire and open source no longer the rebellion -- mean that, at the moment, there is no evil empire.

Some companies have large market shares of certain segments. Amazon.com dominates the web services and cloud market -- but competitors are reasonable and viable options. Microsoft dominates the desktop market, especially the corporate desktop market, but Apple is a possible choice for the corporate desktop.

No one vendor controls the hardware market.

Facebook dominates in social media, but is facing significant challenges in areas of privacy and "fake news". Other media channels like Twitter are looking to gain at Facebook's expense.

Even programming languages have no dominant player. According to the November report from Tiobe, Java and C have been the two most popular languages, and neither is gaining significantly. The next three (C++, Python, and VB.NET) are close, as are the five that follow (C#, JavaScript, PHP, SQL, and Go). No language is emerging as dominant, as BASIC did in the 1980s and Visual Basic did in the 1990s.

A world without an evil empire is a new world for us. Personal computers were born under an evil empire, operating systems matured under an evil empire, and open source became respectable under an evil empire. I like to think that such innovations were driven (or at least inspired) by a rebellion, an active group of people who rejected the market leader.

Today we have no such empire. Will innovation continue without one? Will we see new hardware, new programming languages, new tools? Or will the industry stagnate as major players focus more on market share and less on innovation?

If the latter, then perhaps someday a new market leader will emerge, strong enough to win the title of "evil empire" and rebels will again drive innovation.

Thursday, November 8, 2018

Why is there still a MacBook Air?

This week, Apple introduced upgrades to a number of its products. They showed a new Mac Mini and a new MacBook Air. The need for a new Mac Mini I understand. The need for a new MacBook Air I do not.

The original MacBook Air was revolutionary in that it omitted a CD/DVD reader. So revolutionary that Apple needed a way for a MacBook Air to "borrow" a CD/DVD reader from another computer (another Apple computer) to install software.

The MacBook Air stunned the world with its thinness and its low weight -- hence the adjective "Air". Compared to laptops of the time, even Apple's MacBooks, the MacBook Air was almost weightless.

But that was then. This is now.

Apple has improved the MacBook (without the "Air") to the point that MacBooks and MacBook Airs are indistinguishable. They are both thin. They are both lightweight. They both have no CD/DVD reader.

Yes, there are some minor points and one can tell a MacBook from a MacBook Air. MacBooks are slightly smaller and have only one USB C port, whereas MacBook Airs are larger and have multiple ports.

But in just about every respect, the MacBook Air is a new and improved MacBook. When you consider the processor, the memory and storage, the display, and the capabilities of the two devices, the MacBook Air is simply another member of the MacBook line. So why keep it? Why not just call it a MacBook?

Apple could certainly have two MacBooks. They have two MacBook Pro computers, a 13-inch model and a 15-inch model. They could have a 12-inch MacBook and a 13-inch MacBook. Yet they keep the "Air" designation. Why?

It's possible that the "MacBook Air" name has good market recognition, and Apple wants to leverage that. If so, we can expect to see other "Air" products, much like the iPad Air.

Monday, October 29, 2018

IBM and Red Hat Linux

The news that IBM had an agreement to purchase Red Hat (the distributor and supporter of a Linux distro for commercial use) was followed quickly by a series of comments from the tech world, ranging from anger to disappointment.

I'm not sure that the purchase of Red Hat by IBM is a bad thing.

One can view this event in the form of two questions. The first is "Should Red Hat sell itself (to anyone)?". The second is "Given that Red Hat is for sale, who would be a good purchaser?".

The negative reaction, I think, is mostly about the first question. People are disappointed (or angered) by the sale of Red Hat -- to anyone.

But once you commit to a sale, the question changes and the focus is on the buyer. Who are possible buyers for Red Hat?

IBM is, of course, a possibility. Many people might object to IBM, and if we think of the IBM from its monopoly days, with its arrogance and incompatible hardware designs, then IBM would be a poor choice. (Red Hat would also be a poor acquisition for that IBM.)

But IBM has changed quite a bit. It still sells mainframes; its S/36 line has evolved into servers; and it sold off its PC business long ago. It must compete in the cloud arena with Amazon.com, Microsoft, and Google (and Dell, and Oracle, and others). Red Hat helps IBM in this area. I think IBM is not so foolish as to break Red Hat or make many changes.

One possibility is that IBM purchased Red Hat to prevent others from doing so. (You buy something because you need it or because you want to keep it from others.) Who are the others?

Amazon.com and Microsoft come quickly to mind. They both offer cloud services, and Red Hat would help both with their offerings. The complainers may consider this; would they prefer Red Hat to go to Amazon or Microsoft? (Of the two, I think Microsoft would be the better owner. It is expanding its role with Linux and moving its business away from Windows and Windows-only software to a larger market of cloud services that support both Windows and Linux.)

There are other possible purchasers. Oracle has been mentioned by critics (usually as a "could be worse, could be Oracle" comment). Red Hat fills a gap in Oracle's product line between hardware and its database software, and also provides a platform for Java (another Oracle property).

Beyond those, there are Facebook, Dell, and possibly Intel, although I consider the last to be a long shot. None of them strike me as a good partner.

Red Hat could also be purchased by a private equity firm, which would probably doom it to being partitioned and sold off piece by piece.

In the end, IBM seems quite a reasonable purchaser. IBM has changed from its old ways and it supports Linux quite a bit. I think it will recognize value and strive to keep it. Let's see what happens.

Tuesday, October 23, 2018

It won't be Eiffel

Bertrand Meyer has made the case, repeatedly, for design-by-contract as a way to improve the quality of computer programs. He has been doing so for the better part of two decades, and perhaps longer.

Design-by-contract is a notion that uses preconditions, postconditions, and invariants in object-oriented programs. Each is a form of assertion, a test that is performed at a specific time. (Preconditions are checked before a function executes, postconditions after it returns, and invariants at both points.)

Design-by-contract is a way of ensuring that programs are correct. It adds rigor to programs, and requires careful analysis and thought in the design of software. (Much like structured programming required analysis and thought for the design of software.)
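The idea can be sketched with plain assertions. This is not Eiffel's declarative syntax, just a hypothetical C++ class (the names are invented for illustration) showing where each kind of check fires:

```cpp
#include <cassert>

// A hypothetical bank account, annotated design-by-contract style
// with plain assert(): a precondition before the work, a
// postcondition after it, and a class invariant at entry and exit.
class Account {
public:
    explicit Account(int opening) : balance_(opening) {
        assert(invariant());                       // invariant holds on construction
    }

    void withdraw(int amount) {
        assert(invariant());                       // invariant on entry
        assert(amount > 0 && amount <= balance_);  // precondition
        int old_balance = balance_;
        balance_ -= amount;
        assert(balance_ == old_balance - amount);  // postcondition
        assert(invariant());                       // invariant on exit
    }

    int balance() const { return balance_; }

private:
    bool invariant() const { return balance_ >= 0; }
    int balance_;
};
```

In Eiffel these checks are declarative clauses (require, ensure, invariant) and the runtime enforces them; here they are written by hand, which is exactly the discipline the contracts are meant to formalize.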

I think it has a good chance of being accepted as a standard programming practice. It follows the improvements we have seen in programming languages: bounds checking of array indexes, function signatures, and type-checking rules for casting from one type to another.

Someone will create a language that uses the design-by-contract concepts, and the language will gain popularity. Perhaps because of the vendor (Microsoft? Google?) or perhaps through grass-roots acceptance (a la Python).

There already is a language that implements design-by-contract: Eiffel, Meyer's own language. It is available today, even for Linux, so developers can experiment with it. Yet it has attracted little interest. The Eiffel language does not appear on the Tiobe index (at least not for September 2018) at all -- not only not in the top 20, but not in the top 100. (It may be lurking somewhere below that.)

So while I think that design-by-contract may succeed in the future, I also think that Eiffel has missed its opportunity. It hasn't been accepted by any of the big vendors (Microsoft, Oracle, Google, Apple) and its popularity remains low.

I think that another language may pick up the notion of preconditions and postconditions. The term "Design by Contract" is trademarked by Meyer, so it is doubtful that another language will use it. But the term is not important -- it is the assertions that bring the rigor to programming. These are useful, and eventually will be found valuable by the development community.

At that point, multiple languages will support preconditions and postconditions. There will be new languages with the feature, and adaptations of existing languages (C++, Java, C#, and others) that sport preconditions and postconditions. So Bertrand Meyer will have "won" in the sense that his ideas were adopted.

But Eiffel, the language, will be left out.

Tuesday, October 9, 2018

C without the preprocessor

The C and C++ languages lack one utility that is found in many other languages: a package manager. Will they ever have one?

The biggest challenge to a package manager for C or C++ is not the package manager. We know how to build them, how to manage them, and how to maintain a community that uses them. Perl, Python, and Ruby have package managers. Java has one (sort of). C# has one. JavaScript has several! Why not C and C++?

The issue isn't in the C and C++ languages. Instead the issue is in the preprocessor, an external utility that modifies C or C++ code before the compiler does its work.

The problem with the preprocessor is that it can change just about any token in the code to something else, including statements which would be used by package managers. The preprocessor can change "do_this" to "do_that" or change "true" to "TRUE" or change "BEGIN" to "{".
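A contrived sketch of that kind of rewriting (the function names here are invented for illustration):

```cpp
// The preprocessor rewrites tokens before the compiler ever sees
// them, so "the code" a package manager would inspect may not be
// the code that gets compiled.
#define do_this do_that   // every call to do_this() becomes do_that()
#define BEGIN {           // Pascal-style block markers
#define END   }

int do_that(void) BEGIN return 42; END

int example(void)
BEGIN
    return do_this();     // the compiler actually compiles do_that()
END
```

A tool that reads this file without running the preprocessor would conclude that example() calls a function named do_this, which does not exist.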

The idea of a package manager for C and C++ has been discussed, and someone (I forget the person now) listed a number of questions that the preprocessor raises for a package manager. I won't repeat the list here, but they were very good questions.

To me, it seems that a package manager and a preprocessor are incompatible. If you have one, you cannot have the other. (At least, not with any degree of consistency.)

So I started thinking... what if we eliminate the C/C++ preprocessor? How would that change the languages?

Let's look at what the preprocessor does for us.

For starters, it is the mechanism to include headers in programs. The "#include" lines are handled by the preprocessor, not the compiler. (When C was first designed, a preprocessor was considered a "win", as it separated some tasks from the compiler and followed the Unix philosophy of separation of duties.) We still need a way to include definitions of constants, functions, structures, and classes, so we need a replacement for the #include command.

A side note: C and C++ standards wonks will know that it is not strictly required that the preprocessor, and not the compiler, handle "#include" lines. The standards dictate that after certain lines (such as #include <string>) the compiler must exhibit certain behaviors. But this bit of arcane knowledge is not important to the general idea of eliminating the preprocessor.

The preprocessor allows for conditional compilation. It allows for "#if/#else/#endif" blocks that can be conditionally compiled, based on what follows the "#if". Conditional compilation is extremely useful on software that has multiple targets, such as the Linux kernel (which targets many different processors).

The preprocessor also allows for macros and substitution of values. It accepts a "#define" line which can change any token into something else. This mechanism was used for the classic "max()" and "min()" macros.
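A sketch of the classic max() macro, including the well-known trap that follows directly from purely textual substitution:

```cpp
// The traditional textual max() macro. Because the preprocessor
// substitutes text, an argument with a side effect is evaluated
// twice -- something no ordinary function would do.
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int twice_evaluated() {
    int i = 5;
    int m = MAX(i++, 3);   // expands to ((i++) > (3) ? (i++) : (3))
    (void)m;               // m is 6, not 5: i++ ran in both places
    return i;              // i was incremented twice: 7
}
```

This double-evaluation pitfall is one reason C++ added inline functions and templates; but plenty of existing code still relies on #define, which is the compatibility problem described below.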

All of that would be lost with the elimination of the preprocessor. As all of those features are used on many projects, they would all have to be replaced by some form of extension to the compiler. The compiler would have to read the included files, and would have to compile (or not compile) conditionally-marked code.

Such a change is possible, but not easy. It would probably break a lot of existing code -- perhaps all nontrivial C and C++ programs.

Which means that removing the preprocessor from C and C++ and replacing it with something else is a change to the language that makes C and C++ no longer C and C++. Removing the preprocessor changes the languages. They are no longer C and C++, but different languages, and deserving of different names.

So in one sense you can remove the preprocessor from C and C++, but in another sense you cannot.

Friday, September 28, 2018

MacBooks are not an incentive

I've seen a number of job postings that include the line "all employees use MacBooks".

I suppose that this is intended as an enticement. I suppose that a MacBook is considered a "perk", a benefit of working at the company. Apple equipment is considered "cool", for some reason.

I'm not sure why.

MacBooks in 2018 are decent computers, but I find that they are inferior to other computers, especially when it comes to development.

I've been using computers for quite some time, and programming for most of that time. I've used MacBooks and Chromebooks and modern PCs. I've used older PCs and even ancient PCs with IBM's Model M keyboard. I've worked on IBM's System/23 (which was the origin of the first IBM PC keyboard). I have even used model 33 ASR Teletype terminals, which are large mechanical beasts that print uppercase on roll paper and do a poor job of it. So I know what I like.

And I don't like Apple's MacBook and MacBook Pro computers. I dislike the keyboard; I want more travel in the keys. I dislike the touchpad in front of the keyboard; I prefer the small pointing stick embedded in Lenovo and some Dell laptop keyboards. I dislike Apple's displays, which are too bright and too reflective. I want "matte" finish displays which hide reflections from light sources such as windows and ceiling lights.

My main client provides a computer, one that I must use when working for them. The computer is a Dell laptop, with a high-gloss display and a keyboard that is a bit better than current Apple keyboards, but not by much. I supplement the PC with a matte-finish display and a Matias "Quiet Pro" keyboard. These make the configuration much more tolerable.

Just as I "fixed" the Dell laptop, I could "fix" a MacBook Pro with an additional keyboard and display. But once I do that, why bother with the MacBook? Why not use a Mac Mini, or for that matter any small-form-factor PC? The latter would probably offer just as much memory and disk, and more USB ports. And cost less. And run Linux.

It may be some time before companies realize that developers have strong opinions about the equipment that they use. I think that they will, and when they do, they will provide developers with choices for equipment -- including the "bring your own" option.

And it may be some time before developers realize that Apple MacBooks are not the best for development. Apple devices have a lot of glamour, but glamour doesn't get the job done -- at least not for me. Apple designs computers for visual appeal, and I need good ergonomic design.

I'm not going to forbid developers from using Apple products, or demand that everyone use the same equipment that I use. I will suggest that developers try different equipment, see which devices work for them, and understand the benefits of those devices. Pick your equipment for the right reasons, not because it has a pretty logo.

In the end, I find the phrase "all employees use MacBooks" to be a disincentive, a reason to avoid a particular gig. Because I would rather be productive than cool.

Tuesday, September 18, 2018

Programming languages and the GUI

Programming languages and GUIs don't mix. Of all the languages available today, none are GUI-based languages.

My test for a GUI-based language is the requirement that any program written in the language must use a GUI. If you can write a program and run it in a text window, then the language is not a GUI-based language.

This is an extreme test, and perhaps unfair. But it shows an interesting point: We have no GUI-based languages.

We had programming before the GUI with various forms of input and output (punch cards, paper tape, magnetic tape, disk files, printers, and terminals). When GUIs came along, we rushed to create GUI programs but not GUI programming languages. (Except for Visual Basic.) We still have GUIs, some thirty years on, and today we have no GUI programming languages.

Almost all programming languages treat windows (or GUIs) as a second thought. Programming for the GUI is bolted on to the language as a library or framework; it is not part of the core language.

For some languages, the explanation is obvious: the language existed before GUIs existed (or became popular). Languages such as Cobol, Fortran, PL/I, Pascal, and C had been designed before GUIs appeared on the horizon. Cobol and Fortran were designed in an era of magnetic tapes, disk files, and printers. Pascal and C were created for printing terminals or "smart" CRT terminals such as DEC's VT-52.

Some languages were designed for a specific purpose. Such languages have no need of GUIs, and they don't have any GUI support. AWK was designed as a text processing language, a filter that fit in with Unix's tool-chain philosophy. SQL was designed for querying databases (and prior to GUIs).

Other languages were designed after the advent of the GUI, and for general-purpose programming. Languages such as Java, C#, Python, and Ruby came to life in the "graphical age", yet graphics is an extension of the language, not part of the core.

Microsoft extended C++ with its Visual C++ package. The early versions were a daunting mix of libraries, classes, and #define macros. Recent versions are more palatable, but C++ remains C++ and the GUI parts are mere add-ons to the language.

Borland extended Pascal, and later provided Delphi, for Windows programming. But Object Pascal and Windows Pascal and even Delphi were just Pascal with GUI programming bolted on to the core language.

The only language that put the GUI in the language was Visual Basic. (The old Visual Basic, not the VB.NET language of today.) It not only supported a graphical display, it required one.

I realize that there may be niche languages that handle graphics as part of the core language. Matlab and R support the generation of graphics to view data -- but they are hardly general-purpose languages. (One would not write a word processor in R.)

Mathematica and Wolfram do nice things with graphics too, but again, for rendering numerical data.

There are probably other obscure languages that handle GUI programming. But they are obscure, not mainstream. The only other (somewhat) popular language that required a graphical display was Logo, and that was hardly a general-purpose language.

The only popular language that handled the GUI as a first-class citizen was Visual Basic. It is interesting to note that Visual Basic has declined in popularity. Its successor, VB.NET is a rough translation of C# and the GUI is, like other languages, something added to the core language.

Of course, programming (and system design) today is very different from the past. We design and build for mobile devices and web services, with some occasional web applications. Desktop applications are considered passe, and console applications are not considered at all (except perhaps for system administrators).

Modern applications place the user interface on a mobile device. The server provides services, nothing more. The GUI has moved from the desktop PC to the web browser and now to the phone. Yet we have no equivalent of Visual Basic for developing phone apps. The tools are desktop languages with extensions for mobile devices.

When will we get a language tailored to phones? And who will build it?