Thursday, October 20, 2022

The Next Big Thing

What will we see as the next big thing?

Let's look at the history of computer technology -- or rather, a carefully curated version of the history of computer technology.

The history of computing can be divided into eras: The mainframe era, the minicomputer era, the micro/PC era, and so forth. And, with careful editing, we can see that these eras have similar durations: about 15 years each.

Let's start with mainframe computers. We can say that the mainframe era ran from 1950 to 1965. Mainframe computers were (and still are) large, expensive computers capable of significant processing. They are housed in rooms with climate control and dedicated power. Significantly, mainframe computers are used by people only indirectly. In the mainframe age, programmers submitted punch cards that contained source code; the cards were fed into the computer by an operator (one of the few people allowed in the computer room); the computer compiled the code and ran the program; output was usually on paper and delivered to the programmer some time later. Mainframe computers also ran batch jobs to read and process data (usually financial transactions). Data was often read from magnetic tape, and output could be to magnetic tape (updated data) or paper (reports).

Minicomputers were popular from 1965 to 1980. Minicomputers took advantage of newer technology; they were smaller, less expensive, and most importantly, allowed for multiple users on terminals (either paper-based or CRT-based). The user experience for minicomputers was very different from the experience on mainframes. Hardware, operating systems, and programming languages let users interact with the computer in "real time"; one could type a command and get a response.

Microcomputers and Personal Computers (with text displays, and without networking) dominated from 1980 to 1995. It was the age of the Apple II and the IBM PC, computers that were small enough (and inexpensive enough) for an individual to own. They inherited the interactive experience of minicomputers, but the user was the owner and could change the computer at will. (The user could add memory, add disk, upgrade the operating system.)

Personal Computers (with graphics and networking) made their mark from 1995 to 2010. They made the internet available to ordinary people. Graphics made computers easier to use.

Mobile/cloud computing became dominant in 2010. Mobile devices without networks were not enough (the Palm Pilot and the Windows Pocket PC devices never gained much traction). Even networked devices such as the original iPhone and the Nokia N800 saw limited acceptance. It was the combination of networked mobile devices and cloud services that became the dominant computing model.

That's my curated version of computing history. It omits a lot, and it fudges some of the dates. But it shows a trend, one that I think is useful to observe.

That trend is: computing models rise and fall, with their typical life being fifteen years.

How is this useful? Looking at the history, we can see that the mobile/cloud computing model has been dominant for slightly less than fifteen years. In other words, its time is just about up.

More interesting is that, according to this trend (and my curated history is too pretty to ignore), something new should come along and replace mobile/cloud as the dominant form of computing.

Let's say that I'm right -- that there is a change coming. What could it be?

It could be any of a number of things. Deep-fake technology allows for the construction of convincing images of any subject. It could be virtual reality, or augmented reality. (The difference is nontrivial: virtual reality constructs the entire image, while augmented reality lays images over the scene around us.) It could be watch-based computing.

My guess is that it will be augmented reality. But that's a guess.

Whatever the new thing is, it will be a different experience from the current mobile/cloud model. Each of the eras of computing had its own experience. Mainframes offered an experience of separation, of working through operators. Minicomputers offered an interactive experience, although someone else controlled the computer. Personal computers offered interaction, and the user owned the computer. Mobile/cloud let people hold computers in their hands and use them on the move.

Also, the next big thing does not eliminate the current big thing. Mobile/cloud did not eliminate web-based systems. Web-based systems did not eliminate desktop applications. Even text-mode interactive applications continue to this day. The next big thing expands the world of computing.

Wednesday, October 19, 2022

Businesses discover that cloud computing isn't magic

Businesses are now (just now, after more than a decade of cloud computing) discovering that cloud computing is not magic. That it doesn't make their computing cheap. That it doesn't solve their problems.

Some folks have already pointed this out. Looking back, it seems obvious: if all you have done is move your web-based system onto cloud-based servers, why would anything change? But that observation misses an important point.

Cloud computing is a form of computing, different from web-based applications and different from desktop applications. (And different from mainframe batch processing of transactions.)

A cloud-based system, to be efficient, must be designed for cloud computing. This means small independent services reading and writing to databases or other services, and everything coordinated through message queues. (If you know what those terms mean, then you understand cloud computing.)
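To make that idea a bit more concrete, here is a rough sketch in C++ of what one such small service might look like: it pulls work from a queue, performs one task, records the result, and passes a message along. The Queue and Database types here are in-memory stand-ins that I invented for illustration; a real system would use the messaging and storage services of the cloud provider.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>

    // Hypothetical in-memory stand-ins for a cloud message queue and a data
    // store; a real system would use the provider's queue and database services.
    using Queue = std::queue<std::string>;
    using Database = std::map<std::string, std::string>;

    // One small, independent service: it owns one task (here, trivial
    // "processing") and nothing else. Many copies could run at once, all
    // reading from the same shared queue.
    void run_service(Queue& requests, Queue& results, Database& store)
    {
        while (!requests.empty())
        {
            std::string message = requests.front();
            requests.pop();

            std::string output = "processed: " + message;
            store[message] = output;      // write the result
            results.push(output);         // notify the next service in the chain
        }
    }

    int main()
    {
        Queue requests, results;
        Database store;

        requests.push("order-1001");
        requests.push("order-1002");

        run_service(requests, results, store);

        while (!results.empty())
        {
            std::cout << results.front() << "\n";
            results.pop();
        }
        return 0;
    }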

Moving a web-based application into the cloud, unchanged, makes little sense. Or as much sense as moving a desktop-based application (remember those?) such as Word or Excel into the web, unchanged.

So why use cloud computing?

Cloud computing's strengths are redundancy, reliability, and variable power. Redundancy in that a properly designed cloud computing system consists of multiple services, each of which can be hosted on multiple (as in more than one per service) servers. If your system contains a service to perform address validations, that service could be running on one, two or seven different servers. Each instance does the same thing: examine a mailing address and determine the canonical form for that address.

The other components in your system, when they need to validate or normalize an address, issue a request to the validation service. They don't care which server handles the request.
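Here is a rough way to picture that indifference in code. The caller holds only an endpoint for the validation service; whether that endpoint is backed by one instance or seven is invisible to it. The canonicalize function (a toy "canonical form" that merely upper-cases the address) and the instance pool are invented for illustration.

    #include <cctype>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // The calling code sees only an endpoint; which server instance answers
    // the request is not its concern.
    using AddressValidator = std::function<std::string(const std::string&)>;

    // A toy "canonical form": upper-case the address. Real validation would
    // be far more involved; this only gives the instances something to do.
    std::string canonicalize(const std::string& address)
    {
        std::string result;
        for (char c : address)
            result += static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
        return result;
    }

    int main()
    {
        // Hypothetical pool of identical instances; in a real cloud system a
        // load balancer hides this choice from the caller entirely.
        std::vector<AddressValidator> instances(2, canonicalize);

        // Either instance gives the same answer, so the caller does not care
        // which one handles the request.
        std::cout << instances[0]("123 main st") << "\n";
        std::cout << instances[1]("123 main st") << "\n";
        return 0;
    }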

Cloud systems are reliable because of this redundancy. A traditional web-based service would have one address validation server. If that server is unavailable, the service is unavailable for the entire system. Such a failure can lead to the entire system being unavailable.

Cloud systems have variable power. They can create additional instances of any of the services (including our example address validation service) to handle a heavy workload. Traditional web services, with only one server, can see slow response times when that server is overwhelmed with requests. (Sometimes a traditional web system would have more than one server for a service, but the number of servers is fixed and adding a server is a lengthy process. The result is the same: the allocated server or servers are overwhelmed and response time increases.)

Cloud services eliminate this problem by instantiating servers (and their services) as needed. When the address validation server is overwhelmed, the cloud management software detects it and "spins up" more instances. Good cloud management software works in the other direction too, shutting down idle instances.
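Conceptually, the management software is making a decision something like the sketch below. This is only an illustration of the idea; the Instance type and the thresholds are invented, and the real logic lives in the provider's management layer, not in your application code.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // A conceptual sketch of the decision a cloud manager makes.
    struct Instance { int queued_requests = 0; };

    void autoscale(std::vector<Instance>& instances,
                   int max_queue_per_instance,
                   std::size_t min_instances)
    {
        int total_queued = 0;
        for (const Instance& i : instances)
            total_queued += i.queued_requests;

        // Overwhelmed: spin up another instance of the service.
        if (!instances.empty() &&
            total_queued > max_queue_per_instance * static_cast<int>(instances.size()))
        {
            instances.push_back(Instance{});
            std::cout << "scaled up to " << instances.size() << " instances\n";
        }
        // Idle: shut one down, but keep a minimum running.
        else if (total_queued == 0 && instances.size() > min_instances)
        {
            instances.pop_back();
            std::cout << "scaled down to " << instances.size() << " instances\n";
        }
    }

    int main()
    {
        std::vector<Instance> validators(1);
        validators[0].queued_requests = 50;     // heavy load on the one instance

        autoscale(validators, 10, 1);           // spins up a second instance

        for (Instance& i : validators) i.queued_requests = 0;
        autoscale(validators, 10, 1);           // idle, so one is shut down
        return 0;
    }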

Those are the advantages of cloud systems. But none of them are free; they all require that you build your system for the cloud. That takes effort.


Tuesday, October 11, 2022

Technical interviews

Businesses -- large businesses that have HR departments -- have a problem: They find it difficult to hire new staff.

The problem has a few aspects.

First are the processes that businesses have developed for hiring. Businesses have refined their processes over decades. They have automated the application process, they have refined the selection process to filter out unqualified candidates, and they have documented job descriptions and made pay grades equitable. They have, in short, optimized the hiring process.

But they have optimized it for the pre-COVID market, in which jobs were few and applicants were plentiful. The selection processes have been designed to filter out candidates: to start with a large number of applications and, through multiple steps, reduce that list to a manageable three (or five, or ten). The processes have been built on the assumption that many candidates wanted to work at the company, and were willing to undergo phone screens, interviews, and take-home tests.

The current market is a poor fit for these practices. Candidates are less willing to undergo day-long interviews. They demand payment for take-home tests (some of which can take hours). Candidates are especially reluctant to undergo the process multiple times, for multiple positions. The result is that companies cannot hire new staff. ("No one wants to work!" cry the companies, but a better description might be "Very few people are willing to jump through all of our hoops!")

One might think that companies could simply change their hiring processes. There is an obstacle to this: Human Resources.

Most people think that the purposes of Human Resources are to hire people, occasionally fire them, and administer wages and benefits. They miss an important purpose of HR: to keep the company out of court.

Human Resources is there to prevent lawsuits. Lawsuits from employees who claim harassment, candidates who were not hired, employees whose employment was terminated, employees who are unhappy with their annual performance review, ... you get the idea.

HR meets this objective by enforcing consistency. They administer consistent annual evaluations. They document employee performance prior to termination of employment. They define and execute consistent hiring practices.

Note that last item: consistent hiring practices. One of the ways that HR deflects lawsuits is by ensuring that hiring practices are consistent for all candidates (or all candidates in broad classes). Consistency is required not only across employees but also over time. A consistent approach (to hiring, performance review, or termination of employment) is a strong defense against claims of discrimination.

The suggestion that HR change its hiring practices goes against the "consistency" mandate. And HR has a good case for keeping its practices consistent.

Companies must balance the need for staff against the risk of lawsuits (from a change in practices). It is not an easy call, and one that should not be made lightly. And something to keep in mind: The job market may shift back to the previous state of "many candidates for few openings". Should a company adjust its practices for a shift in the market that may be temporary? Should it shift again when the market changes back?

I don't have simple, definite answers. Each company must find its own.

Wednesday, October 5, 2022

Success with C++

Having recently written on the possible decline of C++, it is perhaps only fair that I share a success story about C++. The C++ programming language is still alive, and still useful. I should know, because a recent project used C++, and successfully!

The project was to maintain and enhance an existing C++ program. The program was written by other programmers before I arrived, over a period of years. Most of the original developers were no longer on the project. (In other words, a legacy system.)

The program itself is small by today's standards, with fewer than 300,000 lines of source code. It also has an unusual design (by today's standards): the program calculates economic forecasts from a set of input data. It has no interaction with the user; the calculations are made entirely from the input data and the program logic.

We (the development team) have successfully maintained and enhanced this program by following some rules, and placing some constraints upon ourselves. The goal was to make the code easy to read, easy to debug, and easy to modify. We made some design decisions for performance, but only after our initial design was shown to be slow. These constraints, I think, were key to our success.

We use a subset of C++. The language is large and offers many capabilities; we pick those that are necessary. We use classes. We rarely use inheritance; instead, we build classes by composition. Thus, we have no problems with slicing of objects. (Slicing is an effect that can occur in C++ when casting a derived class to a base class. It generally does not occur in other OOP languages.) There are a very small number of classes that use inheritance, and in those cases we often want slicing.
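A small, invented example shows the difference. With inheritance, assigning a derived object to a base object slices off the derived part; with composition, there is nothing to slice. (The Forecast, DetailedForecast, and DetailedReport classes are made up for this illustration.)

    #include <iostream>
    #include <string>

    // With inheritance, assigning a derived object to a base object "slices"
    // off the derived part.
    struct Forecast         { std::string label = "forecast"; };
    struct DetailedForecast : Forecast { std::string detail = "quarterly"; };

    // With composition, there is nothing to slice: DetailedReport simply
    // holds a Forecast as a member.
    struct DetailedReport
    {
        Forecast forecast;
        std::string detail = "quarterly";
    };

    int main()
    {
        DetailedForecast derived;
        Forecast base = derived;          // slicing: 'detail' is silently lost
        std::cout << base.label << "\n";  // only the base part remains

        DetailedReport report;
        std::cout << report.forecast.label << " / " << report.detail << "\n";
        return 0;
    }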

We use the STL but not Boost. The STL (the Standard Template Library) is enough for our needs, and we use only what we need: strings, vectors, maps, and an occasional algorithm.
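A typical fragment of that kind of code (with invented data) looks something like this:

    #include <algorithm>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main()
    {
        // Strings, vectors, and maps cover most of our data-handling needs.
        std::vector<double> growth_rates = { 1.2, 0.8, 2.5, 1.9 };
        std::map<std::string, double> by_region = {
            { "north", 1.2 }, { "south", 2.5 }
        };

        // An occasional algorithm, such as finding the largest element.
        auto largest = std::max_element(growth_rates.begin(), growth_rates.end());
        std::cout << "largest growth rate: " << *largest << "\n";

        for (const auto& entry : by_region)
            std::cout << entry.first << ": " << entry.second << "\n";
        return 0;
    }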

We followed the Java convention for files, classes, class names, and function names. That is, each class is stored in its own file. (In C++, that means two files per class: a header file and a source file.) The name of the file is the name of the class (with a ".h" or ".cpp" extension). Class names use camel-case, with a capital letter at the beginning of each word, for names such as "Year" or "HitAdjustment". Function names use snake-case, with all lower-case letters and underscores between words. This naming convention simplified a lot of our code. When creating objects, we could create an object of type Year and name it "year". (The older code used no naming conventions, and many classes had lower-case names, which meant that when creating an object of type "ymd" (for example) we had to pick a name like "my_ymd" and keep track mentally of what was a class name and what was a variable name.)
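As an illustration, here is what the convention looks like for the Year class mentioned above. (The is_leap_year member and its logic are invented for the example; the two files are shown in one listing for brevity.)

    // Year.h -- one class per file; the file is named after the class
    #ifndef YEAR_H
    #define YEAR_H

    class Year                          // class names in camel-case
    {
    public:
        explicit Year(int value) : value_(value) {}

        int  value() const { return value_; }
        bool is_leap_year() const       // function names in snake-case
        {
            return (value_ % 4 == 0 && value_ % 100 != 0) || value_ % 400 == 0;
        }

    private:
        int value_;
    };

    #endif

    // main.cpp -- in the calling code, the convention lets an object take
    // the lower-case form of its class name
    #include <iostream>

    int main()
    {
        Year year(2024);
        std::cout << year.value() << (year.is_leap_year() ? " is" : " is not")
                  << " a leap year\n";
        return 0;
    }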

We do not use namespaces. That is, we do not "use std" or any other namespace. This forces us to specify the namespace for every class and function. While tedious, it provides the benefit that one can see, at a glance, where each name comes from. There is no need to search through the code, or to guess about a function.
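A short, invented example of what that looks like in practice:

    #include <iostream>
    #include <string>
    #include <vector>

    // No "using namespace std;" -- every standard name carries its namespace,
    // so a reader can tell at a glance where each type or function comes from.
    int main()
    {
        std::vector<std::string> regions = { "north", "south" };

        for (const std::string& region : regions)
            std::cout << region << '\n';

        return 0;
    }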

We use operator overloading only for a few classes, and only when the operators are obvious. Most of our code uses function calls. This also reduces guesswork by developers.
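For example (the Amount class and the apply_growth function are invented for this sketch): we overload an operator only when its meaning surprises no one, and fall back to a named function otherwise.

    #include <iostream>

    // An invented value class where the operator is obvious: adding two
    // amounts surprises no one.
    class Amount
    {
    public:
        explicit Amount(double value) : value_(value) {}
        double value() const { return value_; }

        Amount operator+(const Amount& other) const
        {
            return Amount(value_ + other.value_);
        }

    private:
        double value_;
    };

    // For anything less obvious, we prefer a named function over a clever operator.
    Amount apply_growth(const Amount& base, double rate)
    {
        return Amount(base.value() * (1.0 + rate));
    }

    int main()
    {
        Amount q1(100.0), q2(250.0);
        Amount total = q1 + q2;                       // obvious, so overloaded
        Amount next  = apply_growth(total, 0.03);     // not obvious, so named

        std::cout << total.value() << " " << next.value() << "\n";
        return 0;
    }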

We have no friend classes and no friend functions. (We could use them, but we don't need them.)

Our attitude towards memory management is casual. Current operating systems provide a 2 gigabyte space for our programs, and that is enough for our needs. (It has been so far.) We avoid pointers and dynamic allocation of memory. STL allocates memory for its objects, and we assume that it will manage that memory properly.
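An invented example of that casual approach: values and standard containers, with no new, no delete, and no raw pointers. (The load_rates function is made up for illustration.)

    #include <iostream>
    #include <vector>

    // The vector owns its memory and releases it when it goes out of scope;
    // there is nothing for us to clean up.
    std::vector<double> load_rates(int count)
    {
        std::vector<double> rates;
        rates.reserve(count);
        for (int i = 0; i < count; ++i)
            rates.push_back(0.01 * i);
        return rates;                 // returned by value; no cleanup to remember
    }

    int main()
    {
        std::vector<double> rates = load_rates(100);
        std::cout << rates.size() << " rates loaded\n";
        return 0;
    }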

We do not use lambdas or closures. (We could use them, but we don't need them.)

We use blank lines to separate sections of code. We also use blank lines to set off statements that are split across multiple lines (one blank line before the statement and one after).
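An invented fragment showing the convention:

    #include <iostream>

    // Blank lines separate the sections; the multi-line statement gets a
    // blank line before it and one after it.
    double compute_adjustment(double base_rate, double seasonal_factor,
                              double regional_factor)
    {
        double seasonal = base_rate * seasonal_factor;
        double regional = base_rate * regional_factor;

        double adjustment = seasonal
                          + regional
                          - base_rate;

        return adjustment;
    }

    int main()
    {
        std::cout << compute_adjustment(0.03, 1.2, 0.9) << "\n";
        return 0;
    }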

We use simple expressions. This increases the number of source lines, which eases debugging (we can see intermediate results). We let the C++ compiler optimize expressions for "release" builds.
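An invented example: the dense form appears only as a comment, and the named intermediate results are what we actually write, so each can be inspected in a debugger.

    #include <iostream>

    double blended_rate(double domestic, double foreign, double weight)
    {
        // dense form: double rate = domestic * weight + foreign * (1.0 - weight);

        double domestic_part = domestic * weight;
        double foreign_part  = foreign * (1.0 - weight);
        double rate = domestic_part + foreign_part;

        return rate;
    }

    int main()
    {
        std::cout << blended_rate(0.03, 0.05, 0.6) << "\n";
        return 0;
    }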

----

By using a subset of C++, and carefully picking which features make up that subset, we have successfully developed, deployed, and maintained a modest-sized C++ application.

These constraints are not part of the C++ language itself; we enforce them for our own code. They give us a consistent style of code, and one that we find readable. New team members find that they can read and understand the code, which was one of our goals. We can quickly make changes, test them, and deploy them -- another goal.

These choices work for us, but we don't claim that they will work for other teams. You may have an application that has a different design, a different user interface, or a different set of computations, and it may require a different set of C++ code.

I don't say that you should use these constraints on your project. But I do say this: you may want to consider some constraints for your code style. We found that these constraints let us move forward, slowly at first and then rapidly.