Showing posts with label efficiency.

Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

Early computers ran in "batch mode" - a non-interactive mode in which input arrived on punch cards or magnetic tape, rather than from people typing at terminals (much less from the personal computers we use today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, and so on. Each run was a "job", consisting of the program, its input data, and its output data.

The advantage of batch processing is that each job runs as an independent unit and can be scheduled. The whole collection of jobs could be planned, as each used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or, more often, during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they created. Those increases meant an increase in the size of data for processing, and that meant increased processing time to run their computer jobs.

If you have spare computing time, you simply let the jobs run longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make computers more efficient. One of the first methods was called "multiprogramming" and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. These are all tasks that any modern operating system handles, but in their day they were significant changes.

It was also successful. It took time previously spent waiting for input/output operations and re-allocated it to processing. The result was more useful work from the same processor, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
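That cooperative scheme can be sketched in a few lines. This is a modern simulation, not period code: the job names are invented, and Python generators stand in for programs that voluntarily hand back the processor when they request I/O.

```python
from collections import deque

def program(name, steps, log):
    """A job that alternates computing with requests for I/O."""
    for i in range(steps):
        log.append(f"{name}: computing step {i}")
        yield  # requesting I/O voluntarily hands control to the scheduler

def run(jobs):
    """Cooperative scheduler: resume each job until its next I/O request."""
    log = []
    ready = deque(program(name, steps, log) for name, steps in jobs)
    while ready:
        job = ready.popleft()
        try:
            next(job)           # run until the job requests I/O
            ready.append(job)   # the I/O "completes"; job is runnable again
        except StopIteration:
            pass                # job finished; drop it from the queue
    return log

print(run([("payroll", 2), ("inventory", 3)]))
```

Note the weakness of the scheme: a job that never requests I/O would keep the processor forever, since nothing ever forces it to yield.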

Windows 3.x used a similar technique to switch between programs (and Windows 95 retained it for older, 16-bit applications).

Later operating systems used "pre-emptive task switching", giving each program a small slice of processing time, then suspending it and activating another. This was the big change for Windows NT.
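The time-sliced policy can be simulated as well. Real pre-emption relies on hardware timer interrupts; this sketch (with invented job names and a made-up quantum) only models the scheduling decision itself.

```python
from collections import deque

def run_preemptive(jobs, quantum=2):
    """Simulate round-robin time slicing: each job runs for at most
    `quantum` steps, then is suspended and sent to the back of the queue."""
    log = []
    ready = deque(jobs)                       # each job: (name, steps_left)
    while ready:
        name, steps_left = ready.popleft()
        slice_used = min(quantum, steps_left)
        log.extend([name] * slice_used)       # the job "runs" for its slice
        steps_left -= slice_used
        if steps_left > 0:
            ready.append((name, steps_left))  # pre-empted, not yet finished
    return log

print(run_preemptive([("editor", 3), ("compiler", 5)]))
```

Unlike the cooperative version, no job can monopolize the processor: the scheduler, not the program, decides when to switch.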

Multiprogramming was driven by cost reduction (or cost avoidance) and focused on internal operations. It made computing more efficient in the sense that one got "more computer" from the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.

Monday, November 29, 2010

The return of frugality

We may soon see a return to frugality with computing resources.

Ages ago, computers were expensive, affordable only to the very wealthy (governments and large companies). The owners would dole out computing power in small amounts and charge for each use. They used the notion of "CPU time", the amount of time the CPU actually spent processing your task.

The computing model of the day was timesharing, the allocation of a fraction of a computer to each user, and the accounting of usage by each user. The key aspects measured were CPU time, connect time, and disk usage.
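As a concrete (and entirely hypothetical) illustration of that accounting, a monthly bill might total those three measurements against a rate card. The rates, units, and function name below are invented; every service bureau set its own prices.

```python
# Hypothetical rates -- real timesharing bureaus each had their own rate card.
RATE_CPU_SECOND = 0.50      # dollars per second of CPU time used
RATE_CONNECT_MINUTE = 0.05  # dollars per minute connected at a terminal
RATE_DISK_KB_DAY = 0.01     # dollars per kilobyte stored, per day

def monthly_bill(cpu_seconds, connect_minutes, disk_kb, days_in_month=30):
    """Total the three metered resources into one user's monthly charge."""
    return (cpu_seconds * RATE_CPU_SECOND
            + connect_minutes * RATE_CONNECT_MINUTE
            + disk_kb * RATE_DISK_KB_DAY * days_in_month)

# 120 CPU-seconds, ten hours at a terminal, 50 KB kept on disk all month
print(round(monthly_bill(120, 600, 50), 2))
```

The point is not the particular numbers but the mindset: every second of CPU time showed up on someone's bill, so every second was worth saving.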

The PC broke the timesharing model. Instead of one computer shared by a number of people, the PC let each person have their own computer. The computers were small and low-powered (laughably so by today's standards) but enough for individual needs. With the PC, the timesharing mindset was discarded, and along with it went the attention to efficiency.

A PC is a very different creature from a timesharing system. The purchase is much simpler, the installation is much simpler, and the administration is (well, was) non-existent. Instead of purchasing CPU power by the minute, you purchased the PC in one lump sum.

This change was significant. The PC model had no billing for CPU time; the monthly bill disappeared. That made PC CPU time "free". And since CPU time was free, the need for tight, efficient code became non-existent. (Another factor in this calculus was the availability of ever-faster processors. Instead of writing better code, you could buy a new, faster PC for less than the cost of the programming time.)

The cloud computing model is different from the PC model, and returns to the model of timesharing. Cloud computing is timesharing, although with virtual PCs on large servers.

With the shift to cloud computing, I think we will see a return to some of the timesharing concepts. Specifically, I think we will see the concept of billable CPU time. With the return of the monthly bill, I expect to see a renaissance of efficiency. Managers will want to reduce the monthly bill, and they will ask for efficient programs. Development teams will have to deliver.

With pressure to deliver efficient programs, development teams will look for solutions and the market will deliver them. I expect that the tool-makers will offer solutions that provide better optimization and cloud-friendly code. Class libraries will advertise efficiency on various platforms. Offshore development shops will cite certification in cloud development methods and efficiency standards. Eventually, the big consultant houses will get into the act, with efficiency-certified processes and teams.

I suspect that few folks will refer to the works of the earlier computing ages. Our predecessors had to deal with computing constraints much more severe than the cloud environments of the early twenty-first century, yet we will (probably) ignore their work and re-invent their techniques.