Thursday, October 1, 2015

The birth of multiprogramming

Early computers ran one program at a time. They were also slow. This wasn't a problem. At first.

These early computers ran in "batch mode" - a non-interactive mode in which input typically arrived on punch cards or magnetic tape, not from people typing at terminals (much less at personal computers, as we do today).

Companies had programs for each task: a program to update inventory, a program to update sales information, a program to print personnel reports, etc. Each run was a "job": a program, its input data, and its output data.

The advantage of batch processing is that each job runs as an independent unit and can be scheduled. Your collection of programs could be planned, because each one used specific data, generated specific data, and ran for a (usually) predictable amount of time. Companies would run their daily jobs every day, their weekly jobs perhaps every Saturday, their monthly jobs at the end of the month (or, more often, during the first days of the next month), and their annual jobs at the end of the year.

If your programs all ran successfully, and within their usual timeframes, you were happy. The problem for companies was that they tended to grow, increasing the number of customers they supported and the number of sales they processed. Those increases meant an increase in the amount of data to process, which in turn meant more processing time to run their computer jobs.

If you have spare computing time, that is no problem: you simply let your jobs run longer. But what happens when you don't have spare processing time? What happens when your daily jobs take more than twenty-four hours to run?

In today's world of personal computers and cloud processing, we simply order up some additional computers. That was not possible in the early days of computing: computers were expensive.

Instead of buying (or leasing) more computers, we looked for ways to make the computers we had more efficient. One of the first methods was called "multiprogramming", and it allowed multiple programs to run at the same time.

Successful multiprogramming had a number of challenges: loading multiple programs into memory (at different locations), preventing one program from writing to memory allocated to another, and sharing the processor among the simultaneously executing programs. These are all tasks that any modern operating system handles, but in their day they represented a significant change.

It was also successful. It took the time a program spent waiting for input/output operations and re-allocated it to other programs. The result was more usable processing time, which meant that a company could run more programs without buying a larger (and more expensive) computer.

Multiprogramming shared the processor by using what we call "cooperative multitasking". A program ran until it requested an input/output operation, at which point the operating system initiated the operation and switched the processor to a different program. The input/output operation was handled by a separate device (a card reader or tape reader, or maybe a disk drive) so it could continue without the main processor. This freed the main processor to do some other work in the form of another program.
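To make the idea concrete, here is a minimal sketch of cooperative multitasking in modern Python, with generators standing in for programs and a queue standing in for the operating system's list of ready jobs. The names here (job, scheduler) are purely illustrative; nothing in this sketch comes from a real operating system of that era.

```python
from collections import deque

def job(name, steps):
    """A 'program' that computes for a while, then requests I/O."""
    for i in range(steps):
        print(f"{name}: computing step {i}")
        # The program voluntarily gives up the processor while I/O happens
        # on a separate device (card reader, tape drive, disk).
        yield f"{name} requests I/O"

def scheduler(jobs):
    """Run each job until it requests I/O, then switch to another job."""
    ready = deque(jobs)
    while ready:
        current = ready.popleft()
        try:
            request = next(current)      # run until the job yields (requests I/O)
            print(f"  scheduler: {request}; switching programs")
            ready.append(current)        # re-queue it; its I/O proceeds elsewhere
        except StopIteration:
            pass                         # the job has finished

scheduler([job("inventory", 2), job("payroll", 3)])
```

The key point is that the switch happens only when a program asks for I/O; a program that never asks keeps the processor.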

Windows 3.x used a similar technique to switch between programs, and Windows 95 kept it for 16-bit applications.

Later operating systems used "pre-emptive task switching": each program was given a small slice of processing time, after which the operating system suspended it and activated another. This was the big change for Windows NT.
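Again only as a sketch (real pre-emption relies on a hardware timer interrupt, which plain Python cannot show directly), here is the same round-robin idea with the scheduler, not the program, deciding when to switch. The "quantum" of work units stands in for the timer.

```python
from collections import deque

def preemptive_scheduler(jobs, quantum=2):
    """jobs: list of (name, total_work_units). Each job gets at most
    `quantum` units of processor time before it is suspended."""
    ready = deque(jobs)
    while ready:
        name, remaining = ready.popleft()
        used = min(quantum, remaining)
        print(f"{name}: ran {used} unit(s)")
        remaining -= used
        if remaining > 0:
            # Time slice expired: suspend the job and put it back in line.
            ready.append((name, remaining))
        else:
            print(f"{name}: finished")

preemptive_scheduler([("inventory", 3), ("payroll", 5)])
```

The difference from the cooperative sketch is who gives up the processor: there the program yielded voluntarily; here the scheduler cuts it off when its time slice expires.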

Multiprogramming was driven by cost reduction (or cost avoidance) and focussed on internal operations. It made computing more efficient in the sense that one got "more computer" for the same amount of hardware. (The operating system had to be more sophisticated, of course.) But it did nothing for the user; it made no changes to the user experience. One still had to schedule jobs to run programs with specific input and output data.

Cost avoidance is one driver for IT. Its focus on internal operations is appropriate. But there are other drivers, and they require other techniques.
