From the beginning of time (for electronic data processing) we have desired bigger processors. We have wanted shorter clock cycles, more bits, more addressable memory, and more powerful instruction sets, all for processing data faster and more efficiently. With time-sharing we wanted additional controls to separate programs, which led to more complex processors. With networks and malware we added still more complexity to monitor processes.
The history of processors has been a (mostly) steady upwards ramp. I say "mostly" because the minicomputer revolution (ca. 1965) and the microcomputer revolution (1977) saw the adoption of smaller, simpler processors. Yet these smaller processors also increased in complexity over time. (Microprocessors started with the humble 8080 and advanced to the Z-80, the 8086, the 80286, eventually leading to today's Pentium-derived processors.)
I think that virtualization gives us an opportunity for smaller, simpler processors.
Virtualization creates a world of two levels: the physical and the virtual. The physical processor has to keep the virtual processors running, and keep them isolated. The physical processor is a traditional processor and follows traditional rules: more is better, and keep users out of each others' hair.
But the virtual processors, they can be different. Where is it written that the virtual processor must be the same as the host processor? We've built our systems that way, but is it necessary?
The virtualized machine can be smaller than the physical host, and frequently is. It has less memory, smaller disks, and in general a slower (and usually simpler) processor. Yet a virtual machine is still a full PC.
We understand the computing unit known as a "PC". We've been virtualizing machines in these PC-sized units because it has been easy.
A lot of that "standard PC" contains complexity to handle multiple users.
For cheap, easily created virtual machines, is that complexity really necessary?
It is if we use the virtual PC as we use a physical PC, with multiple users and multiple processes. If we run a web server, then we need that complexity.
But suppose we take a different approach to our use of virtual machines. Suppose that, instead of running a complex program like a web server or a database manager, we handle simple tasks. Let's go further and suppose that we create a virtual machine that is designed to handle only one specific task, and that one task is trivial in comparison to our normal workload.
Let's go even further and say that when the task is done, we destroy the virtual machine. Should we need it again, we can create another one to perform the task. Or another five. Or another five hundred. That's the beauty of virtual machines.
Such a machine would need less "baggage" in its operating system. It would need, at the very least, some code to communicate with the outside world (to get instructions and report the results), the code to perform the work, and... perhaps nothing else. All of the user permissions and memory management "stuff" becomes superfluous.
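To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of what such a single-task guest might look like. The stdin/stdout channel and the "sum a list of numbers" task are assumptions for the example, not features of any particular virtualization product.

```python
# A minimal single-task guest: get instructions, do the work, report the result.
# Assumption: the host passes the task in on stdin and reads the answer from stdout.
import json
import sys

def perform_work(task):
    # The one job this machine exists to do -- a placeholder computation.
    return sum(task["numbers"])

def main():
    task = json.loads(sys.stdin.read())                 # communicate: get instructions
    result = perform_work(task)                         # compute
    sys.stdout.write(json.dumps({"result": result}))    # communicate: report the result
    # ...and nothing else: no users, no permissions, no long-lived services.

if __name__ == "__main__":
    main()
```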
This virtual machine is something that exists between our current virtual PC and an object in a program. This new thing is an entity of the virtualization manager, yet simpler (much simpler) than a PC with an operating system and an application program.
Being much simpler than a PC, this small, specialized virtual machine can use a much simpler processor design. It doesn't need virtual memory management -- we give the virtual processor enough memory. It doesn't need to worry about multiple user processes -- there is only one user process. The processor has to be capable of running the desired program, of course, but that is a lot simpler than running a whole operating system.
A regular PC is "complexity in a box". The designers of virtualization software (VMware, VirtualPC, VirtualBox, etc.) expend considerable effort duplicating PC hardware in the virtual world, and synchronizing that virtual hardware with the underlying physical hardware.
I suspect that in many cases, we don't want virtual PCs. We want virtual machines that can perform some computation and talk to other processors (database servers, web servers, queue servers, etc.).
Small, disposable, virtual machines can operate as one-time use machines. We can instantiate them, execute them, and then discard them. These small virtual machines become the Dixie cups of the processing world. And small virtual machines can use small virtual processors.
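Here is an equally small sketch of the host side of that lifecycle, again in Python and again hedged: run_guest() and the file name task_guest.py are hypothetical, and an ordinary subprocess stands in for the disposable virtual machine. A real host would make the equivalent "create, run, destroy" calls against its virtualization manager.

```python
# Host-side lifecycle sketch: instantiate, execute, discard.
# Assumption: "task_guest.py" is the single-task guest sketched earlier, and a
# subprocess plays the role of a VM; a real system would call the hypervisor here.
import json
import subprocess

def run_guest(task):
    # Instantiate a fresh "machine" for this one task.
    proc = subprocess.run(
        ["python3", "task_guest.py"],   # hypothetical guest image/program
        input=json.dumps(task),
        capture_output=True,
        text=True,
    )
    # Execute, collect the result, and discard: the "machine" vanishes when
    # the process exits, like a used Dixie cup.
    return json.loads(proc.stdout)["result"]

if __name__ == "__main__":
    # Need one? Create one. Need five hundred? Create five hundred.
    results = [run_guest({"numbers": [i, i + 1, i + 2]}) for i in range(5)]
    print(results)
```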
I think we may see a renewed interest in small processor design. For virtual processors, "small" means simple: a simple instruction set, a simple memory architecture, a simple system design.