Thursday, October 17, 2013
For PCs, small may be the new big thing
In the early days of the PC, the large, spacious box with its expansion slots made sense: the "base" PC was not enough for corporate work, so PCs needed expansion and customization. When we bought a PC, we added video cards, memory cards, serial and parallel port cards, terminal emulator cards, and network cards. We even added cards with real-time clocks. It was necessary to open the PC and add these cards.
Over the years, more and more "extra" features became "standard". The IBM PC AT came with a built-in real-time clock, which eliminated one card. Memory increased. Hard drives became larger and faster. The serial ports and parallel ports were replaced by USB ports. Today's PC has enough memory, a capable video card, a big enough hard disk, a network interface, and ample USB ports. (Apple computers have slightly different communication options, but enough.)
The one constant in the thirty years of change has been the size of the PC. The original IBM PC was about the size of today's tower PC. PCs still have the card slots and drive bays for expansion, although few corporate users need such things.
That's about to change. PCs will shrink from their current size to one of two smaller sizes: small and nothing. The small PCs will be the size of the Apple Mac Mini: a 4-inch by 4-inch box with ports and no expansion capabilities. The "nothing" size PCs will be virtual machines, existing only in larger computers. (Let's focus on the "small" size. We can discuss virtual PCs another time.)
The small PCs have all the features of a real PC: processor, memory, storage, video, and communications. They may have some compromises, with perhaps not the fastest processors and the most capable video cards, but they are good enough. They can run Windows or Linux, and the Apple Mac Mini runs MacOS, of course. All you need is a display, a keyboard, and a network connection. (These small-form PCs often have wired network interfaces and not wireless.)
I suppose that we can give credit to Apple for the change. Apple's Mac Mini showed that there was a steady demand for smaller, non-PC-shaped PCs. Intel has its "Next Unit of Computing" or NUC device, a small 4-inch by 4-inch PC with communication ports.
Other manufacturers had built small PCs prior to Apple's Mac Mini (the Shuttle PC is a notable pioneer) but received little notice.
The Arduino, the Raspberry Pi, and the BeagleBone are also small-form devices, designed mainly for tinkerers. I expect little interest from the corporate market in these devices.
But I do expect interest in the smaller "professional" units from Apple and Intel. I also expect to see units from other manufacturers like Lenovo, Asus, HP, and Dell.
Small will be the new big thing.
Sunday, September 15, 2013
Virtualization and small processors
The history of processors has been a (mostly) steady upwards ramp. I say "mostly" because the minicomputer revolution (ca. 1965) and microcomputer revolution (1977) saw the adoption of smaller, simpler processors. Yet these smaller processors also increased in complexity, over time. (Microprocessors started with the humble 8080 and advanced to the Z-80, the 8086, the 80286, eventually leading to today's Pentium-derived processors.)
I think that virtualization gives us an opportunity for smaller, simpler processors.
Virtualization creates a world of two levels: the physical and the virtual. The physical processor has to keep the virtual processes running, and keep them isolated. The physical processor is a traditional processor and follows traditional rules: more is better, and keep users out of each other's hair.
But the virtual processors, they can be different. Where is it written that the virtual processor must be the same as the host processor? We've built our systems that way, but is it necessary?
The virtualized machine can be smaller than the physical host, and frequently is. It has less memory, smaller disks, and in general a slower (and usually simpler) processor. Yet a virtual machine is still a full PC.
We understand the computing unit known as a "PC". We've been virtualizing machines in these PC-shaped units because it has been easy.
A lot of that "standard PC" contains complexity to handle multiple users.
For cheap, easily created virtual machines, is that complexity really necessary?
It is if we use the virtual PC as we use a physical PC, with multiple users and multiple processes. If we run a web server, then we need that complexity.
But suppose we take a different approach to our use of virtual machines. Suppose that, instead of running a complex program like a web server or a database manager, we handle simple tasks. Let's go further and suppose that we create a virtual machine that is designed to handle only one specific task, and that one task is trivial in comparison to our normal workload.
Let's go even further and say that when the task is done, we destroy the virtual machine. Should we need it again, we can create another one to perform the task. Or another five. Or another five hundred. That's the beauty of virtual machines.
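A minimal sketch of that create-execute-discard lifecycle, using ordinary operating-system processes as stand-ins for disposable virtual machines (the task function and pool size here are invented for illustration, not taken from any particular virtualization product):

```python
# Sketch: the create-execute-discard lifecycle, with OS processes
# standing in for disposable virtual machines. Each worker exists
# only long enough to perform one trivial task.
from multiprocessing import Pool

def run_task(n):
    # The single, trivial job this "machine" exists to perform.
    return n * n

if __name__ == "__main__":
    # Create five workers, hand each a task, then destroy them all
    # when the "with" block ends.
    with Pool(processes=5) as pool:
        results = pool.map(run_task, range(5))
    print(results)  # -> [0, 1, 4, 9, 16]
```

The same pattern applies at the virtual-machine level: the manager creates an instance, the instance performs its one task and reports back, and the manager discards it.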
Such a machine would need less "baggage" in its operating system. It would need, at the very least, some code to communicate with the outside world (to get instructions and report the results), the code to perform the work, and... perhaps nothing else. All of the user permissions and memory management "stuff" becomes superfluous.
This virtual machine is something that exists between our current virtual PC and an object in a program. This new thing is an entity of the virtualization manager, yet simpler (much simpler) than a PC with operating system and application program.
Being much simpler than a PC, this small, specialized virtual machine can use a much simpler processor design. It doesn't need virtual memory management -- we give the virtual processor enough memory. It doesn't need to worry about multiple user processes -- there is only one user process. The processor has to be capable of running the desired program, of course, but that is a lot simpler than running a whole operating system.
A regular PC is "complexity in a box". The designers of virtualization software (VMware, VirtualPC, VirtualBox, etc.) expend large efforts at duplicating PC hardware in the virtual world, and synchronizing that virtual hardware with the underlying physical hardware.
I suspect that in many cases, we don't want virtual PCs. We want virtual machines that can perform some computation and talk to other processors (database servers, web servers, queue servers, etc.).
Small, disposable, virtual machines can operate as one-time use machines. We can instantiate them, execute them, and then discard them. These small virtual machines become the Dixie cups of the processing world. And small virtual machines can use small virtual processors.
I think we may see a renewed interest in small processor design. For virtual processors, "small" means simple: a simple instruction set, a simple memory architecture, a simple system design.
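As a toy illustration of how small such a processor could be, here is a stack machine with a handful of instructions. (The instruction set is invented for this sketch; a real design would be shaped by the intended workload.)

```python
# Sketch of a tiny "virtual processor": a stack machine with a
# deliberately simple instruction set and no memory management,
# no user processes, no operating system. The instruction names
# are invented for illustration.
def execute(program, inputs):
    stack = list(inputs)
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
        elif op == "halt":
            break
    return stack[-1]

# Compute (3 + 4) * 10 on the tiny machine.
program = [("push", 3), ("push", 4), ("add",),
           ("push", 10), ("mul",), ("halt",)]
print(execute(program, []))  # -> 70
```

A processor this simple is trivially cheap to instantiate and discard, which is exactly the property a disposable virtual machine wants.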
Sunday, October 17, 2010
Small as the new big
I attended the CPOSC (the one-day open source conference in Harrisburg, PA) this weekend. It was a rejuvenating experience, with excellent technical sessions on various technologies.
Open source conferences come in various sizes. The big open source conference is OSCON, with three thousand attendees and running for five days. It is the grande dame of open source conferences. But lots of other conferences are smaller, either in number of attendees or in number of days, and usually both.
The open source community has lots of small conferences. The most plentiful of these are the BarCamps, small conferences organized at the local level. They are "unconferences", where the attendees hold the sessions, not a group of selected speakers.
Beyond BarCamp, several cities have small conferences on open source. Portland OR has Open Source Bridge (about 500 attendees), Salt Lake City has Utah Open Source Conference, Columbus has Linux Fest, Chicago has a conference at the University of Illinois, and the metropolis of Fairlee, VT hosts a one-day conference. The CPOSC conference in Harrisburg has a whopping 150 attendees, due to the size of their auditorium and fire codes.
I've found that small conferences can be just as informative and just as energetic as large conferences. The venues may be smaller, the participants are usually from the region and not the country, yet the conference speakers are just as passionate and informed as the speakers at the large conferences.
Local conferences are often volunteer-run, with low overhead and a general air of reasonableness. They have no rock stars, no prima donnas. Small conferences can't afford them, and the audience doesn't idolize them. It makes for a healthy and common-sense atmosphere.
I expect the big conferences to continue. They have a place in the ecosystem. I also expect the small conferences to continue, and to thrive. They serve the community and therefore have a place.