Showing posts with label virtual PCs. Show all posts

Sunday, October 18, 2015

More virtual, less machine

A virtual machine, in the end, is really an elaborate game of "let's pretend". The host system (often called a hypervisor) persuades an operating system that a physical machine exists, and the operating system works "as normal", driving video cards that do not really exist and responding to timer interrupts created by the hypervisor.

Our initial use of virtual machines was to duplicate our physical machines. Yet in the past decade, we have learned about the advantages of virtual machines, including the ability to create (and destroy) virtual machines on demand. These abilities have changed our ideas about computers.

Physical computers (that is, the real computers one can touch) often serve multiple purposes. A desktop PC provides e-mail, word processing, spreadsheets, photo editing, and a bunch of other services.

Virtual computers tend to be specialized. We build virtual machines often as single-purpose servers: web servers, database servers, message queue servers, ... you get the idea.

Our operating systems and system configurations have been designed around the desktop computer, the one serving multiple purposes. Thus, the operating system has to provide all possible services, including those that might never be used.

But with specialized virtual servers, perhaps we can benefit from a different approach. Perhaps we can use a specialized operating system, one that includes only the features we need for our application. For example, a web server needs an operating system and the web server software, and possibly some custom scripts or programs to assist the web server -- but that's it. It doesn't need to worry about video cards or printing. It doesn't need to worry about programmers and their IDEs, and it doesn't need to have a special debug mode for the processor.
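To make the idea concrete, here is a minimal sketch of such a single-purpose server, using only Python's standard library. (The handler, port, and message are illustrative, not a real deployment; in a specialized virtual machine, this process would be essentially the only thing the operating system needs to support.)

```python
# A minimal single-purpose web server: no printing, no video, no IDE --
# just the kernel, the network stack, and this one process.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from a single-purpose server\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port; a real server would use a fixed one.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One request against our own server, to show it working end to end.
url = "http://127.0.0.1:%d/" % server.server_address[1]
reply = urllib.request.urlopen(url).read()
server.shutdown()
print(reply.decode(), end="")
```

Everything beyond this -- display drivers, print spoolers, debuggers -- is weight the specialized server never uses.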

Message queue servers are also specialized, and if they keep everything in memory then they need little in the way of file systems or reading and writing files. (They may need enough to bootstrap the operating system.)

All of our specialized servers -- and maybe some specialized desktop or laptop PCs -- could get along with a specialized operating system, one that uses the components of a "real" operating system, and just enough of those components to get the job done.

We could change policy management on servers. Our current arrangement sees each server as a little stand-alone unit that must receive policies and updates to those policies. That means that the operating system must be able to receive the policy updates. But we could change that. We could, upon instantiation of the virtual server, build in the policies that we desire. If the policies change, instead of sending an update, we create a new virtual instance of our server with the new policies. Think of it as "server management meets immutable objects".
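A small sketch of "server management meets immutable objects", under the assumption that a server is described by an image definition (the class name, fields, and versions below are hypothetical): a policy change never mutates an existing definition; it produces a new one, and deployment means replacing instances.

```python
# Hedged sketch: policies are baked in at instantiation. "Updating" a
# policy creates a new server definition; it never modifies a running one.
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: the definition itself is immutable
class ServerImage:
    role: str
    policy_version: str
    packages: tuple = ()

def with_new_policy(image: ServerImage, version: str) -> ServerImage:
    # Returns a fresh definition; the original is left untouched.
    return replace(image, policy_version=version)

web_v1 = ServerImage(role="web", policy_version="2015-09", packages=("httpd",))
web_v2 = with_new_policy(web_v1, "2015-10")

# web_v1 is unchanged; deployment would destroy the v1 instances and
# instantiate v2, rather than pushing an update into running servers.
print(web_v1.policy_version, web_v2.policy_version)
```

The frozen dataclass plays the role of the immutable server image: the only way to change it is to make a new one.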

The beauty of virtual servers is not that they are cheaper to run, it is that we can throw them away and create new ones on demand.

Friday, August 29, 2014

Virtual PCs are different from real PCs

Virtual PCs started as an elaborate game of "let's pretend", in which we simulated a real PC (that is, a physical-hardware PC) in software. A virtual PC doesn't exist -- at least not in any tangible form. It has a processor and memory and disk drives and all the things we normally associate with a PC, but they are all constructed in software. The processor is emulated in software. The memory is emulated in software. The disk drive... you get the idea.

Virtualization offers several advantages. We can create new virtual PCs by simply running another copy of the virtualization software. We can move virtual PCs from one host PC to another host PC. We can make back-up images of virtual PCs by simply copying the files that define the virtual PC. We can take snapshots of the virtual PC at a moment in time, and restore those snapshots at our convenience, which lets us run risky experiments that would "brick" a real PC.

We like to think that virtual PCs are the same as physical PCs, only implemented purely in software. But that is not the case. Virtual PCs are a different breed. I can see three areas in which virtual PCs will vary from real PCs.

The first is storage (disk drives) and the file system. Disk drives hold our data; file systems organize that data and let us access it. In real PCs, a disk drive is a fixed size. This makes sense, because a physical disk drive *is* a fixed size. In the virtual world, a disk drive can grow or shrink as needed. I expect that virtual PCs will soon have these flexible disk drives. File systems will have to change; they are built with the assumption of a fixed-size disk. (A reasonable assumption, given that they have been dealing with physical, fixed-size disks.) Linux will probably get a file system called "flexvfs" or something.

The second area in which virtual PCs vary from real PCs is virtual memory. The concept of virtual memory is older than virtual PCs or even virtual machines in general (virtual machines date back to the mainframe era). Virtual memory allows a PC to use more memory than it really has, by swapping portions of memory to disk. Virtual PCs currently implement virtual memory because they are faithfully duplicating the behavior of real PCs, but they don't have to. A virtual PC can assume that it has all memory addressable by the processor and let the hypervisor handle the virtualization of memory. Delegating the virtualization of memory to the hypervisor lets the "guest" operating system become simpler, as it does not have to worry about virtual memory.

A final difference between virtual PCs and real PCs is the processor. In a physical PC, the processor is rarely upgraded. An upgrade is an expensive proposition: one must buy a compatible processor, shut down the PC, open the case, remove the old processor, carefully install the new processor, close the case, and start the PC. In a virtual PC, the processor is emulated in software, so an upgrade is nothing more than a new set of definition files. It may be possible to upgrade a processor "on the fly" as the virtual PC is running.

These three differences (flexible file systems, lack of virtual memory, and updateable processors) show that virtual PCs are not the same as the "real" physical-hardware PCs. I expect that the two will diverge over time, and that operating systems for the two will also diverge.

Friday, January 31, 2014

Faster update cycles mean PC apps become expensive

Ah, for the good old days of slow hardware upgrades. It used to be that one could buy a computer system and use it for years, possibly even a decade. The software would be upgraded, but the hardware would last. One could run a business knowing the future of its IT (hardware and software) was predictable.

Today we have faster cycles for hardware upgrades. Cell phones, tablets, and some PCs (Apple) are updated in a matter of months, not decades. The causes are multiple: competition (especially among phone vendors), changes in technology, and a form of planned obsolescence (Apple) that sees existing customers buying new versions.

I expect that these faster cycles will move to the PC realm.

The change in the life span of PC hardware will affect consumers and businesses, with the greater impact on businesses. I expect individual consumers to move away from PCs and switch to phones, tablets, game consoles, and internet TV appliances.

Businesses have a challenge ahead. Corporate users typically don't want PCs; they want computing power. Specifically, they want computing power with a user interface that is consistent over time. (When a new version of Windows is introduced to a corporate environment, one of the first actions is to configure the user interface to look like the old version. The inability of Windows 8 to emulate Windows 7 exactly is probably the cause of corporate discomfort with it.)

But the challenge to business goes beyond the user interface. Corporations want stable computing platforms to hold their applications. They want to build a system (or buy one) and use it for a long time. Switching from one vendor's system to another's is an expensive proposition, and corporations amortize the conversion cost over a long life. A new system, or even a new version of a system, can impose changes to the user interface, interfaces to other systems, and interactions with the operating system and drivers. All of these changes are part of the cost of implementation.

In the corporation's mind, the fewer conversions, the better.

That philosophy is colliding with the faster pace of hardware. Apple is not alone in its rapid release of hardware and operating systems; Microsoft is releasing new versions of Windows at a rate much faster than the ten-year gap between Windows XP and Windows 7. (I'm ignoring Windows Vista.)

To adapt to the faster change, I expect corporations to shift from the PC platform to technologies that allow them to retain longer lifespans: virtual PCs and cloud computing. Virtual PCs are the easier change, allowing applications to be shifted directly onto the new platform. With remote access, a (fast-changing) real PC can access the (slow-changing) "get the work done" virtual PC. In this case, virtualization and remote access act as a shock absorber for the change in technology.

Cloud computing offers a more efficient platform, but only after re-designing the application. The large monolithic PC applications must be split into multiple services coordinated by (relatively) simple applications running on tablets and phones. In this case, the use of small, simple components on multiple platforms (server and tablet/phone) acts as the buffer to changes in technology.

The PC platform will see faster update cycles and shorter life spans. Applications on this platform will be subject to more changes. A company's customer base will use more platforms, driving up the cost of development, testing, and support.

Moving to virtual PCs or to the cloud is a way of avoiding that increase in costs.