Be Smart About Virtualization

We are heavy users of virtualization at Likewise Software. Since we develop software for over 100 different platforms (multiple flavors of UNIX, Linux, and Mac OS X), we have to be able to boot up a Red Hat 2.1 machine one minute and an OpenSolaris machine the next. Developers and testers both need access to a wide variety of machines on a regular basis. Without virtualization, doing our work would be either very expensive (we’d need hundreds of machines) or very slow (we’d have to re-image machines all the time).

We also use virtualization outside of development and test. Over time, we’ve tended to collect an assortment of servers running project management tools, bug databases, internal wikis, and HR, financial, and other applications. A few months ago, our IT folks examined all the servers in our inventory and migrated many of them to virtual machines.

Unquestionably, virtualization can bring about good things — reduced administrative costs, increased flexibility, reduced energy use, etc. Virtualization doesn’t always make sense, however.

Occasionally, I have a conversation with someone who says something like: “Virtualization is terrible! I moved my database server and my risk-management grid onto VMs and now they run at half the speed they used to!” Yes, I do want to whack them upside the head when they say this.

Obviously, if you have a CPU-intensive, heavily threaded application running on a physical server, it’s going to slow down if you put it on a virtualized server along with other CPU-intensive applications. If you wouldn’t run two apps on the same physical server, certainly don’t run them on two VMs on a single physical server. VM hypervisors can run multiple virtualized machines effectively and with little degradation in performance, but only to the extent that the virtualized systems are amenable to this. If the VMs are running applications that are not heavily threaded and do not heavily tax their CPU and I/O systems, then the hypervisor can exploit multiple cores and spare CPU cycles to provide acceptable performance.
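
To make that concrete, here’s a back-of-the-envelope check (my own sketch, not anyone’s official sizing tool) for deciding whether a set of workloads can share one host. All the utilization figures are made up for the example:

```python
# Back-of-the-envelope consolidation check. The per-app figures are
# hypothetical measured averages (busy cores) from each physical server.
workloads = {
    "database":       7.5,  # heavily threaded, CPU-bound
    "risk_mgmt_grid": 6.0,  # also CPU-bound
    "internal_wiki":  0.3,  # mostly idle
    "bug_tracker":    0.4,
}

HOST_CORES = 8
HEADROOM = 0.75  # leave ~25% for the hypervisor and load spikes

demand = sum(workloads.values())
budget = HOST_CORES * HEADROOM
verdict = "fits" if demand <= budget else "does NOT fit"
print(f"{demand:.1f} busy cores {verdict} in a budget of {budget:.1f} cores")
```

Run the numbers before you consolidate: the database and the grid alone blow the budget, while the wiki and bug tracker barely register.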

There are some “textbook” examples of applications and systems that are ideal for virtualization. Web farms, for example, can deploy web sites in their own VMs, giving you complete control of a virtualized server. You can muck with system configuration to your heart’s content without worrying about other web sites that might be deployed on the same physical server. Web farms can also quickly duplicate VMs, allowing them to provide additional load-balanced capacity on demand.
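
On a libvirt-managed host, adding that extra capacity can be a few lines of Python. This is only a sketch: it assumes you’ve already prepared a per-clone disk image and domain definition (the file web-02.xml here is hypothetical):

```python
import libvirt  # pip install libvirt-python; assumes a libvirt-managed host

# Hypothetical: web-02.xml is a prepared domain definition that points at
# a freshly copied disk image (virt-clone or a storage-level copy does that part).
conn = libvirt.open("qemu:///system")
with open("web-02.xml") as f:
    xml = f.read()

# Boot a transient VM to absorb extra load; it vanishes once shut down.
dom = conn.createXML(xml, 0)
print(f"Started {dom.name()} to add load-balanced capacity")
conn.close()
```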

Beyond the textbook examples, here are some others to consider.

Infrequently run applications are great candidates for virtualization. Consider financial apps that might only be run at quarter- or year-end. Rather than dedicating a machine that sits idle 95% of the time to these applications, they can be deployed on virtual systems that are suspended until needed. This approach is ideal for sensitive applications such as financial and reporting systems, which are best not run on shared hardware: if there are other applications on the same computer, the likelihood of intentional or unintentional access to secure data increases. With virtualization, physical systems don’t have to be “wasted” on infrequently used sensitive applications. Note, too, that by suspending sensitive VMs while they’re not in use, you reduce the attack surface available to hackers.
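
As a sketch of the suspend-until-needed idea on a libvirt-managed host (the VM name quarterly-finance is made up for the example):

```python
import libvirt  # sketch against the libvirt Python bindings

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("quarterly-finance")  # hypothetical VM name

# Between reporting periods: save the guest's state to disk and stop it.
# While saved, it consumes no CPU or RAM and is unreachable on the network.
if dom.isActive():
    dom.managedSave(0)

# At quarter-end: create() notices the managed-save image and resumes
# the guest exactly where it left off.
dom.create()
conn.close()
```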

Another great use of virtualization is for old, legacy systems. If you’re running old versions of Windows NT or SUSE Linux or Solaris x86 and don’t want to update them (why fix something that’s not broken?), why not move these systems to VMs? In all likelihood, they are running on flaky, outdated (perhaps unsupported) hardware. It’s possible that they’ll run faster on VMs than on old metal.

Demo systems are ideal candidates for virtualization. These systems receive a lot of “wear and tear”: they’re frequently polluted with sample data and often left in weird states. Moving them to VMs allows you to use VM snapshots to quickly restore them to a known, clean state.
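
With libvirt, for instance, snapshot-and-revert is a couple of calls. A minimal sketch, assuming a demo VM named sales-demo (the name is made up):

```python
import libvirt  # sketch against the libvirt Python bindings

conn = libvirt.open("qemu:///system")
demo = conn.lookupByName("sales-demo")  # hypothetical demo VM

# Capture the VM once it's in a known-good, presentable state.
SNAP_XML = "<domainsnapshot><name>clean-demo</name></domainsnapshot>"
demo.snapshotCreateXML(SNAP_XML, 0)

# ...after a messy customer demo, roll the whole machine back in seconds.
snap = demo.snapshotLookupByName("clean-demo", 0)
demo.revertToSnapshot(snap, 0)
conn.close()
```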

Finally, one of my favorite uses for VMs is as security honeypots. Create a VM (especially a Windows VM) and give it a suggestive name, perhaps payroll or HR. Create some directories and files in it, again with suggestive names. Now turn on all the auditing features available in the OS. Protect this system as you would any other secure server in your network (but don’t use the same administrative passwords!). If possible, isolate this VM from your other systems: put it on its own subnet and disallow routing to other systems, for example. If you have an intrusion detection system, make sure it monitors this VM. There should be no access to this computer (other than by you, to assure its health). If your IDS or audit logs signal that someone is trying to access the system, you know you’re under attack.
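
Since nothing legitimate ever touches the honeypot, even a crude tripwire works. Here’s a minimal sketch that tails an auth log and complains about any login activity at all; the log path, match strings, and alerting are assumptions you’d adapt to your OS and IDS:

```python
import time

AUTH_LOG = "/var/log/auth.log"  # assumption; a Windows VM would use the event log

def alert(line):
    # In practice: raise an IDS event or page the on-call admin.
    print("HONEYPOT TOUCHED - possible attack:", line.strip())

def watch(path):
    with open(path) as log:
        log.seek(0, 2)  # start at the end; only new entries matter
        while True:
            line = log.readline()
            if not line:
                time.sleep(1.0)
                continue
            # ANY authentication activity is suspicious on a honeypot.
            if "Failed password" in line or "Accepted" in line:
                alert(line)

if __name__ == "__main__":
    watch(AUTH_LOG)
```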

Virtualization has been around for 30+ years; I used VM/370 in college in 1977. It offers many benefits that, thanks to VMware, Xen, and others, are now available to any computer user. At the end of the day, however, virtualization is simply multitasking with really, really good application isolation. Rather than multitasking applications that call into a single operating-system instance, hypervisors multitask entire operating-system instances. The rest of the gory details (how they virtualize hardware, where drivers live, etc.) are just that: details.