Virtualization Pros, Cons, and Security: A Few Things to Consider

My virtualization experience is limited to using VMware's suite of products, ESX 3.0 and 3i to be specific.  I also have up to 10 virtual machines (VMs) running at my home office using VMware Workstation 6.5, everything from honeypot boxes to domain controllers, and of course my "victim" machine(s).  I've been using VMs for about five years now, some personally and some professionally.  Before being introduced to ESX and using VMware in an enterprise datacenter environment, I was intrigued by the idea of having multiple operating systems (OSes) on a single machine without having to set the computer up as a dual-boot machine, which is, let's face it, a pain.

I became a "quasi" ESX administrator and was introduced to a whole new ball game.  I say quasi because it wasn't my primary job function, but everyone on the team became a surrogate ESX admin.  We set up a dual ESX environment (two Dell PowerEdge M-Series Blade Systems with 16 blades each) with the data, in the form of the virtual disk files (VMDKs), residing on a storage area network (SAN).  We usually ran 700-800 virtual servers, from Windows to Red Hat, with an additional 200 running on network attached storage (NAS) devices.  With the proper configuration of NIC teaming and ESX's load balancing/failover settings, I discovered an immensely fault-tolerant environment that was nearly devoid of one of the biggest issues enterprise admins face: hardware problems.  Another treat was the server provisioning process.  It took nearly a month from start to finish to get a hard-standing server: ordering the server, provisioning rack space, receiving the server, installing software/hardware, testing/hardening, and finally delivering it to the internal customer.  With the VM environment, the requester simply placed an internal order for the server and we spun it up from a template .iso.  Fifteen minutes later, your server's ready for custom applications and your use.  Finally, our disaster recovery was pretty straightforward: move all of the VMs to a waiting offsite ESX server.  Flip the switch, and you're done.  Not only did we save money on hardware costs, but on energy as well.  When you don't have to cool and power a bunch of hardware, you don't spend as much every month on the electric bill.

There were a lot of pros to working with the virtual environment, centralized administration and fault tolerance to name a few.  But, on the other hand, there were some cons as well.  If anything goes wrong with your VM environment, you lose your servers (800 servers in our case).  We had several places where things could go wrong: the ESX hosts themselves and the SAN/NAS.  When an ESX host broke (human configuration error), we lost a ton of servers.

On the admin side of the house, you're focused so much on the technology and getting stuff to work properly that you sometimes forget about that pesky little piece of IT that seems to ruin all of your fun: security.  It's not uncommon for admins to "forget" security.  Heck, the Internet was designed with it as an afterthought (if you don't believe me, just look at DNS or TCP/IP).  I, unfortunately, no longer have the luxury of looking at technology with strictly "admin eyes."  I now must consider the ramifications it has with regard to security, both of the system and of the infrastructure.

The first big issue with security is host/guest segregation.  Put simply: stuff that happens on the VM doesn't affect the server that is running VMware Workstation or ESX.  That was always a big selling point for VMware: memory and process segregation.  However, leave it to the vulnerability researchers to rain on folks' parades.  A bug in VMware, discovered by Kostya Kortchinsky, allows code executed on a guest OS to run on the host.  Check it out.  Scary, to say the least.  So, now you need to worry about bugs/flaws in your guests harming your host, and if that host is an ESX that houses 500 VM servers… you do the math.  On a similar note, some of the newer VM packages include the ability to cut/paste between VMs and allow actual host disk access from a guest.  Not smart, in my opinion.  I'd flip those switches to "off."
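For what it's worth, the switches I'm talking about live in each guest's .vmx file.  The parameter names below are from VMware's hardening guidance as I remember it, so verify them against the documentation for your own version before relying on them, but the idea is to explicitly turn off the clipboard, drag-and-drop, and shared-folder conveniences per VM:

    isolation.tools.copy.disable = "TRUE"
    isolation.tools.paste.disable = "TRUE"
    isolation.tools.dnd.disable = "TRUE"
    isolation.tools.hgfsServerSet.disable = "TRUE"

The first three kill copy/paste and drag-and-drop between the guest and whatever console is attached to it; the last one shuts off the shared-folders (HGFS) channel so the guest can't reach host directories even if someone turns shared folders on later.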

The next area of concern that should be paid close attention to is virtual networking.  The virtual network layer operates similarly to a physical network, using vNICs and vSwitches and… hmm… v-VLANs.  It is possible to bind specific virtual NICs and virtual switches to physical NICs on a host that lead to separate LANs (a DMZ and an internal network, for example).  If it's configured properly, you can have complete network segregation within a single ESX.  However, with a few simple mistakes you can create a bridge between the two networks.  Simply enable IP forwarding on a guest, change one or two settings on the vSwitches, and you've just bridged the networks.  Which leads me to my next point:
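On a Linux guest, the IP forwarding half of that mistake is literally one command, which is also why it's cheap to audit for.  A quick sketch using the standard sysctl interface (nothing VMware-specific here):

    # Check whether the guest will route packets between its interfaces
    sysctl net.ipv4.ip_forward        # 1 = forwarding enabled, 0 = disabled

    # Turn it off immediately...
    sysctl -w net.ipv4.ip_forward=0

    # ...and keep it off across reboots
    echo "net.ipv4.ip_forward = 0" >> /etc/sysctl.conf

Pair a check like that with a periodic review of which vSwitch each vNIC is actually attached to, and you'll catch most accidental bridges before they matter.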

Separation of duties.  What happens when you have ESX admins who also have root on the guests?  You have a problem.  They are the gatekeepers and now have a lot of power.  As a best practice, the ESX hosts and the guests should be administered by separate departments/teams, and neither should have excessive rights to the other's systems.

All in all, I'd say that I am a fan of virtualized environments, IF they're set up properly.  Some feel that virtual environments have too many places that could be points of failure for multiple servers, but that's where careful thought, planning, and analysis come into play.  If it's done right, a virtualized data center is a fluid machine.  Literally!  It's only one machine…
