Virtualization Pros, Cons, and Security: A Few Things to Consider

My virtualization experience is limited to using VMware’s suite of products, ESX 3.0 and 3i to be specific.  I also have up to 10 virtual machines (VMs) running at my home office using VMware Workstation 6.5, everything from honeypot boxes to domain controllers to, of course, my “victim” machine(s).  I’ve been using VMs for about five years now, some personally and some professionally.  Before being introduced to ESX and using VMware in an enterprise datacenter environment, I was intrigued by the idea of having multiple operating systems (OSes) on a single machine without having to set the computer up as a dual-boot machine, which is, let’s face it, a pain.

I became a “quasi” ESX administrator and was introduced to a whole new ball game.  I say quasi because it wasn’t my primary job function, but everyone on the team became a surrogate ESX admin.  We set up a dual ESX environment (two Dell PowerEdge M-Series Blade Systems with 16 blades each) with the data (in the form of the virtual disk files, .vmdks) residing on a storage area network (SAN).  We usually ran 700-800 virtual servers, from Windows to Red Hat, with an additional 200 running on network attached storage (NAS) devices.  With the proper configuration of NIC teaming and ESX’s load balancing/failover settings, I discovered an immensely fault-tolerant environment that was nearly devoid of one of the biggest issues enterprise admins face: hardware problems.  Another treat was the server provisioning process.  It took nearly a month from start to finish to stand up a physical server: ordering the server, provisioning rack space, receiving the server, installing software/hardware, testing/hardening, and finally delivering it to the internal customer.  With the VM environment, the requester simply placed an internal order for the server and we spun it up from a template .iso.  Fifteen minutes later, your server’s ready for custom applications and your use.  Finally, our disaster recovery was pretty straightforward: move all of the VMs to a waiting offsite ESX server.  Flip the switch, and you’re done.  Not only did we save money on hardware costs, but also on energy.  When you don’t have to cool and power a bunch of hardware, you don’t spend as much every month on the electric bill.

There were a lot of pros to working with the virtual environment: centralized administration and fault tolerance, to name a couple.  But, on the other hand, there were some cons as well.  If anything goes wrong with your VM environment, you lose your servers (800 servers in our case).  We had several places where things could go wrong: the ESX hosts themselves and the SAN/NAS.  When an ESX host broke (human configuration error), we lost a ton of servers.

On the admin side of the house, you’re focused so much on the technology and getting stuff to work properly that you sometimes forget about that pesky little piece of IT that seems to ruin all of your fun: security.  It’s not uncommon for admins to “forget” security.  Heck, the Internet was designed with it as an afterthought (if you don’t believe me, just look at DNS, or TCP/IP).  I, unfortunately, no longer have the leisure of looking at technology with strictly “admin eyes.”  I now must consider the ramifications it has with regard to security, both of the system and of the infrastructure.

The first big issue with security is host/guest segregation.  Put simply: stuff that happens on the VM doesn’t affect the server that is running VMware Workstation or ESX.  That was always a big selling point for VMware: memory and process segregation.  However, leave it to the vulnerability researchers to rain on folks’ parades.  A bug in VMware, discovered by Kostya Kortchinsky, allows code executed in a guest OS to run on the host.  Check it out.  Scary, to say the least.  So, now you need to worry about bugs/flaws in your guests harming your host, and if that host is an ESX that houses 500 VM servers… you do the math.  On a similar note, some of the newer VM packages include the ability to cut/paste between guest and host and allow actual host disk access from a guest.  Not smart, in my opinion.  I’d flip those switches to “off” (a rough way to audit those settings is sketched below).
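For what it’s worth, those “switches” live in each guest’s .vmx file; VMware’s hardening guidance points at the isolation.tools.* options for copy/paste and drag-and-drop.  Here’s a minimal sketch, in Python, of the kind of audit I mean.  Treat the exact key names (and the .vmx paths you feed it) as assumptions to double-check against the documentation for your version:

    # check_vmx.py - rough sketch: flag .vmx files that don't explicitly
    # disable guest<->host copy/paste and drag-and-drop.
    # Assumption: these isolation.tools.* key names match your VMware version.
    import sys

    WANT_DISABLED = [
        "isolation.tools.copy.disable",
        "isolation.tools.paste.disable",
        "isolation.tools.dnd.disable",
    ]

    def check_vmx(path):
        settings = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip().strip('"').upper()
        for key in WANT_DISABLED:
            if settings.get(key) != "TRUE":
                print("%s: %s is not set to TRUE" % (path, key))

    if __name__ == "__main__":
        for vmx in sys.argv[1:]:
            check_vmx(vmx)

Point it at a guest’s .vmx file and anything it prints is a switch you probably want flipped to off.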

The next area of concern that should be paid close attention to is virtual networking.  The virtual network layer operates similarly to a physical network, using vNICs and vSwitches and… hmm… v-VLANs.  It is possible to bind specific virtual NICs and virtual switches to physical NICs on a host that lead to separate LANs (a DMZ and an internal network, for example).  If it’s configured properly, you can have complete network segregation within a single ESX.  However, with a few simple mistakes you can create a bridge between the two networks.  Simply enable IP forwarding on a guest, make one or two configs on the vSwitches, and you’ve just bridged the networks (a quick guest-side check for that mistake is sketched below).  Which leads me to my next point:
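Here is a rough sketch of that guest-side check (Python, assuming a Linux guest with the usual /proc and /sys layout); it only flags a guest that is both forwarding IP traffic and multi-homed, so it’s illustrative, not a substitute for auditing the vSwitch configs themselves:

    # bridge_check.py - rough sketch: warn if this Linux guest could be
    # routing traffic between two virtual networks.
    import os

    def ip_forwarding_enabled():
        # "1" means the kernel will forward packets between interfaces
        with open("/proc/sys/net/ipv4/ip_forward") as f:
            return f.read().strip() == "1"

    def guest_nics():
        # every interface except loopback; on a VM these are the vNICs
        return [i for i in os.listdir("/sys/class/net") if i != "lo"]

    if __name__ == "__main__":
        nics = guest_nics()
        if ip_forwarding_enabled() and len(nics) > 1:
            print("WARNING: forwarding is on across %s - possible bridge "
                  "between networks" % ", ".join(nics))
        else:
            print("OK: %d NIC(s), no forwarding between them" % len(nics))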

Separation of duties.  What happens when you have ESX admins who also have root on the guests?  You have a problem.  They are the gatekeepers and now have a lot of power.  As a best practice, the ESX hosts and guests should be administered by separate departments/teams, and neither should have excessive rights to the other’s systems.

All in all, I’d say that I am a fan of virtualized environments, IF they’re set up properly.  Some feel that virtual environments have too many places that could become points of failure for multiple servers at once, but that’s where careful thought, planning, and analysis come into play.  If it’s done right, a virtualized data center is a fluid machine.  Literally!  It’s only one machine…

Data-At-Rest Encryption

To encrypt or not to encrypt?  That is the question.  The answer is universally YES!  However, there are two schools of thought when it comes to protecting data at rest (possibly more, but I only care about two).  First of all, let’s define what data at rest (DAR) is so you don’t have to open a new tab/browser window and hit Google.  I’m sure if this post has come up in a search or has otherwise caught your eye you know what DAR is, but I’ll lay it out for you here nonetheless: any data that is not traveling over a network or sitting in volatile memory is DAR.  It’s sitting somewhere in storage, hoping to be useful someday.  This could be any data, from old emails to operating system files to cached logon credentials.  Whether or not you feel like you are a target for malicious computer users, you should want to protect your data.  Even more so if you’re an organization that deals with proprietary data or government information.

Now, back to the two schools of thought: on the one hand you have File Encryption Software (FES), on the other Full Disk Encryption (FDE).  Both technologies have their pros and cons, and both have their vehement supporters and naysayers.  I’d be interested to know what your thoughts are on the matter.  I myself have an opinion, and it is just that: an opinion.  I won’t say that it fits every scenario, but for security-centric folks I’d say that FDE provides the most robust security for mobile devices and fixed workstations alike.  That view is not shocking; most experts will agree that FDE’s preboot authentication, which negates the extremely lax and easily bypassed security of BIOS and operating system (OS) passwords, is a highly secure method of protection.  Let’s not forget that FDE also prevents the hard disk and the OS from being accessed via a live Linux distribution.  Once a malicious user has physical access to a device, compromising it can take seconds with a live-boot Linux OS like BackTrack; if the device is protected by FDE, however, the OS and the data are unreachable.

Some pundits argue that FDE is cumbersome for the end user and has a low level of acceptance when it is deployed.  Speaking of deployment, others say it is very difficult to deploy FDE software in an enterprise environment.  I can speak to both of those issues.  First, depending on the skill level of the IT staff, it is not difficult to integrate FDE software en masse using directory services or other management platforms.  As for end user acceptance, the grim picture that has been painted by some is of a painstaking logon process followed by a horrendously long boot cycle.  This is, in fact, false.  Most FDE software integrates with the native OS’s logon daemons or services so that the experience is nearly identical to the logon process the user is already familiar with.

File encryption has its place.  FES is most suitable for non-mobile devices, protecting critical data or OS files.  Even then, it takes a lot more configuration and monitoring to ensure that you cover all of your bases.  And sometimes companies leave it in the end user’s hands to decide what’s encrypted and what’s not.  When it comes to security, my rule is to NEVER leave it to the end user.  Also, if you give someone an inch, they’ll take a mile.  FES leaves the door open for attackers to access the OS and the unencrypted portion of the file system.  Maybe they’ll drop a few links in a non-critical area, or perhaps they’ll slip a few custom DLLs into a non-encrypted area of the file system, just waiting to be called by a scheduled job.  If you’re going to invest in DAR, why would you give malicious users a foothold?
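For clarity, here is roughly what FES boils down to at the file level: a bare-bones Python sketch using the cryptography package.  The file names and key handling are placeholders of my own, and real FES products do far more around key management and policy:

    # file_encrypt.py - rough sketch of per-file encryption.
    # Assumptions: the "cryptography" package is installed; report.xls is a
    # stand-in for whatever file policy says to protect.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in practice, this key must be stored/escrowed securely
    cipher = Fernet(key)

    with open("report.xls", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("report.xls.enc", "wb") as f:
        f.write(ciphertext)

    # Note the gap FDE doesn't have: only report.xls is protected here.
    # The OS, the rest of the file system, and anything an attacker drops
    # into an unencrypted directory remain wide open.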

FDE has come a long way and has had its ups and downs, but for the most part I feel it is the most secure solution for a mobile workforce (heck, if it works on laptops, why not workstations?).  All of the stars need to align for a successful FDE implementation: first, you need upper-echelon support; second, you need a skilled technical staff to implement it; and third, you need to communicate its benefits to managers and users alike.