There is no question that the footprint of today’s data center is rapidly moving toward the virtual. This changes so many things about the way IT operations functions that we must start asking hard questions about security, continuity, and control of our data. Perhaps one of the biggest questions is this: what happens when everything is a file?
The trend toward Software-Defined Data Centers (sometimes abbreviated SDDC) is moving fast. Increasingly, organizations are implementing Software-Defined Networking (SDN), software-defined systems, and application instances, with less focus on hardware-based tools and standalone software installations.
As things become software-defined, it’s worth revisiting the ideas behind the “Goldilocks Zone” concept. There is a balance between security context and proper isolation techniques within a data center, but that balance may be wholly different in a virtual environment than in a physical one.
A primer can be found in an article written by Tom Corn, VMware’s VP of Security Strategy.
To start any discussion about security within a virtual or software-defined environment, we have to revisit the questions I posed in my last blog post. Let’s explore each of these here, with more emphasis on SDDC and SDN to come in later posts.
This is likely a question that has no single answer, as both of these are worthy objectives. However, for the vast majority of controls, especially those destined for a software-defined environment, the hypervisor now acts as a kernel stand-in for any system and application instances running on top of it. Just as an OS kernel manages hardware calls and resources for the user-mode applications within the OS, a hypervisor manages them for all the virtual aspects of your environment. One of the key tenets I often convey to my SANS students in virtualization and private cloud security courses is this:
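The kernel analogy is visible even from inside a guest: on x86 Linux, the CPU flags reported in /proc/cpuinfo include a `hypervisor` bit when the OS is running above a hypervisor rather than directly on hardware. A minimal sketch of checking for it (the sample flag strings below are illustrative, not captured from a real host):

```python
def running_under_hypervisor(cpuinfo_text: str) -> bool:
    """Return True if the x86 'hypervisor' CPU flag is present.

    On Linux, /proc/cpuinfo lists a 'hypervisor' flag when the kernel
    detects it is a guest running above a hypervisor.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    # Illustrative flag lines; a real /proc/cpuinfo has many more fields.
    bare_metal = "flags\t\t: fpu vme de pse tsc msr pae"
    guest = "flags\t\t: fpu vme de pse tsc msr pae hypervisor"
    print(running_under_hypervisor(bare_metal))  # False
    print(running_under_hypervisor(guest))       # True
```

On a live system you would pass in the contents of /proc/cpuinfo itself; note this only tells a guest that *something* sits beneath it, which is exactly why controls that integrate with the hypervisor kernel see more than controls running inside the guest.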
This must be a new mantra for teams everywhere. It has always been about the software stack. The hypervisor is now the lowest layer in that stack, and so integration with the hypervisor kernel becomes paramount for security controls that aim to detect and prevent attacks within the virtualized or software-defined components operating above it. While hardware integration is a fascinating idea, and may offer the only true means of validating and monitoring hypervisor and operating system integrity, the number of tools and opportunities to work at that level is small.
This is really a question of architecture and resource utilization, and puts us squarely in the “Goldilocks Zone” conversation I mentioned earlier.
There are definitive trade-offs to any of these options.
As we move from virtualization to private cloud, and from private cloud to hybrid architectures that integrate with cloud provider environments (we hope!), the need for security controls that we can configure, install, and maintain from afar grows accordingly. Today, we’re discovering that the traditional security controls we know well, ranging from log and event management to network monitoring to access controls and encryption, don’t translate to external cloud providers and hosting environments. We simply don’t have the right tools, or enough integration at lower layers of the stack, to play at the same level within the cloud (at least not in most cases).
Some tools are getting better, and automation and scripting technologies are playing a big part in this (another topic we’ll be covering in upcoming posts). To some degree, we also need the cloud providers to cooperate and allow more access to the hypervisors we leverage within their environments.
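To make "configure, install, and maintain from afar" concrete, automation here usually means policy-as-code: scripts that pull an inventory from a provider API and flag instances missing required controls. The sketch below assumes a simplified inventory format; the control names and record fields are invented for illustration, and a real script would map them onto a specific provider's API.

```python
# Policy-as-code audit over a VM inventory. The inventory format and
# control names here are hypothetical, standing in for data a real
# script would retrieve from a cloud provider or hypervisor API.

REQUIRED_CONTROLS = {"logging_enabled", "disk_encrypted", "agent_installed"}

def audit_vm(vm: dict) -> list:
    """Return the sorted list of required controls missing from one VM."""
    enabled = {name for name, on in vm.get("controls", {}).items() if on}
    return sorted(REQUIRED_CONTROLS - enabled)

def audit_inventory(vms: list) -> dict:
    """Map VM name -> missing controls, for every noncompliant VM."""
    findings = {}
    for vm in vms:
        missing = audit_vm(vm)
        if missing:
            findings[vm["name"]] = missing
    return findings

if __name__ == "__main__":
    inventory = [
        {"name": "web01", "controls": {"logging_enabled": True,
                                       "disk_encrypted": True,
                                       "agent_installed": True}},
        {"name": "db01", "controls": {"logging_enabled": False,
                                      "disk_encrypted": True}},
    ]
    print(audit_inventory(inventory))
    # {'db01': ['agent_installed', 'logging_enabled']}
```

The point of the pattern is that the audit itself is portable even when the enforcement isn't: the same policy definition can be checked against on-premises and provider-hosted inventories, which is as close as most teams can currently get to a uniform control plane across environments.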
Today, the vast majority of providers are highly disinclined to offer this level of access, especially in multitenant scenarios, and for obvious reasons: no cloud provider wants one tenant installing hypervisor-integrated security tools and getting that low in the stack. So how do we compensate for this?
Time will tell how we handle all of these thorny issues, but the fact of the matter is this: security teams need deep access to the hypervisor kernel. We need more and better tools that operate at the deepest levels of the virtual and software-defined data center, and security teams must radically alter their worldview of the risks in our data centers today.
Look for upcoming blog posts that delve more into these challenges, and how we’re starting to address them!
Dave Shackleford is the owner and principal consultant of Voodoo Security and a SANS analyst, senior instructor, and course author. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering, and is a VMware vExpert with extensive experience designing and configuring secure virtualized infrastructures. He has previously worked as CSO for Configuresoft, CTO for the Center for Internet Security, and as a security architect, analyst, and manager for several Fortune 500 companies. Dave is the author of the Sybex book "Virtualization Security: Protecting Virtualized Environments", as well as the coauthor of "Hands-On Information Security" from Course Technology. Recently Dave coauthored the first published course on virtualization security for the SANS Institute. Dave currently serves on the board of directors at the SANS Technology Institute and helps lead the Atlanta chapter of the Cloud Security Alliance.