Improving Processes, Virtualized Infrastructure

December 26, 2010

Excerpts from Drive Even Greater Efficiency with Virtualization, Forrester Consulting

What is virtualization? The simple answer is multiple instances of an operating system running on a single server. Put another way, virtualization is server consolidation taken to the next level.

Server virtualization is a method of using untapped server computing potential by running multiple server programs on one physical server. This is done by ‘tricking’ the one physical server into believing it is actually multiple servers, so that there will be no unwanted interaction between programs running on the same physical server. (Appropedia)

Why virtualize your servers? To achieve efficiency savings through consolidation: lower capital expenditures, less server space, faster application deployment and service delivery, and reduced power consumption. Achieving these results takes rethinking server management and the processes associated with a consolidated deployment infrastructure. Virtualization is also a good fit for organizations and data centers that are short on IT staff.

In the study conducted by Forrester, it is noted that “utilization can be ratcheted up further and operational efficiencies can be dramatically improved by adapting your processes and adopting new ones…”

There are apparent savings and benefits in implementing virtualization (virtual machines, or VMs) for the reasons above. VMs increase efficiency in the overall management of the data center and lower the cost of advanced disaster recovery, thanks to the immediate and long-term benefits of consolidating servers (http://bit.ly/eA5zeE). Introducing a new architecture into the operational environment requires reviewing current operational processes, changing some, and more than likely developing new ones. Take a phased implementation approach to make sure virtualization provides value to the organization and that no harm is done during the transition.

Forrester discovered during the study that SMBs that deployed VMs in a structured and consistent manner averaged 40% to 60% resource utilization, double the 20% to 40% of less mature shops, and were considered the most efficient environments thanks to a more stable, consistent deployment platform and more mature management processes. Some of these operational processes included VM life-cycle management, VM templates, and VM staging.
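The utilization gap Forrester measured translates directly into hardware counts. As a rough sketch (the workload numbers and host sizes below are illustrative, not from the study), serving the same aggregate demand at a higher average utilization means fewer physical hosts:

```python
import math

def hosts_needed(total_demand_cores: float, cores_per_host: int,
                 target_utilization: float) -> int:
    """Physical hosts required to serve a workload at a target utilization."""
    usable_cores = cores_per_host * target_utilization
    return math.ceil(total_demand_cores / usable_cores)

# 200 cores of aggregate VM demand on 16-core hosts:
immature = hosts_needed(200, 16, 0.30)  # ~30% utilization (less mature shops)
mature = hosts_needed(200, 16, 0.50)    # ~50% utilization (structured deployments)
print(immature, mature)  # 42 vs. 25 hosts for the same workload
```

The point of the sketch is that moving from the 20%–40% band to the 40%–60% band can cut the physical footprint by a third or more before any other optimization.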

Forrester defines four stages of infrastructure maturity; the third, Process Improvement, is the focus of this blog. By the time an organization reaches stage 3, it has moved past the novelty of virtualization to the actual management of the VM environment. During this phase the focus is on determining what’s different, optimizing the infrastructure, driving up consistency of operations, automating routine tasks, rightsizing the environment, and implementing the operational and management processes developed during the earlier phases of implementation and use. This is also the phase for replicating virtualized servers for dedicated disaster recovery.

Traditional server management processes can be adapted for the VM environment. Life-cycle management, cloning or mirroring, and capacity management are processes that are not used heavily in the physical server world but are indispensable for managing the virtual environment. A virtualized environment means more workloads to manage on a smaller pool of resources, which can be simplified with server management tools that permit remote access, power and thermal optimization, and embedded health monitoring. A virtual environment is a pool of resources that should be managed as a whole; it is imperative that consistency take the forefront.
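VM life-cycle management is easier to reason about as a small state machine: every VM moves through defined stages, and only legal transitions are allowed, which keeps the pool consistent. A minimal sketch (the state names here are assumptions for illustration, not from the Forrester study):

```python
# Allowed transitions for each life-cycle state (illustrative names).
LIFECYCLE = {
    "requested": {"staged"},
    "staged":    {"deployed"},
    "deployed":  {"suspended", "retired"},
    "suspended": {"deployed", "retired"},
    "retired":   set(),  # terminal: resources reclaimed into the pool
}

class VM:
    def __init__(self, name: str):
        self.name = name
        self.state = "requested"

    def transition(self, new_state: str) -> None:
        """Move to new_state, rejecting anything the life cycle forbids."""
        if new_state not in LIFECYCLE[self.state]:
            raise ValueError(
                f"{self.name}: illegal move {self.state} -> {new_state}")
        self.state = new_state

vm = VM("web01")
vm.transition("staged")
vm.transition("deployed")
print(vm.state)  # deployed
```

Enforcing the transition table, rather than letting administrators create and retire VMs ad hoc, is what keeps sprawl out of the consolidated pool.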

A few management processes to adapt, according to Forrester:

  • Policy-based automation. VMs are dynamic workloads that grow and shrink with usage patterns and changes in those patterns. Managing and optimizing these dynamic workloads, and the planning involved to ensure utilization goals continue to be met, consumes a disproportionate amount of an administrator’s time. Automated workload management software that uses mathematics to optimize the placement of VMs based on historical performance and actual resource consumption is one tool for policy-based automation. Such a tool can also trigger live migration when a conflict is apparent. Taking full advantage of automation requires consistency in the deployment infrastructure.
  • Tetris thinking. Not all VMs consume the same amount of resources, nor do their consumption patterns vary uniformly during the workday. Take the lessons learned in the popular tile-stacking game Tetris and apply them to your virtual environment. A simple example is filling up a server with a pair of highly consumptive, business-critical VMs and a myriad of low-consumption, noncritical VMs. The available headroom will be ample for the most consumptive applications, and in the event that they need even more resources, policy-based automation can live-migrate the smaller VMs away from the system to free up the necessary resources.
  • Standardize configuration management and software distribution. Managing OS configuration, patching, and auditing absorbs a lot of administrators’ time. You can simplify management tasks by standardizing on fewer variants of the software stacks you use and enforcing the use of VM templates. Rather than building configurations from scratch, start with a known good VM (also known as a Golden Master template) and simply clone and modify that base design. In many virtual infrastructures, clones can be linked to their original templates, meaning that changes to the original template — like patches — will propagate to all derived VMs.
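The first two points, policy-based placement and Tetris thinking, boil down to a packing problem. A toy sketch of the idea (the greedy rule, VM names, and numbers are illustrative assumptions, not the algorithm any particular product uses): pack VMs onto hosts by historical peak demand, largest first, while reserving headroom for the critical workloads.

```python
def place(vms: dict, hosts: dict, headroom: float = 0.2) -> dict:
    """Greedy first-fit-decreasing placement by peak CPU demand.

    vms:   name -> historical peak demand (cores)
    hosts: name -> physical capacity (cores)
    A `headroom` fraction of each host is held back for demand spikes.
    Returns a name -> host mapping; raises if a VM will not fit.
    """
    free = {h: cap * (1 - headroom) for h, cap in hosts.items()}
    placement = {}
    # Place the biggest consumers first, like dropping the large Tetris
    # pieces before filling gaps with the small ones.
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)  # host with the most room left
        if free[host] < demand:
            raise RuntimeError(f"no host can take {vm}")
        free[host] -= demand
        placement[vm] = host
    return placement

vms = {"erp": 6, "db": 5, "log": 1, "batch": 1, "dev": 1}
hosts = {"hostA": 16, "hostB": 16}
print(place(vms, hosts))
```

A real policy engine would add the live-migration trigger the bullet describes: when a big VM needs its reserved headroom, the small noncritical VMs are the ones migrated off.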
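The golden-master idea in the last bullet can be modeled in a few lines. This is a minimal sketch of the concept, not any vendor’s API: a linked clone stores only its own overrides, so a patch applied to the template shows through to every derived VM automatically.

```python
class Template:
    """A golden-master template: the known-good base configuration."""
    def __init__(self, config: dict):
        self.config = dict(config)

    def patch(self, key, value):
        # e.g. roll a new kernel into the master once
        self.config[key] = value

class LinkedClone:
    """A VM derived from a template, storing only local overrides."""
    def __init__(self, template: Template, overrides=None):
        self.template = template
        self.overrides = dict(overrides or {})

    def effective_config(self) -> dict:
        # Template settings, shadowed by this clone's local overrides.
        return {**self.template.config, **self.overrides}

golden = Template({"os": "RHEL 5.5", "kernel": "2.6.18-194", "agent": "1.0"})
web = LinkedClone(golden, {"hostname": "web01"})
db = LinkedClone(golden, {"hostname": "db01", "agent": "1.1"})

golden.patch("kernel", "2.6.18-238")     # patch the master once...
print(web.effective_config()["kernel"])  # ...every clone sees it
```

Note how db’s local `agent` override survives the patch: standardizing on templates does not preclude per-VM customization, it just keeps the customization explicit and small.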