Is your virtualization environment penny wise and pound foolish?
Enterprise virtualization solutions offer a valuable way to reduce operating expenses by removing underutilized servers from data centers. Many manufacturers now offer servers designed around virtualization workloads rather than traditional single-application workloads. Hardware extensions to CPUs and PCI buses let hypervisors present resources directly and efficiently to virtual machines, reducing the performance penalty imposed by an additional layer between the application and system resources. Storage vendors offer adaptable configurations and integration with enterprise virtualization solutions. High-speed, high-bandwidth network interconnects deliver the throughput needed to service consolidated network traffic.
These promises come at a cost. The memory and core densities that paint the most attractive consolidation picture are usually top-of-the-line configurations. High-end reliability, availability and scalability features once limited to proprietary UNIX hardware are now available in x86 servers. Multi-configuration blade environments consolidate resources into a single, easy-to-manage chassis, but carry costs beyond the workload blades themselves. The latest features require the latest hardware, which means a capital expenditure. That capital cost of the server refresh, coupled with the net-new cost of the hypervisor environment and tools, can drive the red pen across the entire architecture. Cutting may ease the initial acquisition pain, but at the expense of future utility.
The unfortunate reality is that once your architecture hits the budgetary chopping block, redesigning to the updated target cost is rarely a viable option. The must-haves get picked out of the plan and work-arounds get shoehorned into place to shave dollar weight. Often, features of the hypervisor platform get leveraged to make those replacement choices, and the biggest victims are usually the server configurations, which get cut down, and the enterprise storage, which gets eliminated. This creates a 'budget bottleneck' by removing critical design elements without the benefit of a redesign. Instead of a robust solution standing on a tripod of storage, servers and hypervisor, the hypervisor becomes the linchpin keeping the wheels from falling off.
But budgetary pressures are mounting and every architecture decision will be scrutinized by the bean counters, so how can we architect a solution that satisfies both the technical and the financial requirements?
My answer is simple: design modules, not monoliths. Looking at the capabilities required by the project and adopting a modular approach to delivering them budget-proofs the solution while preserving scalability and features. Extend this concept to every component of the solution as well. Choose enterprise storage that allows more shelves to be added for easy expansion, supports multiple connection protocols and accepts expandable I/O cards. Choose server hardware that allows for vertical growth, either within the chassis by adding more resources or with a blade frame model.
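To make the idea concrete, here's a minimal sketch of the building-block approach (the numbers and names are hypothetical, not a sizing recommendation): define one repeatable unit of capability, and let total capacity be the sum of the units you can afford in each budget cycle.

```python
from dataclasses import dataclass

@dataclass
class BuildingBlock:
    """One repeatable unit of capability: hosts plus the disk shelves behind them."""
    hosts: int                  # hypervisor hosts in the block
    cores_per_host: int         # physical cores per host
    ram_gb_per_host: int        # memory per host
    shelves: int                # disk shelves in the storage frame
    usable_tb_per_shelf: float  # usable capacity per shelf

    def capacity(self) -> dict:
        """Aggregate capacity contributed by this block."""
        return {
            "cores": self.hosts * self.cores_per_host,
            "ram_gb": self.hosts * self.ram_gb_per_host,
            "storage_tb": self.shelves * self.usable_tb_per_shelf,
        }

# Start with what the budget allows, then grow by adding identical blocks
# instead of redesigning the architecture.
block = BuildingBlock(hosts=4, cores_per_host=16, ram_gb_per_host=256,
                      shelves=2, usable_tb_per_shelf=20.0)
deployment = [block] * 3   # three blocks bought over successive budget cycles

totals = {key: sum(b.capacity()[key] for b in deployment) for key in block.capacity()}
print(totals)   # {'cores': 192, 'ram_gb': 3072, 'storage_tb': 120.0}
```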
The first time I heard one of my managers use the term 'pod' to describe my architecture, I cringed. But it was valid: I had designed minimum capability sets that scaled linearly with the addition of each new set. Maybe the engineer in me would have preferred a Lego block analogy, but I was stuck with pods. My pod was a set of specific hardware that delivered a certain amount of capacity. It contained fixed configurations with limited flexibility, because the storage and servers available at the time treated scaling as t-shirt sizing: small, medium, large or extra large. We had to rely on complicated software and application architecture to ensure we could use the additional resources presented by a new pod.
In a virtualized environment, using a modular and scalable enterprise storage solution and current server hardware, we can move from a physical pod design to a logical pod. A certain number of hypervisor hosts, backed by a certain number of spindles in a storage frame, provides a set number of guests. If the workload changes, the modular approach allows us to modify all or part of the design to accommodate it. Add a disk-intensive workload? Add more shelves of disk. Add a CPU-intensive workload? Add more CPUs to the hosts or more hosts to the cluster. Because these underlying resource pools can be grown online, the software requirements are less complex. Database partitioning goes back to being a design choice for performance and scaling, not a workaround for the limitations of direct-attached storage scattered across multiple hosts.
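As a rough illustration of that logical-pod arithmetic (the ratios below are invented for the example, not planning guidance), you can express the pod as independent resource pools and let the workload change tell you which pool to grow:

```python
# A logical pod: independent resource pools sized for a target number of guests.
pod = {
    "hosts": 4,               # hypervisor hosts in the cluster
    "vcpu_per_host": 32,      # schedulable vCPUs per host
    "spindles": 40,           # disks behind the storage frame
    "iops_per_spindle": 150,  # rough planning figure, assumed for this example
}

# Per-guest planning ratios, also assumed for illustration.
VCPU_PER_GUEST = 2
IOPS_PER_GUEST = 100

def guest_capacity(p: dict) -> int:
    """Guests supported by whichever resource pool runs out first."""
    by_cpu = (p["hosts"] * p["vcpu_per_host"]) // VCPU_PER_GUEST
    by_disk = (p["spindles"] * p["iops_per_spindle"]) // IOPS_PER_GUEST
    return min(by_cpu, by_disk)

print(guest_capacity(pod))   # 60 -- disk is the limiting pool

# A disk-intensive workload arrives: grow only the storage pool (add a shelf).
pod["spindles"] += 24
print(guest_capacity(pod))   # 64 -- now CPU-bound; a CPU-heavy change means more hosts instead
```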
And when the budget axe falls, you have a predictable way to set a reasonable starting point and retain a flexible, scalable path to the desired end state without creating a 'budget bottleneck'. You've designed around delivering capabilities rather than a monolithic architecture, so missing components don't have to be compensated for by leveraging others. By selecting components that scale in the same modular fashion as the overall architecture, we shave capability to meet a budget without compromising functionality. In the long run, this not only meets your capex goals, but also delivers a more reliable, manageable and scalable system with lower overall opex across its lifecycle.
"All compromise is based on give and take, but there can be no give and take on fundamentals. Any compromise on mere fundamentals is surrender, for it is all give and no take." Who knew Gandhi was a systems architect?