“There are three kinds of lies: Lies, damned lies, and statistics.” -- Mark Twain
We often claim to live in a data-driven world. Yet measuring the wrong data can appear to prove success while masking a failure to deliver value.
The post-mortem of the recent election led to an examination, or maybe an excoriation, of polling. There were claims of bad models, bad information, and bad citizens in the news. Many of the on-air pundits were caught flat-footed during coverage as exit polling didn't match called results. Some campaign officials made statements about how things might have been run differently with better poll data.
I have a co-worker, a Jets fan, with a surprisingly even-handed opinion of Tim Tebow. The sports media, conversely, has a very different and far more polarizing point of view. Depending on whom you listen to, either he can't play quarterback because of his low completion percentage, or he can because of his fourth-quarter performance.
What do the recent election polls and Tim Tebow have to do with IT?
These are examples of selection bias in metrics. Were the election polls measuring the right data needed to provide useful information to the pundits and campaigners? Can you be a good quarterback, with a good win record, regardless of completion numbers? I'm not talking about the accuracy of the numbers, but their relevance. Are we measuring what really matters to our success?
Key Performance Indicators (KPIs) and Service-Level Agreements (SLAs) are the “Sword of Damocles” hanging over all IT shops, and too often our SLAs measure things that don't truly reflect the success of the applications they support. One hundred percent uptime means absolutely nothing if it results from the fact that you have no users. Microsecond database response times mean nothing if the web portal is offline. And we've all been in situations where the indicators have been cherry-picked to reflect success regardless of reality.
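To make the uptime point concrete, here is a minimal sketch, using entirely invented numbers and a hypothetical minute-by-minute sampling scheme, that contrasts a classic raw-uptime SLA with a user-weighted availability figure:

```python
# Hypothetical illustration: two ways to score the same day of service.
# All numbers below are invented for the sake of the example.

# Minute-by-minute samples: (system_up, active_users)
samples = [
    (True, 0),     # overnight: up, but nobody is using it
    (True, 0),
    (True, 120),   # morning peak begins
    (False, 150),  # a one-minute outage during peak usage
    (True, 90),
]

# Classic SLA view: fraction of sampled minutes where the system was up.
uptime = sum(up for up, _ in samples) / len(samples)

# User-weighted view: fraction of demanded user-minutes actually served.
served = sum(users for up, users in samples if up)
demanded = sum(users for _, users in samples)
user_availability = served / demanded

print(f"raw uptime:        {uptime:.0%}")           # 80%
print(f"user availability: {user_availability:.0%}")  # 58%
```

A single outage minute during the peak drags the user-weighted number far below the raw uptime, while the quiet overnight hours inflate it; the same mechanism is how a system with no users at all can report a flawless 100%.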
This isn't restricted to the micro level of performance management of applications. True determinations of cost effectiveness, total cost of ownership, or return on investment are all difficult to calculate because of the number of potential variables. We, as IT professionals, don't always look at the right variables when building those sorts of comparisons between current and future architectures.
These measures don't necessarily tell the whole story. What happens when your footprint-reduction measures increase the overall time to deploy new applications because of new sizing complexity? Or when infrastructure complexity impacts your overall ability to react to new workloads and requirements?
The Office of Management and Budget (OMB) recently announced that FDCCI monitoring will be added to PortfolioStat. Part of the reason is that data center closures are not the only factor in delivering the projected savings. Closing a single data center eliminates the direct costs of that facility. However, unless the services provided by that facility are completely eliminated, there are migration costs that have the potential to exceed those savings. So the OMB will be searching for a broader set of indicators to measure the true savings of closing data centers across the federal government.
Vendors can be complicit in these misadventures. We've all seen 'benchmark-eting', where some extreme configuration can process billions of unrealistic transactions faster than the speed of light. There's no useful, real-world information there, which is why we have SPEC and other standardized benchmarks that try to reflect real-world use cases. Even then, those benchmarks are still open to interpretation and optimization that may not translate to real-world performance.
The bottom line is this: the simple act of measurement doesn't prove success. What we measure is just as important as how we measure it, and it needs to be appropriate to the situation. EPA MPG is a bad measure for both a Formula 1 car and an electric vehicle, but for entirely different reasons. So when figuring out which dial to watch and which knob to turn, make sure you are focused on the right goals.
Edit: Just after publishing my article, FCW posted “Forman: FDCCI Cost Savings Are 'Smoke and Mirrors'”. Mark Forman, former administrator for e-government and IT at OMB, reiterates a few points I made and presents others worth reading.
Photo courtesy of Filmkarma.blogspot.com