Time for Basic Discipline
Much ado is being made these days about consolidation and reducing the cost of IT, with virtualization being the driver for storage management improvements. The truth is that storage management, like application management and OS management, has always been a critical component of data centers. The fact that virtualization puts more pressure on these tasks is no excuse for having overlooked them until now. Virtualization and “cloud” initiatives are increasing the demand on data centers to the point that they have no choice but to seek efficiencies. Or perhaps it is budget pressure that leaves no choice, and the storage demands of virtualization and cloud initiatives simply make those savings harder to realize.
Symantec and NetApp…more than the sum of the parts
Data protection is an essential part of every IT strategy. A good data protection plan minimizes the risk of downtime and data loss, as well as the risk of a compliance incident. Most enterprise-level data protection implementations are complex and costly, and they require thoughtful planning to ensure that the risk of data loss is reduced to an acceptable level.
As with any technology, there is no shortage of catchphrases to distract the overburdened administrator and the budget-conscious executive: “Integrated Data Protection”, “Industry Leading”, “End to End” and yes, even “cloud”. Let’s face it. The only reason you spend a dime on this stuff is to reduce risk, because risk adds cost to your operation: the cost of data re-entry, the cost of downtime, the cost of compliance fines. Whether the system you build to deal with that risk is simple or complex is not the issue. The issue is whether the cost of the system is less than the expected cost of doing nothing.
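To make that comparison concrete, one common way to frame it is the classic annualized loss expectancy calculation. The sketch below is purely illustrative; every dollar figure and probability in it is invented for the example.

```python
# Illustrative risk arithmetic only -- all figures below are invented.
single_loss_expectancy = 250_000   # cost of one incident: re-entry, downtime, fines
annual_rate_of_occurrence = 0.2    # expected incidents per year (one every 5 years)

# Expected yearly loss if you do nothing:
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

data_protection_cost = 30_000      # yearly cost of the proposed protection system

print(f"Expected annual loss, no controls: ${annualized_loss_expectancy:,.0f}")
print(f"Annual cost of the system:         ${data_protection_cost:,.0f}")
# The system is justified when its cost is below the expected loss it prevents.
```

However simple or elaborate your data protection system, the math is the same: the control earns its keep only while its annual cost stays below the loss it averts.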
The Private Cloud Journey
“Private cloud adoption is a journey both from a technical and business perspective.”
At the recent AFCEA Cloud Lifecycle Management Symposium in DC, the discussion on government cloud computing ranged from acquisition policies to building the roadmaps around which NIST and government guidelines are centered. The vision of these roadmaps is to “easily locate desired IT services, rapidly procure access to the services, and use the services to deliver innovative mission solutions.”
But with all of the service providers and offerings available, how can government standardize and corral all of these into one simple menu of options that meets individual agency requirements? How will agencies define a successful cloud program? What are the strategies to assure success?
The Road to Private Cloud Success
I've been asked several times to help agencies evaluate their readiness to build a private cloud. Time and time again, I use the same concepts to gauge where they stand and what they should look at next: data center automation, service-oriented infrastructure, IT service management, resource orchestration, standard operating environments. Why am I bringing up ancient buzzwords in a private cloud conversation? Because without these fundamentals, your private cloud won't get very far off the ground.
An Amazon AWS VP has been quoted as saying, "If you are buying hardware, it isn't cloud". You may think, "Well of course, that's their business model. They don't want me to buy a private cloud." But the argument isn't about a business model; it is about architecture and use case. The economies of scale needed to validate a cloud model are only achievable in large deployments, and the benefits to the IT department are best realized when the shift from capital to operational expenditures is complete. A set of local resources that takes advantage of the new cloud-focused toolsets to move in a service-oriented direction may not be a private cloud, but it is still a valuable direction for those IT shops that need to retain in-house capabilities.
Implementing Data Center Consolidation
Cindy Cassill is the director of systems integration in the office of the CIO for the US Department of State. She brings more than 30 years of federal IT experience to the role. Before joining the Department of State, she was the CIO at the FAA Regions & Centers, the CIO at the US Army Test and Evaluation Command, and the director of IT for the deputy assistant secretary of the Army for civilian personnel.
This article highlights portions of Cindy Cassill’s presentation and the steps the agency took in its consolidation. Click here to download the full presentation and transcript.
Is your virtualization environment penny-wise and pound-foolish?
Enterprise virtualization solutions offer a valuable way to reduce operating expenses by removing underutilized servers from data centers. Many manufacturers now offer servers designed around virtualization workloads rather than traditional single-application workloads. Hardware extensions to CPUs and PCI buses let hypervisors present resources directly and efficiently to virtual machines, reducing the performance penalty imposed by an additional layer between the application and system resources. Storage vendors offer adaptable configurations and integration with enterprise virtualization solutions, and high-speed, high-bandwidth network interconnects deliver the throughput needed to service the consolidated network traffic.
Quantum’s StorNext Steals the Show
By Jennifer Jackson
Last month I attended Quantum’s First Annual Tech Summit in Englewood, Colorado. Not only was the weather a gorgeous 70 degrees for the conference, but Quantum provided an equally great platform for systems engineers from around the globe to make suggestions, learn from each other, and become familiar with the direction Quantum is taking.
Green Government Mandates and How to Meet Them
A recent article by a friend of ours, Caron Beesley, editor of [acronym] Online, has been getting a lot of great press lately. It discusses the innovative steps the federal government is taking to overcome many of the challenges of “going green” and to meet a range of fast-tracked mandates.
In Fast-Tracking A Greener Government – Meeting Those Mandates, Beesley noted that the federal government is the largest consumer of energy in the U.S. economy, and that its energy efficiency projects have often been hampered by cumbersome infrastructure, regulatory hairballs, and limitations on upgrading buildings.
IT Consolidation Executive Forum
After attending the well-put-together IT Consolidation Executive Forum, sponsored by DLT, NetApp, and Quest Software, I realized how important this topic is to our public sector agencies, including state and local governments. At many forums and shows of this type, IT consolidation can seem barely relevant. But given the pace of data growth, many attendees are planning data centers built on virtualization and IT consolidation technologies.
Cindy Cassill exemplified this in her presentation, giving specific examples of the implementation at the Department of State and explaining that her boss accepted the plan immediately because it showed how they could standardize and save money. She also pointed out the need to put together the right team with the right skills, mentioning the NetApp engineers, to make sure that everything was put in place correctly.
To Snap or Not to Snap, That is the Question
NetApp has a robust data protection suite that can archive and replicate data as well as integrate with most primary applications, including collaboration, database, web, and virtualization technologies. The foundation for this data protection is the NetApp Snapshot. A Snapshot is a read-only, point-in-time image of the active file system. Snapshot technology is an integrated feature of WAFL (Write Anywhere File Layout), a block-based file system that uses inodes to reference files and is built into Data ONTAP, the microkernel that runs on all NetApp storage.
When a Snapshot is requested, WAFL creates it by making an exact copy of the root inode. This copy becomes the root of the data representing the Snapshot, just as the root inode represents the active file system. When the Snapshot inode is created, it points to exactly the same disk blocks as the root inode, so a brand-new Snapshot consumes no disk space except for the Snapshot inode itself.
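To see why copying the root inode is so cheap, here is a minimal copy-on-write sketch in Python. It is illustrative only, not NetApp code: the class and method names are invented for this example, and real WAFL works with disk blocks and inodes rather than Python dictionaries.

```python
# Toy copy-on-write snapshot, loosely in the spirit of WAFL.
# Illustrative only -- all names here are invented for this sketch.

class Block:
    """A data block. Writes never modify a block in place; they allocate a new one."""
    def __init__(self, data):
        self.data = data

class ToyFileSystem:
    def __init__(self):
        self.root = {}          # stands in for the root inode: name -> Block
        self.snapshots = {}     # snapshot label -> frozen copy of a root

    def write(self, name, data):
        # Allocate a fresh block; any snapshot still references the old one.
        self.root[name] = Block(data)

    def snapshot(self, label):
        # "Copy the root inode": duplicate only the top-level map.
        # The copy points at exactly the same blocks, so a brand-new
        # snapshot consumes almost no extra space at creation time.
        self.snapshots[label] = dict(self.root)

    def read(self, name, label=None):
        tree = self.snapshots[label] if label else self.root
        return tree[name].data

fs = ToyFileSystem()
fs.write("report.txt", "v1")
fs.snapshot("nightly")                    # cheap: copies references, not data
fs.write("report.txt", "v2")              # new block; the snapshot is untouched
print(fs.read("report.txt"))              # v2 -- active file system
print(fs.read("report.txt", "nightly"))   # v1 -- read-only point-in-time image
```

The space cost shows up only as the active file system diverges from the Snapshot: blocks still referenced by a Snapshot cannot be freed, so long-lived Snapshots of fast-changing data are what actually consume capacity.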