Virtualization, the dark side
The race to virtualize everything has created a host of unintended consequences, not the least of which is how to meet the SLAs (service level agreements) for application backup. As we move into cloud alternatives, this problem will only grow, since your cloud provider will have to provide this to you on an application-by-application basis.
Every virtual machine is essentially a set of large files, such as VMDKs in a VMware context. These large files are typically stored on storage arrays connected via iSCSI or Fibre Channel, or on NFS volumes. Data protection techniques such as VMware's VADP, or the older VMware Consolidated Backup (VCB), work at the image level to protect the VMDK files associated with virtual servers.
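To make the "VMs are just big files" point concrete, here is a minimal sketch that walks a vCenter inventory and prints the VMDK backing each virtual disk. It assumes the open-source pyVmomi library; the hostname and credentials are placeholders for your own environment.

```python
# Minimal sketch, assuming pyVmomi; host/user/pwd are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="readonly", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:          # skip templates / inaccessible VMs
            continue
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk):
                # dev.backing.fileName is the datastore path of the VMDK that has to be protected
                size_gb = dev.capacityInKB // (1024 * 1024)
                print(f"{vm.name}: {dev.backing.fileName} ({size_gb} GB)")
finally:
    Disconnect(si)
```

Even a small environment quickly turns into a long list of multi-gigabyte files, which is exactly the protection problem the SLA discussion above is about.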
Pssssst... Guess what? Storage Foundation just got its Dedupe on
Alright everyone, stop talking to Siri for a minute and listen up. Storage Foundation just got Deduplication. Let me say it one more time: STORAGE FOUNDATION JUST GOT DEDUPLICATION. Just in case you missed any of that, here you go…
Time for Basic Discipline
Much ado is being made these days about consolidation and reducing the cost of IT, with virtualization being the driver for storage management improvements. The truth is that storage management, along with application management and OS management, has always been a critical component of data centers. The fact that virtualization puts more pressure on these tasks is no excuse for having overlooked them until now. Virtualization and “cloud” initiatives are increasing the demand on data centers to the point that they have no choice but to seek efficiencies. Or perhaps it is budget pressure that offers no choice, and the storage demands of virtualization and cloud initiatives are making it harder to realize the savings.
Getting Started with NetApp Storage Efficiency
This is my advice for customers who want to get started with storage efficiency:
• Consider SATA drives instead of Fibre Channel
• Enable Dedupe
• (Use Flash Cache as insurance against bad performance)
NetApp has other efficiency features too (thin provisioning, cloning, compression, and so on), but I’ve found that customers often start with SATA and dedupe. SATA because it saves so much money, and dedupe because it’s so easy to turn on and comes free with ONTAP.
When I talk with customers who are using SATA and dedupe, they are usually happy with NetApp, and pleased with their storage costs. When customers are haggling over price but haven’t at least considered these features, I wonder what they are thinking.
SATA with Flash Cache doesn’t always match the performance of Fibre Channel, but when it does, it can cut your costs in half. It’s definitely worth considering! We have many happy customers using it for production data. Home directories are a good place to start. Email is another, especially with the most recent versions of Exchange. Some customers use it for databases, depending on the workload.
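If you want to sanity-check the "cut your costs in half" claim against your own numbers, a back-of-the-envelope comparison like the sketch below can help. Every price and dedupe ratio in it is a made-up placeholder; plug in your own quotes and measured savings.

```python
# Back-of-the-envelope sketch; all prices and ratios below are hypothetical placeholders.
def effective_cost_per_gb(raw_cost_per_gb, dedupe_savings_pct, cache_cost_per_gb=0.0):
    """Cost per logical GB stored, after dedupe, including any Flash Cache amortized per GB."""
    logical_per_raw = 1.0 / (1.0 - dedupe_savings_pct)   # e.g. 30% savings -> ~1.43x logical data per raw GB
    return (raw_cost_per_gb + cache_cost_per_gb) / logical_per_raw

fc   = effective_cost_per_gb(raw_cost_per_gb=5.00, dedupe_savings_pct=0.30)
sata = effective_cost_per_gb(raw_cost_per_gb=1.50, dedupe_savings_pct=0.30, cache_cost_per_gb=0.50)
print(f"FC: ${fc:.2f}/GB   SATA + Flash Cache: ${sata:.2f}/GB   ratio: {fc / sata:.1f}x")
```

The point is not the specific output, it is that the comparison is easy to run before you sign the purchase order.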
Got NetBackup 7.5 Beta?
Come one, come all, come see the greatest spectacle in the known universe… NetBackup 7.5 Beta. It removes excess body hair, cures mad cow disease, protects against the occasional snake bite, and Justin Bieber.
Okay, so maybe these features are not in NetBackup 7.5. However, you can find these:
• Primary Replication Management: unified, policy-based management of backups, snapshots, and replication
• Deduplication of storage in multiple deployment environments
• A single, dynamic data protection solution across physical and virtual data centers, including mission-critical applications
• Support for cloud-based offsite storage
• Search of metadata associated with backup images
Want to learn more? Symantec’s NetBackup Guy can help! One size does not fit all when it comes to protecting applications; check out more information here:
Business Impact Analysis: The Foundation of a Disaster Recovery Plan
Consider the following statistics taken from the Disaster Recovery Journal (Winter 2011):
• A single incident of data loss can cost a company an average of $10,000
• 93 percent of companies that lost their data for 10 days or more filed for bankruptcy within a year
• 40 percent of businesses that suffer a loss of data fail within five years
And while most companies and organizations have taken Disaster Recovery seriously, they often fail to conduct a proper BIA, or Business Impact Analysis, and to properly test their plan for appropriateness, which often results in losses.
A BIA, or Business Impact Analysis, is exactly what it sounds like: proper research to determine what the business impact would be if an application, website, database, HR document, and so on, were not available for a given period of time. Perhaps if a database were not available for an hour there would be little impact, but if it were down for a day, the outage would be critical. It is important to do an accurate study to determine where those pain points are for all aspects of your organization and to review them regularly for changes in criticality. While this sounds like the absolute foundation for all DR plans (and it is), I have regularly encountered both government agencies and private companies that fail to take this most basic step. They either consider everything to be critical (it isn’t) or they only back up a few servers that they think contain their most important documents and data. Neither approach accomplishes suitable DR.
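As a rough illustration of what a BIA produces, here is a toy worksheet that ranks applications by how long the business can tolerate them being down. The applications, downtime tolerances, and hourly impact figures are entirely hypothetical; the exercise of gathering your own numbers is the real work.

```python
# Toy BIA worksheet; every application, tolerance, and dollar figure is a hypothetical placeholder.
from dataclasses import dataclass

@dataclass
class Application:
    name: str
    max_tolerable_downtime_hrs: float   # how long the business can run without it
    impact_per_hour: float              # estimated cost per hour of unavailability

apps = [
    Application("customer-facing web store", 1, 20_000),
    Application("reporting database", 24, 1_500),
    Application("internal HR document share", 72, 200),
]

def tier(app: Application) -> str:
    if app.max_tolerable_downtime_hrs <= 4:
        return "critical"
    if app.max_tolerable_downtime_hrs <= 24:
        return "important"
    return "deferrable"

for app in sorted(apps, key=lambda a: a.max_tolerable_downtime_hrs):
    exposure = app.max_tolerable_downtime_hrs * app.impact_per_hour
    print(f"{app.name:32s} tier={tier(app):10s} exposure at tolerance limit = ${exposure:,.0f}")
```

Notice that not everything lands in the "critical" tier, which is exactly the outcome the paragraph above argues a real BIA should produce.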
Symantec Acquires Clearwell - Finally, an acquisition that makes sense!
Symantec recently announced the acquisition of Clearwell Systems, a move that came as no surprise to folks working in the archiving and e-discovery arena. However, in this day and age of “smash and grab” acquisitions in the IT industry, it was nice to see a partnership come together that makes good sense, both business-wise and technology-wise!
Symantec and NetApp…more than the sum of the parts
Data protection is an essential part of every IT strategy. A good data protection plan minimizes the risk of downtime and data loss, as well as the risk of a compliance incident. Most enterprise-level data protection implementations are complex and costly, and they require thoughtful planning to ensure that the risk of data loss is reduced to an acceptable level.
As with any technology, there is no shortage of catch phrases to distract the overburdened administrator as well as the budget-conscious executive. Phrases like “Integrated Data Protection”, “Industry Leading”, “End to End” and yes, even “cloud”. Let’s face it, the only reason you spend a dime on this stuff is to reduce risk, because risk adds cost to your operation: the cost of data re-entry, the cost of downtime, or the cost of compliance fines. How simple or complex a system you create to deal with that risk is not the issue. The issue is whether the cost of the system is less than the risk of doing nothing.
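One simple way to frame that comparison is annualized loss expectancy: estimate the cost of a data-loss incident, multiply by how often you expect one in a year, and compare the result to what the protection system costs per year. The sketch below walks through that arithmetic with hypothetical figures.

```python
# Sketch of "cost of the system vs. risk of doing nothing"; all figures are hypothetical.
def annualized_loss_expectancy(loss_per_incident, incidents_per_year):
    """Classic ALE: expected loss per incident times expected incident frequency."""
    return loss_per_incident * incidents_per_year

ale_without_protection = annualized_loss_expectancy(loss_per_incident=250_000, incidents_per_year=0.2)
annual_cost_of_system = 35_000   # licenses, hardware, admin time

print(f"ALE of doing nothing: ${ale_without_protection:,.0f}/yr")
print(f"Cost of protection:   ${annual_cost_of_system:,.0f}/yr")
print("Worth it" if annual_cost_of_system < ale_without_protection else "Re-scope the design")
```

The numbers will be argued over in every organization, but putting them side by side is what keeps the catch phrases honest.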
Why NetBackup Appliance?
Data protection architectures are by necessity complex in nature, as they involve the cold calculus of many factors. There is no “one size fits all” approach to data protection, because the operational requirements of each organization dictate how data is used, and the local risk assessment process dictates, to some extent, how it will be protected.
Part of the data protection strategy is the backup/restore process. The simplest of these architectures involves a management tier, a process tier, and a storage tier (disk, tape, or both). Symantec’s answer to this is the NetBackup Appliance: a 4U, 32TB RAID 6 stack that includes all three layers in an appliance form factor. It comes with two 1GbE ports, two 10GbE ports, and two 4Gb Fibre Channel HBAs. It also has a direct-to-tape capability for off-site backup replication. Symantec’s deduplicate-anywhere capability is an integral part of this appliance, which extends the scope of data protection significantly.
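As a quick sanity check on sizing, you can estimate how long a full backup of the 32TB capacity would take over the two 10GbE ports. The sketch below does that arithmetic; the link utilization and deduplication figures are assumptions, not measured numbers for the appliance.

```python
# Rough backup-window arithmetic; link speeds come from the spec above,
# utilization and dedupe reduction are assumed placeholders.
def backup_window_hours(data_tb, links_gbps, utilization=0.6, dedupe_reduction=0.0):
    effective_gbps = sum(links_gbps) * utilization          # usable aggregate throughput
    bytes_to_move = data_tb * 1e12 * (1.0 - dedupe_reduction)
    seconds = bytes_to_move * 8 / (effective_gbps * 1e9)
    return seconds / 3600

# Full 32 TB over the two 10GbE ports, then the same job assuming 90% of blocks are deduplicated at the source
print(f"{backup_window_hours(32, [10, 10]):.1f} h full, "
      f"{backup_window_hours(32, [10, 10], dedupe_reduction=0.9):.1f} h with client-side dedupe")
```

It is exactly this kind of arithmetic, run against your own data set and window, that decides whether a single appliance fits or whether you need more of them.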
Enterprise Vault for Beginners: What’s Indexing All About?
One of the first tasks that an Enterprise Vault administrator will perform is configuring the Enterprise Vault indexes. Put simply, the indexes allow the searching of archived items – kind of an important thing. If you were to organize your workspace, wouldn’t you want to know where you placed your Red Swingline stapler or your “Jump to Conclusions” mat? Well, indexes let you know where they are.
With indexing there are three different levels that an administrator can specify: Brief, Medium, and Full. The actual index size will be roughly a percentage of the original items: about 3% for Brief, 8% for Medium, and 12% for Full. The Full indexing level gives you the most granular searches when searching the HTML and text versions of the items in the archive.
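Using the percentages above, you can roughly estimate how much index storage an archive will need at each level. The sketch below does exactly that; the 2TB archive size is just an example figure, not a recommendation.

```python
# Quick index-sizing estimate based on the percentages quoted above
# (Brief ~3%, Medium ~8%, Full ~12% of the original item size).
INDEX_OVERHEAD = {"Brief": 0.03, "Medium": 0.08, "Full": 0.12}

def estimated_index_gb(archive_size_gb, level):
    return archive_size_gb * INDEX_OVERHEAD[level]

archive_gb = 2048   # hypothetical 2 TB of archived items
for level in ("Brief", "Medium", "Full"):
    print(f"{level:6s}: ~{estimated_index_gb(archive_gb, level):,.0f} GB of index storage")
```

Running the estimate before you pick a level makes the trade-off explicit: more granular search, in exchange for a bigger index footprint.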