Virtual data protection requires fresh approach to backup

Each year, Enterprise Strategy Group publishes its IT Spending Intentions Survey, and for the past three years, “improving data backup and recovery” and “increased use of server virtualization” have been among the top three stated IT priorities. While their positions bounce around within the top echelon, they are always adjacent in both the overall and midsize organization lists.

The reason for this is that you have to modernize your protection capabilities at the same time you modernize your production infrastructure. Some IT architects understand that from the start. Others begin to modernize their production infrastructure only to realize that their legacy protection capabilities are insufficient and can hinder the performance of the production platforms. Said another way:

  • If you’re 20% virtualized, you can probably use any legacy approach to backup without much issue.
  • If you’re 40% virtualized, you’ll likely realize your legacy solution is inconvenient and inefficient, creating additional Opex and Capex costs.
  • By the time you’re 60% virtualized or more, you’ll discover your legacy backup solution isn’t keeping pace and is actually impacting the performance of your production hosts, virtual machines (VMs) and the underlying storage/networking infrastructure.

From a purely technical perspective, older methods of backup presume significant free resources on production servers, a safe assumption when so many physical servers sat underutilized, but one that virtualization deliberately breaks by consolidating workloads until hosts run near capacity. When less-than-efficient backup mechanisms are applied to VMs, particularly in dense blade or converged infrastructure deployments, the CPU and I/O burden is felt not only by that VM, but by the underlying host, all of the adjacent VMs, and even the supporting storage and networking stacks. An in-guest agent scanning 20 VMs on one host is, from the hardware’s point of view, 20 simultaneous full scans of the same physical disks and NICs.

In addition, older solutions typically don’t support I/O-saving features such as VMware’s Changed Block Tracking or the client-side deduplication mechanisms designed for highly virtualized environments, and back-end integration with deduplicated storage typically doesn’t work well with legacy approaches to VM backup. All of this results in additional I/O consumption, degraded performance, and larger Capex investments in extra disk storage and network bandwidth.
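To make Changed Block Tracking concrete, here is a minimal sketch of a hypervisor-assisted incremental backup using pyVmomi, VMware’s Python SDK. The vCenter address, credentials, VM name and the block transfer itself are placeholders, and error handling is omitted; treat this as an illustration of the API flow, not a working tool.

    # Hedged sketch: hypervisor-assisted incremental backup with VMware's
    # Changed Block Tracking (CBT), via pyVmomi. Host, credentials and VM
    # name are placeholders; real products move the data over VDDK/NBD.
    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def wait_for(task):
        """Poll a vSphere task to completion (simplified)."""
        while task.info.state in (vim.TaskInfo.State.queued,
                                  vim.TaskInfo.State.running):
            time.sleep(1)
        return task.info.result

    def changed_extents(vm, snapshot, disk_key, capacity, change_id):
        """Ask the hypervisor which extents changed since change_id,
        instead of scanning every block from inside the guest."""
        offset = 0
        while offset < capacity:
            info = vm.QueryChangedDiskAreas(snapshot=snapshot,
                                            deviceKey=disk_key,
                                            startOffset=offset,
                                            changeId=change_id)
            for area in info.changedArea:
                yield area.start, area.length
            offset = info.startOffset + info.length

    si = SmartConnect(host="vcenter.example.com", user="backup-svc",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "app-server-01")

    # Enable CBT once per VM; the hypervisor starts recording dirty blocks.
    wait_for(vm.ReconfigVM_Task(vim.vm.ConfigSpec(changeTrackingEnabled=True)))

    # Take a quiesced snapshot as an application-consistent backup point.
    wait_for(vm.CreateSnapshot_Task(name="backup", description="CBT demo",
                                    memory=False, quiesce=True))
    snap = vm.snapshot.currentSnapshot

    # changeId "*" means "all allocated areas" (a full backup); passing the
    # changeId saved from the last run makes the next backup incremental.
    for disk in (d for d in vm.config.hardware.device
                 if isinstance(d, vim.vm.device.VirtualDisk)):
        for start, length in changed_extents(vm, snap, disk.key,
                                             disk.capacityInBytes, "*"):
            pass  # read and ship only [start, start + length)

    Disconnect(si)

The point of the flow is in changed_extents: the hypervisor, not an in-guest agent, tells the backup application what to read, which is where the I/O savings come from.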

From an operational perspective, legacy backup mechanisms usually require more daily management and ongoing configuration changes, two things modern virtualization-optimized solutions don’t need. The result is not only less hands-on management, but better protection through automation, orchestration, and integration with the hypervisor and private cloud management stacks.
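As one illustration of what that hypervisor integration buys you, here is a hypothetical sketch (again pyVmomi; the vCenter address, credentials and policy name are invented) in which the backup service discovers every VM in the inventory and enrolls unprotected ones automatically, rather than an administrator editing jobs each time a VM is created.

    # Hedged sketch: policy-based auto-protection via hypervisor inventory
    # discovery. The "protected" set stands in for a real product's job
    # catalog; vCenter address, credentials and policy name are invented.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    protected = set()  # stand-in for the backup product's job database

    def auto_protect(si, policy="silver-daily"):
        """Enroll every discovered VM that isn't yet assigned to a policy."""
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.config and vm.config.template:
                continue  # skip templates
            if vm.name not in protected:
                protected.add(vm.name)
                print(f"enrolled {vm.name} in policy {policy}")

    si = SmartConnect(host="vcenter.example.com", user="backup-svc",
                      pwd="********",
                      sslContext=ssl._create_unverified_context())
    auto_protect(si)
    Disconnect(si)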

Putting this all together, a modern data protection solution should do the following:

  • Protect Hyper-V and VMware-hosted VMs, using hypervisor-based backup application programming interfaces
  • Coordinate the control of backups, snapshots on primary storage, and replication to tertiary storage
  • Enable deduplication, including source/production, backup server and storage array options (see the source-side sketch after this list)
  • Integrate with the private cloud or software-defined data center management toolkits, as well as the hypervisor management user interfaces
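
To illustrate the source-side option from the list above, here is a toy sketch of client-side deduplication: the backup client hashes each chunk and ships only chunks the target hasn’t already stored, trading a little CPU at the source for large savings in network traffic and back-end disk. The fixed 4 MiB chunk size and the in-memory index are simplifications; real products use content-defined chunking and persistent hash indexes.

    # Hedged sketch: source-side (client) deduplication with fixed-size
    # chunks and a SHA-256 index. Everything here is illustrative.
    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # illustrative fixed chunk size

    class DedupeTarget:
        """Stand-in for a deduplicating backup server or storage array."""
        def __init__(self):
            self.chunks = {}  # SHA-256 digest -> chunk bytes

        def has(self, digest):
            return digest in self.chunks

        def put(self, digest, data):
            self.chunks[digest] = data

    def backup(path, target):
        """Send a file's recipe (hash list) plus only its unique chunks."""
        recipe, sent, skipped = [], 0, 0
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                recipe.append(digest)
                if target.has(digest):
                    skipped += 1  # duplicate: nothing crosses the wire
                else:
                    target.put(digest, chunk)
                    sent += 1
        print(f"{path}: {sent} chunks sent, {skipped} deduplicated")
        return recipe

    # Hypothetical image files, for illustration only.
    target = DedupeTarget()
    backup("vm-disk-monday.img", target)   # first run: everything is new
    backup("vm-disk-tuesday.img", target)  # later runs: mostly deduplicated

Because most blocks of a VM image don’t change from one backup to the next, the second run sends only a fraction of the data, which is exactly the Capex and bandwidth argument made earlier.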

If you’re already headed full-speed toward a modern and highly virtualized data center, and your data protection solution doesn’t have the capabilities listed above, your production infrastructure is modernizing faster than your protection solution. That’s like having a car whose left tires spin faster than its right tires. And, just like the car, you’ll find yourself going in circles at some point, perhaps sooner than you think.

 [Originally posted on TechTarget as part of a recurring column]
