Increasing your use of server virtualization is one of the easiest ways to expose an antiquated backup process. In other words:
If 20% of your server infrastructure is virtualized, you may be able to use any approach to virtual machine (VM) backups that you want, including agents inside each VM and a backup server running software that is a few years old and presumed to be “good enough.”
But when you get to 50%, 75% or 90% virtualized, legacy backup will significantly hinder the performance of your highly-virtualized, converged or hyper-converged infrastructure.
The problem is twofold:
Old backup software: In many enterprises, the backup team intentionally stays a version or two behind, so that it can be assured the backup software is fully patched and the known bugs have already been mitigated. Unfortunately, with some legacy backup software, two versions can be three or more years behind, which means it doesn't make use of modern APIs such as Microsoft VSS for Hyper-V or the VMware vStorage APIs for Data Protection (VADP); a sketch of that kind of host-based snapshot call follows this list. So, when those legacy approaches are applied to a highly virtualized environment, their mechanisms degrade the performance of production VMs and the underlying hosts.
Progressive production infrastructure: Meanwhile, server teams are trying to virtualize as many systems as they can, because of the operational efficiencies in provisioning and ongoing management. VMs are also more agile from a data recovery perspective, particularly for BC/DR scenarios. If server administrators are not aware that their backup colleagues are intentionally (or unintentionally, due to lack of budget) behind in their versions, the inadequacies in legacy backup software often go undiscovered until production performance starts to degrade.
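To make the contrast concrete, here is a minimal sketch of the host-level, quiesced snapshot request that VADP-style backup tools issue instead of running an agent inside each guest. It uses Python and the pyVmomi library purely as an illustration; the vCenter address, credentials and VM name are placeholders, and a real product would layer Changed Block Tracking and its own data movers on top of this.

```python
# Minimal sketch: a host-based, VSS-quiesced VM snapshot via the vSphere API (pyVmomi).
# Hostnames, credentials and the VM name below are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def quiesced_snapshot(vcenter, user, password, vm_name):
    # Connect to vCenter; certificate verification is disabled here for brevity only.
    context = ssl._create_unverified_context()
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == vm_name)

        # quiesce=True asks VMware Tools to drive VSS inside the guest, producing an
        # application-consistent point-in-time image; memory=False skips capturing RAM
        # so the snapshot completes quickly with minimal impact on the running VM.
        task = vm.CreateSnapshot_Task(
            name="backup-temp",
            description="temporary snapshot for host-based backup",
            memory=False,
            quiesce=True)
        WaitForTask(task)
        # A backup product would now read the VM's disks from the snapshot
        # (ideally only changed blocks) and then remove the snapshot.
    finally:
        Disconnect(si)
```

The point of the sketch is where the work happens: the hypervisor and VMware Tools do the heavy lifting once, at the host level, rather than a per-VM agent streaming data out of every guest and competing with production workloads for the same CPU, memory and I/O.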
There are two solutions:
Solution 1: Upgrade your unified backup product. The current versions of most unified backup products (meaning those that can back up both physical servers and virtual machines) use hypervisor host-based APIs for their VM backups. That does not put them on par with backup products designed specifically to protect and recover VMs, but it does mean the gap isn't as great as it is with software that is two or more years old.
Solution 2: Add a VM-specific backup and recovery product. At first, it may appear less desirable to add another backup product, but the reality is that if you are 60%+ virtualized, then your VM backup software is arguably your new “primary” backup software … and the legacy backup product is “just” for your remaining physical servers.
The irony is that if the intentionally laggard backup team had kept its backup software current, the legacy product wouldn't be inadequate or cumbersome. So, some blame can arguably fall on the unified backup vendors for not making upgrades compelling enough: upgrade deployment complexity, the cost of maintenance agreements, a lack of partner/services skills, and so on all discourage staying current.
IT managers can find themselves at a crossroads as their production team continues to modernize the infrastructure while their legacy protection products prove increasingly inadequate. As to which path to take, there isn't a definitive answer, due to tradeoffs such as deduplication efficiency, implementation complexity, distributed management/reporting, agility of VM recovery (speed and granularity) and so on.
In fact, the only definitive guidance is that when your broader IT team plans its production modernization, it should plan to modernize its backup tool set in parallel.
[Originally posted on TechTarget as a recurring columnist]