Blog categories

My blog posts and tweets are my own, and do not necessarily represent the views of my current employer (ESG), my previous employers or any other party.

I do not do paid endorsements, so if I appear to be a fan of something, it is based on my personal experience with it.

If I am not talking about your stuff, it is either because I haven't worked with it enough or because my mom taught me "if you can't say something nice ... "

Wrap-Up on Backup from Microsoft Ignite (with Video)

As ESG often does, here is a short video summarizing ESG’s impressions from a major industry event – Microsoft Ignite, held in Atlanta on September 26-29, 2016 – from a backup perspective.

In the video, I suggested that Microsoft is a leader in Windows data protection. Certainly, this is not to disparage the many Microsoft partners that have built whole companies and product lines around data protection. And from a revenue perspective, Microsoft’s backup offerings wouldn’t register at all.  But …

  • Almost every version of Windows has shipped with a built-in backup utility to address the immediate, per-machine need for ad-hoc backups or file roll-back, with today’s “Previous Versions” functionality more closely resembling software-based snapshots than “backup” per se. That said, it has always been recognized that a more full-featured, multi-server solution is almost always warranted.
  • Of course, Microsoft has been producing the Volume Shadow Copy Service (VSS) for over a decade, which is the underpinning of how virtually every backup vendor protects data on Windows systems (see the small illustration after this list).
  • Microsoft has been shipping its “for sale” backup offering, Data Protection Manager (within System Center), for over a decade. And though a good number of Hyper-V-centric environments use DPM, the greater impact is how DPM has given Microsoft insight into how to improve VSS, thereby improving every data protection offering in the market.
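As a small illustration of how visible VSS is even outside of any backup product, the snippet below simply shells out to the built-in vssadmin utility on a Windows host to list existing shadow copies. This is a minimal sketch for illustration only (it assumes an elevated prompt on a Windows machine), not anything Microsoft announced at Ignite.

```python
# Minimal illustration: list existing VSS shadow copies on a Windows host
# by calling the built-in vssadmin utility (requires an elevated prompt).
import subprocess


def list_shadow_copies() -> str:
    """Return the raw output of 'vssadmin list shadows'."""
    result = subprocess.run(
        ["vssadmin", "list", "shadows"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print(list_shadow_copies())
```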

The point is – Microsoft is not new to “backup.” It hasn’t previously been a monetary focus, but Microsoft has consistently recognized backup as intrinsic to its management story and to assuring satisfaction with Windows Server and its application server offerings.  All of that may be changing with Azure as the crown jewel of the Microsoft ecosystem and OMS as a cloud-based management stack.

Azure Backup is a key pillar of OMS, just like “backup” is a key pillar of many management strategies, alongside “provisioning,” “monitoring,” and “patching.”  IT operations folks who are responsible for those latter three activities increasingly want to handle backups (as preparation for recoveries) as well … which makes Azure Backup something to watch for its own sake, and for the sake of the data protection market in 2017, as many organizations reassess their partners while embracing cloud services.

[Originally blogged via ESG’s Technical]

New Veritas = New Vision

This week, the newly unencumbered Veritas (from Symantec) relaunched its premier user event – Veritas Vision. There was a palpable energy that resonated around “we’re back and ready to resume our leadership mantle,” starting with an impressive day one from main stage:

  • Bill Coleman (CEO) opened the event by making “information is everything” personal, tying medical data to a young girl with health struggles in a way that resonated with everyone, and then made it “real” by revealing her as his niece.
  • Ana Pinczuk (CPO) introduced the journey that Veritas wants to take with its customers and ecosystem – from what you already rely on Veritas for to ambitious data management and enablement – with an impressive array of announcements, almost all of which coupled Veritas’ established flagships with emerging offerings that unlock some very cool data-centric capabilities.
  • Mike Palmer (SVP, Data Insights), who is arguably the best voice on the Veritas stage in a very long time, delivered a brilliant session describing data and metadata through the movie Inside Out, tying the movie’s memory globes to data and the colors to metadata, combined with formative insights, pools of repositories, outcomes, etc.  I’ll be hard-pressed to use any other analogy to describe metadata for a long time to come.  Vendors, if you want to see how to completely nail a keynote, watch the recording of Mike’s session.

And that was just the first two hours.  Along the way, the new Veritas also wanted everyone to know that they are combining two decades of storage/data protection innovation with a youthful, feisty aggressiveness against perceived legacy technologies, with EMC + Dell being the punchline of many puns and direct takedowns. That could have come across as mean or disrespectful, but was delivered with enough wit that it served to bring the room together. The competitive digs may have had arguable merit, but they clearly cast Veritas as a software-centric, data-minded contrast to hardware vendors – with a level of spunk that ought to energize its field, partners, and customer base.

As further testament to their approach of combining flagships with emerging offerings, many of the breakouts leveraged multiple, integrated Veritas products for solution-centric outcomes — which candidly is their best route to the ambitious journey that Veritas is embarking on. Gluing the new journey together through integrated solutions that are then underpinned by products (instead of jumping straight to what’s new in product X this year) will be a key thing to watch as the new Veritas continues to redefine itself.  As a reminder that “Veritas” is much more than “NetBackup,” check out their current portfolio.

For further impressions on the event, check out ESG’s video coverage from the event:


Congratulations Veritas on a fresh vision (and Vision) that ought to propel you into some exciting opportunities.

[Originally blogged via ESG’s Technical]

Is Your Data Protection Strategy Suffering a Civil War?

I am a huge fan of the Marvel movies. Each of the individual hero movies has done an awesome job contributing to the greater, albeit fictional, universe. Each of the heroes has a unique role to play within the Avengers team. And yet, in the latest movie, which released on Blu-ray today, this colorful array of heroes appears divided.  They have similar goals but seemingly opposing methods that put them at odds with each other. Data protection can have similar contradictions.


The Spectrum of Data Protection activities can seem similarly conflicted.  We often talk about the spectrum as a holistic perspective on the myriad data protection outcomes—and the potentially diverse tools that enable those outcomes.  And yet, sometimes, the spectrum can appear opposed to itself:

  • Some in your organization are focused on “data management” (governance, retention, and compliance), which centers on how long you can or should retain data in a cost-effective way that unlocks the data’s value.
  • Others in your organization are focused on “data availability” (assured availability and BC/DR), as part of ensuring the users’ and the business’ productivity.

Do these goals actually contradict?  No.

But … you have to start with the common core: data protection, powered by a complementary approach of backup, snapshots, and replication. As backup evolves into data protection, though, many organizations come to a crossroads where that evolution goes down only one path or the other—data management or data availability.

We’ll have to wait until next year to see how the Avengers reconcile into a single team again—but you can’t afford to wait that long. Start with your core focus areas and then evolve toward the edges, rather than coming from the edges in.

[Originally blogged via ESG’s Technical]

Why you still need backup … and beyond

Any data protection, preservation, and availability strategy is grounded in “backup,” period. Yes, a majority of organizations supplement backups with snapshots, replicas, or archives, as shown in what ESG refers to as the Data Protection Spectrum:


And as much as some of those other colors (approaches to data protection) can add agility or flexibility to a broader data protection strategy, make no mistake: for organizations of any and all sizes, backup still matters!

In fact, ESG wrote a brief on the relevance of backup today, within the context of how other methods supplement backups and vice versa. ESG is now making this brief publicly available, courtesy of Commvault.

Click here to download the ESG brief

Why You Still Need Backup

In fact, Commvault believes so much in this backup-centric and yet comprehensive approach to data management, protection, and recovery, that they’ve invited me to speak at their Commvault GO conference in October at a session aptly named, Why you still need Backup….and Beyond (session description below)

ESG research shows that for the past five years, improving data backup and recovery has consistently been one of the IT priorities most reported by organizations. However, to evolve from traditional backup to true data management is to get smarter on all of the iterations of data throughout an infrastructure, including not only the backups/snapshots/replicas/archives from data protection, but also the primary/production data and the copies of data created for non-protection purposes, like test/dev, analytics/reporting, etc. Further, the cloud offers a new way to approach data protection, disaster recovery, and some of those non-protection use cases. In this session, leading industry analyst Jason Buffington discusses the trends in data protection today and the market shifts that customers MUST understand in order to keep pace with the changing IT landscape.

You can click here to find out more about the sessions at the conference.

My thanks to Commvault for syndicating the brief and the chance to share ESG’s perspectives on how the realm of data protection and data management is changing, and what to look for as it does. See you in Orlando!

[Originally blogged via ESG’s Technical]

The gold standard for data protection keeps evolving

Yes, of course, data protection has to evolve to keep up with how production platforms are evolving, but I would offer that the presumptive ‘gold standard’ for those on the front lines of proactive data protection is evolving in at least three different directions at the same time.

Here is a 3-minute video on what we are seeing and what you should be thinking about as the evolutions continue.

As always, thanks for watching.

Video transcript:

Announcer: The following is an ESG video blog.

Jason: Hi. I’m Jason Buffington. I’m a Principal Analyst at ESG covering all things data protection. The gold standard for data protection has evolved over the years. Way back in the day, it was disk-to-disk-to-tape, D2D2T. Meaning, we’d first try to recover from disk for rapid restoration, or we’d go to tape as a longer-term medium. There were a few folks out there that only used tape. There were even fewer folks out there that said you only needed disk. But most of us figured out we should use both as a better-together scenario.

Fast forward a couple of years and the gold standard changed. Now we’re talking about supplementing those longer-term retention tiers of backups with snapshots for even faster recovery, and replication for data survivability and agility. Again, there are a few folks out there that will try to convince you that one might usurp the others. But most of us have figured out that it makes sense to have all of them in complement or in supplement to each other.

Fast forward a few more years and the gold standard continues to evolve. Now what we’re seeing is that backup is actually fragmenting: we’re seeing different solutions for virtualization than for databases than for “unified” backup. But the problem is the unified solutions don’t typically cover SaaS. So now we’re up to four different backup products.

Do you really need four different backup products? Probably not. But based on what we’re seeing in the industry, evidently there are a lot of IT operations teams and IT decision makers out there that haven’t been able to convince their V-admin and DBA colleagues that a unified solution might be superior. And that brings up two challenges. Workload owners tend to think about 30-day, 60-day, 90-day rollback in order to keep their platforms productive, whereas a backup admin tends to think in 5-year, 10-year, 15-year retention windows. And corporate data has to be protected to a corporate standard regardless of how many different people are clicking the mouse.

The other thing to note is that one gold standard doesn’t replace the ones before it. We’re still using disk plus tape, and we’re supplementing that with cloud. We’re still supplementing backups with snapshots and replication. And we’re continuing to fragment across the virtualization, DBA, and other solutions we’ve talked about. And that’s gonna lead us to what ESG calls the Five Cs of data protection.

We’ve already covered the containers, meaning all those different media types you’re gonna store to. We’ve already talked about the conduits, those data movers: the backups, and the snaps, and the replicas, etc. And if you’re gonna have that much heterogeneity, you better have a single control plane to make sure that everything is operating in sync, mitigating over- and under-protection. You better have a catalog that’s rich enough to tell you what you have, where it is, and how long you need to keep it. And you better have at least one console that can tell you everything that is going on within the environment, and make sure it’s actually making all of this actionable and insightful.

Those are the Five Cs. I’m Jason Buffington for ESG. Thanks for watching.

[Originally blogged via ESG’s Technical]

A case study in how to #EpicFail at a product launch

Last week provided a case study in how to #EpicFail at a product launch. The vendor in question took a fresh look at the market and then created a completely new offering, built on a trusted brand, but stretching in a new and intriguing direction. And then, it completely failed in its first days in market.

The product (and service) is Pokemon Go, but there are several lessons that a lot of vendors can appreciate. As a disclaimer, when I am not preaching about data protection or Scouting, I can often be found with a game controller in my hand. And in one of my past lives, I was a gaming columnist (now archived) and a product manager for the Microsoft management tools used for Xbox Live, back when Call of Duty and Halo release dates would wreak havoc on week-one matchmaking events.

Here are five things that the Pokemon Go folks could learn (and maybe some of us, too):

  • Plan for scalability to accommodate success. Especially when you are using a cloud service, whose presumed elasticity is one of its core benefits, there is no excuse for authentication, user-creation, or user-login issues. Quite simply, people couldn’t sign up or log on because the service couldn’t handle the volume. Any as-a-Service vendor would kill to have Pokemon Go’s initial sign-up rates, but the game systems weren’t prepared and left a horrible first impression for days.
  • Don’t confuse folks with your branding. Ask most folks where Pokemon comes from and they’ll answer “Nintendo.” But if you go to the App Store, you’ll find a game from Niantic. In a world where malware is everywhere, cautious users might shy away, or at least wonder whether this is a knock-off from someone trying to sneak in. In this case, the splash screen (after you’ve installed it, along with whatever dubious hidden gems are there) shows “Niantic, a Pokemon Company.”  There have to be better ways to build on brand recognition and assure credibility without creating new company names.
  • Whatever 1.0 is, functionality-wise, it has to work (every time). It’s okay to not be feature-rich, or to hold new capabilities for version 1.1 or 2.0. But whatever functions are there have to work. Either Niantic didn’t do enough QA testing, or they didn’t run enough of a beta program, or they just don’t care, but there are a lot of folks out there who are habitually rebooting the app. Some of it is likely tied to failures connecting to the backend server farm, but still: it is broken as often as it is running. For anything besides a treasured game franchise, those customers would never come back or try again.
  • Ensure your initial users’ success if you want them to evangelize for you. Part of ensuring users’ success is telling them how your stuff should be operated. Provide a help file, some tutorial material, a few friendly videos, something! Even the most avid fans are left with blank stares in this new application, with the community coming to the vendor’s rescue with fan-made materials. A short ‘system requirements’ note, so that folks don’t install it only to find it doesn’t work, would also be helpful.
  • Don’t be afraid to reinvent yourself, but respect what made you originally successful. This game is very different from anything you’ve played on a Game Boy, DS, or console. It plays on your smartphone, uses your GPS, and expects you to get up and walk around. If you want to “Catch ‘em All,” then you are going to have to get off that couch and start walking (a lot). In that regard, Pokemon Go actually deserves kudos for challenging the gamer stereotype and encouraging fitness, while extending a coveted brand and a fan base built over a few decades.

In the case of Pokemon Go, there really are decades’ worth of fans out there who will begrudgingly forgive the catastrophic missteps that would have killed a similar launch from almost anyone else, and those folks will keep trying until the overwhelmed developers and system admins at Niantic figure out how to make it better. And they likely will figure it out, because (based on the huge initial demand) they know that they have a potential hit on their hands, so they’ll (hopefully) devote the extra effort to fixing it.

To be fair, with Nintendo having seen billions in increased valuation after the launch, the company, the Pokemon empire, and (eventually) the game will be fine — but the rest of us won’t get that kind of pass.

For the rest of us: your stuff has to work at launch, you can’t make it hard to trust you or try your stuff, and you can’t be so timid that you can’t handle success (especially if your offering is cloud-based). If you don’t have a plan for what happens when you find success, then you very likely never will.

[Originally blogged via ESG’s Technical]

“Tape” is not a four-letter word

In the 25+ years that I have been in data protection, much of that time has been spent hearing about “better” alternatives to tape as a medium. Certainly, in the earlier days, tape earned its reputation for slowness and unreliability. But nothing else in IT is the same as it was twenty years ago, so why do people presume that tape hasn’t changed?

Do I believe that most recoveries should come from disk? Absolutely. But candidly, my preferred go-to would be a disk-based snapshot/replica first, and then my backup storage disks, which would presumably be deduplicated and durable.

Do I believe in cloud as a data protection medium? Definitely. But not because it is the ultimate cost-saver for cold storage. Instead, cloud-based data protection services (a cloud-storage tier or BaaS) are best when you are either dealing with non-data-center data (including endpoints, ROBO servers, or IaaS/SaaS-native data) or when you want to do more with your data than just store it (BC/DR preparedness, test/dev, analytics). Of course, ‘cloud’ isn’t actually a medium but a consumption model for service-delivered disk or tape, though we’ll ignore that for now.

Do I believe that tape is as relevant as it’s ever been? Yes, I really do. As data storage requirements in both production and protection continue to skyrocket, retention mandates continue to lengthen, and IT teams struggle to ‘do more with less,’ many organizations need to re-discover what modern tape (not the legacy stuff) can really do for their data protection and data management strategies.
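To make the “use each medium for what it does best” argument a little more concrete, here is a purely illustrative sketch. The thresholds and rules are my own hypothetical examples, not any product’s logic, but they show how recovery expectations, retention mandates, and offsite requirements might route a protection copy to disk, cloud, or tape.

```python
# Hypothetical tiering rules (illustrative only): route a protection copy to
# disk, cloud, or tape based on recovery expectations and retention mandates.
from dataclasses import dataclass


@dataclass
class ProtectionCopy:
    name: str
    recovery_sla_hours: float  # how quickly a restore is expected
    retention_years: float     # how long the copy must be kept
    offsite_required: bool     # e.g., a BC/DR or compliance mandate


def choose_medium(copy: ProtectionCopy) -> str:
    if copy.recovery_sla_hours <= 1:
        return "disk (snapshot/replica or deduplicated backup target)"
    if copy.offsite_required and copy.retention_years < 3:
        return "cloud (BaaS or a cloud-storage tier)"
    if copy.retention_years >= 3:
        return "tape (long-term, low-cost retention)"
    return "disk (standard backup target)"


if __name__ == "__main__":
    for c in [
        ProtectionCopy("ERP database", 0.5, 7, True),
        ProtectionCopy("file shares", 8, 10, False),
        ProtectionCopy("ROBO servers", 4, 1, True),
    ]:
        print(f"{c.name}: {choose_medium(c)}")
```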

Check out this video that we did in partnership with a few vendors within the LTO community:

Your organization’s broader data protection and data management strategy should almost certainly use all three media for what each of them does best. Disk is a no-brainer and cloud is on everyone’s mind, but don’t forget about tape.

As always, thanks for watching.

[Originally blogged via ESG’s Technical]

Why BaaS when you can DRaaS?

The question isn’t as simple as it might seem:

  • There are lots of great reasons to embrace modern Backup as a Service (BaaS) solutions, including governance, extended data preservation, IT oversight of endpoint and ROBO backups, etc.
  • There is also one overwhelmingly great reason to utilize Disaster Recovery as a Service (DRaaS) — enhanced availability of servers.

So, the question really is: can you gain a DRaaS outcome from a BaaS solution? And honestly, it isn’t just a cloud consideration. You could just as easily ask, can you get BC/DR agility from a backup tool?

The answer is, it’s really, really hard to get BC/DR from a Backup/Restore tool, in or out of a cloud — particularly due to data flow and the need for better orchestration. Here is a short video I’ve recorded on the differences between backup and replication, and the importance of orchestration and workflow:

You may well have a data protection product (or service) that offers both. If you do, it is likely doing both backups (transformed) and replicas (readily usable), which builds good cases for:

  1. Deduplicated/optimized protection storage
  2. Orchestration/automation workflows as part of any modern data protection management framework (a rough sketch of such a workflow follows below).
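To make the orchestration point concrete, here is a minimal, hypothetical sketch of a recovery runbook: a multi-tier application is brought up in dependency order, each tier is checked before the next starts, and the same plan can be run against an isolated network for non-disruptive testing. The VM names, tiers, and health checks are invented for illustration; this is not any particular BaaS/DRaaS vendor’s API.

```python
# Hypothetical recovery runbook (illustrative only): power on VMs in dependency
# order, verify each tier before starting the next, and optionally run the whole
# plan against an isolated sandbox network for non-disruptive DR testing.

RECOVERY_PLAN = [
    {"tier": "directory services", "vms": ["dc-01"], "health_check": "ldap"},
    {"tier": "database",           "vms": ["sql-01", "sql-02"], "health_check": "sql"},
    {"tier": "middleware",         "vms": ["app-01"], "health_check": "http"},
    {"tier": "web front end",      "vms": ["web-01", "web-02"], "health_check": "http"},
]


def power_on(vm: str, sandbox: bool) -> None:
    network = "isolated test network" if sandbox else "production network"
    print(f"Powering on {vm}, attached to the {network}")


def wait_until_healthy(vm: str, check: str, timeout_s: int = 300) -> bool:
    # Placeholder: a real workflow would probe LDAP/SQL/HTTP endpoints here.
    print(f"  Waiting for {vm} to pass its {check} health check (timeout {timeout_s}s)")
    return True


def run_recovery(plan: list, sandbox: bool = True) -> None:
    for step in plan:
        print(f"Tier: {step['tier']}")
        for vm in step["vms"]:
            power_on(vm, sandbox)
            if not wait_until_healthy(vm, step["health_check"]):
                raise RuntimeError(f"{vm} failed its health check; halting failover")


if __name__ == "__main__":
    run_recovery(RECOVERY_PLAN, sandbox=True)  # rehearse without touching production
```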

As always, thanks for watching.

Video transcript:

Woman: The following is an ESG video blog.

Jason: Hi. I’m Jason Buffington. I’m the senior analyst at ESG covering data protection. For the past two years, we’ve seen significant interest in leveraging cloud services as part of one’s data protection strategy. In fact, in ESG’s last two annual IT Spending Intentions reports, when asked about the use cases for cloud-based infrastructure services, improving data backup was number one and disaster recovery was number three.

There are lots of different ways to gain backup services, ranging from augmenting on-prem backups with cloud storage all the way to full-fledged backup as a service, BaaS. Similarly, for business continuity/disaster recovery goals, you might utilize colo space, infrastructure as a service, a hybrid architecture, or full-fledged DR as a service, DRaaS.

With so many choices, it can be really confusing. I’d like to offer what I believe are the two biggest differences between them, data flow and orchestration, which will ultimately affect your agility and your business outcome.

From a data flow perspective, most backup technologies transform the data as part of transmitting it to the secondary repository, on-prem or cloud, which is what necessitates doing some kind of a “restore” to get it back. This transformation usually optimizes for storage but can limit the immediate usability or the recoverability of the data unless you restore it or basically un-transform it back to its original state.

In contrast to that, most BC/DR and availability technologies replicate the data in closer to their original state, which makes the data more immediately usable when needed. One method is not better than the other. Backups optimize for multiple versions, while replicas are designed for usability of typically only the most current version.

There are exceptions to the rule, but in general, the more immediately usable the data, the less transformed it is within secondary storage, and that’s a trade-off between storage efficiency and IT resiliency.

The other main differentiator is workflow orchestration, or automation. It’s one thing to have copies of VMs sitting in some secondary repository someplace, but availability and BC/DR are about more than just powering them up. For example, say you have a multi-VM application with web front ends connecting to middleware, serviced by a pair of database servers, all of which have to be authenticated by Active Directory. You can’t just highlight those eight VMs, right-click, and say “power on.” You have to have a workflow. You have to have automation that’s defined in advance and runs when you need it.

Those same orchestration and automation mechanisms can also give you sandbox testing. You can test the ability to bring VMs online without impacting production, or you can test the recoverability, even of granular data within a VM, on a regular basis.

There are other differences, but I hope this starts to get you thinking. Both kinds of technologies, whether implemented within public or dedicated cloud services, in a hybrid architecture, or even just between on-prem locations, provide huge value in modernizing one’s data protection capabilities. Just understand what you’re getting and be clear on what you need.

I hope this was helpful. I’m Jason Buffington for ESG. Thanks for watching.

[Originally blogged via ESG’s Technical]

Wrap-up on backup from EMC World 2016 — day two keynote

There are a lot of things to like about EMC World this year, especially if data protection is important to you. Kudos on the day two general session with Jeremy Burton and Guy Churchward.

Some notes from the event:

  • It is hard to imagine doing an inside-an-appliance, component-level tour from main stage, but the Fantastic Voyage miniaturization schtick worked to keep us entertained and let EMC tell a story of what makes Unity unique, both as a platform and as a usability experience. And both are impressive (even to a backup guy like me).
  • A special nod to Beth Phalen on the data protection topics, during which EMC briefly covered its recently announced Data Domain Virtual Edition (DD VE) and then gave a very solid demo of eCDM (enterprise Copy Data Management), which discovers and helps manage the myriad copies across EMC production and protection storage platforms. CDM can be a daunting concept to really understand, but the pastry example worked, and the solution is one that a lot of folks ought to be excited to explore.
  • Chad Sakac added more data protection goodness by talking about the built-in data protection within the new VCE converged infrastructure and hyperconverged appliances: EMC RecoverPoint for VMs, with VM-level replication and failover already included within VxRail systems, as well as wizard-based workflows for context-aware backups as part of the VM provisioning experience. I’m a huge fan of built-in DP within converged infrastructure, so I was pleased to see these from main stage and done well.

As the general session continued, more execs told their parts of the story, with new announcements, around a faux coffee-shop set. At one point, they sat on couches and hovered around a table looking at a new platform – painting a subtle picture that these execs and their myriad technologies really are ‘Friends’ (TV-show style), a genuine family.

I don’t gush about keynotes often, but this one was spot-on in execution (with good content) and is worth studying by others in our industry. Check out the EMC World day two general session. As for the data protection stuff:

You can also see my guest-vBlog on EMC’s site around my first impressions (hands-on) with DD VE.

In addition, check out my recent vBlog on why Copy Data Management matters to everyone.

[Originally blogged via ESG’s Technical]

How to evolve from “data protection” to “data management”

To evolve from backup to data protection is to embrace complementary mechanisms that enable a much broader range of preservation of data, protection of data, and the assurance of productivity through enhanced availability initiatives, as seen in ESG’s Spectrum of Data Protection.


To evolve from data protection to data management is to get smarter on all of the iterations of data throughout an infrastructure, including not only the backups/snapshots/replicas/archives from data protection, but also the primary/production data and the copies of data created for non-protection purposes, like test/dev, analytics/reporting, etc.

Here’s a short video on Copy Data Management that hopefully explains the importance of CDM:

Quintessential ‘backup admins’ are in a unique position to potentially lead this evolution because:

  1. Admins have led many of the evolutions from backup to data protection.
  2. Some DP technologies already have the catalog and the controller (policy engine) that lend themselves to evolving from data protection to data management.
  3. They, perhaps more than most, understand the diversity of repositories of data that exist throughout an infrastructure.

To be clear, backup admins cannot achieve data management on their own — in fact, the stakeholders should include several groups across IT, non-IT business leaders, and other beneficiaries (such as test/dev, compliance, etc.). But if you put those folks in the room, and seek out authentic CDM technologies that can enable what has been envisioned, then you are on your way to data management.
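To make the “catalog plus controller” idea a bit more tangible, here is a minimal, hypothetical sketch of what a copy data management catalog might track: every copy of a dataset, whether it exists for protection or for some other purpose, so that coverage gaps, sprawl, and orphaned test/dev copies become visible. The fields and sample entries are invented for illustration, not drawn from any particular CDM product.

```python
# Hypothetical copy-data catalog (illustrative only): record every copy of a
# dataset, protection and non-protection alike, so gaps and sprawl show up.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class DataCopy:
    dataset: str             # e.g., "orders-db"
    copy_type: str           # backup | snapshot | replica | archive | test/dev | analytics
    location: str            # primary array, backup target, cloud region, lab cluster, ...
    created: date
    expires: Optional[date]  # None = no defined retention (a red flag by itself)


def report(catalog: List[DataCopy]) -> None:
    protection_types = {"backup", "snapshot", "replica", "archive"}
    by_dataset = defaultdict(list)
    for copy in catalog:
        by_dataset[copy.dataset].append(copy)
    for dataset, copies in by_dataset.items():
        protected = [c for c in copies if c.copy_type in protection_types]
        unmanaged = [c for c in copies if c.expires is None]
        print(f"{dataset}: {len(copies)} copies total, {len(protected)} for protection, "
              f"{len(unmanaged)} with no retention policy")


if __name__ == "__main__":
    report([
        DataCopy("orders-db", "backup",   "dedupe appliance", date(2016, 9, 1),  date(2023, 9, 1)),
        DataCopy("orders-db", "snapshot", "primary array",    date(2016, 9, 28), date(2016, 10, 28)),
        DataCopy("orders-db", "test/dev", "lab cluster",      date(2016, 6, 1),  None),
    ])
```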

As always, thanks for watching.

[Originally blogged via ESG’s Technical]