
My blog posts and tweets are my own, and do not necessarily represent the views of my current employer (ESG), my previous employers or any other party.

I do not do paid endorsements, so if I appear to be a fan of something, it is based on my personal experience with it.

If I am not talking about your stuff, it is either because I haven't worked with it enough or because my mom taught me "if you can't say something nice ... "

EMC announces another step towards backupless-backups

Last week, in London, EMC made several announcements – many of which hinged on the VMAX3 platform – but the one of most interest to me was ProtectPoint, where those new VMAX machines will be able to send their backup data directly from production storage to protection storage (EMC Data Domain) without an intermediary backup server. 

I mentioned this in my blog last week as an example of how, as “backup” evolves, the roles of both the Backup Administrator (who should now be thought of as a Data Protection Manager/DPM) and the Storage Administrator (or any other workload manager that is becoming able to protect their own data) must evolve as well. Enjoy the video:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Workload-enabled Data Protection is the Future … and that is a good thing

When asked “What is the future of datacenter data protection?”, my most frequent answer is that DP becomes less about dedicated backup admins with dedicated backup infrastructure … and more about DP savvy being part of the production workload, co-managed by the DP and workload administrators.

  • In the last few years, we’ve seen a few examples of that, with DBAs using Oracle RMAN to do backups that are not rogue efforts outside of corporate data protection mandates, but in concert with them – and stored in the same deduplicated solution as the rest of the backups (e.g., DD Boost for Oracle RMAN).
  • More recently, we are seeing more examples of VMware administrators getting similar functionality, including not only VMware’s own VDP/VDP Advanced, but also traditional backup engines that are controlled through vCenter plug-ins to give the virtualization admin their own solution.

EMC’s announcement of ProtectPoint is another step in that evolutionary journey, enabling VMAX production storage to write directly to Data Domain protection storage, thereby giving yet another group of IT pros more direct control of their own protection/recovery destiny, while at the same time extending the agility and sphere of influence of data protection professionals.

To be clear, as workload owner enablement continues to evolve, the role of the “Data Protection Manager” (formerly known as the “backup administrator”) also evolves – but it does not and cannot go away. DPMs should be thrilled to be relieved of some of the mundane aspects of tactical data protection, and even more elated that technology innovations like snap-to-dedupe integration, application integration, etc. create real partnerships between the workload owners and the data protection professionals. And it does need to be a partnership, because while the technical crossovers are nice, they must be coupled with shared responsibility.

If the legacy backup admin simply abdicates the role of protecting data to the workload owner because the latter now has a direct UI, many backups will simply stop being done – because the tactical ability to back up and the strategic mindset of understanding business and regulatory retention requirements are very different. The “Data Protection Manager” should be just that: the role that manages or ensures that data protection occurs – regardless of whether they enact it themselves (using traditional backup tools) or enable it through integrated data protection infrastructure that is shared with the workload owners.

Some naysayers will be concerned that as the workload owners gain tools that enable their own backups, the DP admin role diminishes – but there is a wide range of behaviors that are enabled by this evolution:

  • Some workload owners will wholly take on the DP mantle, but the DP manager will still need to “inspect what they expect” so that corporate retention and BC/DR mandates still happen.
  • Some workload owners will be grateful to drive their own restore experiences, but will happily rely on the DP managers to manage the backups beforehand.
  • Some workload owners will recognize that they are so busy managing their workloads that the DP admins will continue to do the backups and restores – but now with better backups/snaps that continue to become even more workload-savvy.

And there are likely other variations of those workload owner/DP Manager partnerships beyond these. But any way that you look at it, the evolution and collaboration of workload-enhanced data protection that can be shared between the workload owner(s) and the data protection managers is a good thing that should continue.

[Originally posted on ESG’s Technical Optimist.com]

Data Protection Impressions from the Dell Analyst Conference 2014

I recently had the opportunity to attend the Dell Annual Analyst Conference (DAAC), where Michael Dell and the senior leadership team gave updates on their businesses and cast a very clear strategy around four core pillars:

  • Transform — to the Cloud
  • Connect — with mobility
  • Inform — through Big Data
  • Protect

Protect?! YAY!!  As a 25-year backup dude who has been waiting to see how the vRanger and NetVault products would be aligned with AppAssure and the Ocarina-accelerated deduplication appliances, I was really jazzed to see “Protect” as a core pillar of the Dell story.

But then the dialog took an interesting turn:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]


vBlog: Wrap-Up on Backup from EMC World 2014 – part 2, strategy

Last week, I published a video summary of the data protection product news from EMC World 2014, with the help of some of my EMC Data Protection friends. To follow that up, I asked EMC’s Rob Emsley to knit the pieces together around the Data Protection strategy from EMC:

Essentially, what I call the Data Protection Spectrum, EMC calls the Data Protection Continuum: an overarching perspective that combines backups, archives, snapshots, and replication (for all of which EMC has product offerings).

My thanks to Rob for his insights and to EMC for a great week.

Thanks to you for watching.

[Originally posted on ESG’s Technical Optimist.com]


vBlog: Wrap-Up on Backup from EMC World 2014 – part 1, products

During EMC World 2014 in Las Vegas last month, I had the chance to visit with several EMC product managers on what was announced from a product perspective, as well as overall data protection strategy.  Enjoy the video:

For such a broad range of products within the EMC DP portfolio, it is impressive that each product continues to innovate on its own while clearly doing so in alignment with the others — and with a unifying vision of meeting customers’ overall data protection needs.

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]


vBlog: How to Ensure the Availability of the Modern Data Center

When you really boil down the core of IT, it’s to deliver the services and access to data that the business requires. That includes understanding the needs of the business and its dependencies on things like its data, and then ensuring the availability of that data.

"Availability" can be achieved in two ways = Resilience and Recoverability.

  • Resilience refers to the clustering/mirroring technologies that many of us think of as “traditional high availability.”
  • Recoverability refers to the myriad methods of rapidly restoring functionality through technologies like near-instantaneous VM restoration or leveraging snapshots (see the sketch below).
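
To make the distinction concrete, here is a minimal sketch of my own (not from the original post) using the classic availability ratio, availability = MTBF / (MTBF + MTTR): resilience raises the mean time between failures, while recoverability lowers the mean time to restore. All numbers are illustrative.

```python
# Availability as a function of how rarely you fail (MTBF) and how
# quickly you restore (MTTR). Illustrative numbers only.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

baseline    = availability(mtbf_hours=720,  mttr_hours=8)    # ~98.90%
resilient   = availability(mtbf_hours=2160, mttr_hours=8)    # rarer failures
recoverable = availability(mtbf_hours=720,  mttr_hours=0.5)  # faster restores

for label, a in [("baseline", baseline),
                 ("more resilient", resilient),
                 ("more recoverable", recoverable)]:
    print(f"{label:17s} {a:.4%}")
```

Either lever improves uptime, which is why the modern data center needs both.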

The modern data center, which by definition is highly virtualized, must also be both resilient and recoverable in order to be dependable enough to deliver the other promises of modern IT around agility, flexibility, etc. With that in mind, here is a short video on what folks should be looking for to fulfill the recoverability requirements of highly virtualized environments, thus helping to achieve the availability of the modern data center:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

How do you back up Big Data? Or SaaS? Who will be the next Veeam?

It seems that every time a new major IT platform is delivered, backing it up is an afterthought – often exacerbated by the fact that the platform vendor didn’t create the APIs or plumbing to enable a backup ecosystem. Each time, there is a gap where the legacy folks aren’t able to adapt quickly enough, and a new vendor (or small subset of vendors) starts from scratch to figure it out. And for a while, perhaps a long while, they are the de facto solution – until the need becomes so great that the platform vendor creates the APIs, and then everyone feverishly tries to catch up. Sometimes they do; other times, not so much:

Veeam, while not the only virtualization-specific backup solution, is a classic example of this scenario and is typically the vendor that legacy solutions measure themselves against for mindshare and feature innovation, in their efforts to win back those who are using a VM-specific product in combination with traditional physical backup solutions.

Before them, Seagate Software’s Backup Exec was synonymous with Windows Server backups, helped by the built-in “Backup Exec lite” version that shipped within early versions of Windows.

Before them, Cheyenne Software’s ARCserve was synonymous with Novell NetWare backups; Cheyenne was among the first to protect a server’s data from within the server, instead of from the administrator’s desktop (really).

History continues to repeat itself

The challenge for platform vendors is that after the early adopters have embraced a platform (any platform), the mainstream folks will push back under the premise of “If I am going to put my eggs (data) in this basket, it better be a solid basket” (meaning that it is back-up-able) – and the lack of that assurance will ultimately hinder the second, broader wave of adoption. Other examples include:

Microsoft SharePoint has a somewhat similar architecture to “Big Data,” with its disparate SQL databases being far more protectable than the metadata and “Farm” constructs that link everything together. Many legacy backup solutions that had robust SQL protection capabilities struggled to back up SharePoint (restore was even worse), in large part because Microsoft hadn’t yet developed the VSS enablers to traverse the Farm. Today, after four major releases of SharePoint, almost anyone can back it up like “just another Microsoft application.”

As mentioned earlier, VMware hypervisors were notoriously difficult to back up in their early days, with legacy backup providers being very late to solve the challenges. Instead, a completely new group of virtualization-specific backup vendors created new approaches to address the market demand for better virtualization protection. Today, with the mature VADP mechanisms provided by VMware, VM backups are reliable, but the agility of VM recovery continues to vary widely among solutions. Some virtualization protection solutions are continuing to thrive, while the legacy backup vendors are rapidly catching up in VM backup features and trying to recapture their lost market share, not only from the VM-specific vendors but also from VMware’s own VDP solution.

Salesforce.com is perhaps the poster child of software-as-a-service, enabling CRM solutions for organizations of all sizes. And as a cloud-based solution, most early adopters were focused primarily on assuring availability of the data/service across the Internet. Unfortunately, SFDC has not yet published APIs that allow traditional backup solutions to protect SFDC data. The result is that customers are moving from legacy, albeit protected, self-managed CRM systems to an arguably far superior CRM system in SFDC – but are losing the ability to protect their data once it is there. SFDC is rumored to have a rudimentary recovery capability that purportedly costs US$10,000 per event and has no SLA for recovery. Like the disruptive platforms before it, ESG expects that mainstream demand and platform maturity will eventually make protecting and restoring SFDC data easier; in the meantime, as with virtualization, a new group of cloud-backup solutions is among the first to try to solve what legacy vendors are slow to adapt to.

And that brings us back to … How do you back up Big Data?

To begin to answer that question, I partnered with my ESG Big Data colleague, Nik Rouda, in authoring a brief on what you should be thinking about and looking for in the future.

CLICK HERE to check out “ESG’s Data Protection for Big Data” brief.

As always, thanks for reading.


[Originally posted on ESG’s Technical Optimist.com]

AA and USAir are not “one airline”

To the “New AA” — please stop misrepresenting yourself as a single airline with US Airways.

You are not conducting yourself as a single airline, and the result is that I won’t see my kids this evening.

Your signage says “We’re bringing you greater schedule options,” but that isn’t entirely true.

I booked an AA round-trip from Dallas to Phoenix (with AA flight numbers) because I am a faithful AA flyer. My outbound flight was on AA, but I was surprised to find that my return flight was “operated by US Airways.” I didn’t want USAir – I saw those options when I booked my flight – so I booked AA … and got USAir anyway.

Because my AA flight number was actually a US Airways plane, I couldn’t get preferred seating and couldn’t request an upgrade. How is that “one airline”?

I arrived four hours early to the airport and went to the AA counter, whose very friendly, professional, and empathetic staff explained that I was booked on USAir, so AA couldn’t help me — even though I booked an AA flight number. The AA staff’s courtesy was a stark contrast to the brisk, almost rude, US Airways staff who printed my boarding pass when I requested to stand by for the earlier flight.

I went to the US Air gate for the earlier flight (3:30p) and stood at the counter while the gate agent handed out other standby boarding passes, including to non-revs … and then said that I wasn’t on her list and that it was now less than 10 minutes before departure.

That earlier US Air flight at 3:30 had available seats, but I couldn’t have one — because the front agent hadn’t put me on the list (even though we talked about it, including the likelihood of availability) … even non-revs got on, but not me.

An earlier AA flight at 4:00 also had available seats, but I couldn’t have one. I walked to a different terminal, but the AA agent, though harried, was still more professional than US Air’s staff — and she explained that even though I had an AA flight number, she couldn’t help me either.

While I was surprised by the US codeshare, I expected the same respect and helpful service that I might get when flying on American Eagle. If I had booked an AA codeshare that was “operated by British Airways” or another OneWorld partner, then I would expect this disconnectedness — though I have had much better codeshare experiences with AA flights operated by both BA and Qantas than what I’ve seen with US Air. But you are one airline, right?

If your answer is “that sharing is coming” — then TAKE DOWN YOUR SIGNS, because your marketing department is making promises that the rest of your airlines (plural) are unable to satisfy.

VIDEO: What to plan for in a Data Protection Spectrum

Continuing the theme we started last year — that “Data Protection” is an umbrella term encompassing a broad range (a spectrum) of IT behaviors, including backup, snapshots, replication, etc.:

     [Image: the Data Protection Spectrum]

Here is a video that gives some color to those ideas (pun intended):

1. Start with a fresh understanding of your business units’ operational requirements, based on feedback from the business owners, department heads, application experts and other stakeholders.  To get more ideas around quantifying the cost of downtime for assessing the Business Impact Analysis (BIA) and Risk Analysis (RA), download this chapter from my book – Data Protection by the Numbers.

2. Once you can quantify each unit’s recovery goals (SLA/RPO/RTO) and the business impact of not achieving its availability requirements, choose the “color” of the DP Spectrum that best meets those goals.

3. If the solution’s TCO/ROI is in line with reducing the business impact — meaning that the solution doesn’t cost more than the problem (see the sketch after this list) — then add the method (not yet the product) to your DP Solution Architecture.

4. Once you understand which colors (i.e., backups, snapshots, archives, replication, clustering, etc.) really fit your business needs, you can start evaluating actual products.
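
As a back-of-the-envelope illustration of step 3, here is a minimal sketch of my own (not from the book or the video) that compares the annualized business impact of downtime against the annualized cost of a candidate protection method. Every number and name in it is a hypothetical placeholder for your own BIA figures.

```python
# Compare the yearly cost of downtime (the problem) against the yearly
# cost of a protection method (the solution). Hypothetical numbers only.

def annual_downtime_cost(outages_per_year: float,
                         hours_per_outage: float,
                         cost_per_hour: float) -> float:
    """Expected yearly business impact of downtime for one workload."""
    return outages_per_year * hours_per_outage * cost_per_hour

impact = annual_downtime_cost(outages_per_year=4,
                              hours_per_outage=6,
                              cost_per_hour=5_000)   # $120,000/year at risk

solution_tco_per_year = 40_000  # e.g., snapshots + replication for this workload

if solution_tco_per_year < impact:
    print(f"Justified: ${solution_tco_per_year:,}/yr vs ${impact:,.0f}/yr at risk")
else:
    print("The solution costs more than the problem – rethink the method")
```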

If you do this, you will likely find that most of the colors of the DP Spectrum fit within your business strategy, but that does not mean you should settle for 10 products from 7 vendors – it is unlikely that you’d be able to cost-justify that. Instead, look for solutions that consolidate the management of those methods:

  • Backup software that also manages snapshots or replication, perhaps that also includes archival features
  • Protection storage that can support backups and archives (meaning extended retention), that has effective deduplication AND whose data is very durable
  • Expertise that can knit not only the product components, but also the strategies, together into a living DP plan

Thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

JBuff on the Cube at IBM Pulse 2014

While at IBM Pulse 2014, I was invited to sit in at ‘the Cube’ to talk about what’s new in Data Protection.

We had the chance to talk about:

  • Virtualization Protection
  • Endpoint Protection
  • How the Cloud is changing Data Protection Options
  • And how IBM is evolving its data protection offerings to meet those demands.

My sincere thanks to IBM & SiliconAngle for letting me join the program – and to you who read this blog.