
My blog posts and tweets are my own, and do not necessarily represent the views of my current employer (ESG), my previous employers or any other party.

I do not do paid endorsements, so if I appear to be a fan of something, it is based on my personal experience with it.

If I am not talking about your stuff, it is either because I haven't worked with it enough or because my mom taught me "if you can't say something nice ... "

Hyperconverged Infrastructure for BCDR using EVO RAIL

Shortly after VMware announced its EVO RAIL initiative, whereby hardware partners could start delivering a very dense compute, storage, and networking solution within a wholly contained appliance, I started exploring the data protection aspects of an EVO RAIL solution. The nice folks in ESG's video studio have produced a whole series on this … in two-minute increments.

In the earlier video capsules, we looked at the protection of EVO RAIL systems in production. For part 6 in the series, let's look at EVO RAIL as perhaps the ideal BC/DR infrastructure.

After all, it has everything you need for a second site … so why not drop one in an alternate location (or co-lo site) and start replicating to it?  And for DRaaS providers, perhaps a self-contained EVO might be an interesting option over a shared infrastructure for some clients?  Check out the video for more details:


Please feel free to check out the earlier installments in the series:

DP considerations for EVO Rail – part 1 – an overview of EVO RAIL and its data protection ramifications

DP considerations for EVO Rail – part 2 – VMware’s own data protection options

DP considerations for EVO Rail – part 3 – EMC’s EVO RAIL and its data protection possibilities

DP considerations for EVO Rail – part 4 – Dell’s EVO RAIL and its data protection possibilities

DP considerations for EVO Rail – part 5 – HP’s EVO RAIL and its data protection possibilities

In the next few weeks, I will be releasing the last of the seven segments  … on channel partner considerations for EVO RAIL systems.  As always, thanks for watching.

– jason

[Originally blogged on ESG’s Technical Optimist.com]

Better Catalogs Make for Better Data Restores (video)

It really is just that simple: backup products without robust catalogs are just that, backup products – not restore products.

There are lots of reasons why merely maintaining a browsable file list is not enough today: not only does it lack searchability, but the catalog is also the key to truly leveraging primary storage snapshots and replication capabilities alongside traditional backup for a modern recovery capability.

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

How to Protect an EVO RAIL (video series)

VMware’s EVO RAIL is an architecture for a hyper-converged, software-defined data center in a single appliance form-factor … to be delivered by various hardware partners.  But how do you protect that all-in-one solution?

For the next several weeks, ESG will be releasing a seven-part series of ESG Capsules (two-minute video segments) in which I'll talk more about some of the protection possibilities and caveats in an EVO world:

part 1 – Introductory ideas for protecting EVO RAIL

part 2 – Solution Spotlight : VMware

part 3 – Solution Spotlight : EMC

part 4 – Solution Spotlight : Dell

part 5 – Solution Spotlight : HP

part 6 – BC/DR possibilities

part 7 – Channel considerations

Here's part 1 on ideas for protecting an EVO RAIL. Check back here for updated hyperlinks … or follow @JBuff on Twitter to see more of this series.

Thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Data Protection Appliances are better than PBBAs

Too many folks categorize every blinky-light box that can be part of a data protection solution as a "Purpose Built Backup Appliance" or PBBA.

But the market isn't just a bunch of apples with an orange or two mixed in. Data protection appliances (DPAs) can be apples, oranges, bananas, and cherries; lump them all together, and all you have is a fruit salad.

So, let’s reset the term to understand the market:

  • "Backup" alone isn’t enough — so call the all-encompassing category what it should be delivering = "Data Protection"
  • And there isn’t just one kind of appliance, there are at least four:
    • (real) Backup Appliances
    • Storage / Deduplication Appliances
    • Cloud-Gateway Appliances
    • Failover Appliances

Check out this video to see how I look at Data Protection Appliances:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Could VeeamON be the next MMS?

This week brings the first VeeamON ("Availability for the Modern Data Center") conference in Las Vegas. As I listened to the side conversations and such, I was reminded of the specialness of the Microsoft Management Summit (MMS). Not MMS 2010+, when Microsoft started shoehorning in everything from the Server & Tools line-up before eventually killing it (and TechEd after it) … but MMS 1995-2005, which was as much about "community" as it was about "technology".

Veeam has very smartly done something that other data protection vendors several times larger have failed to do: create a community of avid influencers and advocates made up of Microsoft MVPs, VMware vExperts, and an army of well-intentioned backup folks who are passionate about telling people how Veeam saved their jobs by reliably and quickly recovering a VM. Many larger companies have tried to programmatize that kind of community initiative, and most haven't seen success at any scale. But Veeam has … so a conference is the next logical step.

The question will be whether Veeam can convert the cyber-community that advocates its products year-round and parties with Veeam at TechEd/VMworld into the same kind of community in person. Can Veeam maintain or build on that community vibe at a live event? If it can, and then build anticipation for VeeamON 2015, then lightning will have struck and VeeamON could become for many what MMS was revered for being.

There is a notable difference between VeeamON and MMS, though. Veeam is adamantly 100% channel, to the degree that it doesn't even maintain a direct sales team. So VeeamON is as much for partners (vendor, channel, and cloud) as it is for customers, which is different from the much more enterprise vibe of MMS in its latter years. Another difference is the accessibility of Veeam execs walking throughout the venue and striking up personal conversations throughout the day, something that again shows the strength of the "community" of the Green Army. With Veeam aspiring to be the next $1B player in IT, there are more parallels one could draw with MS System Center, which also became a $1B business during MMS's heyday. Veeam is doing it without a juggernaut behind it, though its partnerships with MS, VMware, NetApp, Cisco, HP, ExaGrid, and others don't hurt.

The event itself is as much style (at the Vegas Cosmopolitan) as it is substance (deep technical breakouts), so the rest remains to be seen. Congrats to Veeam on what looks to be a great start to what could become a powerful event in IT availability, through data protection.

[Originally posted on ESG’s Technical Optimist.com]

Why Doesn't IT Back Up BYOD?!

ESG recently started offering TechTruths … single nuggets of data and analyst perspectives on why they matter. Check out all of them via the link above, but here is my favorite so far, on BYOD data protection:

[Image: ESG TechTruth, June 2014: Endpoint Backup]

So, why doesn’t IT back up BYOD endpoints?! It isn’t a rhetorical question.

I have always been confounded as to why IT, the custodian of corporate data, doesn't feel obliged to protect that corporate data when it resides on an endpoint device, and more particularly when it resides on a BYOD endpoint device. I understand the excuses: it's hard to do well, the solutions are expensive, and it's difficult to quantify the business impact and therefore the ROI of the solution. In fact, in ESG's Data Protection as a Service (DPaaS) trends report, we saw several excuses (not reasons) not to back up endpoint devices.

[Chart from ESG's DPaaS trends report]

The myth that endpoint protection is hard and without justifiable value is old-IT FUD, in much the same way as the myth that tape is unreliable and slow. Both were true twenty years ago; neither is true today.

Today, with the advent of all devices being Internet-connected, it's never been easier to protect endpoint data, whether employer- or employee-owned. The real trick, and one of the most interesting areas of data protection evolution, is the IT enablement of endpoint protection. It used to be that most viable endpoint protection solutions were consumer-only, meaning that IT was not only excluded from the process but also unable to help when it matters. Today, real business-class endpoint protection should (and does) enable IT to be part of the solution instead of the problem:

  • Lightweight data protection agents that reach across the Internet to protect the device, with user experiences that look like they came from an app store instead of from the IT department.
  • Encryption in flight and at rest, ensuring the data is likely more secure while being protected than it might be on the device itself.
  • And most importantly, IT oversight, so that IT can protect corporate data on endpoints with the same diligence it applies to corporate data on servers.

If you aren’t protecting your corporate data on endpoint devices, then you aren’t protecting your corporate data, period – and with today’s technologies, you are out of excuses (and reasons).

[Originally posted on ESG’s Technical Optimist.com]

HDS bought Sepaton – now what?

Have you ever known two people who seemed to tell the same stories and have the same ideas, but just weren't that into each other? And then one day, BAM, they are besties.

Sepaton was (and is) a deduplication appliance vendor that has always marketed to “the largest of enterprises.” From Sepaton’s perspective, the deduplication market might be segmented into three categories:

  • Small deduplication vendors and software-based deduplication … for midsized companies.
  • Full product-line deduplication vendors, offering a variety of in-line deduplication, single-controller scale-up (but not always with scale-out) appliances from companies that typically produce a wide variety of other IT appliances and solution components … for midsized to large organizations.
  • Sepaton, offering enterprise deduplication efficiency and performance to truly enterprise-scale organizations, particularly when those organizations have outgrown the commodity approach to dedupe.

Aside from the actual technology differences between the various deduplication systems, the Sepaton approach is somewhat reminiscent of the different marketing philosophies between American cars that appear to be commonplace and a well-engineered European beast positioned for the select few (justifiably so, or not).

To be fair, Sepaton's technology really is markedly different in a few respects that do lend it to enterprise environments. Its challenge until now has been gaining penetration into those enterprise accounts, and defending against the other deduplication vendors in those accounts, vendors whose solution portfolios typically include production storage systems (not just secondary deduplication systems) and other key pieces of the overall IT infrastructure, often with higher-level relationships and more flexibility in pricing due to the broader portfolio … and that is where this gets interesting.

HDS tells a similar story around understanding and meeting the needs of truly large enterprises, so the Sepaton story is congruent with the way the HDS teams already know how to talk.

HDS's core DNA is a conservative approach to knitting together solution elements while helping the customer see the bigger picture in IT, which should allow the Sepaton product lines to be evangelized seamlessly within a broader HDS data protection and agility story. I, for one, am eager to hear what that new, broader story sounds like with the Sepaton pieces integrated in.

Similarly, HDS serendipitously did not have its own deduplication solution already (unlike almost every other big storage/IT vendor that HDS competes with), so there should be very little overlap – and in fact, the primary data protection products (e.g. Symantec NetBackup) that HDS often offers its customers already work well with the Sepaton platform (due to OST support).

But most importantly, HDS has the broader enterprise gravitas that Sepaton alone could not achieve. HDS ought to be able to have similar C-level meetings to those of Sepaton’s (and HDS’s) competitors, and that means that more enterprises will be exposed to Sepaton and HDS will have a broader story to tell (win-win).

Put it all together:

  • Sepaton gets better enterprise reach and more enterprise sales folks that align with the Sepaton story.
  • HDS broadens its storage portfolio beyond primary storage with a complementary secondary/deduplication solution that is aligned with its core story and customer base, enabling incremental sales and broader customer penetration through a more complete story.

Congratulations to HDS and Sepaton on what looks like a good fit for both of them – now, let’s start talking about that cohesive data protection story!

[Tardiness disclaimer: The HDS-Sepaton announcement occurred while I was on vacation in August, but it was interesting enough that “late” seemed better than “silent”]

[Originally posted on ESG’s Technical Optimist.com]

Backing up SaaS : an interview with Spanning

One of the more frequent topics that I get asked about is “How do you back up production workloads, after they go to the cloud?”

A few months ago, I blogged on that, essentially saying that history is repeating itself (again). As new platforms usurp the old way of doing things (NetWare to Windows to VMware to SaaS), it is not typically the existing data protection behemoths that are first to protect the new platform. Instead, it is often smaller, privately held innovators who are willing to do the extra work and "protect the un-protectable." And in most cases, those early innovators ended up leading the next generation, even as the platforms' APIs eventually made standardized backups possible for anyone.

  • ARCserve didn’t back up midrange systems, but led NetWare’s backup market
  • Backup Exec and CommVault weren't overly known for backing up NetWare, but dominated the early years of protecting Windows NT and, later, Windows Server
  • Veeam didn't back up anything before VMware, but it is the de facto VM-specific solution to beat today

So, as traditional workloads like file/collaboration and email move from on-premises servers to cloud services like Office 365, Google Apps, and Salesforce, there will likely emerge new dominant innovators that could put all of the legacy solutions on notice. That dominance has historically been based on two things: 1) early brand awareness in the space, and 2) influence on the platform provider that the rest of the backup ecosystem will eventually depend on.

So, I recently took the opportunity to visit with Jeff Erramouspe, CEO of Spanning Cloud, to hear his thoughts on SaaS backup:

Thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

vBlog: Everyone Should Archive (period)

  • Even if you aren’t in a highly regulated industry.
  • Even if you aren’t a Fortune 500 enterprise.
  • Even if you don’t keep data over five years.

Everyone should archive as a means of data management, because storage (both primary and secondary) is growing faster than storage budgets, so you can't keep doing what you have been doing. Here is a video on the simple math of archiving/grooming your data.
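For those who want to play with that "simple math" themselves, here is a minimal sketch in Python; every number in it (starting capacity, growth rate, groomed fraction) is an assumption for illustration, not a figure from the video or from ESG research.

```python
# Hypothetical illustration of the "simple math" of archiving/grooming.
# All numbers are assumed for the example, not ESG data.

def projected_primary_tb(start_tb, annual_growth, years, groomed_fraction=0.0):
    """Project primary-storage capacity, optionally grooming a fraction of
    (stale) data off to a cheaper archive tier at the end of each year."""
    capacity = start_tb
    for _ in range(years):
        capacity *= (1 + annual_growth)          # data keeps growing...
        capacity -= capacity * groomed_fraction  # ...but stale data is archived off
    return capacity

if __name__ == "__main__":
    # Assumed: 100 TB today, 30% annual growth, 3-year horizon.
    no_archive = projected_primary_tb(100, 0.30, 3)
    with_archive = projected_primary_tb(100, 0.30, 3, groomed_fraction=0.25)
    print(f"Primary TB in 3 years, no archiving:   {no_archive:,.0f}")
    print(f"Primary TB in 3 years, 25% groomed/yr: {with_archive:,.0f}")
    # Whatever is groomed off primary also shrinks what has to be backed up,
    # so secondary (protection) storage and backup windows benefit as well.
```

With those made-up numbers, the un-groomed pile more than doubles in three years while the groomed pile stays roughly flat, which is the essence of the budget argument above.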

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Riverbed Announces the End to Excuses in Trying D2D2C

For several months, I've been talking about the inevitability of D2D2C (meaning that data goes from primary/production storage to secondary protection storage and then to a tertiary cloud). In fact, I blogged a few months ago that it seems hard to imagine organizations of any size meeting their recovery SLAs with a straight-to-cloud solution. Instead, the intermediary backup server or appliance provides a fast and flexible local restore capability, while the cloud provides longer-term retention.

But even D2D2C has several permutations, including:

  • Backup-as-a-Service (BaaS) intermediary caching devices sitting in front of the BaaS service itself
  • Traditional backup servers/appliances writing to a cloud tertiary storage tier
  • Traditional backup servers/appliances replicating to a cloud-hosted copy of the backup engine
  • Traditional backup storage/dedupe platforms replicating to a cloud-hosted appliance

And there are other options, all of which equate to D2D2C with various benefits and drawbacks.  But many people are still not convinced to evaluate any D2D2C solution, for a few valid reasons:

  • Security concerns – many folks are justifiably concerned about data privacy in the cloud
  • Complexity concerns – Appliances can be difficult to set up or a challenge to operate
  • Interoperability concerns – Not all backup software works with every appliance or cloud-solution
  • Cloud-Evaluate-ability concerns – Those who haven't started using cloud services yet believe them to be expensive and complicated to configure initially, even for a simple evaluation.

Riverbed, long known for disrupting the status quo through its technology innovations around WAN optimization, has taken that know-how to deliver a backup appliance called SteelStore (formerly Whitewater), which provides fast local recovery and compresses/deduplicates data before it is sent to the cloud. SteelStore provides yet another permutation of D2D2C by offering a deduplication-capable on-premises appliance (physical or virtual) that extends its storage capacity directly from the cloud-storage provider of your choice.

Riverbed's announcement today isn't so much about the technology as it is about how Riverbed is removing your excuses for not trying D2D2C:

  • Security solution – Because the cloud-storage is simply an extension of Riverbed’s own storage container, the data is unreadable without a SteelStore in front of it. And yes, the data is encrypted in-flight and at-rest.
  • Complexity solution – This week’s announcement includes a free virtual appliance. So, nothing to install other than a VMware .OVF file.
  • Interoperability solution – The SteelStore appliance presents itself as a NAS. So any backup software that can write to a file share (all of them) can leverage the SteelStore appliance.
  • Cloud-Evaluate-ability solution – this is the cool one. Riverbed is partnering with Amazon to provide 6 months of AWS S3 storage for the Riverbed evaluation.

Specifically, the free virtual appliance supports 2TB of on-premises deduplicated/compressed storage that is then extended with up to 8TB of cloud storage. So, for the six-month evaluation, Riverbed and Amazon will cover the cost of those 8TB of cloud storage by issuing 48 (8 x 6) TB-month credits; a quick sketch of that math follows the fine print below. Sure, there is some fine print:

  • If you later want a bigger appliance than the 2TB virtual model, Riverbed will be happy to sell you one – but the free appliance is yours.
  • When you finish those six months, Amazon will start charging you for the storage if you continue using it.
  • And you will have to talk to a friendly Riverbed person to qualify for the evaluation and get started with the technology and such.
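As referenced above, here is a minimal back-of-the-envelope sketch of that evaluation math. The 2TB local, 8TB cloud, and six-month figures come from the offer described above; the 5:1 reduction ratio is purely an assumed, illustrative number, and real dedupe/compression results will vary.

```python
# Back-of-the-envelope math for the SteelStore evaluation offer.
# Local (2 TB) and cloud (8 TB) capacities and the 6-month term are from the
# announcement; the 5:1 reduction ratio is an assumption for illustration.

LOCAL_TB = 2            # free virtual appliance, deduplicated/compressed capacity
CLOUD_TB = 8            # cloud-storage extension covered during the evaluation
MONTHS = 6              # length of the free evaluation period
ASSUMED_REDUCTION = 5   # hypothetical 5:1 dedupe/compression ratio

credit_tb_months = CLOUD_TB * MONTHS  # 8 TB x 6 months = 48 TB-month credits
# Treating the cloud tier as additive to the local cache, estimate how much
# logical (pre-dedupe) backup data the free footprint could front.
protected_logical_tb = (LOCAL_TB + CLOUD_TB) * ASSUMED_REDUCTION

print(f"AWS S3 credits issued:              {credit_tb_months} TB-months")
print(f"Logical data protected (estimated): ~{protected_logical_tb} TB at {ASSUMED_REDUCTION}:1 reduction")
```

Swap in your own reduction ratio and retention assumptions; the point is simply that a modest free footprint can front a meaningful amount of logical backup data for the length of the trial.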

Check out their website for more details, but the bigger picture is that there are many who would benefit from a better data protection infrastructure, with deduplication and fast agility on-premises plus a scalable and economical tertiary capability in the cloud. And for those folks (and you all know who you are), you may be out of excuses. In fact, this kind of offer, with a virtual appliance and free Amazon storage, may have single-handedly removed the barrier to evaluating D2D2C more than any other single announcement in 2014 thus far.

So what is your excuse for not trying D2D2C yet? Riverbed may have an answer for that, too.


[Originally posted on ESG’s Technical Optimist.com]