My blog posts and tweets are my own, and do not necessarily represent the views of my current employer (ESG), my previous employers or any other party.

I do not do paid endorsements, so if I appear to be a fan of something, it is based on my personal experience with it.

If I am not talking about your stuff, it is either because I haven't worked with it enough or because my mom taught me "if you can't say something nice ... "

Event Marketing Folks Don’t Get Enough Credit

After 25 years of attending tradeshows and launch events, I can attest that Marketing/Events teams do not get enough credit.

  • Booth folks show up and find a massive display ready for them to click their mouse and start talking to customers. And when the show floor closes, the staff leave and the booth magically dismantles itself.
  • Execs show up to private venues that are full of style and ready to ensure that whatever is discussed will be better remembered, because of the atmosphere around them.

Of course, not all marketing events are awesome (or memorable) – but I wanted to highlight a few recent examples of how to really do a marketing event well:


Launch Event: EMC ProtectPoint in London

This should be a case study for launch events, with what I believe to be the perfect combination of style and substance. The style came in the form of its Doctor Who theme, complete with an unassuming TARDIS (blue police box) outside a nondescript building, which opened onto a tunnel of lights and sound that made you believe you were being transported to a different place and time. On the other end of that tunnel, we learned about “backupless backups” (among other things). The day included:

  • Executive “vision” in the morning
  • Opportunities to meet 1:1 with early-adopter customers in the afternoon
  • Optional technical deep-dives mid-day by product experts

Many launch events include two of those three, but miss the chance to tell the whole story by omitting the third. Throw in a small plastic TARDIS that somehow found its way from one of the many discussion tables to my bookshelf of geekstuff, and it’s an event and a product that will stick with me for a while. See my vBlog from the launch.


Influencer Event: Acronis at VMworld 2014

VMworld, like most major tradeshows, is capped every night with various parties and gatherings. Many of them are in loud venues that may show appreciation to customers, but make it challenging to find colleagues or have a meaningful conversation — and most are bereft of anything memorable.

Acronis took a decidedly different approach that I thought was really smart – they rented out the Cartoon Art Museum in San Francisco. Hint: one reason for IT stereotypes like “we like comics” is because some of us do or did, and all of us “know someone” who likes art. The walls were adorned not just with classic strips like Dick Tracy but also with modern exhibits from recent definitive artists for Captain Marvel, the Avengers, and the Punisher. Even those who didn’t geek out as kids can be impressed by the true “art” when drawings that originally appeared as 3” squares in a monthly comic are blown up to canvas size. Pair that with a relaxed atmosphere to actually talk to folks and a very cool souvenir from a modern-day artist doing caricatures, and it leaves a lasting impression about both the evening and the vendor who sponsored it. PS: for those who think that Acronis is just the image-level migration utility that shipped with your new flash drive, you really need to look again (without blinders).

Sidenote: to be fair, Rackspace had a caricature artist on the show floor. But the difference is that there were likely several folks who got Rackspace caricatures but really didn’t (and still don’t) understand what Rackspace does – a good draw, but hard to convert into a meaningful conversation. The Acronis artist was simply an integrated part of a well-planned event, which makes the artist less of an attraction and the hosting vendor more so.


Customer Event: Veeam Party at VMworld

The annual Veeam party at VMworld is described by some as “legend….ary,” but what I find impressive is the balance that Veeam strikes between its influencer engagement and the broad appreciation it shows its customers and prospects. The main party starts at 8, with admission granted only by the coveted green Veeam necklace – something I can personally attest is rigorously enforced by two very inhospitable bouncers. But for a few, there was a second, smaller event that started a little earlier, included candid conversation with Veeam executives for press/analysts, and gave folks a chance to really understand Veeam’s strategy and aspirations before the big customer event. In this case, the venue itself may not be overly memorable (other than the smart layout for the dual events), but the net effect and company perceptions will be remembered long after the battery in my green necklace expires.


Booth Experience: Zerto

There were many well-attended and well-staffed booths on the VMworld Expo floor, often with presumably knowledgeable subject matter experts and some with charismatic or pretty people to pull you in. PS: never, ever assume that those two groups (experts & hosts) are mutually exclusive – something I was reminded of while talking to an extremely knowledgeable Zerto booth staffer wearing an “I am not a booth bunny” badge, hearing her horrible story of one attendee who insultingly presumed otherwise – and the humorous retort that put him in his place. Those badges alone make for an honorable mention on this list, but what really impressed me was what happened after the expo closed.

I was in a meeting until 20 minutes after the show floor closed. When I came out, carpets were being rolled up, shipping boxes were being packed, and not a single booth staffer could be found – except for the obligatory event managers … and the entire Zerto team. They were in a circle, celebrating each other’s successes throughout the week. Having done show booths for years, I know the norm: your collective staff immediately become individuals when the closing announcement is made. Those with flights run; those without clump together to wind down; but almost never does the entire staff stay – to celebrate each other and learn from each other. That culture doesn’t happen by accident, nor as a one-off gimmick, which makes me wonder what that highly energized, community approach must be like within the Zerto team throughout the rest of the year … and how Zerto’s customers must benefit from it.

[Originally blogged on ESG’s Technical Optimist.com]

EMC RecoverPoint for VMs is another step forward in vAdmin enablement

This week, EMC released RecoverPoint for VMs (RP4VM). For storage administrators, RecoverPoint has long been seen as the seamless synchronous/asynchronous storage replication of choice for EMC storage, to deliver higher levels of resiliency for enterprise workloads. But for virtualization administrators, it was part of the “magic” that made the storage under the hypervisor surprisingly durable – or perhaps not even recognized at all.

With the RP4VM release, enterprise-grade storage replication is now in the hands of the VMware Administrator (vAdmin). RP4VM is made up of three core components:

  • A virtual appliance for replicating to another appliance on a different host
  • An IO splitter that captures disk IO from the hypervisor and splits off a copy for the appliance (see the sketch after this list)
  • A vCenter plug-in for management
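
To make that write-splitting idea concrete, here is a minimal conceptual sketch in Python (all names are mine and entirely hypothetical – this is not RecoverPoint code) of how a splitter tees each guest write to both production storage and the replication appliance:

    # Conceptual sketch only: every guest write is committed to production
    # storage AND a copy is queued for the replication appliance, which
    # would journal it and ship it to a peer appliance on another host/site.
    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class WriteIO:
        vm_id: str    # which VM issued the write
        offset: int   # byte offset on the virtual disk
        data: bytes   # payload being written

    class InMemoryDatastore:
        def __init__(self):
            self.blocks = {}
        def write(self, offset: int, data: bytes) -> None:
            self.blocks[offset] = data

    class IOSplitter:
        def __init__(self, datastore, appliance_queue: Queue):
            self.datastore = datastore              # production storage
            self.appliance_queue = appliance_queue  # feeds the appliance

        def write(self, io: WriteIO) -> None:
            self.datastore.write(io.offset, io.data)  # 1) normal write path
            self.appliance_queue.put(io)              # 2) tee a copy out

    q = Queue()
    splitter = IOSplitter(InMemoryDatastore(), q)
    splitter.write(WriteIO(vm_id="vm-42", offset=4096, data=b"hello"))
    print(q.get().vm_id)  # the appliance side would drain this queue

Because the splitter sits in the hypervisor’s IO path rather than in the array, it doesn’t care what storage sits underneath – which is exactly why any datastore the hypervisor can use is able to participate.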

With this approach, any storage that the hypervisor can use (iSCSI, FC, DAS, vSAN) can be replicated – even to heterogeneous storage on another ESX hypervisor. Via its vCenter plug-in, this solution is designed to be administered by a virtualization specialist instead of a storage specialist. It is another huge example of vAdmin enablement. In broader terms, it is another example of workload owners (e.g. virtualization, database, storage) taking on traditional infrastructure management responsibilities that might previously have been managed by a backup administrator – similar to how EMC recently released ProtectPoint as a way to enable storage administrators to protect their own data.

Both of these releases are HUGE WINS for workload administrators who want to achieve better levels of resilience and recoverability for their various workloads (storage & virtualization). But in both cases, there is another side to the story that should also be told: the need for better communication and shared strategy between the workload owners and the traditional facilitators of infrastructure services – see my previous blog post on the evolution of workload protection.

With ProtectPoint, storage administrators can now protect their own data, but they will likely be more successful and their organizations will be more compliant, if that protection is strategized and then enacted through a partnership between the now-empowered storage administrator and the backup administrator who typically helms that responsibility (and still likely maintains the Data Domain platforms).

Now, with RecoverPoint for VM, virtualization administrators can ensure higher resilience of their virtualized infrastructure, but they will likely be more successful if that resilience is strategized and then enacted through a partnership between the now-empowered virtualization administrator and the storage administrator who traditionally facilitated the underlying capacity for the virtualization infrastructure (and likely still maintains the storage platforms themselves).

Key Takeaway: There are other examples, but the point is – as workload administrators gain new capabilities that were previously helmed by their IT peers, their shared success will come not JUST from the new technology innovations themselves (often as elegant as they are impressive), but also from the shared strategies and collaboration between those who understand the workloads and those who understand the organization’s mandates and expectations of resiliency, recovery, retention, etc.

[Originally posted on ESG’s Technical Optimist.com]

Backup Alone Just Isn’t Enough! (vBlog)

If you haven’t already checked them out, ESG recently started delivering “Video Capsules” – video wisdom in 140 seconds or less: www.esg-global.com/esg-video-capsules.

One of the more recent ESG Video Capsules, “Backup Is No Longer Enough,” discusses how IT organizations of all sizes struggle to achieve the SLAs their business units require if their only recovery solution is a traditional backup product. In fact, when looking at core platforms like server virtualization (VMware & Hyper-V), less than 10% of folks protect their VMs with backups alone; the rest use a combination of snapshots and replication to supplement their backup mechanisms – a strategy consistent with the Data Protection Spectrum that I often discuss.

With average SLAs for all systems (not just “critical” or Tier-One platforms) shrinking to <3 hours or even <1 hour, it’s really hard to diagnose the problem, restore the data set, and resume business within those timeframes. Instead, one should strongly consider combining snapshots, replication, and backup for a comprehensive data protection strategy – ideally using a single management UI to control all of it.
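
As a back-of-the-napkin illustration (the recovery-time numbers below are mine, purely for illustration – not research data), here is a tiny sketch of why backup alone struggles against shrinking SLAs:

    # Illustrative only: assumed recovery times (hours) per mechanism.
    # Real numbers vary enormously by environment and data set size.
    RECOVERY_TIME_HOURS = {
        "restore from backup": 4.0,   # locate, rehydrate, restart
        "revert to snapshot":  0.25,  # near-instant, but same-array risk
        "failover to replica": 0.5,   # remote copy, needs orchestration
    }

    def mechanisms_meeting_sla(sla_hours: float) -> list:
        """Which mechanisms fit inside the recovery SLA?"""
        return [m for m, t in RECOVERY_TIME_HOURS.items() if t <= sla_hours]

    print(mechanisms_meeting_sla(3.0))  # backup may not even make this one
    print(mechanisms_meeting_sla(1.0))  # only snapshots and replicas remain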

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

CommVault announces “You Can Have It Your Way”

There is a famous hamburger chain that used to tout, “You can have it your way,” whereby instead of getting your burger fully-loaded (with all the fixin’s), you can choose whether you wanted pickles, tomatoes, or anything else.

For the last two decades, CommVault has been offering a fully-loaded data protection solution encompassing backup, archiving, replication, snapshots, etc. Over time, and based on customer feedback, it continually added features – just like the burger chains that now add bacon, steak sauce, grilled onions instead of fresh, etc. The challenge was, and is, that not everyone wants their burger fully loaded, nor their data protection solution fully featured.

Often, IT organizations are either intentionally diversifying their data protection tool set (instead of seeking one solution to protect it all) or they are addressing a point problem, where one particular kind of workload needs better protection than the status quo. Unfortunately, a fully loaded solution doesn’t always align with the point-product approach that many IT organizations are looking for – so CommVault did something rather atypical: it segmented Simpana licensing around four solution sets:

  • Virtualization-protection & Cloud-management
  • IntelliSnap (snapshot) recovery
  • Endpoint protection
  • Email archiving

While many companies have a broad spectrum of data protection capabilities, those capabilities often come from running multiple products – even if they are from the same vendor, perhaps even licensed in bundles or suites. CommVault has never had that challenge – as its overall feature-set has grown through its one and only Simpana codebase. It’s always been “fully loaded,” just like the big burger.

So, while some other vendors are trying to converge their products for better interoperability, CommVault is going the other direction – separating out the licensing, while maintaining a single product line. The result is very similar to the burger options:

Everyone gets the same top bun – the management UI – though only the licensed functions will appear within it. When you license new functionality, the new features simply appear in the UI.

Everyone gets the same bottom bun – the Simpana ContentStore – the converged storage infrastructure of tape, cloud, and disk repositories, which varies based on business goals but has always been shared by the higher-level Simpana functions.

Everyone gets the same meat patty – the Simpana engine – most typically used for backups, but in essence a job scheduler, a catalog of recovery points, etc.

Choose your toppings … based on which workloads need better protection and/or which point products you are considering or need to replace.
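
To stretch the analogy into pseudo-real terms, here is a loose sketch (module and feature names are mine, not CommVault’s) of how one codebase can surface only the licensed “toppings” in a shared UI:

    # One platform, many licenses: the engine and ContentStore are shared,
    # and each licensed module simply lights up its features in the UI.
    # Module/feature names are illustrative, not actual Simpana components.
    ALL_MODULES = {
        "virtualization": ["VM backup", "cloud management"],
        "intellisnap":    ["array snapshot orchestration"],
        "endpoint":       ["laptop/desktop protection"],
        "email_archive":  ["mailbox archiving"],
    }

    def build_ui(licensed_modules):
        features = ["job scheduler", "recovery-point catalog"]  # the patty
        features.append("ContentStore (disk/tape/cloud)")       # bottom bun
        for module, feats in ALL_MODULES.items():
            if module in licensed_modules:
                features.extend(feats)                          # toppings
        return features                                         # same top bun

    print(build_ui({"intellisnap"}))   # a point-product buyer
    print(build_ui(set(ALL_MODULES)))  # the fully loaded burger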

Certainly, when adapting a data protection product to be utilized by secondary IT pros (those seeking point solutions, such as storage, virtualization, archive, or endpoint administrators), there will be some adjustments in the sales dialog, as well as in the evaluation and adoption cycles. On the upside, this allows secondary IT decision makers to choose Simpana for their particular workloads, while the backup administrator will still have the skills and tools to help those IT professionals be successful in protecting their organizations’ data – see my earlier blog on workload-centric protection enablement.

In short, by offering an à la carte way to consume Simpana, CommVault continues to demonstrate its willingness to adapt to customer demand – without sacrificing the single core platform that it has built its success upon through many years of data protection evolution.

[Originally posted on ESG’s Technical Optimist.com]

What I am looking for at VMworld 2014 … a Data Protection Perspective

For the past few years, the big data protection trend in virtual environments was simply ensuring reliable backups (and restores) of VMs. That alone hasn’t always been easy, but with VMware’s newer data protection APIs (VADP), it is becoming table stakes – with the real differentiation coming from the agility of the restore (speed and granularity), as well as manageability and integration.

And while there is certainly still a lot of room for many vendors to improve in those areas, the industry overall needs to move past the original question of “Can I back up your VM?” and even past “How quickly can I restore your VM?”

The new questions to be answered are:

Does your data protection solution understand which VMs should be protected and how?

How protection/recovery-enabled is your Virtualization Administrator?

The answer to the latter question may in fact inform the former, in that a Backup Administrator isn’t always the best person to determine how the VMs should be backed up – because they don’t know what is running in those VMs. The only folks who really know are the folks who provisioned those VMs in the first place, and those are typically not the backup admins … they’re the virtualization administrators.

I covered that in some detail in a TechTarget article – discussing that the provisioning process is the right place to quantify how the VM should be protected, including retention length and RPO/RTO, which would then affect how the data protection process(es) are enacted. Maybe the provisioning process links directly to the backup engine, or the snapshot engine, or the replication engine, or ???
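
A rough sketch of that idea (the hook and engine names here are hypothetical – not any vendor’s API): whoever provisions the VM declares its protection requirements, and the hook registers the VM with the appropriate engine(s):

    # Hypothetical provisioning hook: protection requirements are captured
    # at provision time and dispatched to the right engine(s).
    from dataclasses import dataclass

    @dataclass
    class ProtectionPolicy:
        retention_days: int
        rpo_minutes: int   # max tolerable data loss
        rto_minutes: int   # max tolerable downtime

    def engines_for(policy: ProtectionPolicy):
        engines = ["backup"]               # baseline: everything gets backed up
        if policy.rpo_minutes <= 60:
            engines.append("snapshot")     # tight RPO -> frequent snapshots
        if policy.rto_minutes <= 60:
            engines.append("replication")  # tight RTO -> warm standby replica
        return engines

    def provision_vm(name: str, policy: ProtectionPolicy) -> None:
        # ... create the VM via the virtualization platform (elided) ...
        for engine in engines_for(policy):
            print(f"registering {name} with the {engine} engine")

    provision_vm("sql-prod-01", ProtectionPolicy(retention_days=2555,
                                                 rpo_minutes=15,
                                                 rto_minutes=30))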

Remember, “data protection” is not synonymous with “backup” – especially as it relates to server virtualization. In fact, when ESG asked IT professionals how they protect VMs, less than 10% stated that they used only VM-centric backup mechanisms. The other 90%+ used snapshots, replication, or both in combination with VM-centric backups, as reported in ESG’s Trends in Protecting Highly Virtualized Environments in 2013.

[Figure: VM-protection methods – from ESG Research Report, Trends in Protecting Highly Virtualized Environments, 2013]

Short of augmenting the VM provisioning process to include data protection, the next best answer is to enable data protection management from within the virtualization administrators’ purview – because those folks understand the business requirements of the VMs. That doesn’t always mean ensuring that your data protection (snapshot/backup/replication) tool has a vCenter plug-in, though that helps. It does mean:

Have you truly built your data protection product or service to understand highly virtualized environments?

Is the solution VM-aware (per VM or VM-group), or simply hypervisor host-centric?

Are the management UIs (standalone or plug-in) developed with the virtualization administrator in mind? Or are they backup UIs that you hope the virtualization administrator will learn?

And of course, how agile is the restore? How fast? How granular? How flexible to alternate locations (other hosts, other sites, other hypervisors, cloud services)?

Yes, it’s a long list of questions – and I expect to be very busy at VMworld 2014, trying to find the answers from the exhibiting vendors, as well as from VMware who enables them.

[Originally posted on ESG’s Technical Optimist.com]

What if Cloud-Backup-Storage was Free?

Not backup-as-a-service, but just cloud storage that could be used to supplement a backup. Sure, there are a lot of STaaS (storage-as-a-service) folks who will give you a small amount of capacity to try their platform, knowing full well that you are going to want more and be willing to pay for it.

But there used to be a company that would give you as much storage in the cloud as you wanted – Symform. Symform was my 2011 “Coolest Disruptive Technology that most folks hadn’t heard of yet” award winner.

Essentially, Symform would make a copy of some subset of your data, encrypt it, shatter it into 64 chunks, add 32 parity chunks, and then scatter those pieces across the Internet – specifically, onto 96 remote/nameless locations. As long as any 64 of the 96 locations were accessible, your data was accessible in the cloud. The catch: you needed to offer some of your local storage to hold 1/96th of other folks’ data. So you could quite literally purchase a 3TB USB hard drive from a local retailer, throw it in/on your server, and then get 3TB of cloud-based storage – I know, because I did it in my own environment. Sure, if you needed more capacity than you were willing to offer, you could pay for it – but the idea that I could buy cheap, slow local storage and gain durable, tertiary cloud-based storage was really interesting.
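
That 64-of-96 scheme is classic erasure coding: 50% storage overhead (96 chunks stored for 64 chunks of data) instead of the 200% overhead of keeping three full copies. The durability math is what made it so compelling – a quick sketch (my arithmetic, not Symform’s published figures):

    # Probability that at least k of n independent locations are reachable,
    # each with availability p -- the binomial tail for a k-of-n scheme.
    from math import comb

    def availability(n: int, k: int, p: float) -> float:
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # Even if each anonymous peer is only up 90% of the time, needing any
    # 64 of 96 fragments keeps unavailability around one in ten billion:
    print(availability(96, 64, 0.90))  # ~0.9999999999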

The reason I bring up Symform is that they were just acquired by Quantum, which also offers deduplicated disk (DXi appliances, both virtual and physical), tape solutions, and another cloud offering (Q-Cloud). We’ll see if the mechanics and service model change now that they are part of a broader portfolio of data protection offerings from a commercial vendor, but the idea that such a different and potentially exciting technology will now be available to a much broader set of customers and service providers will be interesting to watch. Congrats to Quantum.

As always, thanks for reading.

[Originally posted on ESG’s Technical Optimist.com]

A Replication Feature is not a Disaster Recovery Plan

A few years ago, I blogged “Your Replication is not my Disaster Recovery,” where I lamented that real BC/DR is much more about people/process than it is about technology.

To be clear, I am not bashing replication technologies or the marketing folks at those vendors … because without your data, you don’t have BC/DR; you have people looking for jobs.

But that does not mean that if you have your data remotely, you have a BC/DR plan. Having “survivable data” means that you have the IT elements necessary either to roll up your sleeves and attempt to persevere, or (preferably) the means by which to invoke a pre-prepared set of BC/DR mitigation and/or resumption activities.

BC/DR is not a “feature” or a button or a checkbox in a product, unless those elements are part of invoking orchestrated IT resumption processes within a broader organizational set of cultural and expertise-based approaches to resuming business, not just restarting/rehosting IT.

Replication needs to be part of every data protection plan, to ensure agility for recovery – and often to increase the overall usability/ROI of one’s data protection infrastructure by enabling additional ways to leverage the secondary/tertiary data copies. Replication, whether object-, application-, or VM-based, and whether hardware- or software-powered, is also the underpinning of “survivable data.” Only after you have “survivable data” can you begin real BC/DR planning.

As always, thanks for reading.

[Originally posted on ESG’s Technical Optimist.com]

EMC announces another step towards backupless-backups

Last week in London, EMC made several announcements – many of which hinged on the VMAX3 platform – but the one of most interest to me was ProtectPoint, whereby those new VMAX machines will be able to send backup data directly from production storage to protection storage (EMC Data Domain) without an intermediary backup server.

I mentioned this in my blog last week as an example of how, while “backup” is evolving, those kinds of evolutions require that the roles of both the Backup Administrator (who should now be thought of as a Data Protection Manager/DPM) and the Storage Administrator (or any other workload manager becoming able to protect their own data) evolve as well. Enjoy the video:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]

Workload-enabled Data Protection is the Future … and that is a good thing

When asked about “what is the future for datacenter data protection,” my most frequent answer is that DP becomes less about dedicated backup admins with dedicated backup infrastructure … and more about DP savvy being part of the production workload, co-managed by the DP and workload administrators.

  • In the last few years, we’ve seen a few examples of that, with DBAs using Oracle RMAN to do backups that aren’t rogue efforts outside of corporate data protection mandates, but in concert with them – and that are stored in the same deduplicated solution as the rest of the backups (e.g. DD Boost for Oracle RMAN).
  • More recently, we are seeing more examples of VMware administrators getting similar functionality, including not only VMware’s own VDP/VDP Advanced, but also traditional backup engines that are controlled through vCenter plug-ins to give the virtualization admin their own solution.

EMC’s announcement of ProtectPoint is another step in that evolutionary journey, enabling VMAX production storage to write directly to Data Domain protection storage, thereby giving yet another group of IT pros more direct control of their own protection/recovery destiny, while at the same time extending the agility and sphere of influence of data protection professionals.

To be clear, as workload-owner enablement continues to evolve, the role of the “Data Protection Manager” (formerly known as the “backup administrator”) also evolves – but it does not and cannot go away. DPMs should be thrilled to be out of some of the mundane aspects of tactical data protection, and even more elated that technology innovations like snap-to-dedupe integration, application integration, etc. create real partnerships between the workload owners and the data protection professionals. And it does need to be a partnership, because while the technical crossovers are nice, they must be coupled with shared responsibility.

If the legacy backup admin simply abdicates their role of protecting data to the workload owner, because they now have a direct UI, many backups will simply stop being done – because the tactical ability to back up and the strategic mindset of understanding business and regulatory retention requirements are very different. The “Data Protection Manager” should be just that, the role that manages or ensures that data protection occurs – regardless of whether they enact it themselves (using traditional backup tools) or enable it through integrated data protection infrastructure that is shared with the workload owners.

Some naysayers will be concerned that as workload owners gain tools that enable their own backups, the DP admin role diminishes – but there is a wide range of behaviors enabled by this evolution:

  • Some workload owners will wholly take on the DP mantle, but the DP manager will still need to “inspect what they expect” so that corporate retention and BC/DR mandates still happen.
  • Some workload owners will be grateful to drive their own restore experiences, but happily rely on the DP managers to manage the backups beforehand.
  • Some workload owners will recognize that they are so busy managing the workloads that the DP admins will continue to do the backups and restores – but now with better backups/snaps that continue to be even more workload-savvy.

And there are likely other variations of those workload owner/DP manager partnerships beyond these. But any way you look at it, the evolution and collaboration of workload-enhanced data protection, shared between the workload owner(s) and the data protection managers, is a good thing that should continue.

[Originally posted on ESG’s Technical Optimist.com]

Data Protection Impressions from the Dell Analyst Conference 2014

I recently had the opportunity to attend the Dell Annual Analyst Conference (DAAC), where Michael Dell and the senior leadership team gave updates on their businesses and cast a very clear strategy around four core pillars:

  • Transform — to the Cloud
  • Connect — with mobility
  • Inform — through Big Data
  • Protect

Protect?! YAY!!  As a 25-year backup dude who has been waiting to see how the vRanger and NetVault products would be aligned with AppAssure and the Ocarina-accelerated deduplication appliances, I was really jazzed to see “Protect” as a core pillar of the Dell story.

But then the dialog took an interesting turn:

As always, thanks for watching.

[Originally posted on ESG’s Technical Optimist.com]