Eliminate Branch Office Backups to Simplify Disaster Recovery

by Michael Fiorenza on February 27, 2018

There's nothing like disaster recovery planning to force you to think about what data is actually valuable to your business. If you wouldn't recover it after the hurricane, it's also probably not worth preserving in the first place.

In fact, a lot of IT organizations don't even understand what they own until they start trying to develop their disaster recovery strategy. They may have a list of servers and applications, but they can't always state what value those assets are delivering to the business.

As they compare what's in production to what's coming in from other environments and map out how their applications retrieve data from different storage sources, they also realize just how much tooling and effort they're duplicating. To illustrate, let's imagine a PowerPoint file that multiple users are creating to present to a company's board. It starts as a presentation template, and the file is accessed by navigating to the company's cloud-based file storage solution.

  • Human resources, IT, finance, and risk management all download the template and save it locally. The users working on these files are in branch offices distributed around the country.
  • Each user creates their own slides to add to the presentation and uploads them to the cloud-based file storage folder. The VP of communications puts the slides together in a single presentation and saves the new file version in the namespace.
  • The CEO reads through the presentation and sends notes back to each team. Each team downloads the combined presentation, saves it locally and makes changes to their own slides, but they also notice inconsistencies in other teams' slides. For instance, the CFO notices that HR has used the wrong name on their slide for the new plant manager in Guatemala. He updates their slide in his copy of the file, uploads his changed copy and assumes the change will end up in the authoritative file.
  • The VP of communications takes everyone's edited files and, once again, assembles a combined presentation. However, she doesn't realize the CFO made changes to HR's slides. She sends the PowerPoint file to the company's copy director and copy editor for editing, but she continues to make her own edits in case the copy team misses them. Now there are three copies of the presentation, all inconsistent with one another. It's anyone's guess which version ends up in front of the board.

It's not just the duplication of effort here that's a disaster. It's the number of copies of this file, in all its versions, that now exist at branches all over the world. How many versions of files like this are stored in branches all over this enterprise? And does the right version get recovered after the hurricane, to be given to regulators or attorneys who ask to see it?

Finding an Easier Way

Businesses pursue opportunities all over the world, and the world is a complicated place. Between accelerating extreme weather events, pockets of political unrest, and the constant need to avoid downtime, businesses have to set priorities for what they'll restore and how quickly each service gets put back together.

Because it's typically Tier 2 or Tier 3 in priority, a lot of unstructured data isn't in that first wave of recovery time objective (RTO) targets. Most of that data (we estimate between 60 and 80 percent) is stored in branch offices. As we saw in the illustration above, there's limited transparency on which version of a file is the authoritative one to recover. Resources get poured into backing up multiple versions of so many files, most of which don't need to be kept at all.

If you've provisioned a few virtual servers per branch that are dedicated to local backup, you can clone those backups at your best available recovery point and restore the branch office environment. But keeping those copies on hand requires a portion of your resources, not only for data management but also to pay your cloud provider's rates for data egress. Why pay to snapshot 20 versions of a file, pull the changes into your public cloud environment, and perform full periodic backups on them, when it's much easier and more cost-effective to back up one authoritative version of every file?
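
To see why that matters, here's a back-of-the-envelope sketch in Python. Every number in it is made up purely for illustration; the point is only how quickly per-version costs multiply compared with backing up a single authoritative copy.

    # Back-of-the-envelope sketch with made-up numbers, not real pricing. It only
    # illustrates the argument above: backing up one authoritative copy instead of
    # every branch version shrinks the data you pay to move and store.
    versions_per_file = 20        # hypothetical: branch copies of the same file
    files = 100_000               # hypothetical: files of this kind across the enterprise
    file_size_gb = 0.05           # hypothetical: a 50 MB presentation
    egress_rate = 0.09            # hypothetical cloud egress rate, $ per GB
    backup_rate = 0.02            # hypothetical backup storage rate, $ per GB-month

    def monthly_backup_cost(copies_per_file: int) -> float:
        data_gb = copies_per_file * files * file_size_gb
        return data_gb * (egress_rate + backup_rate)

    print(f"Every branch copy backed up:  ${monthly_backup_cost(versions_per_file):,.0f} per month")
    print(f"One authoritative copy only:  ${monthly_backup_cost(1):,.0f} per month")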

Caching Saves Cash

At any given time, we estimate that, at most, 10 percent of your unstructured data is active. Why dedicate resources to data that no one is using most of the time? We developed our Intelligent File Cache, which connects to the Talon FAST™ fabric, as a way to help enterprises support a more centralized data storage strategy. By eliminating branch backups, along with much of your branch server complexity, it also simplifies the processes and lowers the costs associated with your disaster recovery strategy.

When you connect your distributed environments to the FAST™ fabric, the FAST™ core instance back at your data center extends centralized file shares to each distributed location. Users then work on large file sets hosted on these file shares, with local file copies stored in the Intelligent File Cache. While a user works on a file, it's locked; no one else can access it to make changes. The FAST™ fabric keeps the cache current as the user edits, streaming only the changes back to the data center to update the authoritative copy.
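
As a rough illustration of that pattern, here's a minimal Python sketch of central locking plus block-level change detection. It's a conceptual model only, not Talon's implementation or API, and the block size, class, and function names are all hypothetical.

    # Conceptual sketch only, not Talon's implementation or API. It illustrates the
    # pattern described above: lock a file centrally while someone edits it, keep a
    # copy in the local cache, and stream only the changed blocks back to the
    # authoritative copy. The block size and names are hypothetical.
    import hashlib

    BLOCK_SIZE = 64 * 1024  # hypothetical fixed block size

    def block_hashes(data: bytes) -> list:
        """Split a file into fixed-size blocks and hash each one."""
        return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(data), BLOCK_SIZE)]

    def changed_blocks(cached: bytes, edited: bytes) -> dict:
        """Return only the blocks that differ between the cached and edited copies."""
        old, new = block_hashes(cached), block_hashes(edited)
        return {i: edited[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
                for i in range(len(new))
                if i >= len(old) or new[i] != old[i]}

    class CentralShare:
        """Stand-in for the central file share: the authoritative copy plus a lock."""
        def __init__(self, contents: bytes):
            self.contents = bytearray(contents)
            self.locked_by = None

        def lock(self, user: str) -> bool:
            if self.locked_by is None:
                self.locked_by = user
                return True
            return False  # someone else is editing; no concurrent changes

        def apply_changes(self, user: str, changes: dict, new_size: int) -> None:
            assert self.locked_by == user, "file must be locked by the writer"
            self.contents = self.contents.ljust(new_size, b"\0")[:new_size]
            for i, block in changes.items():
                self.contents[i * BLOCK_SIZE:i * BLOCK_SIZE + len(block)] = block
            self.locked_by = None  # release the lock once the update lands

    # A branch user edits a cached copy and streams only the delta back.
    share = CentralShare(b"quarterly board deck " * 10_000)
    cached = bytes(share.contents)                 # local copy held in the branch cache
    edited = cached[:100] + b"UPDATED SLIDE" + cached[113:]
    if share.lock("cfo-laptop"):
        delta = changed_blocks(cached, edited)     # typically far smaller than the file
        share.apply_changes("cfo-laptop", delta, len(edited))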

The Intelligent File Cache is maintained by an edge FAST™ instance, where it's stored and automatically managed on an NTFS volume. Integration with the NTFS file system makes the cache simple to manage, and it can quickly scale to meet the needs of a branch office's growing workforce (or workload). Managing the Intelligent File Cache on an NTFS volume also keeps the active dataset close to the user. This setup allows for immediate access to important files and projects with the low-latency experience of working with a local file server.

Get more specifics on how the FAST™ fabric works.

When data in the Intelligent File Cache is no longer active, it's automatically purged from the cache. You simply don't need local storage and backups the way you once did; there's an authoritative copy of each file stored centrally, and the cache always has room for more active data when users need to work with it.
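
If it helps to picture that purge behavior, here's a minimal Python sketch of a size-bounded, least-recently-used file cache. It's a conceptual model only, not the FAST™ cache's actual logic, and the capacity figure is hypothetical.

    # Conceptual sketch only, not the FAST(tm) cache's actual logic. It models the
    # behavior described above: the cache keeps recently active files on a local
    # volume and purges cold files to make room for new active data.
    from collections import OrderedDict

    class BranchFileCache:
        """Least-recently-used cache of file contents with a fixed byte capacity."""
        def __init__(self, capacity_bytes: int):
            self.capacity = capacity_bytes
            self.files = OrderedDict()   # path -> cached contents, coldest first
            self.used = 0

        def get(self, path: str):
            """Return a cached file and mark it as recently active."""
            if path in self.files:
                self.files.move_to_end(path)   # most recently used moves to the back
                return self.files[path]
            return None                        # cache miss: fetch from the central share

        def put(self, path: str, contents: bytes) -> None:
            """Cache a file fetched from the central share, purging cold files if needed."""
            if path in self.files:
                self.used -= len(self.files.pop(path))
            while self.files and self.used + len(contents) > self.capacity:
                cold_path, cold_data = self.files.popitem(last=False)  # evict the coldest file
                self.used -= len(cold_data)    # safe: the authoritative copy lives centrally
            self.files[path] = contents
            self.used += len(contents)

    cache = BranchFileCache(capacity_bytes=10 * 1024**3)   # hypothetical 10 GB cache on the local NTFS volume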

If 80 percent of a company's data is unstructured, and 60 to 80 percent of that unstructured data is stored at the branch, then roughly half to two-thirds of all company data sits in branch offices. Moving to a centralized and consolidated model means eliminating a lot of hardware and licensing costs. It also eliminates the need for local backups, which means eliminating a lot of effort spent preserving what's not needed. We've seen our clients cut their storage costs by as much as 70 percent by using the FAST™ fabric and Intelligent File Cache. It's storage agnostic, runs on Windows Server 2012 and above, and uses commodity hardware, so it works within the traditional data center or cloud-based file storage environment you already have. You can keep using the FAST™ fabric as your storage strategy evolves; you're never locked into any particular hardware vendor or cloud services provider.

Learn More About FAST™

Disaster recovery is just one process that the FAST™ fabric makes simpler. Think how many processes, like disaster recovery, become easier when you have fewer machines and fewer files to deal with. For a case study of how eliminating branch backups worked for one global business, check out the work we did with Capita Property & Infrastructure, a company with 68,000 staff across the UK, Europe, South Africa, and India.