Disaster Recovery after Hurricane Sandy

Written by Tim Haight

I'm VP of Technology Services for CGNET. I love to travel and do IT strategic planning.

January 24, 2013

Since Hurricane Sandy caused widespread damage last October, a number of our clients have expressed renewed interest in locating their backup data centers in sunny California. While this is a perfectly good alternative, has Sandy really changed our standards of data center disaster recovery, so that the wisest solution is to separate your main and backup data centers by thousands of miles? Yes and no.

For those of us capable of thinking probabilistically, we should note that Hurricane Sandy had the largest diameter of any hurricane on record: 1,100 miles. It did damage in 24 states, plus many of the islands in the Caribbean. Thus, it is unlikely that such a storm will happen again soon. But – and this is how a lot of people think – we now really know it's not impossible.

So what do we do? One idea that has surfaced is that geographical separation is not enough. People are paying more attention to the resilience of each site. Does it have a backup generator? Is it located high enough in the building to avoid the effects of a flood? Does it have lots and lots of fuel, so it can keep going for a really long time? What electrical power grid is it on, compared to its counterpart in the home office? What is the local electrical utility’s reputation for recovering from outages?

These are valid questions, and the main casualty of discussing them is probably the strategy of locating your disaster recovery servers in a remote office instead of a robust co-location center.

But given that the sites are robust, how far apart should they be? Before Sandy, the conventional wisdom in the DR industry was that data centers should be at least 105 miles apart to avoid the effects of a single hurricane. Sandy's footprint was far larger than that.

What To Do?
I can think of two approaches to getting beyond the traditional numbers. First, you can go down the list of disasters and really analyze each prospective site on the basis of how likely it is to be affected by each disaster: Hurricane, volcano, snow/sleet/ice, earthquake, tsunami, forest fire, military attack, power grid failure, tornado, telecommunications failure, etc. You could then build a profile of each site and choose remote sites with totally different profiles than the home site.
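That profile-matching idea can be sketched in a few lines of code. Everything below is illustrative: the hazard list, the site names, and the 0–5 risk scores are hypothetical placeholders, not real assessments of any location. The key measure is the risk two sites *share*, since that is what takes both down at once.

```python
# Hypothetical risk scores (0 = negligible, 5 = severe) for each site.
# The hazards, sites, and numbers are made-up examples for illustration.
HAZARDS = ["hurricane", "earthquake", "tsunami", "wildfire", "tornado", "grid_failure"]

SITES = {
    "home_ca":   {"hurricane": 0, "earthquake": 5, "tsunami": 2, "wildfire": 3, "tornado": 0, "grid_failure": 2},
    "remote_nj": {"hurricane": 4, "earthquake": 1, "tsunami": 1, "wildfire": 1, "tornado": 1, "grid_failure": 3},
    "remote_tx": {"hurricane": 3, "earthquake": 1, "tsunami": 0, "wildfire": 3, "tornado": 4, "grid_failure": 2},
}

def shared_risk(a, b):
    """Sum, over all hazards, of the smaller of the two sites' scores:
    a rough measure of the risk both sites face simultaneously,
    which is what a DR pair wants to minimize."""
    return sum(min(a[h], b[h]) for h in HAZARDS)

home = SITES["home_ca"]
best = min((name for name in SITES if name != "home_ca"),
           key=lambda name: shared_risk(home, SITES[name]))
```

The point of using `min` of the two scores (rather than, say, distance alone) is that a hazard only threatens the pair when it is serious at *both* sites; a quake-prone home site pairs well with a hurricane-prone remote one.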

For us, for example, earthquakes are a major consideration, so our remote data center is on a different tectonic plate, about 80 miles away. California isn't noted for its volcanoes, hurricanes, and tornadoes, although tsunamis, forest fires, and electrical failures are not unknown.

The second approach is to ask: why not simply locate the disaster recovery site very far away? The first question then is whether you want your staff to personally maintain any of the remote equipment. It may be worth working out a new strategy that doesn't require this. The days of the co-location facility may really be giving way to the era of the cloud.

It also means thinking through the recovery procedures and seeing whether relative proximity of the backup really makes much difference, given today’s bandwidth and courier service standards. A third question may involve the skill sets of the people running the remote data center.
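A quick back-of-the-envelope calculation shows how to test whether proximity still matters for recovery. The data size, link speed, and efficiency figure below are assumptions chosen for illustration, not measurements from any real environment.

```python
# Rough comparison: restoring backups over the WAN vs. couriering disks.
# All inputs here (10 TB, 1 Gbps, 70% sustained efficiency) are
# illustrative assumptions.

def wan_transfer_hours(data_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to move data_tb terabytes (decimal TB) over a link_mbps
    link at the given sustained efficiency."""
    bits = data_tb * 8e12                    # terabytes -> bits
    usable_bps = link_mbps * 1e6 * efficiency
    return bits / usable_bps / 3600

# Example: 10 TB of backups over a 1 Gbps link at 70% efficiency
# comes out to roughly a day and a half -- in the same ballpark as an
# overnight courier, which suggests distance matters less than it
# used to, provided the bandwidth is there.
hours = wan_transfer_hours(10, 1000)
```

If the WAN restore comes in well under the courier's door-to-door time, a distant site costs little; if not, relative proximity (or a bigger pipe) is still worth paying for.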

At this point, the bottom line is probably that more separation is better, unless there are major trade-offs, and that the resiliency of each site matters even more. But what do you think?
