By Georg Lindsey
(Originally published February 15, 2007)
For many of our clients, high availability and disaster recovery have become top priorities, so here is a model incorporating some of the most interesting developments.
The big shift is to online replication of key services to remote locations, usually over the Internet. The idea is to have redundant capacity in a remote location, so that demand across the organization’s network can be shifted to the safe, remote location in the event of a disruption at headquarters.
Two huge forces have been driving this: the increasing desire for as little interruption as possible of critical applications, and the emergence of technologies that make this affordable, such as much cheaper bandwidth and storage, virtualization, and competitively priced replication software.
Two basic choices must be made in designing such a system. First, do you want near instantaneous failover, or is a slower, less expensive recovery acceptable? Second, are you going to locate the replica in your own facilities, such as a branch office, or at a managed service provider?
Combining these choices yields four alternatives:
1. Storage replication to a branch office
2. Storage replication to a managed service provider
3. Rapid application failover to a branch office
4. Rapid application failover to a managed service provider
Today, storage replication often involves storing “snapshots” of virtual machines. Restoring a VM snapshot restores the entire “server,” current to the instant the snapshot was taken, which eliminates the need to reinstall the server software, reapply patches, and reconfigure it against separately stored data.
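The snapshot idea can be sketched in a few lines of Python. This is a toy model, not any vendor's API: the `Server` class and its fields are invented purely to show that one restore step brings back software, configuration, and data together.

```python
import copy

class Server:
    """Toy model of a server whose full state can be snapshotted and restored."""
    def __init__(self):
        # Everything that defines the "server": OS patches, config, and data.
        self.state = {"os_patches": [], "config": {}, "data": []}

    def snapshot(self):
        # Capture the entire server state at this instant.
        return copy.deepcopy(self.state)

    def restore(self, snap):
        # A single restore step brings back patches, config, and data together.
        self.state = copy.deepcopy(snap)

server = Server()
server.state["data"].append("order-1001")
snap = server.snapshot()                   # point-in-time image of the whole "server"
server.state["data"].append("order-1002")  # written after the snapshot
server.restore(snap)                       # only work up to the snapshot survives
```

Note that "order-1002", written after the snapshot, is gone after the restore; that loss window is exactly the gap discussed next.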
Regardless of whether you replicate VM snapshots or other backups, such as those from a SAN or NAS, storage replication involves a period of rebuilding servers at the remote location, plus a gap between the last stored data and the moment of the outage; any work done in that gap is lost. As a result, recovery from a major failure can take a day. Furthermore, the process is difficult to rehearse and test without disrupting the live system.
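The arithmetic behind that gap is simple. The figures below (a six-hour snapshot schedule, an outage at 16:30) are hypothetical numbers chosen for illustration, not measurements:

```python
# Hypothetical timeline, in hours since midnight: snapshots every 6 hours,
# outage at 16:30. Work done after the last snapshot is lost.
snapshot_interval_h = 6
snapshot_times = list(range(0, 13, snapshot_interval_h))  # [0, 6, 12]
outage_time_h = 16.5

last_snapshot_h = max(t for t in snapshot_times if t <= outage_time_h)  # 12:00
data_loss_window_h = outage_time_h - last_snapshot_h  # the "gap" in the text
print(data_loss_window_h)  # 4.5 hours of work lost, before rebuilding even starts
```

Shortening the snapshot interval narrows the loss window but raises bandwidth and storage costs, which is the trade-off that motivates the dynamic-replication approach below.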
High-availability dynamic replication uses application-aware software to update a full replica server on a transaction-by-transaction basis. Failover therefore takes only a few minutes, and very little data is lost. Some products also allow the replication to be tested while the application is running, so recovery can be verified in advance. The downside is that high-availability dynamic replication remains more expensive than storage replication, even though its cost has dropped significantly from earlier versions.
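The contrast with snapshots can be sketched as a toy primary/replica pair. Again, the classes and transaction strings here are invented for illustration; real products ship transactions over the network with acknowledgment and conflict handling omitted here:

```python
class ReplicatedStore:
    """Toy primary/replica pair: each transaction is applied locally and
    immediately forwarded to the replica (application-aware replication)."""
    def __init__(self):
        self.primary = []
        self.replica = []

    def commit(self, txn):
        self.primary.append(txn)
        self.replica.append(txn)  # shipped per transaction, not per snapshot

    def failover(self):
        # Promote the replica: it is current up to the last shipped transaction.
        return list(self.replica)

store = ReplicatedStore()
for txn in ["deposit 100", "withdraw 40", "deposit 25"]:
    store.commit(txn)
promoted = store.failover()
print(len(promoted))  # 3 -- no committed transactions lost at failover
```

Because the replica trails the primary by at most one in-flight transaction rather than by hours, the loss window from the snapshot example all but disappears.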
If a remote location, such as a branch office, is available, it may provide an inexpensive site for the backups or replicas. This assumes, of course, that the location’s space, cooling, and bandwidth are already provided for as part of the existing facility.
Alternatively, a managed service provider can host the backups or replicas. This provides professional hosting and an on-site 24/7 engineering staff ready to implement the solution, which can mean more reliable service and, in some cases, more rapid recovery. This solution does involve monthly charges for the managed hosting, however.
CGNET has implemented all of these alternatives and would be happy to assist in designing or implementing one for your organization.
Georg Lindsey is CEO of CGNET. He can be reached at g.lindsey@cgnet.com.