For decades, disaster recovery has mostly followed a simple model: an organization copies data from its primary data center and stores it somewhere else (often on tape) for safekeeping, in case something goes wrong.
Many organizations continue to think of DR in those terms, but the practice has moved forward. In fact, even the objectives behind DR have evolved. Disaster recovery isn’t really about responding to disasters these days: natural disasters probably rank relatively low on the list of reasons an organization might need to invoke its DR tools and plans, behind factors such as ransomware, data corruption and simple performance issues in the IT environment. DR requirements have also changed with the emergence of cloud software, increased interdependencies among applications and other industry trends.
To ensure that DR tools and plans meet their organization’s current needs, IT leaders should assess their strategies to identify these potential gaps.
Gap 1: Alignment with Cloud Strategy
The placement of IT workloads is changing, and DR strategy needs to change with it. Consider Software as a Service, for instance. In theory, SaaS providers should offer DR services for the applications they host, but that’s not always the case. And a SaaS provider’s recovery time objectives and recovery point objectives may not meet the business needs of the organization.
While a shortcoming in disaster recovery may not by itself be enough to torpedo a SaaS deal, DR strategy should at least be part of the conversation when moving resources to the cloud. That way, business and IT leaders will understand the level of risk involved with the move and will have enough information to make the right decision for the organization.
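As a simple illustration, weighing a provider’s stated objectives against business needs comes down to comparing two numbers for each metric. The figures below are entirely hypothetical:

```python
from datetime import timedelta

# Hypothetical business requirements for a critical SaaS application
required_rto = timedelta(hours=4)     # max tolerable time to restore service
required_rpo = timedelta(minutes=15)  # max tolerable window of data loss

# Hypothetical objectives published by the SaaS provider
provider_rto = timedelta(hours=8)
provider_rpo = timedelta(hours=1)

def meets_requirement(provider: timedelta, required: timedelta) -> bool:
    """A provider objective is acceptable if it is no longer than the requirement."""
    return provider <= required

print("RTO acceptable:", meets_requirement(provider_rto, required_rto))  # False
print("RPO acceptable:", meets_requirement(provider_rpo, required_rpo))  # False
```

A gap on either number, as in this example, is exactly the kind of risk that should surface in the conversation before the contract is signed.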
Gap 2: Monitoring Capabilities
Application performance monitoring tools, such as AppDynamics from Cisco, allow IT administrators to observe the behavior of all their services, which helps to establish a baseline for “normal” behavior and to identify anomalies. If an application typically completes certain data transactions within half a millisecond, for example, and there’s suddenly a full one-second lag, it’s a safe bet that this application is causing a slowdown in the IT environment. Without monitoring tools, IT administrators will have a much harder time locating the source of a performance problem.
By using monitoring tools to identify problems quickly, IT administrators have more information to help them decide whether to initiate DR protocols. Also, if the decision is made to fail over workloads to a secondary site, monitoring tools can help ensure that workloads are running as expected in the DR environment.
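The baseline-and-anomaly idea can be sketched in a few lines. This is not how any particular APM product works internally; it simply assumes latency samples have already been collected and flags an observation that strays too far from the baseline:

```python
from statistics import mean, stdev

def is_anomalous(baseline_ms: list[float], observed_ms: float,
                 threshold: float = 3.0) -> bool:
    """Flag a latency sample that deviates from the baseline mean by more
    than `threshold` standard deviations."""
    mu = mean(baseline_ms)
    sigma = stdev(baseline_ms)
    return abs(observed_ms - mu) > threshold * sigma

# Hypothetical transactions that normally complete in about half a millisecond
baseline = [0.48, 0.51, 0.50, 0.49, 0.52, 0.47, 0.53, 0.50]

print(is_anomalous(baseline, 0.51))    # within normal variation -> False
print(is_anomalous(baseline, 1000.0))  # a full one-second lag -> True
```

The same check, run against the secondary site after a failover, is one way to confirm that workloads are behaving as expected in the DR environment.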
Gap 3: Application Dependency Mapping
More than ever, IT applications and infrastructure are interdependent, with a dozen or more tools commonly working together to deliver a given service. But too often, organizations lack visibility into these interdependencies, which can be a big problem if one or more solutions go down. Application dependency mapping tools provide this visibility, giving IT administrators the information they need to bring workloads back up in a coherent manner.
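“A coherent manner” usually means restarting each service only after everything it depends on is already up, which is a topological ordering of the dependency map. A minimal sketch, using a hypothetical four-service map and Python’s standard-library `graphlib`:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each service lists the services it depends on.
dependencies = {
    "web-frontend": {"app-server"},
    "app-server":   {"database", "auth-service"},
    "auth-service": {"database"},
    "database":     set(),
}

# static_order() yields each service only after all of its dependencies,
# giving a safe bring-up sequence after a failover.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)  # database first, web-frontend last
```

Real dependency mapping tools discover this graph automatically; the hard part in practice is keeping the map current, not sorting it.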
Gap 4: Business Impact Analysis
It’s easy to cite global statistics about the cost of IT downtime, but many organizations lack any real sense of the cost to their own business when an application goes down. Through conversations with line-of-business stakeholders, IT leaders can get a better sense of the real-world impacts (including lost revenue, unhappy customers and other negative outcomes) when a critical tool is unavailable for a minute, an hour or an entire day. Then, they can work to revise their DR strategies to ensure that business needs are prioritized.
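Even a back-of-the-envelope model makes those conversations concrete. The figures below are hypothetical placeholders for numbers that would come from line-of-business stakeholders:

```python
# Hypothetical figures gathered from line-of-business stakeholders
revenue_per_hour = 12_000            # revenue tied directly to the application
productivity_cost_per_hour = 3_000   # staff idled or working around the outage

def downtime_cost(minutes: float) -> float:
    """Estimate the cost of an outage of the given length, in dollars."""
    hours = minutes / 60
    return hours * (revenue_per_hour + productivity_cost_per_hour)

for label, minutes in [("1 minute", 1), ("1 hour", 60), ("1 day", 24 * 60)]:
    print(f"{label}: ${downtime_cost(minutes):,.0f}")
```

Running this kind of estimate per application gives IT leaders a defensible basis for assigning each workload a recovery priority and an RTO.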