Byron Horn-Botha, lead for Channel and Partnerships at Arcserve Southern Africa, says there is no excuse for not building a successful DR plan at a time when tolerance for critical application downtime is rapidly dwindling. “Today, a tolerance of less than fifteen minutes is not uncommon. With availability requirements like that, companies are pressured to get it right,” he says.
Building a successful DR plan requires active participation across all business units, so that everyone at the table has a clear understanding of both data risk and expectations for recovery.
“The right resources and technology to deliver against recovery objectives have their place, of course, but without that foundational knowledge, businesses can end up guessing, and that can translate into catastrophe,” he says.
The following highlights some of the key elements of Arcserve’s recommended DR planning process.
1. Set recovery expectations
We live in a world where customers expect data and applications to be available anytime, anywhere, and with touch-of-a-button ease. Furthermore, there’s an expectation that if something goes wrong, recovery can happen swiftly, and without data loss.
But this is not always the case and it’s a conversation companies should be having regularly across their business units. It’s crucial that everyone understands what the organisation wants versus what can be delivered.
2. Document business objectives and availability requirements
Business objectives and the criticality of the data and applications being protected in the organisation must be documented.
To create an effective business continuity and disaster recovery (BCDR) plan, it is essential to be intimately familiar with the organisation so that you can determine an acceptable level of risk. This can only be achieved through engagement across the company, which will determine the actual amount of downtime that is sustainable for each system and application.
Then it is necessary to identify interdependencies to ensure no single piece of the DR puzzle is neglected. Mapping how data flows from one application to the next gives a clearer picture of what needs to be protected. It also clarifies the level of availability each system requires, spotlighting applications in the value chain that cannot be recovered quickly enough to support another critical application that depends on them.
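As a rough illustration of how recovery objectives and interdependencies might be recorded and sanity-checked, the sketch below flags any application whose dependency has a slower recovery target than its own. The system names, RTO/RPO values and helper function are hypothetical assumptions for illustration only, not Arcserve tooling.

```python
# Hypothetical sketch: record recovery objectives and interdependencies,
# then flag cases where a critical application depends on a system whose
# recovery time objective (RTO, in minutes) is slower than its own.

from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    rto_minutes: int                     # maximum tolerable downtime
    rpo_minutes: int                     # maximum tolerable data-loss window
    depends_on: list = field(default_factory=list)

# Illustrative inventory (values are assumptions, not recommendations)
systems = {
    "orders":   System("orders",   rto_minutes=15, rpo_minutes=5,
                       depends_on=["database", "payments"]),
    "payments": System("payments", rto_minutes=60, rpo_minutes=15),
    "database": System("database", rto_minutes=10, rpo_minutes=5),
}

def find_rto_conflicts(systems):
    """Return (app, dependency) pairs where the dependency recovers more slowly than the app."""
    conflicts = []
    for app in systems.values():
        for dep_name in app.depends_on:
            dep = systems[dep_name]
            if dep.rto_minutes > app.rto_minutes:
                conflicts.append((app.name, dep.name))
    return conflicts

for app, dep in find_rto_conflicts(systems):
    print(f"'{app}' cannot meet its RTO because it depends on '{dep}', which recovers more slowly.")
```

Even a simple check like this makes the conversation across business units concrete: either the dependency's recovery must be accelerated, or the application's stated tolerance is unrealistic.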
3. Think beyond costs
Getting buy-in for infrastructure improvements, given the competing demands for business investment, can be difficult. It is crucial to weigh the recurring cost of a company’s DR solutions against the loss expectancy should systems go down for an extended time or be lost entirely. IT infrastructure improvement should be treated not simply as a cost, but as an ongoing investment in the health of the organisation.
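One way to frame that comparison is the standard annualised loss expectancy calculation (ALE = single loss expectancy × annual rate of occurrence). The figures below are illustrative assumptions, not Arcserve guidance; the point is the shape of the comparison, not the numbers.

```python
# Back-of-the-envelope comparison of recurring DR spend against annualised
# loss expectancy. All figures are illustrative assumptions.

downtime_cost_per_hour = 50_000        # estimated revenue + productivity loss per outage hour
expected_outage_hours = 8              # single loss expectancy expressed as hours of downtime
annual_rate_of_occurrence = 0.5        # e.g. one significant outage expected every two years

single_loss_expectancy = downtime_cost_per_hour * expected_outage_hours
annual_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

annual_dr_cost = 120_000               # recurring cost of the DR solution per year

print(f"Annual loss expectancy: {annual_loss_expectancy:,.0f}")
print(f"Annual DR cost:         {annual_dr_cost:,.0f}")
print("DR spend justified" if annual_dr_cost < annual_loss_expectancy else "Revisit assumptions")
```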
4. Test the reliability of the DR solution
Testing the recoverability of critical applications should be done consistently. DR testing needs to be a continuous effort so that the organisation is confident in the recovery points and recovery times it can actually achieve. This is where a backup and recovery solution that offers automated, application-level testing and reporting becomes critical.
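Conceptually, an automated application-level recovery test restores into an isolated environment, verifies the application actually responds, and records the recovery time and recovery point achieved. The sketch below assumes hypothetical placeholder callables (restore_latest_backup, check_application_health); it is not an Arcserve API, only an outline of what such a test measures.

```python
# Conceptual sketch of an automated application-level recovery test.
# restore_latest_backup() and check_application_health() are hypothetical
# placeholders supplied by the caller, not Arcserve APIs.

import time
from datetime import datetime, timezone

def run_recovery_test(app_name, restore_latest_backup, check_application_health):
    start = time.monotonic()
    # Assumed to restore the app into an isolated environment and return the
    # (timezone-aware) timestamp of the backup that was restored.
    backup_timestamp = restore_latest_backup(app_name)
    # Assumed to probe the restored application, e.g. via an HTTP health endpoint.
    healthy = check_application_health(app_name)
    achieved_rto_minutes = (time.monotonic() - start) / 60
    achieved_rpo_minutes = (datetime.now(timezone.utc) - backup_timestamp).total_seconds() / 60
    return {
        "app": app_name,
        "healthy": healthy,
        "achieved_rto_minutes": round(achieved_rto_minutes, 1),
        "achieved_rpo_minutes": round(achieved_rpo_minutes, 1),
    }
```

Reports from tests like this give the organisation evidence that the recovery points and times documented in step 2 can actually be met, rather than assuming they can.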
5. Test the disaster preparedness of your people
Automated testing covers the technical component of your DR plan, but it would be unwise to rely solely on automated reports. The value of a full DR drill is that it illuminates how people behave and identifies which processes work and which don’t. It also helps verify whether those processes have been fully documented.
6. Is your DR plan up to the task?
Ransomware is only one of many threats that must be considered when creating a DR plan, but the likelihood of infection, now a near certainty, is changing the game. As the risk of ransomware escalates, a thorough, effective and rehearsed DR plan has never been more crucial.