Data loss can cripple a business so severely that it never makes a comeback after an interruption. According to the Disaster Recovery Preparedness Council, 73 percent of companies worldwide are failing in terms of disaster readiness, and 12 percent of those never return to competition.
According to an Aberdeen and TechTarget survey, losses from disasters ranged from a few thousand dollars to millions, with nearly 20 percent of respondents reporting losses of more than $80,000.
The most prominent reason companies face such scenarios is that too few businesses take a proactive approach to disaster recovery (DR). Many simply sign up for a DR service and assume they’re protected, which couldn’t be further from the truth.
First of all, there are a number of misconceptions about DR, and only those who truly understand the elements of a successful recovery can get back up and running without missing a step.
Here are seven things businesses typically get wrong about DR, along with suggestions on how to address them.
Get rid of the thinking that recovery is just about data and systems- This mindset often leads to an all-or-nothing approach in which organizations treat all applications as equally important. In reality, some applications are more important than others. Every minute that your customer-facing transaction processing application is down could cost you thousands of dollars, while your back-end HR applications, for example, could potentially be down for a week without major business impact. Tiering applications by business impact, and setting recovery options based on where they fall on that spectrum, is therefore essential to making recovery more resilient. For example, high-availability applications may be architected to fail over to another live instance with all data replicated. At the other end of the spectrum, backups that restore data within a few days might be good enough for less business-critical applications.
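One way to make this tiering concrete is to record, for each application, a tier that maps to a recovery strategy with target recovery time and recovery point objectives. The sketch below is illustrative only: the application names, tier definitions, and RTO/RPO targets are assumptions, not prescriptions from any standard.

```python
# Hypothetical tiering sketch: map each application to a business-impact
# tier, and each tier to a recovery strategy with target RTO/RPO.
# All names, tiers, and targets are illustrative assumptions.

TIERS = {
    # tier: (recovery strategy, target RTO in hours, target RPO in hours)
    1: ("live failover with continuous replication", 0.25, 0),
    2: ("warm standby restored from recent backups", 4, 1),
    3: ("restore from backups within days", 72, 24),
}

APPLICATIONS = {
    "customer-transactions": 1,  # every minute of downtime costs money
    "inventory-reporting": 2,
    "back-office-hr": 3,         # can tolerate days of downtime
}

def recovery_plan(app: str) -> str:
    """Look up an application's tier and describe its recovery targets."""
    strategy, rto, rpo = TIERS[APPLICATIONS[app]]
    return f"{app}: {strategy} (RTO {rto}h, RPO {rpo}h)"

for app in APPLICATIONS:
    print(recovery_plan(app))
```

Even a simple table like this forces the business-impact conversation: each tier's cost (replication licenses, standby hardware) can then be weighed against the downtime it avoids.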
Most CIOs fail to understand application dependencies- While tiering applications marks the first step, many organizations do not know how those applications map to their underlying systems and infrastructure. Without that knowledge, it’s difficult to tell which systems and components you need to recover to bring a specific application back up. Some medium-to-large companies can’t even say for sure how many business applications they use, let alone how those applications map to their systems. As a business evolves and IT becomes more critical, the applications and mappings grow much more complex. If you don’t use the right tools to discover and map applications and their dependencies, you’ll have a false sense of security about your readiness for a disaster.
For instance, one company tested its DR and recovered 99 percent of its environment successfully. All its servers were restored, but users still couldn’t log in to the systems. The problem was that the company had recovered everything except its Active Directory, a component its applications needed in order to function. If you don’t have dependencies mapped, you may not be able to recover your business applications effectively or in time.
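Once dependencies are mapped, the recovery order falls out of the map itself: shared components like a directory service must come back before anything that depends on them. A minimal sketch, using Python's standard-library topological sorter and an invented dependency map (the component names here are hypothetical, loosely echoing the Active Directory example above):

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each item lists the components it
# depends on. Names are illustrative, not from any real inventory.
dependencies = {
    "order-portal": {"app-server", "database"},
    "app-server": {"active-directory"},
    "database": {"active-directory", "storage"},
    "active-directory": set(),
    "storage": set(),
}

# Topological order recovers every dependency before the things
# that need it, so logins work when the applications come up.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)
```

Running this yields an order in which `active-directory` and `storage` precede the database and application tier, and `order-portal` comes last. The same check would have flagged the missing Active Directory restore in the example above before users ever tried to log in.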
Data protection is often neglected- For businesses that manage their own data protection and backups, backup issues are the most common cause of recovery failures. Backup and recovery technology should be selected based on your recovery time objective (RTO) and recovery point objective (RPO). Identifying where your backups are actually stored, and whether they’re safe, is also vital. For instance, a backup data center right across the street is sufficient in the event of a hardware failure, but not in the case of a hurricane.
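The RPO criterion can be checked mechanically: if a disaster struck right now, would the data lost since the last successful backup exceed what the business agreed to tolerate? A small sketch, with invented timestamps and a six-hour RPO chosen purely for illustration:

```python
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, rpo: timedelta, now: datetime) -> bool:
    """Return True if data lost in a disaster right now would fall
    within the recovery point objective."""
    return now - last_backup <= rpo

# Illustrative values only; real checks would read backup catalogs.
now = datetime(2024, 6, 1, 12, 0)
print(meets_rpo(datetime(2024, 6, 1, 8, 0), timedelta(hours=6), now))   # → True
print(meets_rpo(datetime(2024, 5, 31, 12, 0), timedelta(hours=6), now)) # → False
```

Wiring a check like this into monitoring turns "backup issues" from a surprise during recovery into an alert the moment a backup window is missed.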
Can backup alone be a savior- Let’s understand this with a small example. Suppose you have a laptop and keep its data duplicated on other media. If you lose the laptop, you can buy a new one, restore your data, and start working. It’s a different story across enterprises, which run many different kinds of compute infrastructure. In a major disaster, even if you have done a good job protecting all of your data, you also need the infrastructure and procedures to put it all back together. Some companies plan to recover to their test and development infrastructure in case of a disaster. This approach makes sense only if you keep tabs on details such as: How old is that equipment? Has it been updated to keep up with changes in production? Would your production environment run effectively on test equipment after recovery?
Companies forget about users, process and governance- A common mistake is failing to define processes and to build a DR team with the experience to handle such scenarios, yet both are essential to a successful recovery. Beyond having the right people and experience, you’ll need people who will actually be available in the wake of a disaster. During hurricanes, for example, you can have the data, the procedures, the experience, and all the other ingredients, but if your employees are unavailable or cannot reach the recovery data center, none of that matters.
Companies forget to test- The only way to know for sure that you’re DR ready is to test, yet most businesses ignore this. There are two aspects to testing. The first is frequency. Some businesses feel protected because they have the equipment and are confident they can figure things out when disaster strikes; at that point, it’s too late, because the gaps and issues a test would have revealed can’t be fixed after a disaster. Organizations that are ready for a disaster typically test twice a year, and more frequently for mission-critical applications. The second aspect is that your systems are always changing. The rate of change has increased, especially over the last few years, so your processes must keep up with those changes and ensure regular testing. Your DR readiness is only as good as your last test.
Often we ignore the different types of risk- A few years ago, “disaster” typically meant a hurricane or fire. Today there are infrastructure failures, security breaches, malware, and other threats that are no different in their potential to cause data loss and application downtime. Disasters can occur at any time and in a variety of ways. If hackers access your systems and data, that’s bad enough; but if they start propagating malware or viruses and deleting or corrupting your data, that’s just as bad as a hurricane, if not worse. You have no warning, and sometimes you might not even know about it until it’s too late. In that scenario, is your data safe offsite somewhere? Is it protected at the network level, so hackers can’t reach the DR copy of your data? Do you have multiple copies, so that you can access an uncorrupted copy from before the breach? The nature of disasters has changed, so familiarize yourself with the first six points; they will help you prepare for whatever comes next.
Consequently, with critical data at risk, businesses must be more concerned with developing a comprehensive disaster recovery plan. Any laxity in proactively adopting a backup and disaster recovery plan can lead to financial disaster.
DNF can help by offering an efficient insurance policy against disaster disruption.