
BC/DR Planning isn’t a “Someday” Activity

By Katherine Teitler

Bad moon rising

Security teams spend a fair amount of time thinking about incident response. The probability of an information security incident forces teams to consider how to manage intrusions, leaks, and other exploits. When data is stolen or admin credentials have been pilfered, for instance, security and incident response teams generally have a plan of action, even if that plan isn't well documented or practiced at regular intervals.

When it comes to business continuity and disaster recovery (BC/DR), planning and mock exercises are scarcer still. Failing to create and exercise BC/DR plans, though, can put a company in dire straits, as we've seen with ransomware, leaving it unable to function at normal levels (or at all, depending on the incident) and bleeding revenue, profits, and new business opportunities.

I see trouble on the way

Last week, a company occupying space in the same building as MISTI experienced an issue with its physical security alarm system. The alarm started beeping on Saturday morning (according to one of MISTI's employees who came into the office that day), and as of Monday afternoon the problem had not been fixed. According to the company's administrative assistant, the CFO held the contract with the alarm company, and his approval was needed for any requests or changes to the account. Apparently, dealing with a malfunctioning alarm fell under those terms. In information security, we know that proper, validated authorization is an important control against threats and misuse, so requiring sign-off from a senior executive like the CFO is, on its face, sound. Here's the problem: The CFO was on vacation and unreachable. No mobile connectivity. No email access. He was, deservedly, enjoying a summer break with his family.

Before embarking on his getaway, however, the CFO had not arranged contingency plans. No one at the organization was designated as an emergency contact or authorized decision maker to deal with the alarm company. According to what MISTI learned, there was "no way" to reach the CFO, and no one else could make the incessant beeping stop; only he held the golden key.

Obviously I can't know what other systems fell under this chain of command, but if one fairly minor system was set up without a decision tree and a backup decision maker in place, it's likely that other, more critical systems were configured the same way. A days-long, high-pitched beep is annoying, but it won't render a company unable to function. The only thing lost is a tiny piece of everyone's sanity.

I see earthquakes and lightnin'

Extending the analogy, what if the downed system were email, and employees couldn't send, receive, or even view messages until the proper authority returned to the office? What about an intrusion detection system? Or a data center? What if an adversary were not only stealing but encrypting and/or deleting the company's data, and the one person with authority to decide how to proceed was completely off the grid, leaving no one else in charge? Some readers are thinking, "This is a stupid analogy. Only a moronic company would operate this way." Yet many organizations, especially small ones, haven't made provisions for dealing with disasters. The "it hasn't happened yet" syndrome provides a false sense of security, and teams that intend to "get to it someday" are banking on "someday" arriving before something bad happens. In the meantime, they're assuming that the decision maker or authority figure will always be available, and that if an incident occurs, the company will only ever have to power through a minor annoyance (e.g., a beeping alarm) rather than a true disaster (e.g., a flooded data center).

Situations arise, though, when people aren't available. If this alarm had been the company's server, none of its employees would have had access to work resources for three full days. If a threat actor were siphoning data out of the network, gigabytes at a time, and no one but an unavailable executive had the authority to take systems offline, the data would be long gone (exfiltration can outpace response even when the right people are at the ready; now imagine a security team with its hands tied). Or say a fire or flood consumed the company's data center and only the CFO could authorize disaster recovery efforts. Business continuity effectively wouldn't exist: the company would be left unable to carry on, its operations dormant until the CFO returned to the office.

I see bad times today

Do these examples seem ridiculous? Do they read a little like a Dumb and Dumber movie? You bet! Yet this is reality for many organizations (probably more than care to admit). No decision tree for business continuity and disaster recovery has been established, and no one knows whom to turn to when things lurch sideways; a minimal sketch of what such an escalation chain could look like follows below.
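For illustration only, here is a minimal Python sketch of the structure a BC/DR decision tree encodes: an ordered chain of decision makers, each with a designated backup. The names, roles, and reachability flags are hypothetical assumptions, not details from the story above, and a real implementation would page people and wait for acknowledgments rather than read a boolean.

from dataclasses import dataclass

# Hypothetical escalation chain -- names, roles, and availability are
# illustrative assumptions, not details from the incident described above.

@dataclass
class Approver:
    name: str
    role: str
    reachable: bool  # a real system would page and await acknowledgment

ESCALATION_CHAIN = [
    Approver("CFO", "primary decision maker", reachable=False),  # on vacation
    Approver("Facilities Director", "designated backup", reachable=True),
    Approver("Office Manager", "last-resort contact", reachable=True),
]

def find_decision_maker(chain: list[Approver]) -> Approver:
    """Walk the chain in order until someone reachable can authorize action."""
    for approver in chain:
        if approver.reachable:
            return approver
    raise RuntimeError("No authorized decision maker reachable -- a BC/DR gap")

if __name__ == "__main__":
    approver = find_decision_maker(ESCALATION_CHAIN)
    print(f"Escalate to {approver.name} ({approver.role})")

The point isn't the code; it's the structure: every authority in the chain has a named backup, so one person's vacation never becomes a single point of failure.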

The middle of an incident is not the time to start thinking about what happens when a person in a position of authority takes a vacation, falls ill, or is struck by a personal disaster of his or her own. Organizations need the ability to operate with as little disruption as possible, even when the disaster is ten times bigger than a malfunctioning security alarm.
