The Rockefeller-Snowe bill being considered by the Senate is supposed to "address our nation's vulnerability to cybercrime, global cyberespionage, and cyberattacks that could potentially cripple the United States' critical infrastructure." Some of what this bill tries to do makes sense, but other parts could have unintended consequences, particularly if the federal government ends up creating national standards for information security.
Regulations and negligence
Negligence is the failure to use reasonable care. If you don't meet the appropriate standard of care, then you're negligent and liable for any damages you cause. If you do meet the standard of care, then your unlucky victim typically bears the full cost of their injuries. As a rule, government regulations can establish a standard of reasonable care: if you comply with them, you can argue that you used reasonable care and aren't liable for injuries you accidentally cause.
Suppose the government creates a national information security standard. Businesses that comply with it probably wouldn't be considered negligent if their security were defeated by a hacker, precisely because they had implemented government-approved security. This could be a problem, because "government-approved security" and "good security" don't necessarily mean the same thing.
The government can be slow to react to new developments, so a national standard for information security probably wouldn't address the most recent threats. It's easy to imagine scenarios in which security researchers discover new attacks, but businesses remain free to ignore them, with relatively little exposure to liability, until the government standard is updated to reflect the new threats.
It's also easy to believe that a government information security standard wouldn't allow the most recent advances in technology to be used until they were "approved" in some way. This already happens: some security vendors are caught in a Catch-22 in which the government can't use their technology until it's approved for government use, yet wide use by the government is a prerequisite for getting that approval. If this sort of reasoning were applied beyond the government market, a national information security standard could easily kill innovation in the information security industry.
Hackers, of course, wouldn't be constrained by the same regulations that businesses would be, so they would continue to create new and innovative types of attacks. In the long run, this would lead to a situation that benefits absolutely nobody. Except, perhaps, the hackers.
A place for government
Another interesting possibility that a national information security standard presents is a careful cost-benefit analysis of security technologies. The Office of Information and Regulatory Affairs (OIRA) was formed in 1980 to weigh the costs of government regulations against their benefits, and if national regulations for information security were implemented, the OIRA would periodically assess the costs and benefits of complying with them. It has been hard to get accurate estimates of the costs and benefits of many security technologies, and it would be very interesting to see what numbers the OIRA would come up with.
But the fact that the OIRA would do a cost-benefit analysis of national information security regulations doesn't mean that excessive costs would be avoided. The Environmental Protection Agency, for example, must do a careful cost-benefit analysis of the regulations it proposes, but its mandate to protect the environment means it isn't restricted to regulations whose benefits exceed their costs. This has resulted in some controversial regulations that have been criticized for costing far too much. In an extreme example, the 1991 EPA ruling (56 FR 50,978) that established standards for solid waste disposal facilities has been estimated to cost $36 trillion per life that it saves.
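To make that kind of figure concrete: a cost-per-life-saved estimate is just the annualized cost of complying with a regulation divided by the statistical lives it's expected to save each year. Here's a minimal sketch of that arithmetic in Python; the dollar amount and lives-saved estimate are hypothetical placeholders, not figures from any actual EPA or OIRA analysis.

```python
# Hypothetical sketch of a cost-per-life-saved calculation, the metric
# behind figures like the one quoted above. All numbers are made up
# for illustration; they are not from any real regulatory analysis.

annualized_cost = 500_000_000   # yearly compliance cost, in dollars (hypothetical)
expected_lives_saved = 0.02     # statistical lives saved per year (hypothetical)

cost_per_life_saved = annualized_cost / expected_lives_saved
print(f"Cost per statistical life saved: ${cost_per_life_saved:,.0f}")
# Prints: Cost per statistical life saved: $25,000,000,000
```

As the sketch shows, a rule that's expensive to comply with but rarely prevents a death produces an enormous cost-per-life number, which is exactly the pattern behind the EPA example above.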
Just as it's hard to put a dollar value on the benefits of a healthy and clean environment, it's just as hard to accurately value the clear benefit of a hacker-free IT environment. Because of this, it's certainly possible that government regulations for information security could get the same sort of exemption from being justified by a cost-benefit analysis, particularly given political pressure from consumers to reduce the chances of data breaches and the identity theft they can cause. If that happens, we could end up with mandates for expensive security technologies that provide minimal benefits. That's probably not a good idea.
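As a rough illustration of what a cost-benefit test for a mandated security control might look like, here's a small sketch using the standard annualized loss expectancy (ALE) model. The breach cost, breach rates, and control cost below are made-up values chosen only to show the comparison, not estimates from any real analysis.

```python
# Hypothetical sketch of a cost-benefit test for a security control,
# using the standard annualized loss expectancy (ALE) model.
# All inputs are made-up illustrative values.

breach_cost = 4_000_000   # single loss expectancy: cost of one breach (hypothetical)
rate_without = 0.05       # expected breaches per year without the control (hypothetical)
rate_with = 0.04          # expected breaches per year with the control (hypothetical)
control_cost = 250_000    # annual cost of deploying the control (hypothetical)

ale_without = breach_cost * rate_without   # $200,000 expected loss per year
ale_with = breach_cost * rate_with         # $160,000 expected loss per year
annual_benefit = ale_without - ale_with    # $40,000 per year of avoided loss

print(f"Annual benefit: ${annual_benefit:,.0f} vs. annual cost: ${control_cost:,.0f}")
# Here the control costs far more than the losses it avoids, so a
# mandate requiring it would fail a strict cost-benefit test.
```

Whether a real mandate passes such a test depends entirely on the estimates plugged in, which is exactly why accurate cost and benefit numbers for security technologies matter.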
The government probably has a useful role to play in making the IT environment reasonably safe, but it should carefully consider the implications of making that role too large or of playing it carelessly. Either could easily make things worse instead of better, and Congress should keep this in mind as it decides how to move forward with the Rockefeller-Snowe bill.
Luther Martin is chief security architect at Palo Alto, Calif.-based Voltage Security, where he has been involved in key projects such as porting Voltage's identity-based encryption technology to several wireless platforms. He is the author of a new book on identity-based encryption (IBE); the IETF standards on IBE algorithms and their use in encrypted email; and numerous reports and articles on varied information security and risk management topics. He can be reached at [email protected].