
How AI subverts democracy and how to fight back

Security technologist Bruce Schneier, speaking at RSA Conference 2023.

SAN FRANCISCO — Democracy is hackable. It is a complex set of economic, social and political systems that behave much like the algorithms and protocols of computer code. And like code, it can be exploited: special-interest hackers find tax loopholes and abuse obscure regulations and laws for their own gain.

This subversion of democracy is real, said security technologist Bruce Schneier, speaking on April 25 at RSA Conference 2023. But, he said, the threats hackers pose to democracy can be mitigated by applying the same defensive mindset cybersecurity professionals use to defend information systems.

“Like any protocol, democracy has vulnerabilities and exploits. We can design them to be more secure,” he said.

Schneier argued that artificial intelligence is accelerating the subversion of democracy, but that it also holds the promise of making democracy more secure.

“AI will be able to figure out how to take advantage of more ... [It will help] find new tax loopholes, find new ways to evade financial regulations, creating micro-legislation that surreptitiously benefits one particular person or group,” Schneier said.


Hacking democracy isn’t a new concept, and it is one Schneier has explored at previous RSA Conferences. But, he argued, “today's hacking scales better and it's overwhelming the security systems in place to keep hacking in check.”

Artificial intelligence: the problem and the solution

Speaking to an audience of cybersecurity experts, he argued that as technology becomes an ever more powerful influence on democratic society, it's time for security professionals to apply their problem-solving skills of finding vulnerabilities, patching bugs and hardening network defenses to protecting the mechanisms underpinning democracy.

However, AI is both the problem and the solution, and mitigating threats to democracy isn’t as straightforward as the blocking and tackling done by network administrators.

“How do you take knowledge in the computer field and apply [it] to non-computer socio-technical systems?” he asked.

Here, Schneier made no apology for not having all the answers yet, but he proposed several heady ideas, one being what he called "liquid democracy." One of its tenets is to build a new system on an egalitarian AI technology that tracks an individual's micro and macro preferences, which could then be turned into policy.

“Imagine if we had an AI (device) in our pocket that voted on our behalf 1,000 times a day based on preferences it inferred we have. … It would be just an algorithm for converting individual preferences into policy decisions,” he said.

Empowering a trusted proxy to communicate your policy positions would more accurately reflect the will of the people, he argued, versus the less-than-ideal gerrymandered (or hacked) will of the few.
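One rough way to picture that "algorithm for converting individual preferences into policy decisions" is a simple plurality tally over agent votes. The Python sketch below is purely illustrative, not anything Schneier specified: the names (PreferenceAgent, tally), the weight-based voting rule and the plurality aggregation are all assumptions.

```python
# A minimal, hypothetical sketch of the liquid-democracy idea: an agent
# votes on a voter's behalf using preferences it has already inferred,
# and the decisions are aggregated by plurality. Illustrative only.
from collections import Counter
import random


class PreferenceAgent:
    """Casts a vote on a policy question from inferred preference weights."""

    def __init__(self, preferences: dict[str, float]):
        # Maps each policy option to an inferred preference weight.
        self.preferences = preferences

    def vote(self, options: list[str]) -> str:
        # Pick the option with the highest inferred weight (0.0 if unknown).
        return max(options, key=lambda o: self.preferences.get(o, 0.0))


def tally(agents: list[PreferenceAgent], options: list[str]) -> str:
    """Aggregate every agent's vote into one policy decision by plurality."""
    votes = Counter(agent.vote(options) for agent in agents)
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    random.seed(0)
    options = ["fund transit", "cut transit"]
    # Simulate 1,000 voters, each with a randomly "inferred" profile.
    agents = [
        PreferenceAgent({o: random.random() for o in options})
        for _ in range(1000)
    ]
    print(tally(agents, options))  # the plurality policy decision
```

A real system would need far richer preference inference and aggregation rules, such as ranked choices and revocable delegation chains, which is what makes the model "liquid" rather than a fixed ballot.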

This liquid democracy, he said, assumes a benevolent flavor of AI, one not controlled by large corporate monopolies or an oppressive government, that works toward a societal greater good.

“Any AI system should engage individuals in the process of democracy, not replace them,” he said. “The goal has to accommodate plurality” and not foment tribalism within society.

The imperative now is a more balanced shaping of AI's applications in the real world. AI, he argued, compresses the timeline for catastrophic risks, including climate change, synthetic biology, molecular nanotechnology and nuclear weapons. Liquid democracy, Schneier said, lets the will of the people shape future policies, not just the interests of the few.

“Misaligned incentives for hacking have catastrophic consequences for society,” he said.

Building present day and future AI systems, Schneier said, must be a transparent process.

“These [AI] systems are super opaque, and that's become increasingly not OK,” he said. “Even if we know the training data used and understand how the model works, there are all these emergent properties that make no sense.”

“I don't know how to solve all these problems. But this feels like something that we as security people can help the community with,” he said.

Tom Spring, Editorial Director

Tom Spring is Editorial Director for SC Media and is based in Boston, MA. For two decades he has worked at national publications in leadership roles, including publisher at Threatpost, executive news editor at PCWorld/Macworld and technical editor at CRN. He is a seasoned cybersecurity reporter, editor and storyteller who always aims for truth and clarity.
