
Digital personhood: InfoSec World 2024 delves into the legal implications of AI


The first day of InfoSec World 2024 ended with upbeat predictions about a coming AI-powered golden age. The following two days brought that lofty outlook down to earth as difficult questions were posed about the moral, social and especially legal ramifications of artificial intelligence.

"We're not against AI," said Adriana Sanford, a privacy-law attorney and professor in the College of Business at the University of Texas at Arlington, in a Tuesday (Sept. 24) session. "There's a lot of positive potential. But it can do a lot of damage in the wrong hands."

Sanford and retired federal district judge Noel Hillman examined the ethical and legal issues surrounding AI, focusing on how those issues might impact corporate liability.

Earlier Tuesday, Gartner senior partner Brandon Dunlap and Troutman Pepper colleagues Andrea Hoy and Gene Fishel discussed assigning legal responsibility and liability to AIs, including whether an AI could be a "person."

"Where does human liability end and the AI take over?" asked Fishel, general counsel at Troutman Pepper. "If you're developing or using AI, you need to consider the legal ramifications."

On Wednesday (Sept. 25), AI came up again during a discussion about "radical transparency" in cybersecurity. Moderator Al Yang, CEO of SafeBase, wondered whether it would even be possible to explain the inner workings of an AI "black box."

Other presentations Tuesday and Wednesday examined phishing attacks from trusted sources, vishing and phone scams, safe use of open-source software, Russian disinformation campaigns and the CrowdStrike outage's lessons for security preparedness.

"Every company should have a business continuity plan in place," said Protect AI CISO Diana Kelley during the CrowdStrike panel discussion. "And it should test that plan."

Can an AI ever become a responsible adult?

It quickly became clear during Tuesday's legal discussions that the AI-fueled renaissance predicted by Monday's keynote speaker Zack Kass might, in fact, require a total overhaul of the American legal system, which is currently based on the individual responsibility of human adults.

For example, Dunlap, Fishel and Hoy asked who is liable when an AI's decisions go wrong: the AI's user, the AI's designer or the AI itself.

In a recent San Francisco case in which a Cruise driverless taxi hit a woman walking across the street and dragged her 20 feet, Cruise ended up paying millions to settle the woman's lawsuit.

The settlement came even though the woman was walking outside the crosswalk, crossed against the light and was initially hit by a human-driven car, which tossed her into the path of the robotaxi. The human driver who first hit her kept going and left the scene.

A Cruise driverless taxi is clearly experimental, and General Motors owns the car and designs the software. But what happens after a privately owned, fully autonomous mass-market vehicle makes a mistake and kills someone? Fishel said it's not yet clear.

"This is all in the nascent phase," he said. "There's not much case law on AI, and what there is is coming out of New York state."

He pointed to the case of two Manhattan lawyers sanctioned by a judge for using ChatGPT to write a brief that cited nonexistent legal decisions.

To be convicted in a criminal case or held liable in a civil one, Fishel added, a person usually must exhibit a mental state ranging from negligence to outright malicious intent. But can an AI really "know" what it's doing?

"You might have to treat the AI almost like you would an employee," said Dunlap. "Is an AI going to hallucinate any more than a first-year law student would on the tail end of an all-nighter?"

The defense calls ChatGPT to the stand

Fishel described a scenario in which an AI might have to provide evidence in a courtroom, or even demonstrate its functions. That raised the questions of who would perform the demonstration — the plaintiff, accuser, respondent or defendant — and whether identical prompts would even produce consistent outcomes.

"You will almost always fail, because GenAI is non-deterministic," said Dunlap. "You can get a completely different answer to the same question."

Then what if an AI accuses a person of a crime? Fishel pointed out that under the Sixth Amendment to the U.S. Constitution, the accused has the right to confront the accuser in court. How can you confront or cross-examine an AI system?

"Build in a system prompt for your AI so that it immediately invokes the Fifth," joked Dunlap.

Or, Fishel noted, if your company is using a proprietary AI system, and you enter litigation that involves the AI in any way, the AI's methods, programming and inputs will be subject to discovery and your trade secrets may be exposed.

Yet for the kind of AI-powered utopia described by Kass on Monday to become real, humans would have to place full trust in benevolent, wise artificial general intelligence (AGI) systems to make the right decisions. That would mean transferring legal responsibility to the machines.

So, the panel was asked, would we as a society ever be able to accept machines as autonomous beings with legal agency? And if not, can AI development truly move forward?

Dunlap said a paper already existed arguing for legal personhood for AIs, an argument that leverages the U.S. Supreme Court's 2010 Citizens United decision, which held that corporations have a First Amendment right to free speech.

"It's fascinating," Dunlap said. "If you feed the paper into an AI, the AI will start setting up corporations."

Watch how you handle that AI

In the later discussion, Sanford echoed Fishel's warning that corporations need to be very careful when dealing with AI.

"AI is a new space for everyone, including you and the lawyers," she said. "The legal advice you get may not be correct. The insurance you have may not cover AI-related liability."

However, co-presenter Hillman pointed out, the U.S. government has already taken an interest in AI. The White House has drafted an "AI Bill of Rights" and issued an executive order on secure development of AI. U.S. Deputy Attorney General Lisa Monaco told a University of Oxford audience that AI "may well be the most transformational technology we've confronted yet."

"If they care, so should you," said Hillman, a former federal prosecutor. "The DoJ at least knows what it doesn't know about AI, and it's reaching out to others to learn."

He added that the U.S. Department of Justice's main concerns about AI, aside from its involvement in cybercrime and social engineering, were:

  • "AI washing," or using AI to exaggerate or misrepresent the efficacy of a service or product
  • AI-enhanced pump-and-dump investment scams
  • Breaches of fiduciary trust when proprietary data is used to train AI
  • AI hallucinations
  • Systemic risk caused by automated "herding," such as when AI-generated fake news might trigger AI-powered stock-trading programs.

AI-related fraud such as phishing and most deepfakes is already well covered by federal wire-fraud statutes, although Sanford felt it was likely that using AI to commit a crime would result in stiffer penalties.

However, there's one form of deepfake that the federal government won't touch, Hillman said. With a general election coming up, the DOJ is very concerned about political deepfakes and other forms of AI-powered misinformation — but it feels its hands are tied.

"The DOJ takes the position that there is no federal crime when one candidate says misleading things about another candidate," Hillman said.

He explained that prosecution of such actions might violate the First Amendment and that, furthermore, election law falls within the jurisdiction of state and local officials. More than a dozen states, including California, Minnesota and Texas, have already criminalized deepfakes related to political issues, Hillman said.

"Until we achieve singularity, we are still only human," Hillman concluded. "When it comes to AI, we need human filtering, selection, management, and monitoring at every step of the way."

Can radical transparency go too far?

A fascinating panel discussion Wednesday morning examined the concept of radical transparency — the theory that corporations can succeed by divulging as much information as possible — and how it applies to cybersecurity.

Cybersecurity practitioners have long insisted that companies suffering data breaches or creating vulnerable software should disclose everything, but the general idea is also gaining steam in the corporate world. Even the U.S. government insists that "technology providers" should "foster radical transparency into their security practices."

But, asked moderator Al Yang, should there be limits on transparency? Can you give too much away?

"There's already a level of transparency in government that's beyond what we have in the commercial world," said Jim Alkove, CEO of Oleria, former Chief Trust Officer at Salesforce and former Corporate VP for Windows Enterprise and Security at Microsoft.

"Don't put your customers under more risk by sharing the information," he added. "Do no harm, but share enough information so that people can make informed decisions."

That didn't mean Alkove was opposed to radical transparency in general, however.

"At Salesforce, you had to talk to consultants or lawyers before speaking to the public, but we disclosed vastly more to customers," he said. "They relied on us, so we had a moral obligation to let them know what was going on."

Avani Desai, CEO of IT auditing and compliance firm Schellman, pointed out that radical transparency is part of her company's mission.

She agreed with Alkove about the benefits of oversharing with customers but added that her company also had to overshare with regulators, especially those from the European Union, because candidness can sometimes build rapport.

"They are more flexible if you are willing to provide a risk assessment," Desai noted.

Heather Ceylan, Deputy CISO at Zoom, said that radical transparency about cybersecurity had helped the company's public image: "Now we're able to show that security is not just a cost center for us."

Shining light into the 'black box'

Ceylan, however, admitted that it was difficult to achieve transparency regarding AI, which, as Yang pointed out, is sometimes impenetrable. Zoom, she said, gained trust by providing as much information as possible, even if it wasn't complete.

"When we first started releasing AI products," she said, "businesses had a lot of hesitation and wanted to know how these tools worked. So we developed really in-depth documentation about each of these features and how they collected and used data. By providing that information, you're providing transparency, trust and assurance."

"You have to be able to explain an AI or an LLM," added Desai. "We're the first company to be able to certify ISO 42001 [which standardizes AI management practices]. But it's going to be hard to be transparent through an algorithm."

Alkove related an episode from his time at Salesforce in which a security incident led to the decision to take the entire Salesforce network offline. In the wake of that incident, he said, radical transparency was painful but ultimately beneficial.

"We just told customers and the media why we did what we did," Alkove said. "We came out of it with a greater level of trust from customers because we were honest with them. I think CrowdStrike will come out of their incident the same way."

Asked about the future of radical transparency, Ceylan urged companies to "start automating as much as you can now."

"We're going to need to scale very quickly," she said. "It's not enough to be trusted; you also have to be trustworthy."

"Get comfortable being uncomfortable," Alkove added.

Paul Wagenseil

Paul Wagenseil is a custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.
