
It’s a bright future at InfoSec World 2024

AI evangelist Zack Kass speaks at InfoSec World 2024 in Orlando, Florida, Sept. 23, 2024.

ORLANDO, Fla. — All cybersecurity conferences try to strike an optimistic tone, but the first day of InfoSec World 2024, which began here Monday (Sept. 23), had to be one of the most upbeat ever.

It began with CyberRisk Alliance Executive VP of Communities Parham Eftekhari welcoming attendees to the conference. He struck the requisite high notes, noting that 2024 marked 30 years of InfoSec World.

Eftekhari reminded attendees that because most critical infrastructure is privately owned, they were actually "in the national-security business."

"Governments are doing a good job," he added, "but they rely on you."

The day ended with a paean to the coming golden age of artificial general intelligence, which former OpenAI Head of GTM Zack Kass said would be the "next Renaissance."

In between, there were glimmers of darkness from Amy Bogac, former CISO of Clorox, who related how she managed a huge data breach while coping with personal loss, as well as from famed cybersecurity curmudgeon Ira Winkler, who may have found a way to predict which employees may be most susceptible to phishing scams. Both of their talks nonetheless ended upbeat.

L'audace, l'audace, toujours l'audace

The day's true pacesetter may have been Olivia Rose, CISO and consultant, whose opening keynote touted the virtues of audacity.

Commanding while on stage, Rose admitted that just before walking out under the spotlights that morning, she had been as nervous and worried as anyone would be. But like singer Beyoncé and "Sasha Fierce," Rose said she projects a fearless alter ego, a public persona she calls "The Olivia," that lets her confidently dominate a room.

Rose said that as a woman in a very male-dominated industry, "all of my successes have come from being audacious."

One way to do this, Rose said, was to become "completely self-unaware" — or at least far less self-conscious.

"We're far too fixated on what everyone thinks of us," Rose said. "Keep in mind that nobody is looking at you in meetings, and that you're not the dumbest person in the room."

But how can one be audacious without getting fired? Rose said she has trained herself to be stubborn, loud and contrary — up to a point.

"Learn what you're comfortable saying, and don't cross that line," she said. "Create your own ideas — don't parrot other people's."

Lessons from the Clorox 'bleach breach'

Next up was Elevate Textiles CISO Amy Bogac, who left a similar position at Clorox following a highly publicized 2023 cyberattack at the cleaning-products giant that cost the company tens of millions of dollars in remediation and lost business. 

Interviewed by CyberRisk Collaborative VP of Cybersecurity Strategy Todd Fitzgerald, Bogac detailed the steps she and her team took during the Clorox breach. At the same time, Bogac also had to deal with the sudden death of her father and arrange his affairs.

"But I had to keep working," Bogac said. "When there's an incident, the people closest to remediating it are usually those most reluctant to go home."

Bogac urged the security staffers in the audience to test out incident plans and run through playbooks but added that CISOs might want to consider media training as well.

"Media coverage was not something I thought of when I took a CISO role," she said.

Fitzgerald found the silver lining, observing that breach-management experience can be good for a cybersecurity career. Smart employers might be more likely to hire someone who's walked through fire, after all.

"I'm not saying you need to go out and engineer a breach, but make incident response playbooks and conduct tabletop exercises," Fitzgerald said. "Your next employer will appreciate it."

'Fix things and light the way'

Kevin Johnson, CEO of the Jacksonville penetration testing firm Secure Ideas — slogan "Professionally Evil" — gave a hilarious presentation on the virtues of pen testing.

"My job is to go into an organization and tell them their baby is ugly," Johnson said. "My job is also to make you understand what on your baby can be fixed."

Pen testers catch what vulnerability scanners and DevSecOps miss, Johnson said. One organization he worked with tested its website registration app thoroughly, but forgot to test the external API, letting Johnson increment numbers in the URL to get other members' data.
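The flaw Johnson described is commonly known as an insecure direct object reference (IDOR). A minimal sketch of the probe he ran might look like the following; the URL pattern, parameter name and ID values here are illustrative assumptions, not details from his engagement.

```python
# Hypothetical IDOR probe: take a known record URL and increment its
# numeric ID to generate URLs that may expose other members' records.
# The endpoint and "member_id" parameter are assumptions for illustration.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def increment_id(url: str, param: str = "member_id", step: int = 1) -> str:
    """Return the same URL with its numeric ID parameter incremented by step."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    current = int(query[param][0])
    query[param] = [str(current + step)]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# A pen tester would fetch each candidate with an authenticated session
# and flag any response that returns another member's data.
candidates = [
    increment_id("https://example.com/api/members?member_id=1041", step=s)
    for s in range(1, 4)
]
```

If any of those candidate URLs returns a record the logged-in user shouldn't see, the API is failing to check authorization per object, which is exactly the gap the registration app's testing missed.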

Another firm, a defense contractor, left a security hole in the portal for its wireless network because the hole was visible only on the guest interface. The company had simply failed to test it on an unauthorized machine.

People are overconfident about security, Johnson said, and security people can do the most stupid stuff of all.

One of the first things his pen testers do when they get into an internal network is to set up listeners, he said. Those listeners often capture domain-admin credentials within 10 minutes, because organizations run network vulnerability scanners that authenticate everywhere with domain-admin rights — and the listeners intercept those credentials.

Yet while it may pain a firm to find out it has these security flaws, Johnson said, it's better that a pen tester find the flaws than an actual attacker.

"Our job is to fix things and light the way," he said.

Finding the dumbest users

Ira Winkler, CISO of CYE Security, began with a painful statistic regarding phishing attacks.

"Four percent of users are responsible for 80% of the damage," he said. "Everyone makes mistakes once in a while. But some people make mistakes much more often than others."

For some reason, Winkler said, there are a few people in every organization who will click on phishing emails much more frequently than all other users, despite training and phishing tests.

"There are users that are that bad," Winkler said. "Stupid exists."

These may be very valuable employees, or people without other faults, and companies are reluctant to even single them out.

Winkler said he has heard of companies that have a "three strikes, you're out" policy on phishing tests. But while that might make sense for a financial firm, it seems unfair to punish an employee who constantly falls for phishing. 

Instead, Winkler said, if you could identify those users most susceptible to phishing, you might be able to add more controls around them, limit their access to sensitive data, and increase the frequency of their awareness training.

Curious if there might be psychological traits that indicate susceptibility to phishing attacks, Winkler said he analyzed data collected by Dr. Matthew Canham at the University of Central Florida, who had studied what Winkler called "repeat clickers."

Canham had administered a variety of psychological assessments on 130 volunteers and collected more than 900 data points for each subject. Winkler ran the data through statistical analysis and found that susceptibility to phishing isn't based on one factor, but a combination of different traits, some of which might be apparent in a person's online behavior.

One trait that stood out, Winkler said, was a low "locus of control" — a feeling of helplessness and lack of control of one's environment. Such people, Winkler said, might post conspiracy theories.

Another was depression, which Winkler said might manifest in compulsive behavior or even posting excessive selfies on social media.

Smart attackers might spot someone exhibiting a combination of these traits online and target them for phishing attacks, Winkler said. That's why companies need to identify which of their users need the most protection.

Winkler admitted that this was "a small study, but one with massive statistical significance."

"The applicability and results are intuitively obvious," he added.

We welcome our benevolent AI overlo — oops, assistants

Any gloom created by Winkler's presentation was hard to maintain against the nuclear blast of optimism from closing keynote speaker Zack Kass.

"We're on the verge of the most profound industrial revolution in human history," Kass said.

He painted a picture of an idyllic future in which humans work less, live longer, and are happier and healthier — all thanks to artificial general intelligence (AGI), which Kass thinks will be achieved by 2030.

A "wave of scientific breakthroughs" is in the offing, Kass said, observing that "we recently discovered the first antibiotic in 60 years because of AI."

Curing cancer and achieving fusion technology will be just a matter of time once AGI is real. Businesses will be better run. Thanks to self-driving cars, vehicle fatalities will plummet. Food will cost less, and Kass even predicts "a massive deflationary event," which could alarm some economists.

"We're much closer than most people realize," Kass said. He laid out three phases to come in the development and adoption of AI.

First, up through 2027 or so, will be enhanced versions of the applications we already use — the Copilots and ChatGPTs that assist us in our tasks.

Then, Kass said, we'll see autonomous agents. Give them assignments and goals, and they'll figure out how to do them for you instead of having to be shown how. This might be up through 2035.

The third phase will be natural-language operating systems: AGIs that can be spoken to, and speak back, as intelligent, self-aware assistants.

"We'll see the end of the PC as we know it," Kass said. Instead, devices will be wearable and ubiquitous.

But, he admitted, "there are a lot of things that can go very wrong."

One is a real-life version of the movie "Idiocracy." Humans, Kass said, are using TikTok, Twitter and TV to create a lot of "especially addictive, especially banal content that contributes nothing." 

AI might feed our hunger for more stupid stuff and "we may end up getting dumber," Kass said. That would be especially dangerous coupled with the decline in social skills exhibited by teenagers (and older people) who spend way too much time online.

Kass also worries about AI creating job displacement, but not because of the economic implications. Instead, it's because so many people, especially Americans, define themselves by what they do for a living, and losing a job creates a sense of loss of purpose.

"It will be an identity displacement crisis, not a job displacement crisis," said Kass.

He does worry about the nature of AGI. It may be intelligent and all-knowing, but also lack empathy or wisdom.

"How can we train a machine to accomplish tasks while respecting the human experience?" Kass asked. "How do you teach a machine to be kind?"

Regardless, he said, we'll be working fewer hours with higher job satisfaction. Right now, he said, "most of your job is bullsh*t — meetings, emails, TPS reports." AI will pick up those tasks.

To prepare for the coming AI revolution, Kass said, one should learn how to learn new skills and focus on those things that AI can't do — "humanistic stuff" like adaptability, courage, curiosity, wisdom and empathy.

"Today's the best day ever to be born," Kass concluded. "The future is great, but only if we imagine and build it together."

Paul Wagenseil

Paul Wagenseil is a custom content strategist for CyberRisk Alliance, leading creation of content developed from CRA research and aligned to the most critical topics of interest for the cybersecurity community. He previously held editor roles focused on the security market at Tom’s Guide, Laptop Magazine, TechNewsDaily.com and SecurityNewsDaily.com.
