Why citizens and campaigns need to improve AI literacy in this very political year

COMMENTARY: More than 60 countries around the world are holding national elections this year, making 2024 one of the biggest election cycles in history. Already a period of political unrest, this year's election season faces increased instability because of AI-driven disinformation.

The most common forms of disinformation are AI-generated text for news articles and social media posts, as well as manipulated images and deepfake videos. For example, just recently, Taylor Swift was the victim of a doctored image depicting her endorsing former President Donald Trump's campaign. Had Swift not made a public statement about the false image, her fans could have been swayed to vote in a particular direction, and, given her massive fan base, this could have significantly influenced election results.

Bad actors spread disinformation for three reasons: to manipulate public opinion, undermine trust in institutions and exploit societal divisions. They do it for political, financial or ideological gain. In the U.S. election, bad actors aim to disrupt democracy by shaping narratives that sway public opinion, erode trust, blur reality, and even cause panic.

Exploiting human emotion at scale

The tactics and techniques behind disinformation are nothing new. Like phishing attacks and other social engineering threats that have been around for decades, disinformation plays on human emotions to incite action — for example, clicking a link, opening an attachment, or sharing false information. However, generative AI amplifies the scale and precision of these campaigns and lets bad actors target individuals and groups based on behavior or tendencies, making disinformation more pervasive and harder to detect.

In other words, generative AI makes it easier for bad actors to mount more complex attacks to trick us. Think about the way social media algorithms work to serve us content we want to see. Disinformation works in a similar way. When fake content aligns with ideologies or plays into existing fears, people tend to believe it — and then act on it. It's a natural human reaction.
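
To make the parallel concrete, here's a deliberately simplified sketch in Python of how an engagement-driven feed might rank posts. The field names and weights are invented for illustration; real ranking systems are vastly more complex, but the core incentive is the same: nothing in the score rewards accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    matches_user_interests: bool  # inferred from the viewer's past behavior

def engagement_score(post: Post) -> float:
    """Toy ranking: reward raw engagement plus affinity with the viewer's
    existing interests. Nothing here checks whether the post is true."""
    score = post.likes + 3 * post.shares  # shares spread content the furthest
    if post.matches_user_interests:
        score *= 2  # affinity boost: more of what the viewer already agrees with
    return score

posts = [
    Post("City council publishes budget minutes", likes=40, shares=2,
         matches_user_interests=False),
    Post("LEAKED: shocking video of candidate!", likes=90, shares=60,
         matches_user_interests=True),
]
feed = sorted(posts, key=engagement_score, reverse=True)
print([p.text for p in feed])  # the sensational, affinity-matched post ranks first
```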

The need for AI literacy

From the content we consume to how we interact with it to the decisions it shapes, AI's role in our digital lives resembles the role the subconscious mind plays in influencing our behavior. Because of this, we need to increase our AI literacy by understanding what AI tools can do and how we can protect ourselves from disinformation. Here are some ways to start:

  • Learn how AI gets used: In disinformation campaigns, AI is used to create hyper-realistic deepfake videos, drive automated bots on social media, analyze audience data to sharpen targeting, manufacture synthetic social media accounts, and manipulate or distort text and quotes.
  • Understand the data landscape: Always remain vigilant when consuming information. Learn how to identify trusted sources, and cross-reference news posted in the media.
  • Adopt skepticism: Readers should verify everything they see in the media, especially if they don’t have a direct relationship with the source or first-hand evidence to support the news.
  • Recognize signs of disinformation: Be wary of sensationalized news, articles that lack quotes from experts, claims without supporting evidence, and contradictory information (a toy illustration of such checks follows this list).
  • Leverage potential AI benefits: Use AI as a thinking assistant to help challenge assumptions and augment the ability to make informed decisions.
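
On that last pair of points, here is a tiny sketch of what screening for disinformation red flags could look like in code. The phrase list and inputs below are hypothetical and chosen for demonstration; a real screen would be far more nuanced, and no keyword filter substitutes for human judgment.

```python
import re

# Hypothetical red-flag phrases chosen for illustration only.
SENSATIONAL = re.compile(r"\b(shocking|you won't believe|exposed|destroyed)\b", re.I)

def red_flags(article_text: str, cites_named_experts: bool,
              links_to_evidence: bool) -> list[str]:
    """Return simple warning signs drawn from the checklist above."""
    flags = []
    if SENSATIONAL.search(article_text):
        flags.append("sensationalized language")
    if not cites_named_experts:
        flags.append("no quotes from named experts")
    if not links_to_evidence:
        flags.append("no supporting evidence linked")
    return flags

print(red_flags("SHOCKING: candidate EXPOSED in leaked video!",
                cites_named_experts=False, links_to_evidence=False))
# ['sensationalized language', 'no quotes from named experts',
#  'no supporting evidence linked']
```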

Educating the public is challenging, and so is making sure campaigns themselves focus on security. Frankly, cybersecurity usually isn’t at the top of the list for campaign organizations, but they need to think about it more. Here are some baseline best practices campaigns can follow to fight disinformation and remain secure:

  • Implement security awareness training: Campaigns are made up of senior policy officials and thousands of volunteers, many of whom are not cyber experts. Develop a security awareness training program that includes an AI literacy component.
  • Master the security basics: There are a number of basic yet critical security elements that campaigns must implement, including enforcing strong password policies, turning on multi-factor authentication, deploying access controls, and developing and practicing an incident response plan (a minimal sketch of one such control follows this list).
  • Take a defense-in-depth approach: Campaigns must adopt a multi-layered approach to security, using products such as firewalls, antivirus and anti-malware software, and encryption. Cybercriminals aim to get maximum reward for the least amount of work and are often drawn to high-profile political targets.
  • Adopt a zero-trust approach: Ensure employees internalize a “never trust, always verify” mentality and align technology tools and processes accordingly.
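
As a concrete illustration of one of the basics above, here's a minimal password-policy check in Python. The length and composition thresholds are illustrative assumptions, not a standard; current NIST guidance (SP 800-63B) actually favors length and screening against breached-password lists over composition rules.

```python
def meets_password_policy(pw: str, min_length: int = 12) -> bool:
    """Illustrative baseline: minimum length plus some character variety.
    Thresholds are assumed for demonstration; adjust to your own policy."""
    return (
        len(pw) >= min_length
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
    )

assert not meets_password_policy("password123")        # fails: too short, no uppercase
assert meets_password_policy("Vote-Early-2024-Tally")  # long passphrase passes
```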

The 2024 election year underscores a critical juncture in the intersection of politics, technology, and cybersecurity. The misuse of AI to spread false information, manipulate public opinion and undermine trust poses a severe threat to the integrity of electoral processes worldwide.

Both citizens and political parties have a responsibility to enhance AI literacy and prioritize baseline security best practices to mitigate the risks associated with disinformation. The convergence of AI and politics demands our collective effort to ensure that the power of technology gets harnessed to enhance and secure the electoral process – not erode it.

Tiffany Shogren, director of services enablement and education, Optiv

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
