
Deepfake Defense

Philip Henery
March 14, 2025

The AI tools used to create digital humans for legitimate business purposes, like those discussed previously, have safeguards in place that require the consent of any individual who will be imitated by a clone. But similar technology is available on the market, whether via open-source licenses or, in some cases, hacked versions of proprietary software, giving bad actors access to this powerful capability as well. The result is that we are entering an era in which deepfakes – realistic digital imitations of real people – will exacerbate the misinformation problem on the internet and supercharge the phishing and social engineering tactics employed to commit fraud.

The Issue:

“Seeing is no longer believing. And that’s really, really scary because you know for human beings, because of the way that our visual cognitive system works, we’re hard-wired to accept visual inputs pre-attentively … visual input just goes directly into our brain, sometimes without engaging critical thinking, which means we can form beliefs about states of affairs visually before we’ve even had a chance to engage critically with what that visual input is really communicating.” – Victoria Lemieux, Blockchain@UBC Cluster Lead, Professor of Archival Science at the School of Information, University of British Columbia.

Instances of deepfakes leading to misinformation or fraud have been seen in the past, but the advent of widely available generative AI tools to create deepfakes and generate content with them escalates the threat. In the Global Risks Report 2024, the World Economic Forum ranks misinformation and disinformation as the most severe threat the world faces over the next two years. That’s ahead of extreme weather events, social polarization, and cyber insecurity.

Report authors are clear that AI-generated content producing falsified information is the main driver behind the misinformation threat, including deepfakes. “Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years,” they write. “New classes of crimes will also proliferate, such as non-consensual deepfake pornography or stock market manipulation” (WEF, 2024).

Cyber-attackers can use deepfakes to impersonate decision-makers on audio and video channels, making phishing attacks targeting employees, previously limited mostly to email, all the more convincing. Prominent organizations could also see their reputations damaged by deepfakes of their leaders released into the public realm.

“Deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public across national security, law enforcement, financial, and societal domains.” – Department of Homeland Security, 2021.

The Obstacles:

The risk is perhaps greatest in swaying political outcomes, with deepfakes threatening to confuse an electorate by impersonating candidates or elected officials. Aspen Digital, a non-profit organization that seeks to empower people and organizations to be responsible stewards of technology and media, has warned about several risks associated with AI and the US presidential election (2024):

  • Hyperlocal voter suppression: Bad actors could spread false information to discourage voters in specific communities from casting their ballots.
  • Language-based influence operations: AI allows for instant translation of text between languages, which can enable the spreading of lies when in the wrong hands.
  • Deepfaked public figures: As already seen in other elections around the world, public figures can be depicted saying or doing something that they did not.

AI VIEWED AS A DISRUPTIVE FORCE:

The potential for bad actors to harness AI, combined with the new vulnerabilities and pitfalls it creates for organizations to contend with internally, feeds into AI being viewed as an overall disruptive force. IT leaders rank AI as the second most likely factor to disrupt their business in the next 12 months, a close second to the talent shortage. AI ranks ahead of cybersecurity incidents, government-enacted policy or regulatory changes, and changing customer behavior, among other factors.

IT LEADERS WORRY ABOUT DEEPFAKES:

While IT leaders are likely factoring in the disruptive impact of competitors harnessing AI to push them out of the market, the potential for cybercriminals to harness it to cause more direct damage is front of mind. Our survey respondents rate their concern over AI-powered cyber-attacks in general at the maximum of 5 out of 5, and deepfake phishing attempts at 4.5 out of 5 (both median scores). These AI-powered threats cause more concern than encryption being broken or IoT devices being insecure, among other threats. New technologies are giving old threats a new dimension. How can organizations respond?

Our Approach to Mitigating This Issue:

Stick to what you know. Some long-tested tactics to defend against malicious actors will still help ward off new attacks powered by AI. It’s where most organizations plan to start.

A human-centric approach to cybersecurity has been preached for years to strengthen the weakest link in the cybersecurity chain: people. Too often, it’s a worker clicking on a malicious link in an email or falling for a social engineering attack through social media that leads to a data breach. With deepfakes threatening to make those types of attacks even more common and more sophisticated, IT security leaders plan to focus on training and education so employees can identify and respond to deepfakes appropriately. Average IT departments are slightly more likely to prioritize this method than Transformers, with 73% saying they will employ the tactic compared to 63%.

Transformers are twice as likely as the Average group to use tools that detect AI-generated audio and video to identify deepfakes, with one in five saying they will do this compared to 9% of the Average. Transformers are also thinking about how to prevent AI deepfakes of their executives in the first place, with 20% saying they will work to limit the amount of available material such as high-quality audio and video for training deepfakes. Only 13% of the Average group is doing this.

“[Even with training, however, it] can be very difficult to dislodge false beliefs that are established through these deepfakes. We can train ourselves. We can become more conscious and more aware. I think we are as a society, but it really is kind of fighting our human nature and our own visual cognitive systems … it’s asking you to be less human in a way, to become more like machines in how we process information.” – Victoria Lemieux, Blockchain@UBC Cluster Lead, Professor of Archival Science at the School of Information, University of British Columbia.

Overall, most organizations are relying on employee training and verification and authentication processes, including multifactor authentication or watermarking, as the main tools to protect themselves.

Our Insights:

Despite the shortcomings of individual approaches, some organizations are pursuing them in hopes they could form part of a multi-faceted effort to protect society’s chain of knowledge creation. Individual organizations will have to think about their place in the ecosystem and determine where they want to take action to add trust to their own content, whether distributed to employees or to the wider public. They’ll also have to track vendor and regulator initiatives that seek to protect against misinformation, so they know what content to trust online, how employees can fend off sophisticated AI-powered cyberattacks, and what measures to apply to their own content creation.

Some examples of specific solutions being pursued:

SWEAR WANTS TO UNDERPIN TRUSTED VIDEO ON THE INTERNET:

SWEAR Inc. offers technology that brings blockchain-based authenticity to digital media assets. SWEAR’s technology demonstration is powered by an iPhone app that lets users record videos that are given unique hashed signatures in real time. The technology could be integrated with Android smartphones or embedded in other recording devices such as cameras. The video content is embedded with a “cryptographic fingerprint to map every frame, pixel, sound bite, and layers of attribution data,” according to SWEAR’s solution brief. That data includes which user shot or edited the footage and when, among other data points. The resulting hash is stored on a blockchain, while the digital media assets are stored separately in a secure environment. Users are shown a confidence score indicating how likely it is that the content is authentic.
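SWEAR’s exact fingerprinting scheme is proprietary, but the core idea of chaining content hashes and anchoring the result on a ledger can be sketched in a few lines. The Python snippet below is an illustrative sketch only, not SWEAR’s implementation, and the frame and attribution data are hypothetical:

```python
# Simplified illustration of per-frame media fingerprinting (not SWEAR's code).
# Each frame's bytes are chained into a single digest, so altering, removing,
# or reordering any frame changes the final fingerprint.
import hashlib

def fingerprint_video(frames: list[bytes], attribution: bytes) -> str:
    """Chain-hash frame bytes plus attribution data (who recorded it, when)."""
    digest = hashlib.sha256(attribution).digest()
    for frame in frames:
        # Each link commits to the previous digest and the current frame.
        digest = hashlib.sha256(digest + frame).digest()
    return digest.hex()

# Hypothetical data; in practice these would be raw frames from a recording.
fingerprint = fingerprint_video(
    frames=[b"frame-0-bytes", b"frame-1-bytes"],
    attribution=b"recorded-by:alice|2025-03-14T10:00:00Z",
)
print(fingerprint)  # This digest is what would be anchored on the blockchain.
```

A verifier would recompute the digest from the media file and compare it against the on-chain record; any mismatch indicates the content was altered after capture.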

“We’re putting on our tinfoil hats and asking how can you try to fake this? We try to capture as much information as we can, and we watermark it directly into the video in real time.” – SWEAR CEO Jason Crawforth.

According to founder and CEO Jason Crawforth, the mission of SWEAR is not to create its own platform where users will exchange authentically certified videos. “Let’s be honest, we’re an acquisition target,” he says. “We wanted to show Apple and Google that we have the ability to use this technology should we integrate it directly into the OS level.” Or social media platforms seeking a solution for preserving user trust could consider implementing it, he adds.

While SWEAR is blockchain agnostic and could use various available options, its technology demo, including the iPhone app, relies on Hyperledger. The permissioned distributed ledger is recognized by large technology vendors and works efficiently, Crawforth says. SWEAR was awarded the 2024 Judges’ Choice Award by the Security Industry Association. (Interview with Jason Crawforth)

USING METHODS OF THE PAST ON THE CONTENT OF THE FUTURE:

Archival science provides a toolset that practitioners use to verify the authenticity of recovered historical documents. That could offer some insight into how to separate AI-generated misinformation from real content, according to Victoria Lemieux. She is collaborating with researchers from Carleton University on a prototype solution that users can apply to assess content’s authenticity.

“Epistemic security is trying to figure out what to do when we realize we disagree about the facts. When we have such divisions in society, we need to harmonize around a consensus … a shared truth. A society that doesn’t have a shared truth is going to be divided.” – Victoria Lemieux, Blockchain@UBC Cluster Lead and Professor of Archival Science at the School of Information, University of British Columbia.

THE WHITE HOUSE PLANS TO CRYPTOGRAPHICALLY SIGN ITS COMMUNICATIONS:

After reports that a deepfake of President Joe Biden’s voice was used in robocalls discouraging voters during a New Hampshire primary election, the threat of AI-generated mimicry of elected leaders became clear. The White House is responding with plans to cryptographically verify its communications, from text statements to videos, allowing users to verify the source of content that appears to come from the President. (Cybernews, 2024)
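The White House has not published the technical details of its scheme, but standard public-key signatures illustrate the principle: the issuer signs content with a private key it alone holds, and anyone with the published public key can confirm the content is unaltered and came from the key holder. Below is a minimal sketch using the widely used Python cryptography package; the statement text is hypothetical:

```python
# Illustrative sketch of signing and verifying official content with a
# public-key signature (not the White House's actual, unpublished scheme).
# Requires the third-party package: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing office generates a key pair once; the public key is published
# so anyone can verify, while the private key never leaves the issuer.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# For video or audio, the issuer would sign a hash of the file rather than
# the raw bytes; this hypothetical statement stands in for the content.
statement = b"Official statement released on 2025-03-14."
signature = private_key.sign(statement)

# Any recipient checks the signature with the published public key.
try:
    public_key.verify(signature, statement)
    print("Valid: unaltered and issued by the key holder.")
except InvalidSignature:
    print("Invalid: altered or not from the claimed source.")
```

Verification fails if even a single byte of the statement changes, which is exactly the property that lets recipients reject doctored audio or video attributed to an official source.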

CONTENT AUTHENTICITY INITIATIVE BUILDS A COALITION FOR CONTENT CREDENTIALS:

Founded by Adobe in 2019, the CAI has grown to include 2,000 media and technology companies committed to using open-source tools to record the provenance of any digital media, even if made with generative AI. The coalition leverages the technical standards created by the Coalition for Content Provenance and Authenticity (C2PA) and seeks to build a community around the movement. Technology members include camera and chip manufacturers working to embed verification methods directly into their tools. (CAI)
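At the core of the C2PA standard is a signed manifest that travels with the media asset and records its provenance. The structure below is a loose, illustrative sketch of the kind of information such a manifest carries; the real specification (c2pa.org) serializes signed claims in a binary container rather than a Python dictionary, and the tool and author names here are hypothetical:

```python
# Loose sketch of the information a C2PA-style manifest records; the real
# format is a signed binary structure defined by the C2PA specification.
manifest = {
    "claim_generator": "ExampleCamera/1.0",  # hypothetical tool that made the claim
    "assertions": [
        # How the asset was produced (captured, edited, AI-generated, ...).
        {"label": "c2pa.actions",
         "data": {"actions": [{"action": "c2pa.created"}]}},
        # Authorship metadata attached by the creator (hypothetical name).
        {"label": "stds.schema-org.CreativeWork",
         "data": {"author": [{"name": "Jane Photographer"}]}},
    ],
    # In the real standard this is a cryptographic signature over the claim,
    # made with the creator's or device's certificate.
    "signature": "<signature over the claim>",
}
```

Cameras that write such a manifest at capture time, and editing tools that append to it, give downstream viewers a verifiable chain of custody for the image or video.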


About the Author

Philip Henery
Marketing Administrator

Philip is a writer, editor, voiceover narrator, and producer of several forms of media, from news articles and biographies to novels, podcasts, and even local music artists. He is ROCIMG’s Marketing Administrator and is partly responsible for pushing his company’s presence to the forefront of its localized industry.
