New & Next

In Deep

Deepfake technology is an accelerant on a wildfire of misinformation, one that is already impacting businesses, executives, governments and bottom lines

Illustrations By Nicolás Ortega

Imagine logging on to Twitter one morning to see your company’s CEO trending nationwide.

You click on her name and the video autoplays. What you see is shocking—and out of character.

It is, without a doubt, going to be damaging to her reputation and to the company. Your phone buzzes—it’s your CEO.

It’s not real, your CEO says. She did not say—or do—anything shown on the video.

So, how does this video exist? And what do you do about it, before the market opens and the value of your company plummets?

Chances are, the video is a deepfake—a convincing but technologically fabricated video generated through machine learning. The term was introduced to many by former U.S. President Barack Obama. Or, rather, by four fake Obama heads all saying the exact same thing on a giant screen in front of a live audience. None, of course, was the real Obama, as each had been entirely computer-generated.

This Obama example was shown during a 25 July 2018 TED Talk, given by computer scientist Supasorn Suwajanakorn, called “Fake Videos of Real People—and How to Spot Them.”

Suwajanakorn showed how he used a mixture of existing photos and videos, artificial intelligence, deep learning and 3D modeling to create photorealistic—and completely false—videos of the former president.

Edward Delp, the Charles William Harrison Distinguished Professor of Electrical and Computer Engineering at Purdue University in Indiana, says some of the earliest deepfakes were fairly easy to spot. The eye blinking seemed off, or the edges of a face were blurred. Often the audio wasn’t synced to the lip movement.

But the latest deepfakes are getting harder to identify as the underlying video technology improves. The problem is spreading to text as well: machine learning systems can now generate convincing prose using Generative Pre-trained Transformer 2 (GPT-2), a model created by the nonprofit OpenAI.
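To make the idea of machine-generated text concrete, here is a toy character-level Markov chain. It is nothing like GPT-2’s transformer architecture, and the corpus and function names are invented for illustration, but it shows the basic principle: a statistical model that learns which characters tend to follow which contexts can string together plausible-looking text on its own.

```python
# Toy character-level Markov model -- a far simpler relative of systems
# like GPT-2, shown only to illustrate how statistical models produce
# plausible text from patterns in their training data.
import random

def build_model(text, order=3):
    """Map each `order`-character context to the characters that follow it."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=60):
    """Extend `seed` by repeatedly sampling a likely next character."""
    order = len(seed)
    out = seed
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # context never seen in training: stop
            break
        out += random.choice(choices)
    return out

random.seed(1)
corpus = "the deepfake video looked real but the deepfake audio did not "
model = build_model(corpus)
print(generate(model, "the"))
```

Real systems replace the lookup table with a neural network trained on billions of words, which is what makes their output fluent enough to pass for human writing.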

That work led OpenAI researchers Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever to a stunning conclusion, writing in a blog post that “governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems.”

Now, open-source deepfake tech is available to anyone who’d like to give it a try.

The technology was quickly put to malicious use, generating fake pornographic videos of celebrities and nonconsensual revenge pornography that were increasingly released online.

Managing deepfake crises

Jonathan Hemus, founder of Insignia, a U.K.-based crisis management company, says deepfakes have the potential to wreck carefully curated reputations. Any one of his clients could be next, he warns.

“In order for the fakery to be worthwhile, the person in question needs to be recognizable in the first place,” Hemus says. “As a result of apparently stating a controversial, critical or inaccurate view, [deepfake victims] could suffer reputational harm or even spark a crisis.”

Jonathan Bernstein, the president of Bernstein Crisis Management, headquartered in California, explains, “We have long taught our clients that there is no such thing as privacy. Forgeries or modifications of existing audio or print have gotten so good that people can get away with it. We train our clients to just accept that it’ll happen and to be ready when it does.”

Deepfakes have already caused headaches in the marketplace. Bernstein has dealt with deepfakes on two occasions already, both occurring in 2018 and involving high-profile individuals who were “leaders in their field.” His legal and forensics teams got to work immediately, while he began preparing briefing cheat sheets and backgrounder handouts clients could use to explain what actually happened to them.

“In both cases there were logical suspects, and forensic work did the rest,” Bernstein says. “We were lucky. Counsel was then able to use tactics such as court orders to get the deepfake authors to back off.”

Public relations goes deep

Aviv Ovadya, founder of the Thoughtful Technology Project, says AI-augmented audio and visual manipulation is already a real challenge for public relations. Deepfake technology can make such manipulation more powerful and accessible.

Yet despite the real threat it poses to communicators, deepfake tech also presents opportunities, he says. It’s already being used for entertainment in Hollywood and will continue to be used “for storytelling purposes.”

It could also be used by corporations looking to connect with a global audience by creating and controlling their own synthetic videos, Ovadya adds. “You could potentially translate your CEO’s speech to other languages,” he says, noting that you could sync the lip movements to whatever language you desire. “There’s benefit to companies being able to communicate across languages, to create a benevolent leader figure that is cross-lingual.”

Ovadya notes companies could avoid potential ethics issues by disclosing the video is indeed synthetic and was created in the interest of inclusion. All videos would also be created with the consent of the individual in the video—and should say so with text in the video itself.

Fighting fake with tech

The AI Foundation, a California-based organization that “incubates and contributes to the ideation, research, design, development, manufacturing, launch, sales, operations and the evolution of revolutionary personal AI,” is working on Reality Defender, a web plug-in that flags potentially fake content right in the browser. Another plug-in, SurfSafe, performs a reverse-image search to identify the original images used to create fakes.
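Neither plug-in’s internals are described here, but the standard trick behind matching a doctored image back to its source is perceptual hashing: reduce each image to a tiny fingerprint and compare fingerprints bit by bit, so that a lightly edited copy still lands close to the original. The sketch below is a generic average-hash illustration (not SurfSafe’s actual code), with 4×4 grayscale grids standing in for real images.

```python
# Generic perceptual-hash sketch (not SurfSafe's actual implementation):
# reduce an image to a 16-bit fingerprint, then match candidates against
# known originals by Hamming distance. "Images" are 4x4 grayscale grids.

def average_hash(pixels):
    """One bit per pixel: is the pixel brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(h1, h2):
    """Count differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]

# A lightly doctored copy: several pixel values changed, structure intact.
doctored = [[190, 210, 10, 30],
            [200, 200, 25, 10],
            [10, 10, 200, 190],
            [30, 10, 200, 200]]

print(hamming(average_hash(original), average_hash(doctored)))  # -> 0
```

Because the fingerprint depends only on coarse brightness structure, small edits leave it unchanged, while an unrelated image produces a distant hash. That is what lets a browser plug-in point from a suspect image back to the photo it was built from.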

Delp and members of his computer engineering team at Purdue University are focusing on machine learning and using deep neural networks to determine whether videos have been modified or altered.

The team feeds hundreds of hours of deepfakes to the computer, which is continuously learning new tricks to identify them. “What’s nice about this is as these [deepfake production] methods get more sophisticated, our machine can continue to learn,” Delp says.
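The Purdue team’s actual system trains deep neural networks on raw video, and none of its code is public here. As a heavily simplified stand-in, the sketch below trains a single logistic “neuron” to separate clips using hand-picked scores for the telltale cues mentioned earlier: odd blinking, blurred face edges and out-of-sync audio. The feature names and numbers are invented for illustration.

```python
# Toy sketch only -- not the Purdue system. One logistic unit learns to
# separate real clips from fakes using hand-picked cue scores (0-1):
# [blink_anomaly, edge_blur, audio_sync_error].
import math

def predict(w, b, x):
    """Probability that a clip is fake, given its feature scores."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

def train(samples, labels, epochs=200, lr=0.5):
    """Fit one logistic unit with plain gradient descent on log-loss."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            g = predict(w, b, x) - y  # gradient w.r.t. the pre-activation
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

clips = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.0],   # genuine footage
         [0.9, 0.8, 0.7], [0.8, 0.9, 0.9]]   # known deepfakes
w, b = train(clips, [0, 0, 1, 1])

print(predict(w, b, [0.85, 0.9, 0.8]) > 0.5)  # True: flagged as likely fake
```

The real advantage Delp describes comes from replacing these hand-picked cues with features the network learns itself, which is why the detector can keep improving as forgers invent new tricks.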

Suwajanakorn, a pioneer of deepfake technology, pointed early on to the potential fallout when real people are made to appear to say and do things they never actually did and never consented to.

“Our goal was to build an accurate model of a person, not to misrepresent them. But one thing that concerns me is its potential for misuse,” he said in his TED Talk.

Still, Suwajanakorn defends the technology. “From a scientific standpoint, advancing state-of-the-art technology is how we move forward into the future. But that should always come with cautions and safeguards,” he says. “The way I see it is that it’s just a matter of time before this kind of technology arrives. And it’s much better that it is discovered and publicly exposed in the scientific community rather than within a limited group.”

Crisis of trust

When that happens, our crisis of trust could deepen. “Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways,” wrote John Villasenor, a nonresident senior fellow in Governance Studies at the Brookings Institution’s Center for Technology Innovation, in his article “Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth.”

“By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: They undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.”

3 Ways to Deep-Six a Deepfake

1. Build goodwill and brand trust

When fighting back against a deepfake, it helps if everyone already likes your client or company.

“It’s all about creating a cushion of goodwill,” says Jonathan Bernstein, president of Bernstein Crisis Management, headquartered in California. “That’s a big part of our advice: creating a cushion of goodwill allows an organization to survive any crisis much better.”

Even though the technology keeps getting more advanced, tried-and-true PR practices are still as effective as ever, adds Jonathan Hemus, founder of Insignia, a U.K.-based crisis management company.

“The emergence of this technology increases the importance of a business leader building a strong and positive personal reputation ahead of the crisis event,” Hemus says. “Stakeholders will be less willing to believe negative comments from a trusted business leader such as Richard Branson, as opposed to a more controversial CEO.”

2. Prep your crisis response team

Bernstein also recommends that PR professionals conduct “contractors wanted” searches to add legal and forensic expertise to their stable of on-call experts.

3. Respond with the facts

The next—and essential—step, Bernstein says, is to release a statement immediately.

“In the absence of a statement, rumor and innuendo spread very fast. We’ve seen that over and over again,” Bernstein says.

“The quicker you can knock down and clear the brush around, the greater the chance you won’t have a giant fire raging.”


Learn how AI could impact your work by watching the webinar “Communicating AI: Building the Playbook for Communication Professionals.” 

Jonathan Streetman
