What are Deepfakes?
Deepfakes are videos and audio clips created using artificial intelligence (AI). The creators collect a database of photographs of a specific person, and an AI-based application then uses those photographs to insert that person's likeness into a video. The underlying technique is known as a generative adversarial network (GAN). GANs were introduced back in 2014 by Stanford alum Ian Goodfellow, who currently works as a director at Apple.
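For the technically curious, here is a rough sketch of the adversarial idea in PyTorch. It is an illustration of my own, not code from any real deepfake tool: it trains a tiny generator and discriminator on toy 2-D data instead of face images, just to show the back-and-forth game a GAN plays, where the generator keeps trying to produce samples the discriminator can no longer tell apart from the real thing.

```python
# Minimal GAN sketch on toy 2-D data (stand-in for real photographs).
# Network sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=128):
    # "Real" data: a 2-D Gaussian blob standing in for genuine images.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: real samples -> 1, generated samples -> 0.
    real = real_batch()
    fake = G(torch.randn(128, 8)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator: try to make the discriminator say "real" on fakes.
    fake = G(torch.randn(128, 8))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    if step % 500 == 0:
        print(f"step {step}: d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```

Real deepfake systems apply this same tug-of-war to faces and voices at a vastly larger scale, which is why the results keep getting harder to spot.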
The Issue
There are a few major concerns with this emerging technology.
The first is fake news, a term popularized by U.S. President Donald Trump in recent years. Deepfake technology allows for the creation of convincing fake footage of major news outlets or even prominent politicians, which can be used to manipulate the truth. This raises serious concerns about how people can trust the news they consume.
The second concern is reputation management, which is important for both individuals and businesses. The truth is that tech companies have been mining our data for years, and even when we consent, most people don’t know how many photographs of them are available online. If those photographs get into the hands of Deepfake creators, they can seriously jeopardize the reputation of an individual or a brand.
The third is cybersecurity. Imagine this: you receive a RingCentral, Zoom, or Microsoft Teams video call, and a high-ranking executive in your company asks you for an employee’s personally identifiable information (PII) or instructs your CFO to transfer bank funds. Believe it or not, Deepfake technology is evolving from video manipulation to replicating the exact sound of our voices. Dr. Karoly Zsolnai-Feher, a researcher at the Institute of Computer Graphics and Algorithms at Vienna University of Technology in Austria, introduced us to a technology called Neural Voice Puppetry (NVP). Voice cloning has advanced alongside it: systems built on Tacotron 2, an AI-based text-to-speech model, need only about a five-second audio clip of a voice to synthesize and replicate it.
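To get a feel for how little audio is needed, here is a rough sketch using the open-source Resemblyzer library, which packages the speaker encoder from a popular voice-cloning pipeline. The file names are placeholders of my own, and a full cloning system would go on to feed the resulting “voiceprint” into a synthesizer and vocoder; this only shows the first step, summarizing a short clip of a voice into a vector that can be compared, or imitated.

```python
# Sketch: turn a ~5 second clip into a speaker embedding ("voiceprint")
# and compare two voices. File paths below are placeholders.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()

# Load and preprocess two short recordings (placeholder file names).
clip_a = preprocess_wav(Path("speaker_a_5s.wav"))
clip_b = preprocess_wav(Path("unknown_speaker.wav"))

# Each clip is summarized as a fixed-length embedding vector.
embed_a = encoder.embed_utterance(clip_a)
embed_b = encoder.embed_utterance(clip_b)

# Cosine similarity close to 1.0 suggests the same voice.
similarity = float(np.dot(embed_a, embed_b) /
                   (np.linalg.norm(embed_a) * np.linalg.norm(embed_b)))
print(f"voice similarity: {similarity:.2f}")
```

The unsettling part is that the same compact voiceprint that lets software verify a speaker is what lets a cloning system imitate one.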
The Solution
The best solution will be to detect Deepfakes using the same kinds of AI that are used to create them. In April 2019, the U.S. Defense Advanced Research Projects Agency (DARPA) awarded SRI International, an independent, nonprofit research institute based in Menlo Park, California, a contract to research the best ways to automatically detect Deepfakes.
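As a rough illustration of this “fight AI with AI” idea, the sketch below trains a small convolutional classifier to label video frames as real or fake. The random tensors stand in for a labelled dataset of genuine and manipulated frames that you would have to supply, and the model is far simpler than anything DARPA or SRI would actually deploy.

```python
# Sketch: a tiny binary classifier for "real vs. fake" video frames.
# Random tensors below are placeholders for a labelled training set.
import torch
import torch.nn as nn

torch.manual_seed(0)

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),  # single logit: higher means "looks fake"
)

opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: 64x64 RGB frames, label 1 = fake, 0 = real.
frames = torch.randn(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()

for epoch in range(5):
    logits = detector(frames)
    loss = loss_fn(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")

# At inference time, a sigmoid over the logit gives an estimated
# probability that a frame has been manipulated.
print(torch.sigmoid(detector(frames[:2])))
```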
Yet even once the technology exists to scour the nearly two billion websites on the internet, the question remains: what happens when these videos are actually found? Detection alone is not enough; new tools will also be needed to flag manipulated videos for the people viewing them.
Conclusion
How Deepfakes will evolve in the future remains to be seen, but the threat already exists, and it’s important to keep educating yourself on emerging trends. We cannot take what we see or hear at face value.
G.K. Chesterton, an English philosopher and writer, once wrote, “It isn’t that they can’t see the solution. It is that they can’t see the problem.”