Understanding the deepfake ecosystem

As I have frequently written, deepfake technology has been commoditized. Progress in synthetic media has been phenomenal over the last few years, and 2020 is definitely the tipping point at which these technologies become pervasive, for good but also for ill.

3 November 2020 (Chania, Crete) – Last week I published a draft of my introduction to The struggle (futility?) of controlling or regulating technology in the modern world, a new monograph I am writing. I discussed the news about Photoshop’s “Deep Fake” edition and added some notes on GPT-3. The relevant paragraphs:

Photoshop just added a set of ML-based tools that let you manipulate images in new ways – to change the expression of a face in a photograph, for example. Technically impressive, but it also means anyone can now fake an image or video more convincingly than ever before. Obviously this is enormous in the context of legal and journalistic investigations. Up to now, deepfake detection has relied on semantic and contextual understanding of the content, plus the basic forensic methods available. But as I have noted in previous posts, detection will only get harder. More advanced image manipulation now underway via GAN models, along with synthesized-video tools that combine video and audio generation to produce realistic voices, will make these fakes effectively undetectable.

So the AI needed to create photorealistic faces, objects, landscapes, and video isn’t far behind GPT-3, which can already write dialogue and articles (and movie plots) almost indistinguishable from ones written by humans. When OpenAI released GPT-3 this past summer, it marked the first time a general-purpose AI saw the light of day. But it was also just the latest entry in a long history of knowledge technologies, the subject of my second monograph due next year.

And the level of abstraction keeps rising. There is a running joke in Silicon Valley that with GPT-3 and GAN models you can write a Taylor Swift song about Harry Potter. But soon, you’ll be able to make a new Harry Potter movie starring Taylor Swift.
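
To ground the GPT-3 half of that joke: by late 2020, generating passable prose was already just a few lines of code against OpenAI’s API. Here is a minimal sketch using the Python client of the time – the prompt and parameters are illustrative, and the client interface has since changed:

```python
# Minimal sketch of text generation with GPT-3 via OpenAI's 2020-era
# Python client. Assumes you have an API key; "davinci" was the largest
# GPT-3 engine available at the time.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci",
    prompt=("Write the opening of a movie plot in which Taylor Swift "
            "attends Hogwarts:"),
    max_tokens=150,    # length of the completion
    temperature=0.8,   # higher values give more inventive output
)
print(response.choices[0].text.strip())
```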

I have a few more brief points to make based on reader feedback from that chapter.

A couple of weeks ago, the cybersecurity firm Sensity revealed that a pornographic “deepfake ecosystem” had spread across the messaging app Telegram. Its researchers discovered that bots had generated more than 104,000 fake, AI-generated nude images of real women. The images were shared in private and public channels beyond Telegram, for “public shaming or extortion-based attacks.” The bot network was boosted on VK, Russia’s largest social media network. Telegram did not respond when Sensity reached out with its findings.

As I have noted, deepfake technology has been commoditized. A few years ago, generating a deepfake required real machine-learning expertise. Now, anyone can leverage off-the-shelf models, for malicious or harmless purposes alike.

Most countries don’t have clear laws governing the use of deepfakes and instead fall back on the privacy or anti-harassment laws already on the books. In the U.S., Virginia has imposed criminal penalties for distributing nonconsensual deepfake pornography, and California has passed a law outlawing the dissemination of deepfakes featuring politicians within 60 days of an election.

Detection 

I will explain this in more detail in my monograph, but in brief, Sensity blends “automated detection of visual threats” – via web crawlers and computer vision – with the work of threat-intelligence analysts, who search for malicious activity in underground communities on the dark web. Private groups and encrypted messaging apps pose a challenge to effective deepfake detection (and to content moderation writ large). In the case above, Sensity uncovered the activity by searching public Telegram channels for keywords. And since these deepfake services are not strictly “illegal” – or at best live in a grey area of the law – the bot network’s operators can freely promote and monetize their service.
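
To make that concrete, here is a hypothetical sketch of such a triage pipeline. None of this is Sensity’s actual tooling: fetch_public_posts is a stand-in for a real crawler over public channels, the keyword list is illustrative, and the frequency-domain check is just one basic forensic signal (GAN upsampling often leaves periodic artifacts in an image’s spectrum), not a production detector:

```python
import numpy as np

# Illustrative watch-list; a real system would use a much richer query set.
KEYWORDS = {"deepfake", "undress", "nude"}

def fetch_public_posts():
    """Stand-in crawler over public channels: yields (caption, image) pairs.

    A fabricated grayscale image serves as a placeholder here; a real
    crawler would download media from flagged posts.
    """
    rng = np.random.default_rng(0)
    fake_image = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
    yield "try our undress bot today", fake_image

def high_freq_energy(image):
    """Fraction of spectral energy outside the central low-frequency band."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = h // 8, w // 8  # treat the central quarter as "low frequency"
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return 1.0 - low / spectrum.sum()

def triage(posts, threshold=0.5):
    """Keyword filter first, then a crude spectral check on the image."""
    for caption, image in posts:
        if any(k in caption.lower() for k in KEYWORDS):
            score = high_freq_energy(image)
            yield caption, score, score > threshold  # threshold is a guess

for caption, score, flagged in triage(fetch_public_posts()):
    verdict = "send to analyst" if flagged else "pass"
    print(f"{caption!r}: high-freq energy {score:.3f} -> {verdict}")
```

The hard part, of course, is access: a pipeline like this only sees what is public, which is exactly why private groups and encrypted channels frustrate detection.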

Big picture? I do not believe most platforms or messaging companies will build the needed deepfake-security layer in-house. At the end of the day, it’s not their business.
