
Deepfakes and the Threat to Privacy and Truth

You just crossed into the twilight zone.

“Photographs furnish evidence,” wrote Susan Sontag in On Photography. “A photograph passes for incontrovertible proof that a given thing happened.” Sontag went on to write of how photographs can misrepresent situations. But do they even have to show real objects?

When you open the website “This Person Does Not Exist,” you are met with the face of a man or woman. He or she looks normal—like the average person you would brush past on the way to work—but he or she does not exist. The website uses generative adversarial networks, which produce original data from training sets. By analyzing vast numbers of real faces, it can generate new ones.
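
For the curious, the adversarial setup behind such generators can be sketched in a few lines: a generator network invents images from random noise while a discriminator network learns to tell them from real photographs, and each improves by competing against the other. Below is a minimal, illustrative sketch assuming PyTorch, with toy layer sizes; it is not the actual system behind the site (which reportedly uses Nvidia's StyleGAN), only the core idea.

```python
# A minimal sketch of a generative adversarial network (GAN), assuming PyTorch.
# All layer sizes are toy values for illustration; real face generators use
# large convolutional architectures, not tiny fully connected ones like these.
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64 * 3  # noise size, flattened-image size

# Generator: turns random noise into a candidate "face"
G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                  nn.Linear(512, image_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks (1 = real, 0 = fake)
D = nn.Sequential(nn.Linear(image_dim, 512), nn.ReLU(),
                  nn.Linear(512, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images):
    """One adversarial round: real_images is a (batch, image_dim) tensor."""
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # The discriminator learns to tell real faces from generated ones.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # The generator learns to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```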

True, there are some glitches. The first man I saw—a cheerful, bald, middle-aged man who could have been a television evangelist or a salesman at a training seminar—had an inexplicable hole beneath his ear, which, once seen, gave him an unnerving reptilian appearance. More often than not, though, the faces are indistinguishable from the real thing.

You just crossed into the twilight zone.

Scientists from OpenAI, a research organization dedicated to investigating means of developing “safe” artificial intelligence, have decided not to release their new machine learning system, which generates text from writing prompts, for fear that it could be used to “generate deceptive, biased, or abusive language at scale.”

The system generates text by understanding and replicating the linguistic and rhetorical logic of prose. To be sure, the sample outputs OpenAI has released—in which, for example, the model argues that recycling is bad and reports that Miley Cyrus has been caught shoplifting—contain some bizarre assertions, weird repetition, and ungrammatical phrases, but one could say the same of a lot of the prose that people write.
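
The generation itself is, at heart, a loop: the model repeatedly predicts a plausible next word given everything written so far, with a little randomness ("temperature") to keep the prose from looping. Here is a hedged sketch in which `model`, `tokenize`, and `detokenize` are hypothetical stand-ins, not OpenAI's actual interfaces:

```python
# An illustrative sketch of autoregressive text generation, assuming PyTorch.
# `model`, `tokenize`, and `detokenize` are hypothetical stand-ins: the model
# is assumed to map a (1, sequence) tensor of token ids to next-token scores.
import torch

def generate(model, tokenize, detokenize, prompt,
             max_new_tokens=50, temperature=0.8):
    tokens = tokenize(prompt)  # e.g. "Recycling is bad" -> [2137, 318, ...]
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([tokens]))[0, -1]    # scores for every candidate next token
        probs = torch.softmax(logits / temperature, dim=-1)  # temperature tunes randomness
        next_token = torch.multinomial(probs, 1).item()  # sample, don't always take the top pick
        tokens.append(next_token)
    return detokenize(tokens)
```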

You just crossed into the twilight zone.

“Deepfakes”—a portmanteau of “deep learning,” which essentially entails machines learning by example, and “fake”—are used not just to create non-existent people but to misrepresent real ones. One can, for example, take someone’s face and put it onto someone else’s body.
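
The classic face-swap recipe is surprisingly simple in outline: train one shared encoder to compress any face into a compact code, plus two decoders that each learn to reconstruct one specific person from that code; at swap time, encode person B's frame and decode it with person A's decoder. The sketch below is illustrative only; real tools use convolutional networks and careful face alignment, and every size here is an assumption.

```python
# A sketch of the classic face-swap architecture: one shared encoder learns a
# generic representation of "a face"; two decoders each learn to reconstruct
# one specific person from it. Sizes and layers are illustrative assumptions.
import torch.nn as nn

face_dim, code_dim = 64 * 64 * 3, 256  # flattened face crop, shared code size

encoder   = nn.Sequential(nn.Linear(face_dim, 1024), nn.ReLU(),
                          nn.Linear(1024, code_dim))
decoder_a = nn.Sequential(nn.Linear(code_dim, 1024), nn.ReLU(),
                          nn.Linear(1024, face_dim))  # reconstructs person A
decoder_b = nn.Sequential(nn.Linear(code_dim, 1024), nn.ReLU(),
                          nn.Linear(1024, face_dim))  # reconstructs person B

# Training minimizes reconstruction error for each person separately:
#   loss_a = mse(decoder_a(encoder(face_a)), face_a)
#   loss_b = mse(decoder_b(encoder(face_b)), face_b)

def swap_onto_a(face_b):
    """The swap itself: encode person B's frame, decode it as person A."""
    return decoder_a(encoder(face_b))
```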

In practice, in a grim reflection on our species, this tends to involve anonymous netizens creating videos in which the faces of celebrities have been grafted onto the bodies of porn stars. Until it was banned in 2018, the subreddit “deepfakes” was alive with users busily cooperating in efforts to superimpose the faces of Gal Gadot or Emma Watson onto hard-core porn.

In a recent interview with the Washington Post, Scarlett Johansson spoke of how dozens of videos online portray her in explicit sex scenes. “The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause,” Johansson said. “The Internet is a vast wormhole of darkness that eats itself.”

You are now crossing into the twilight zone.

Of course, people have been able to manipulate text, sound, and images for almost as long as man has been able to record them. The better part of a hundred years ago, Nikolai Yezhov was scrubbed out of a photograph with Joseph Stalin after the man he had served with savage loyalty had had him executed.

Still, even as “photoshopping” technology has advanced, it has had little impact on our impression of the world. “Airbrushing” has allowed the bodies of celebrities to seem eerily flawless, and occasional “fake” images have been accepted as real. In 2004, for example, a composite photo of John Kerry and Jane Fonda briefly allowed Republican activists to claim that Kerry was associated with the controversial anti-war, formerly pro-communist actress.

Photoshopping, though, is less vulnerable to bad actors than deepfaking. When an image has been photoshopped, as in the case of the Kerry/Fonda photo, one can generally find the original. With deepfakes, there is a whole mess of originals, which are far more difficult to trace. Video is also more convincing, and more compelling, than static images, and thus has far greater potential to be used for licentious or cynical purposes.

As well as humiliating celebrities, deepfakes can be used to harass and harm ordinary people by portraying them in compromising situations. This is inherently hurtful, damaging their sense of themselves and their control over their lives, and it also threatens their relationships and careers by dishonestly associating them with vice or crime. A sinister quality of deepfakes is how easy it has been for people to learn how to create them. Some Australian journalists easily created a simple deepfake of Malcolm Turnbull speaking (thus cleverly preempting suggestions that they learn to code).

This also raises concerns about policing. Now, if you have video of someone committing a crime, it is all but incontrovertible evidence of their guilt. As the technology with which to create deepfakes develops, the innocent could be framed and the guilty could have a clever new excuse.

In advanced nations this trend will be damaging; in developing nations it could be deadly. In India, dozens of people have been lynched after dark rumors were spread about them on WhatsApp. One can only imagine the furore that could be whipped up if a rumor were supported by a video.

Political scandals could be manufactured as well. In an age of heated polarization, it will be difficult for politicians to convince their opponents that damaging videos are in fact deepfakes. On the flipside, if real videos of sins and crimes emerge, politicians will be able to suggest that they have been invented. Some, like the political scientist Thomas Rid, have said they “do not understand the hype” when “the age of conspiracy is doing fine already.” To that I say, things can always get worse.

Platforms have hurried to contain the proliferation of deepfakes. Reddit banned /r/deepfakes, forcing its users to migrate to 4chan, while Twitter, Gfycat, and Discord have banned deepfake content and creators. I expect some contrarians to insist that deepfakes are examples of free expression, but if the law offers any defense of our reputations and our privacy, malicious deepfakes should be banned not just by social media platforms but by law.

An important study of the potential for AI to be misused maliciously has also recommended restricting the availability of AI code to block “less capable actors” from adding it to their arsenals. “Efforts to prevent malicious uses solely through limiting AI code proliferation,” the authors go on, however, “are unlikely to succeed fully, both due to less-than-perfect compliance and because sufficiently motivated and well-resourced actors can use espionage to obtain such code.”

As deep learning technology advances, methods of exposing fakes advance as well. Siwei Lyu of the University at Albany, for example, developed a method of detecting deepfakes by analyzing the frequency with which people in videos blinked. Recreating natural blinking in deepfakes has been hard, he explained, because it is much harder to find pictures of people with their eyes closed than with their eyes open.
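
A simplified version of the idea can be expressed with the common "eye aspect ratio" heuristic: when an eye closes, the vertical distances between its landmarks collapse relative to the horizontal one, and a video whose subject never crosses that threshold is suspect. Lyu's actual detector used a trained neural network; this sketch, which assumes the standard 68-point facial-landmark convention, only illustrates the signal it looks for.

```python
# A heuristic blink detector using the "eye aspect ratio" (EAR), assuming
# numpy arrays of 68 facial landmarks per frame (the common dlib convention,
# where points 36-41 and 42-47 outline the left and right eyes).
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks; low values mean a closed eye."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(frames_landmarks, threshold=0.2):
    """People blink every few seconds; early deepfakes showed almost no blinks,
    so an implausibly low count over a long clip is a red flag."""
    blinks, eye_closed = 0, False
    for landmarks in frames_landmarks:
        ear = (eye_aspect_ratio(landmarks[36:42]) +
               eye_aspect_ratio(landmarks[42:48])) / 2.0
        if ear < threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= threshold:
            eye_closed = False
    return blinks
```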

Still, there is no guarantee that methods of detecting fake images, audio, and text will advance as rapidly as methods of producing them. Moreover, in an age of declining social trust, it is increasingly difficult to convince people that something is true or false even if one has proof. The heat and pace of the news cycle appeal to our biases, not our rationality. As the scandals involving the Covington students and Jussie Smollett have demonstrated, people make snap judgements based on scant information, and changing minds once they have been made is always difficult.

In a world where data can be so terribly unreliable—where our eyes can indeed be lying to us—we have to restrain those aggressive impulses that lead us to draw bold conclusions about people and events. We have to collectively acknowledge the importance of accurate data, and the possibility that it might defy our prejudices, even if we disagree on broader theories and ambitions.

When we take a photograph, Sontag wrote, we are “creating a tiny element of another world: the image-world that bids to outlast us all.” One of our important responsibilities is to struggle to align the image-world with our own.

 

Ben Sixsmith is an English writer living in Poland. Visit his website here and follow him on Twitter @BDSixsmith

39 Comments

  1. Nicholas Cage says

    I take my news in the form of text. It’s always been possible to lie in print, so I’m stuck with evaluating anything I read using my critical faculties. The shortcut where you assume that video is unvarnished truth is no longer a safe one. Photographs lost that status decades earlier. If you want the truth, I guess you’re going to have to work for it!

  2. Deep Facts says

    The author does not understand how deepfake technology works. No normal person can be a deepfake victim. It only works for people for whom you have a collection of hundreds or thousands of images of their face from all different directions. That’s why it’s always a politician or media personality whose face you see used.

    • I do understand how deepfakes work. Of course, it is true that if one avoids being photographed one is quite safe, but there are many normal people who do not, from local newsreaders to local politicians to YouTubers to jobbing actors to prolific Facebook users and so on.

      • Deep Facts says

        You still don’t understand. Deep fake technology requires thousands of images of the person, in multiple lighting conditions, from multiple angles, with them making multiple expressions. These types of facial databases simply do not and will not exist for 99% of humanity. I strongly suggest you actually attempt to make one for yourself, and you will understand what you are missing.

        • Thousands? When I was researching this article I read this blogpost which suggested that a few hundred were sufficient. Perhaps it is wrong? Granted, the deepfake video was not very good but this article is premised on the assumption that they will get better & that the time to prepare for that is now while they are still relatively unconvincing.

          https://www.scip.ch/en/?labs.20181122

          • Deep Facts says

            A few hundred are sufficient for proof of concept, but nothing close to passable to the human eye. Even the best deepfakes videos in the page you linked, with the most images, and the most processing power / time, are still easily identified fakes. They glitch out in the eyes and teeth every few seconds.

            Here’s a quote from your link:

            “While the result is generally good, this video does also clearly show areas where deepfake technology needs to improve. Focusing on the mouth, it becomes clear that the algorithm cannot handle teeth particularly well. They are either not shown at all or as a single white area which even overlaps the lips in most cases.”

    • ga gamba says

      I think you may have forgotten about sites like Facebook and Instagram where normal people post hundreds and thousands of photos of themselves.

      “That’s why it’s always a politician or media personality whose face you see used.”

      I suspect this may also be due in large part because the public already has interest in these personalities, especially if they are attractive, as they often are. There are those who also seek to embarrass or damage the credibility of public figures, for example politicians. There’s likely more interest in these people than Steve and Amy, normies who work at Tesco.

      Ultimately, we will only be safe when all of us have nude photos, real or not, of ourselves floating around the internet – think of it like MAD. You try to embarrass me by spreading photos of me and I respond by spreading photos of you. And with the billions of images that exist, and will only continue to grow, it all kind of merges into a blurry blob of anonymity.

      • Deep Facts says

        ga gamba, I have not forgotten. Those images are not sufficient for creating deepfakes. The reason is that most of those people post themselves making the same expression, like smiling, from the same angles, in similar lighting conditions. It’s not the number of images that matters. You need multiple facial expressions, from multiple angles, in multiple lighting conditions, in counts in the thousands. As I said above, the facial databases will not exist and cannot be made for 99% of humanity. We use celebrity faces because they are the only ones with the prerequisite volume of facial images.

        • Stephanie says

          Ga gamba: Prolific FB and Instagram posters tend to know precisely which angles and lighting produce their best photos. Even regular users know when they look good, and it tends to be under very specific lighting conditions and 1-3 facial angles and expressions, even if they don’t make the decision consciously.

          Something I learnt while I was modelling. Very few of my nudes aren’t gorgeous, but I’ll still feel better when everyone else has nudes up too.

        • Well, this is increasingly terrifying.

          @Deep Facts: Is it not possible that improved training algorithms could learn to create fakes from a much smaller set of training samples? Particularly if generalised learning allows you to work from a net that already has some ‘knowledge’ of facial structure?

  3. mitchellporter says

    These are terrifying developments. The emotional, social and intellectual life of Homo sapiens is going to be reverse engineered, infiltrated by AI imitations, and finally absorbed into a larger ecology of artificial interactions. With so much of life already mediated by computers, we’re halfway there already. The world will be governed, not by human politics and human law, but by the equivalent of Skynet and its T-1000 drones, and whatever imperatives and power relations exist among such beings. Even if a Skynet was friendly to human beings, they would be completely at its mercy, not just physically but epistemically.

  4. Jezza says

    A slightly tangential headline rhyme

    I tawt I taw a puddy tat
    They said it wasn’t real
    I DID, I taw a puddy tat
    It had me for its meal

    I never apologise, never explain

  5. CogitoBcn says

    “Preempting suggestions to learn code”?

    Eighth paragraph:

    “To use the program you don’t really need to know how to code — all you need is a relatively fast computer. ”

    So, LEARN TO READ!

  6. Heath says

    Excellent article and a very wise warning from the author. The only good news about this type of bad and scary news is that God is in control and when all this mess is over, Jesus will judge us all. His gift of truth and grace is free. Come on in, the water is warm.

  7. Lightning Rose says

    Maybe the answer is for most of us “normal folks” to get a little smarter about indiscriminate uploading of our pictures, private lives, opinions, vacations, and just about everything else about ourselves to demonstrably irresponsible platforms like Facebook. I’m old enough to be able to tell you that everyone who ever lived before 12 years ago, the whole history of humanity, did JUST FINE without “social media.” To get caught up in this is a CHOICE people are making. Including when one decides to become a “public figure.”

    • ga gamba says

      I largely agree. A problem is all your friends, colleagues, and family members who upload photos they’ve taken that also include you.

      • Lightning Rose says

        No. They haven’t. Because I’ve forbidden it, as has my immediate family. DO-NOT-BE-CAUGHT-DEAD-UPLOADING-OUR-PICTURES. So far, everyone’s abided by it just fine.

        • Stephanie says

          I have a neighbour who goes to great lengths to never be photographed, because he’s too aware of how data gets manipulated. I’d like to follow suit, but I’m addicted to broadcasting (read: bragging) about my photogenic life. I wish I had never started. I use all kinds of excuses to keep it, from videochatting with family, to keeping in touch with friends I wouldn’t bother actually maintaining penpal friendships with, to keeping track of memories. I’ve got privacy settings on max, but that won’t help much in a data breach.

  8. Thanks, Millennials says

    Perfect example of why use of AI / ML technologies will bring more harm than good. Literally everything will be able to be convincingly faked within a year or two: a famous person saying something they didn’t say, a private citizen doing something embarrassing to ruin their reputation (this is already happening, though not yet in the news, because no one has done the obligatory revenge killing yet and told the story in court — it will happen, 100% guaranteed, and it will become common), two people having sex who didn’t have sex, someone committing a crime they didn’t commit, anything.

    File all of this under “Some of us told you so,” “Code is not the solution to everything,” and “Doing things just because you can doesn’t make it a good f-ing idea.”

  9. Farris says

    I understand the problems with deep fakes but I am not completely sure I share the author’s concern.

    If someone is to be framed or libeled with a deep fake, can not the target simply ask when and where the photo or video was made? Once the target is able to show he or she was not in that location at the date and time of creation, would not the creator and publisher be revealed as forgers?
    Perhaps my ignorance on this topic makes the answers to these questions less obvious to me than to most others.

    • Deep Facts says

      No one can be framed using the current technology, not even celebrities. Even the best Deepfakes are obvious, as there is always some “flicker” in the person’s face, where the lighting and masking glitch out for a frame or two and it’s very obvious to a human watching. Likewise, no normal person will ever become a Deep Fake victim with the current technology, because there isn’t a vast database of their face making a large number of facial expressions, with multiple angles and lighting. You’d need a huge variety in the thousands, not a thousand smiling facebook pictures.

    • Cassandra says

      Tell that to General Lord Bramall, who was accused of sexually abusing a minor in Pimlico, at a time and date when he was on official business representing the Queen in Singapore. There were plenty of newsreels of him at these ceremonies, but the Metropolitan Police chose to ignore them and go with their own ‘credible witness’.

      If it could happen to him, it could happen to you.

  10. Saw file says

    The broader threat to ‘Joe/Jane Average’ from deepfakes is not what can be accomplished using the “current technology”. It is what will be able to be created using future technology.
    It’s reasonable to expect this type of technology to advance at an ever-increasing pace. What today requires thousands of varied images to produce a somewhat mediocre deepfake will in the future (undoubtedly) require only a relative handful of images to produce a truly believable product.
    I think that it is also safe to assume that the more nefarious elements in the technology’s development are working on ways to make such deepfakes undetectable. I think that the current battle with hackers is a reasonable blueprint for how that’s going to play out.

  11. Tersitus says

    Without wishing to overly conspiracy-theorize: a few months ago the buzz was that a record existed showing a passport belonging to Michael Cohen crossing ? border (forgotten exactly which) — a claim he has consistently denied since the first appearance of the dossier, and did again. Had me thinking — still does — deepfake. But what do I know.

    • tarstarkas says

      As I understand it the Michael Cohen in Prague was a different Michael Cohen, the Fake Dossier crew were sloppy and included that info in their data.

      • Tersitus says

        Except that the later report came long after both Cohen and then Mueller’s investigation and charging documents had pretty clearly and publicly dismissed the earlier claim as mistaken identity. Cohen’s response to the more recent one was (1) “not me” and (2) the added, cryptic “Mueller knows everything.”

    • Tome708 says

      Donald Trump will likely be the first victim. He has upset the wrong people.

  12. Tersitus says

    One of the most fascinating things the Soviet archives produced, post-collapse — check out David King’s 1990s “coffee table book” The Commissar Vanishes. It’s an eye-opener. The Stalin-era doctoring of photographic and visual art history is darkly comic in its obsession and its brazenness.

  13. Stephanie says

    The possibility of generating fake videos is terrifying. As Deep Facts points out, currently less than 1% of people could be reconstructed, but what about in China? They’ll have 2 cameras for every person soon, and they keep track of everyone’s comings and goings. Accumulating such footage over years will certainly give the CCP the ability to generate deepfakes of their citizens. Yet another tool for keeping the population in line and purging impure party members.

    Aren’t they testing similar technology in Europe now?

    • Saw file says

      @Stephanie
      I too was thinking a lot about China after reading this article. With the surveillance culture of the regime accumulating massive amounts of data on its citizen subjects, and its aggressive “by any means necessary” pursuit of these various technologies, it’s only a matter of time before advanced deepfake technology will be used to discredit (‘eliminate’) dissidents and political/business rivals. They won’t just have images. They will have hours of video.
      Restraint, let alone morality and ethics, is not something the CCP understands when it comes to controlling the population and keeping its ideological boot firmly in place.
      The thought of how it could be used to co-opt foreign politicians and business leaders, to advance the international interests of China (aka the CCP), is even more terrifying.

  14. Peter Kriens says

    Although impressive things are done with AI today, the technology has huge obstacles to surmount. It is still ages away from anything resembling human intelligence, and it turns out that AI networks are extremely sensitive to adversarial inputs. See https://blog.openai.com/adversarial-example-research/

    We are in any case a very long way off from the time when even a cursory technical analysis of a photo or video will not reveal its fakeness.

    I actually expect that we will over time begin to limit our acceptance of visual proof, because as a society we will learn that what we see is not always the truth. Our human intelligence, in contrast to AI, is quite good at learning.

  15. In the pre-digital era of film-only photography it took hours to retouch a negative. The famous Ottawa portraitist Karsh had an excellent anonymous retoucher work on his photos of statesmen like Churchill or movie stars, to make them look better. Then came digital, and soon, Photoshop. As a hobbyist photographer using Photoshop, it would take me 1-2 hours to put one person’s head on another’s body, or to add or subtract a person from a photo seamlessly. I once did this for friends as a joke, but I’m not really good at it. However, a pro retoucher can probably do this in 15 minutes, invisibly. We don’t need AI to fake photos. AI just makes faking easier and faster. Even more so with video.

  16. Deepfake says

    Benny boy, I thought you were the Quillette contributor who has a nasty habit of publishing articles like, “Use Signal, use Tor and you’ll be invisible from the big bad NSA.” But that’s someone else, apparently.

    The best thing that can come out of the deepfakes, besides the chaos of false porn scandals, is that people stop trusting the “inherent” “truth” of visual evidence.

    It’s about time we grew up and realized that some partial smartphone video of someone calling someone a nigger on the subway, and the resulting chimpout, is meaningless and stupid. As we say in the software world, it’s “considered harmful.”

    Did you know that photographs and video footage are actually admissible in court as evidence, and that juries are so stupid they fail to question the authenticity of the evidence?

  17. BonnieBot says

    Google Photos. Yeah, so if you are set up to auto-upload all of your photos, well, that could be an issue. I am not great at capturing photographic images, so I take a ton at a time. Likewise, my phone captures a ton of facial expressions of my subject and uploads them to Google without me even thinking about it. It used to, anyway, until a couple of years ago when I realized how bad Google is.
