Since then, the majority of malicious deepfakes continue to target women,1 but deepfakes have also been increasingly tapped for their creative potential by artists, musicians, documentarians, amateur creators, and human rights activists, all with varying motivations and intentions. Satire has emerged as a vibrant realm of deepfakery, with consumer-tool generated memes, sophisticated faceswaps, and more elaborate digital scenes volleying incisive critiques in the service of equity and justice.

While part of a wave of emerging media, deepfakes themselves along with the tools needed to create them are not without historical precedent. They have evolved from a long line of audio-visual media, many of which were first met with debate and even moral panic. As recently as the 1990s, for example, the rise of home computing, digital video cameras, and software such as Photoshop stoked fears as to where digital fabrication might lead.

Channel 4 in the United Kingdom aired a deepfaked Queen delivering an alternative Christmas message for a very alternative year (2020).

 

In one sense, deepfake satire falls within an expansive tradition of subversive cultural practice that includes modernist Dada installations and collage paintings, theatrical “happenings” and “pranks,” guerilla video and graffiti art. More broadly, satire has long held an established place across global popular culture, as well as literary, broadcasting, and film traditions. Satire draws on narrative and literary tropes such as irony, innuendo, defamiliarization, and hyperbole, often pushing representation beyond the threshold of realism.2 The aim is generally to hold real-life people and events up for critical evaluation.

Writing in the 18th century, biographer and essayist Samuel Johnson discussed satire in terms of exercising moral judgement, holding “wickedness and folly” up for censure.3 More recently, media historians Jonathan Gray, Jeffrey P. Jones, and Ethan Thompson have argued that satire exposes or articulates a truth that may be obscured or hidden from public view. In turn, it provides a valuable means through which citizens can “analyze and interrogate power” and entrenched norms.4 With the rise of online journalism, satire has also found a place across a range of online media hubs.5

As their names imply, cheapfakes and shallowfakes are created through widely accessible, relatively inexpensive software.6

Images of a burning building in India circulated online, suggesting xenophobic attacks in Johannesburg.

 

Satire

Satire is a genre of creative expression that draws on comedic devices (hyperbole, irony, etc.) to cast judgment on a person, group of people, a set of norms, or a larger idea. At their best, such works of art aim to reveal larger social truths. Humor is crucial to the technique and popular appeal of satire, whose history spans folk and oral traditions, visual art, music, poetic and essayistic forms, and theatrical productions. Satire has continued to live on via large and small screens in the 20th and 21st centuries.

Satirical newspapers from around the world. From top to bottom, left to right: USA’s The Onion; Nigeria’s Punocracy; Jordan’s Al Hudood, and Mexico’s El Deforma.

 

Parody

Parody is a comic imitation of a person, work, or style, often involving exaggeration or playful stylization. It can serve a critical and edifying function, poking fun at a target and skillfully showing how it operates. Parody frequently takes the form of light entertainment, such as musician Weird Al Yankovic’s song parodies, but it has also been turned against authoritarian power.7 In The Great Dictator (1940), Charlie Chaplin embodied the Hitleresque dictator Adenoid Hynkel, exposing the fascist mobilization of fantasy and the vanity and delusion of authoritarian leaders. More recently, satirists have gravitated to deepfakes, using synthetic versions of politicians’ faces in irreverent resistance to oppressive regimes and the extremist movements behind them.

The majority of these “faceswap” videos are created by individual artists or amateurs who go by pseudonymous handles on social media. Most are quickly and even crudely fashioned, and do not intend to fool viewers per se. They make their subjects look ridiculous, ignorant, or evil by way of a grotesque performance. These videos reveal what may be masked in the individual’s day-to-day professional life or hidden from public view. In some cases, lip-synched impersonations or synthetically replicated speech give viewers the impression that these politicians are denigrating their own platforms or party ideologies.

Artist Bruno Sartori was an early innovator in this realm of deepfakery, making a name for himself through his mocking of Brazil’s far-right president, Jair Bolsonaro. One video depicts Bolsonaro’s paranoid demonization of his go-to scapegoat, the former president Lula (Luiz Inácio Lula da Silva), by faceswapping both of them into a Mariah Carey music video.8

The discourse surrounding public figures could help offer broad protections to those synthetic media makers aiming to satirize figures who enjoy the societal spotlight. But the boundaries are not always clear between a public and a private person, especially given the highly visible way that people live their lives online, and the increasing intersections between the spheres of politics and popular culture.

Questions of ownership can also be murky around the commodification of likeness. While a skilled actor doing an impersonation is clearly in rightful possession of their filmed performance as intellectual property, the synthetically replicated voice of a politician is a different matter and might come with restrictions on its use. The profit motive is also an important consideration. Should it be necessary to obtain Leonardo DiCaprio’s consent to make him into a realistic-looking avatar in a video game, reality TV show, or documentary? As an actor, he makes a living from the use of his image, and should have the agency to control and share in any money made from it. However, perhaps an AI DiCaprio should instead be seen as continuous with how the actor might be depicted or impersonated in traditional forms of animation, sketch comedy, or literary satire.

Advocacy

Deepfakes are also becoming a tool for progressive advocacy, with approaches ranging from biting satire to more somber PSA-style messages. This kind of media takes many forms, and resonates with modernist art movements in the 1920s-30s, the street theatre of the 1960s, the anti-globalization video art and “culture-jamming” pranks of the 1990s, along with more institutionalized outreach efforts by nonprofits and NGOs in the 2000s.9

Deepfakes can humanize problems that might seem distant or abstract; one example is a Pakistani climate-change initiative.10

Given the ease with which deepfakes can be made, altered, and shared, social media companies need to take seriously how such content is managed. A nuanced and interpretive approach to moderation would assess the implications of different forms of sound and image fabrication. Too light a touch could confuse viewers or lead to the spread of online misinformation. Too aggressive an approach could result in deepfake art being unable to find a platform.

A series of articles by legal scholars Robert Chesney and Danielle Keats Citron warns of the “liar’s dividend”: as public awareness of deepfakes grows, it becomes easier for bad actors to dismiss authentic recordings as fake.11

Philosopher Regina Rini observes that the increasing production and circulation of deepfakes could heighten a general public distrust in audio-visual media. Rini writes that audio-visual recordings have served democratic societies in providing a useful “epistemic backstop” against which a wide breadth of claims concerning utterances and actions could be evaluated. However, the rise of deepfakes could lead to a pervasive crisis of confidence, making new methods and protocols (especially at the legal and political level) necessary for discerning fact from fiction.12

In 2019, a 13

In 2020, 14

Even as The Babylon Bee mocks claims that its content is intentionally deceptive, its articles reinforce readers’ world views. Trump used to tweet stories from the site as true, as when he shared his astonishment that Twitter had intentionally shut down its network to slow the spread of anti-Biden stories. As journalist Parker J. Bach describes in Slate, the Bee’s articles might not in themselves be a form of mis/disinformation, but they strengthen and reinforce content that is deceptive. From a media-platform perspective, the question is not just how to label individual articles published and distributed by the Bee or others, but how to handle posts containing these articles that are circulated and commented on by other people.


PART III: THE LEGAL VIEW

Reflections from Professor Evelyn Aswad on Deepfakes and International Human Rights

Professor Aswad is the Herman G. Kaiser Chair in International Law at the University of Oklahoma, where she also serves as the Director of the Center for International Business and Human Rights. The following builds on her contribution to the Deepfakery web series.

The international law standard on freedom of expression is set forth in Article 19 of the widely ratified International Covenant on Civil and Political Rights (ICCPR). This covenant provides broad protections for freedom of expression, including different forms of art (such as satire) and human rights activism. Freedom of expression may be limited, but only if the government can demonstrate that a restriction passes the three-part test of legality, legitimacy and necessity/proportionality. If a government wants to limit or otherwise burden speech, including speech through the use of AI-enabled media, it bears the burden of proving that all three conditions are met. The UN Human Rights Committee’s General Comment No. 34 further clarifies how these tests should be applied.

Although these human rights principles are designed for state actors, the international community has endorsed the UN Guiding Principles on Business and Human Rights, which call on companies to respect internationally recognized human rights in their business operations. The UN Special Rapporteur on Freedom of Opinion and Expression has called upon social media companies to use this framework in order to avoid infringing on freedom of expression and to address adverse impacts.

In attempting to regulate AI-enabled media, these tests might be understood as follows:

Legality: Any proposed legislation or policies that restrict or burden expression must not, among other things, be vague or overly broad. In the case of deepfakes and other forms of AI-enabled media, this includes defining clearly what specific forms of media and their uses are being restricted. This practice gives users clear guidelines, limits the discretion of those implementing the policy (which helps to avoid selective and discriminatory enforcement), and avoids restricting practices that do not pose risks of harm.

Legitimacy: The reason for limiting expression must be a legitimate public interest objective, as set forth in ICCPR Article 19, such as protecting the rights of others, national security, or public health. Protecting a regime, head of state, or government official from the kinds of deepfake satire detailed in this report would not constitute legitimate grounds for limiting expression.

Necessity and proportionality: This test should be applied using interdisciplinary, multi-stakeholder input to determine when it is truly necessary to limit the use of AI-enabled media. This condition includes asking:

  1. Are there non-censorial approaches that could be deployed to achieve the public-interest objective? If effective non-censorial methods are available, it may not be necessary to burden speech to achieve the objective. In the context of AI-enabled media, there are a number of questions that should be considered. For example, are governments or social media platforms investing enough in digital literacy and other media-education initiatives to build societal resiliency with respect to the use of such technologies? Can fact-checking operations be effective? Are governments or social media platforms investing in technology that empowers consumers to know if they are looking at a deepfake?
  2. If non-censorial methods are insufficient to achieve the legitimate objective, what is the least intrusive measure that can be pursued to accomplish the goal? Speech regulators should develop a continuum of options to achieve the objective and then select the one that achieves the public-interest objective with the least burden on speech. With regard to AI-enabled media, labelling content would be less intrusive than deleting it.
  3. If the least intrusive measure is implemented, does it actually work? If the selected measures are ineffective or counterproductive, they cannot be upheld as they do not serve to achieve the public-interest objective. In the case of AI-enabled media, if governments and social media platforms implement measures that burden or limit expression, they would need to monitor and be transparent about whether the measures are effective in achieving the legitimate public-interest goal (e.g., preventing intimate image abuse, disinformation harms, etc.).

Social media platforms need to ensure this three-part test is applied to all aspects of content moderation, including platform speech codes as well as human and automated moderation of speech. These companies should also be transparent with the public about the measures they are applying to regulate speech.

Reflections from Attorney Matthew F. Ferraro on Satire, Fair Use, and Free Speech

A former U.S. intelligence officer, Matthew F. Ferraro is an attorney at WilmerHale, where he works at the intersection of cybersecurity, national security, and crisis management.

Disclaimer: The following are general statements of legal principles, not legal advice and not intended as such. I’m speaking only for myself, and not on behalf of my firm or clients.

Protections Under the U.S. Constitution

Satire is generally protected under the First Amendment to the U.S. Constitution. This principle was enunciated in a well-known U.S. Supreme Court case, Hustler v. Falwell, 485 U.S. 46 (1988). Adult-media mogul Larry Flynt provoked a lawsuit by satirizing the Christian televangelist Jerry Falwell in a cartoon Campari ad in his sexually explicit Hustler magazine. The ad showed an illustrated Falwell “recalling” his first sexual experience—with his mother, in an outhouse. A disclaimer noted that it was an “ad parody not to be taken seriously.” Falwell sued and won damages from the lower courts for intentional infliction of emotional distress. However, the U.S. Supreme Court reversed the lower courts’ award and held unanimously that it was, in fact, a protected parody.

The ruling held that “public figures” such as Falwell may not recover damages for the intentional infliction of emotional distress without showing that the offending publication contained a false statement of fact, made with “actual malice.” This requires knowledge that the statement was false or made with reckless disregard as to whether or not it was true.

U.S. defamation law distinguishes between public figures and private persons. For someone of public interest or familiarity—like a government official, politician, celebrity, business leader, actor, or athlete—to litigate a defamation claim successfully, they must not only prove that the statement was defamatory (a false statement of fact), but also that it was made with “actual malice.” This higher standard exists because the Constitution aims to promote debate on public issues. Private persons outside of the public spotlight generally only need to prove that the defamer acted “negligently”—that they failed to behave with the level of care that someone of ordinary prudence would have exercised under the same circumstances.

Balancing First Amendment Rights with Fair Use and Defamation Laws

“Fair use” is the ability to use someone else’s copyrighted intellectual property under certain permissible circumstances, such as satire, without authorization or monetary payment. Congress wrote the principles of fair use into law in the Copyright Act of 1976, stating that such purposes as criticism, news reporting, teaching, and scholarship should not constitute an “infringement of copyright.” The Act articulates a four-factor balancing test to determine whether a use is a fair one. Critical aspects include the purpose and character of the use, the amount and substantiality of the portion used, and the effect of the use on the potential market value of the copyrighted work.

Since the passing of the Copyright Act, fair use has increasingly hinged on the question of whether the use was “transformative,” and the extent to which the aggregate amount of material from the original media object is appropriate to the new use or application. The Center for Media and Social Impact has some excellent resources in this area.

For defamation, after determining whether the person suing is a private or public figure, the court will ask the following general questions: Is the statement in question indeed false, but purporting to be fact? Was the false statement published, or otherwise communicated to a third person? Did the defendant act with the appropriate level of fault (negligence if the plaintiff is a private person, or actual malice if a public figure)?

Lastly, the courts will ask if the false statements could and did cause the plaintiff to suffer damages such as loss of money or potential earnings, harm to their reputation, or difficulties in relations with third persons whose views might have been affected.

Addressing Deepfakes

Several of the laws adopted by U.S. states around deepfakes specifically exempt satire from their prohibitions:

  • In New York, a new right-to-publicity deepfake law (S5959/A5605C) establishes a postmortem right to protect performers’ likenesses—including digitally manipulated likenesses—from unauthorized commercial exploitation for 40 years after death. However, the law includes broad fair-use-type exceptions for parody or criticism, or educational or newsworthy value. Notably, the law provides safe harbor. If the digital replica carries “a conspicuous disclaimer” that its use was not authorized by the rights holder, then the use “shall not be considered likely to deceive the public into thinking it was authorized.”
  • This same New York law also bars most nonconsensual deepfake pornography. In these cases, a disclaimer is no defense. The law requires written consent from the depicted individual.
  • A law in California (AB-730) prohibits anyone from distributing, within 60 days of an election, any materially deceptive audio or visual media depicting any candidate on the ballot with “actual malice,” i.e., the intent to injure the candidate’s reputation or to deceive voters, unless the media carries an explicit disclosure that it has been manipulated. Again, this law exempts satire and parody, as determined by the courts. It also provides exemptions from liability for broadcasting stations and websites that label the media in a way that foregrounds the fact of alteration, or that air paid political advertisements containing materially deceptive audio or video.

Combating the Malicious Uses of AI-enabled Media

As I argue in a Washington Post article co-written with Jason C. Chipman, titled “Fake News Threatens Our Businesses, Not Just Our Politics,” although free-speech rights protect opinion, businesses and individuals may have legal recourse—especially when third parties defame private individuals or benefit financially from spreading lies. Here are some of the possible remedies.

  • State and federal laws bar many kinds of online hoaxes.
  • Laws that could be applicable to deepfakes may include: defamation, trade libel, false light, violation of the right to publicity, intentional infliction of emotional distress, and negligent infliction of emotional distress.
  • Manipulated media that harm a victim’s commercial activities may also be actionable under widely recognized economic and equitable torts, including: unjust enrichment, unfair and deceptive trade practices, and tortious interference with prospective economic advantage.

Federal laws may be applicable if the media misappropriates intellectual property:

  • The Lanham Act prohibits the use in commerce, without the consent of the registrant, of any “registered mark in connection with the sale, offering for sale, distribution, or advertising of any goods” in a way that is likely to cause confusion. The Lanham Act also prohibits infringing on unregistered, common-law trademarks.
  • The Copyright Act of 1976, which protects original works of authorship, may provide remedies if the manipulated media is copyrighted. (See the article I co-wrote with Jason C. Chipman and Stephen W. Preston for Pratt’s Privacy & Cybersecurity Law Report.)

No court that I know of has weighed in so far on social media platforms’ obligations to label or remove deceptive media that claims to be satirical. As noted above, labelling is relevant to some of the deepfake laws and, as we saw in the Hustler case, to some free-speech law, too. The major concern is “line drawing”—one person’s satire is another’s harmful defamation. In the United States, we’ve tended to err on the side of allowing free speech.

Strategies for Satirists

Generally speaking, I’ve observed that public figures are afforded less protection than private persons, so satirists are on firmer ground when they focus on the former rather than on everyday citizens. Opinions are usually protected, while stating false claims as fact (e.g., that X person did Y thing) is not. Thus, satirists will find themselves in safer territory to the extent that they focus on opinions and not on specific misleading or spurious assertions.


PART IV: SOCIAL MEDIA PLATFORMS, POLICIES, AND PITFALLS

Major social media platforms have specific policies for deceptively manipulated media as well as synthetic audio and video. A number of platforms are revising their rules and protocols for these areas, as well as trying to clarify their rules on satire and parody. However, these policies are often not consistently enforced, particularly on a global scale, and often fail to adequately account for the full range of false or misleading information, as well as artistic works (such as satire), that appear on these platforms. There is also a lack of coherence and transparency regarding how decisions are made and what avenues of appeal are available to users and others.

Civil-society advocates note that one core issue is that major social media companies do not adequately resource content moderation globally or sufficiently understand local context. In some national contexts, such as the U.S., Cambodia, and India, companies have been perceived as too closely involved with government entities and political actors. As WITNESS has noted elsewhere, human rights activists (e.g., in Myanmar, Brazil, Sri Lanka, Ethiopia, the Philippines, Hungary, and India) frequently call out the failure of platforms to act when social media is used to incite violence or amplify crises that are often coordinated, commercialized, and directed by governments. In the United States, activists have criticized Facebook and other platforms for failing to address racialized hate and misinformation. Although some progress has been made, it hasn’t been enough.

Platforms vary in the degree to which they treat synthetic-media manipulation as separate from shallowfakes or cheapfakes. They also differ in how they understand and evaluate manipulation, deception, and the likelihood that a particular work of media may cause harm. In many cases, deepfakes will be covered under other principles that are format-agnostic; for example, media that takes the form of hate speech, election interference, COVID-19 misinformation, incitement to violence, or non-consensual sexual images. As legal scholar Evelyn Aswad notes, policies on deceptively manipulated media should be designed with human rights principles in mind, an approach adopted by Facebook’s Oversight Board when examining a recent case that raised questions about free expression and satire.

Facebook’s own recent explanation of how they are trying to better handle satire as well as humor in general, based on a set of interviews the company conducted with experts, offers a good primer on the key dynamics for any platform: “Stakeholders noted that humor and satire are highly subjective across people and cultures, underscoring the importance of human review by individuals with cultural context. Stakeholders also told us that ‘intent is key,’ though it can be tough to assess. Further, true satire does not ‘punch down’: the target of humorous or satirical content is often an indicator of intent. And if content is simply derogatory, not layered, complex, or subversive, it is not satire. Indeed, humor can be an effective mode of communicating hateful ideas.” However, none of the platforms have a robust explanation of how they assess satire in a nuanced and contextualized manner.

Current Policies

Facebook

Facebook’s policy (as of October 2021) states that the platform will remove content if it has been “edited or synthesized—beyond adjustments for clarity or quality—in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” Content will also be removed if it is the “product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

The policy’s focus on AI-enabled media points to deepfakes along with forms of skillfully edited video. It does not include other forms of deceptively manipulated media such as shallowfakes and cheapfakes. However, Facebook’s other Community Guidelines still apply to relevant issues concerning the credible threat of violence, agnostic to manipulation type, where action is taken based on “hate speech” or “misinformation and unverifiable rumors that contribute to the risk of imminent violence or physical harm.” Additionally, third-party fact checkers (from independent organizations contracted by Facebook) can flag shallowfake content as “false” or at least partially false. This results in viewers receiving a warning when they encounter this kind of media, and leads to duplicates and near-duplicates of the relevant images or video being identified through automation. This can help ensure that misinformation does not spread, but it can also lead to the take-down of inaccurately classified shallowfakes and deepfakes.

Facebook does grant explicit exceptions for “content that is parody or satire” and “video that has been edited solely to omit or change the order of words.” Despite this caveat, until recently Facebook did not make clear how determinations about satire would be made. A June 2021 Facebook Oversight Board decision has compelled the company to clarify how it incorporates satire into its assessments. The decision calls for Facebook to pursue an understanding of media grounded in local context and to address how users themselves might indicate satirical intent. As of April 2021, Facebook has also started experimenting with adding labels to satire pages on users’ newsfeeds.

Twitter

Twitter’s policy states: “You may not deceptively promote synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.”

The platform uses three key questions to decide whether and how to label content, or whether to remove it. Is the content “synthetic” (created through AI tools) or deceptively manipulated? Is the content shared in a deceptive manner? Is the content likely to impact public safety or cause serious harm? Notably, the policy does not explicitly provide exemptions for satirical media, but bases moderation decisions on whether “the context in which media are shared could result in confusion or misunderstanding or suggests a deliberate intent to deceive people about the nature or origin of the content.”

TikTok

TikTok’s Community Guidelines on misinformation prohibit “digital forgeries (Synthetic Media or Manipulated Media) that mislead users by distorting the truth of events and cause harm to the subject of the video, other persons, or society.”

However, the guidelines also make “exceptions under certain circumstances,” including “artistic content, satirical content, content in fictional settings, and counterspeech.” TikTok does not provide a transparent account of how often this exception is applied, whether the decisions made with this rule are applied to similar content algorithmically, or whether users can contest decisions.

YouTube

YouTube’s Misinformation Policies (referenced October 2021) prohibit deceptively manipulated media, which the platform defines as “content that has been technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm.” As is the case with other platforms, deceptively manipulated media on YouTube will often be considered unacceptable under other policies on hate, violence, or specific circumstances such as elections. YouTube has rules on so-called “educational, documentary, scientific or artistic content,” which allow exceptions for satire. For example, their policies on COVID-19 misinformation note that they “may make exceptions if the purpose of the content is to condemn, dispute, or satirize misinformation that violates our policies.”

Coming Up Short

One perceived flaw with several social media companies’ policies, including Facebook’s, is that they concentrate on deepfakes without directly addressing many of the other common forms of deceptively manipulated media, such as shallowfakes. The companies have been taken to task for only providing warning labels or reduced distribution for shallowfakes if they are flagged by third-party fact checkers. YouTube’s deceptive-practices policy similarly makes an exception for clips taken out of context. Platforms that have deepfake-specific policies have argued that they aim to address the novelty of these AI-enabled production techniques and the difficulty for the public to detect them, while harmful shallowfakes are often covered by rules about impact and outcomes. Critics note that such rules are often inconsistently and incompletely applied.

(Mis)labelling Satire

Narrowly conceived or overly stringent practices of fact checking and labeling satirical media (and other forms of digital art) can have a stifling impact on creative expression, free speech, and open debate. This occurred in Cameroon when a local academic and activist shared a clearly fabricated video of the French ambassador telling Cameroonians that they never really achieved independence from France’s colonial exploitation. Facebook’s third-party fact checkers at the French broadcaster France 24 labelled the video partially false, thus nullifying some of the rhetorical power of the critique.

As Internet Sans Frontières director Julie Owono observed in the Deepfakery web series session on global human rights, labelling this video as “partially false” means that “whoever shares this clip will be labeled as someone sharing ‘fake news.’” In attempting to “fight fake news [in this way], platforms actually contribute in silencing important debates and an expression in countries that desperately need those debates and expression.”

This approach contrasts with Facebook’s lack of intervention in Gabon when the opposition leader falsely claimed that a video of President Ali Bongo was a deepfake, as discussed above. Social media policies need to account for the liar’s dividend and for false claims that media has been manipulated, and to ensure these are addressed with specific policies.

Local and Global Context

The Facebook Oversight Board’s ruling on satire made it clear that the platform should address context. More specifically, the Board stated that Facebook should provide: “content moderators with: (i) access to Facebook’s local-operation teams to gather relevant cultural and background information, and (ii) sufficient time to consult with Facebook’s local-operation teams and to make the assessment. Facebook should ensure that its policies for content moderators incentivise further investigation or escalation where a content moderator is not sure whether a meme is satirical or not.”

As noted earlier, all too frequently, the major social media companies do not adequately resource their global content moderation to provide the time, capacity, and expertise to make sound judgements on the media they host.

Inconsistent Enforcement and the Failure of Automation

Even when platforms’ policies seem comprehensive, they are too frequently applied in erratic fashion. Twitter has been accused of inconsistency in its labeling practices, as during the 2020 U.S. presidential election. The platform did not label ambiguously edited footage of Trump in a Biden campaign advertisement or deceptively edited audio of Biden from Trump’s campaign.

Even when a company has determined that a work of media violates its policies, it may struggle to remove it effectively from the platform. TikTok faced criticism along these lines when it took down only a select few reposts of a widely circulated, deceptively edited video of Biden during the 2020 presidential election. The video depicted Biden saying, “We can only re-elect Donald Trump.” Automated systems are used to identify duplicate or near-duplicate examples of policy-violating media, but these can perpetuate errors through false positives. They can also overlook media that is not in violation according to one protocol (a satirical work clearly contextualized as such), but is in violation of another (a satirical work re-contextualized to remove satirical intent and used maliciously).


PART V: KEY QUESTIONS, NEXT STEPS

There is a need for more sustained and nuanced reflection on the use and misuse of synthetic media, and its place in a broader media ecosystem. The following questions, grouped by subject heading, foreground the civic possibilities and potential harm of synthetic media, pointing toward future discussions and research agendas.

In the coming months, we aim to convene individuals from across disciplines, sectors, fields of expertise, and walks of life to help explore some answers. Please join the conversation.

Consent and the Targets of Satirical Deepfakery

  1. In what cases, if any, is consent needed to target individuals in positions of power?
  2. Should deepfakes’ visceral and photorealistic nature require consent in ways that other forms of parody or impersonation do not?
  3. Can someone meaningfully consent to their likeness being used by a third party in whatever way they choose?
  4. Under what circumstances is deepfaking the deceased acceptable?
  5. Under what circumstances should it be completely unacceptable to deepfake another person?
  6. Are there practical steps available to distinguish better between satirical content and malicious manipulation masquerading as humor?

Disclosure, Labelling, and Intent

  1. Should creators be required to label all deepfakes? How might this be implemented and enforced? Could the explicit labelling of deepfake satire and other forms of AI-enabled art undermine their resonance?
  2. How can synthetic-media makers give insight to consumers about the process of creation without necessarily neutralizing the rhetorical or aesthetic power of the object?
  3. How could the responsibility for labelling and disclosing this kind of media be shared between the tool developers, creators, online platforms, and other stakeholders?
  4. Do poorly executed deepfakes that happen to fool some viewers need to be treated by both platforms and lawmakers as functionally identical to those that explicitly aim to harm or deceive?
  5. How can platforms’ interfaces and user experience make it harder or easier for viewers to understand the origin, style, and intention of the media they encounter?

Weaponization and Differential Impact

  1. What is the differential impact of weaponized deepfakes on countries in the Global South as well as marginalized communities in the West/Global North?
  2. How can a human rights framework be applied for practical decision-making around manipulated media and deepfakes at a platform and governmental level?

Content Moderation and Countermeasures

  1. What is the appropriate content-moderation approach for deepfakes and synthetic media that works globally and respects clear normative frameworks such as international human rights law?
  2. What are the ways that platforms can ensure AI-enabled art (including deepfakes) is not discredited by automated moderation or misguided fact checking?
  3. How do social media platforms’ differing approaches to moderating this kind of media impact specific global regions more than others, or differ in impact based on culturally specific understandings of satire?
  4. Should deepfake apps be legally required to sign onto a common “code of practice” to minimize potential for misuse? Might users be required to sign something similar?

Law and Policy

  1. What is the role of government in regulating how different forms of media circulate online? Should legislation dictate protocols for categorizing and vetting AI-enabled media specifically, and/or set standards for accountability?
  2. Should the use of deepfakes in political campaigns and political advertisements be prohibited in certain or all contexts?
  3. What are the ways in which copyright and defamation laws could be used as a pretense to suppress creative and socially charged uses of AI-enabled media? Are there particular guardrails or anticipatory legal measures that could be taken to prevent this?
  4. How can democratic societies avoid the weaponization of “deepfake panic” and “the liar’s dividend” as well as governmental or commercial policies that overly restrict satire as well as truthful journalism and fact-finding?

FOOTNOTES

  1. Samantha Cole’s article in VICE was one of the first to raise awareness about deepfakes. Samantha Cole, “AI-assisted Porn is Here and We’re All Fucked,” Motherboard, VICE, December 11, 2017, https://www.vice.com/en/article/gydydm/gal-gadot-fake-ai-porn. For more on the rise of deepfakes, see the State of Deepfakes reports by Deeptrace (now Sensity), https://regmedia.co.uk/2019/10/08/deepfake_report.pdf; the Prepare, Don’t Panic: Synthetic Media and Deepfakes initiative by WITNESS, https://wit.to/Synthetic-Media-Deepfakes; Tim Hwang, Deepfakes: Primer and Forecast, NATO Strategic Communications Centre of Excellence, 2020, https://stratcomcoe.org/publications/deepfakes-primer-and-forecast/42; Tim Hwang, “Deepfakes: A Grounded Threat Assessment” (Center for Security and Emerging Technology, July 2020); Nina Schick, Deepfakes: The Coming Infocalypse (New York: Twelve, 2020).
  2. Northrop Frye, Anatomy of Criticism (New York: Atheneum, 1970), 223-4. For a cultural history of satire, see Jonathan Greenberg, The Cambridge Introduction to Satire (Cambridge: Cambridge University Press, 2019), 7-26.
  3. Samuel Johnson, A Dictionary of the English Language, ed. E.L. McAdam and George Milne (Mineola: Dover, 2005), 357.
  4. Jonathan Gray, Jeffrey P. Jones, and Ethan Thompson, Satire TV: Politics and Comedy in the Post-Network Era (New York: NYU Press, 2009), 8-19. 
  5. Dannagal Goldthwaite Young, Irony and Outrage: The Polarized Landscape of Rage, Fear, and Laughter in the United States (Oxford: Oxford University Press, 2020), 69-84.
  6. Britt Paris and Joan Donovan, Data & Society, Deepfakes and Cheap Fakes: The Manipulation of Audio-Visual Evidence (2019), https://datasociety.net/library/deepfakes-and-cheap-fakes/
  7. Sabine Kriebel, “Manufacturing Discontent: John Heartfield’s Mass Medium,” New German Critique, No. 107 (Summer 2009): 53-88; Jodi Sherman, “Humor, Resistance, and the Abject: Roberto Benigni’s Life is Beautiful and Charlie Chaplin’s The Great Dictator,” Film & History: An Interdisciplinary Journal of Film and Television Studies 32:2 (2002): 72-81.
  8. James Scott, Weapons of the Weak: Everyday Forms of Peasant Resistance (New Haven: Yale University Press, 1985).
  9. See, for example, Marilyn DeLaure, Moritz Fink, and Mark Dery, eds., Culture Jamming: Activism and the Art of Cultural Resistance (New York: NYU Press, 2017).
  10. Helen Rosner, “The Ethics of a Deepfake Anthony Bourdain Voice,” New Yorker, July 17, 2021, https://www.newyorker.com/culture/annals-of-gastronomy/the-ethics-of-a-deepfake-anthony-bourdain-voice; Helen Rosner, “A Haunting New Documentary About Anthony Bourdain,” New Yorker, July 15, 2021, https://www.newyorker.com/culture/annals-of-gastronomy/the-haunting-afterlife-of-anthony-bourdain; Interview with Justin Hendrix and Sam Gregory, “Voice Clone of Anthony Bourdain Prompts Synthetic Media Ethics Questions,” Tech Policy Press, July 16, 2021, https://techpolicy.press/voice-clone-of-anthony-bourdain-prompts-synthetic-media-ethics-questions/.
  11. Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” (July 14, 2018). 107 California Law Review 1753 (2019), 1785-86.
  12. Regina Rini, “Deepfakes and the Epistemic Backstop,” Philosophers’ Imprint 20, no. 24 (August 2020): 2-9.
  13. For more on disinformation within the right-wing media ecology in the U.S., see Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (Oxford: Oxford University Press, 2018); W. Lance Bennett and Steven Livingston, The Disinformation Age: Politics, Technology, and Disruptive Communication in the United States (Cambridge: Cambridge University Press, 2021).
  14. Caty Borum Chattoo and Lauren Feldman, A Comedian and an Activist Walk Into a Bar: The Serious Role of Comedy in Social Justice (Berkeley: University of California Press, 2020).