Reflections from Professor Evelyn Aswad on Deepfakes and International Human Rights
Professor Aswad is the Herman G. Kaiser Chair in International Law at the University of Oklahoma, where she also serves as the Director of the Center for International Business and Human Rights. The following builds on her contribution to the Deepfakery web series.
The international law standard on freedom of expression is set forth in Article 19 of the widely ratified International Covenant on Civil and Political Rights (ICCPR). This covenant provides broad protections for freedom of expression, including different forms of art (such as satire) and human rights activism. Freedom of expression may be limited, but only if the government can demonstrate that a restriction passes the three-part test of legality, legitimacy and necessity/proportionality. If a government wants to limit or otherwise burden speech, including speech through the use of AI-enabled media, it bears the burden of proving that all three conditions are met. The UN Human Rights Committee’s General Comment No. 34 further clarifies how these tests should be applied.
Although these human rights principles are designed for state actors, the international community has endorsed the UN Guiding Principles on Business and Human Rights, which call on companies to respect internationally recognized human rights in their business operations. The UN Special Rapporteur on Freedom of Opinion and Expression has called upon social media companies to use this framework in order to avoid infringing on freedom of expression and to address adverse impacts.
In the context of regulating AI-enabled media, these tests might be understood as follows:
Legality: Any proposed legislation or policies that restrict or burden expression must not, among other things, be vague or overly broad. In the case of deepfakes and other forms of AI-enabled media, this includes defining clearly what specific forms of media and their uses are being restricted. This practice gives users clear guidelines, limits the discretion of those implementing the policy (which helps to avoid selective and discriminatory enforcement), and avoids restricting practices that do not pose risks of harm.
Legitimacy: The reason for limiting expression must be a legitimate public interest objective, as set forth in ICCPR Article 19, such as protecting the rights of others, national security, or public health. Protecting a regime, head of state, or government official from the kinds of deepfake satire detailed in this report would not constitute legitimate grounds for limiting expression.
Necessity and proportionality: This test should be applied using interdisciplinary, multi-stakeholder input to determine when it is truly necessary to limit the use of AI-enabled media. This condition includes asking:
- Are there non-censorial approaches that could be deployed to achieve the public-interest objective? If effective non-censorial methods are available, it may not be necessary to burden speech to achieve the objective. In the context of AI-enabled media, there are a number of questions that should be considered. For example, are governments or social media platforms investing enough in digital literacy and other media-education initiatives to build societal resiliency with respect to the use of such technologies? Can fact-checking operations be effective? Are governments or social media platforms investing in technology that empowers consumers to know if they are looking at a deepfake?
 - If non-censorial methods are insufficient to achieve the legitimate objective, what is the least intrusive measure that can be pursued to accomplish the goal? Speech regulators should develop a continuum of options to achieve the objective and then select the one that achieves the public-interest objective with the least burden on speech. With regard to AI-enabled media, labelling content would be less intrusive than deleting it.
 - If the least intrusive measure is implemented, does it actually work? If the selected measures are ineffective or counterproductive, they cannot be upheld as they do not serve to achieve the public-interest objective. In the case of AI-enabled media, if governments and social media platforms implement measures that burden or limit expression, they would need to monitor and be transparent about whether the measures are effective in achieving the legitimate public-interest goal (e.g., preventing intimate image abuse, disinformation harms, etc.).
 
Social media platforms need to ensure this three-part test is applied to all aspects of content moderation, including platform speech codes as well as human and automated moderation of speech. These companies should also be transparent with the public about the measures they are applying to regulate speech.
Reflections from Attorney Matthew F. Ferraro on Satire, Fair Use, and Free Speech
A former U.S. intelligence officer, Matthew F. Ferraro is an attorney at WilmerHale, where he works at the intersection of cybersecurity, national security, and crisis management.
Disclaimer: The following are general statements of legal principles, not legal advice and not intended as such. I’m speaking only for myself, and not on behalf of my firm or clients.
Protections Under the U.S. Constitution
Satire is generally protected under the First Amendment to the U.S. Constitution. This principle was enunciated in a well-known U.S. Supreme Court case, Hustler Magazine v. Falwell, 485 U.S. 46 (1988). Adult-media mogul Larry Flynt provoked a lawsuit by satirizing the Christian televangelist Jerry Falwell in a cartoon Campari ad in his sexually explicit Hustler magazine. The ad showed an illustrated Falwell “recalling” his first sexual experience—with his mother, in an outhouse. A disclaimer noted that it was an “ad parody not to be taken seriously.” Falwell sued and was awarded damages in the lower courts for intentional infliction of emotional distress. However, the U.S. Supreme Court reversed that award and held unanimously that the ad was, in fact, a protected parody.
The ruling held that “public figures” such as Falwell may not recover damages for the intentional infliction of emotional distress without showing that the offending publication contained a false statement of fact, made with “actual malice.” This requires that the statement was made with knowledge that it was false or with reckless disregard as to whether or not it was true.
U.S. defamation law distinguishes between public figures and private persons. For someone of public interest or familiarity—like a government official, politician, celebrity, business leader, actor, or athlete—to litigate a defamation claim successfully, they must not only prove that the statement was defamatory (a false statement of fact), but also that it was made with “actual malice.” The higher standard is because the Constitution aims to promote debate involving public issues. Private persons outside of the public spotlight generally only need to prove that the defamer acted “negligently”—that they failed to behave with the level of care that someone of ordinary prudence would have exercised under the same circumstances.
Balancing First Amendment Rights with Fair Use and Defamation Laws
“Fair use” is the ability to use someone else’s copyrighted intellectual property under certain permissible circumstances, such as satire, without authorization or monetary payment. Congress wrote the principles of fair use into law in the Copyright Act of 1976, stating that such purposes as criticism, news reporting, teaching, and scholarship should not constitute an “infringement of copyright.” The Act articulates a four-factor balancing test to determine whether a use is a fair one. Critical aspects include the purpose and character of the use, the amount and substantiality of the portion used, and the effect of the use on the potential market for or value of the copyrighted work.
Since the passage of the Copyright Act, fair use has increasingly hinged on whether the use was “transformative,” and on whether the amount of material taken from the original work is appropriate to the new use or application. The Center for Media and Social Impact has some excellent resources in this area.
For defamation, after determining whether the person suing is a private or public figure, the court will ask the following general questions: Is the statement in question false but purporting to be fact? Was the false statement published or otherwise communicated to a third person? Did the defendant act with the appropriate level of fault (negligence if the plaintiff is a private person, or actual malice if a public figure)?
Lastly, the courts will ask if the false statements could and did cause the plaintiff to suffer damages such as loss of money or potential earnings, harm to their reputation, or difficulties in relations with third persons whose views might have been affected.
Addressing Deepfakes
Several of the laws adopted by U.S. states around deepfakes specifically exempt satire from their prohibitions:
- In New York, a new right-of-publicity deepfake law (S5959/A5605C) establishes a postmortem right to protect performers’ likenesses—including digitally manipulated likenesses—from unauthorized commercial exploitation for 40 years after death. However, the law includes broad fair-use-type exceptions for parody, criticism, and educational or newsworthy value. Notably, the law provides a safe harbor: if the digital replica carries “a conspicuous disclaimer” that its use was not authorized by the rights holder, then the use “shall not be considered likely to deceive the public into thinking it was authorized.”
 - This same New York law also bars most nonconsensual deepfake pornography. In these cases, a disclaimer is no defense. The law requires written consent from the depicted individual.
 - A law in California (AB-730) prohibits anyone from distributing, within 60 days of an election, any materially deceptive audio or visual media depicting a candidate on the ballot, when done with “actual malice,” i.e., the intent to injure the candidate’s reputation or to deceive voters—unless the media carries an explicit disclosure that it has been manipulated. Again, this law exempts satire and parody, as determined by the courts. It also provides exemptions from liability for broadcasting stations and websites that label the media in a way that foregrounds the fact of alteration, or that air paid political advertisements containing materially deceptive audio or video.
 
Combating the Malicious Uses of AI-enabled Media
As I argue in a Washington Post article co-written with Jason C. Chipman, titled “Fake News Threatens Our Businesses, Not Just Our Politics,” although free-speech rights protect opinion, businesses and individuals may have legal recourse—especially when third parties defame private individuals or benefit financially from spreading lies. Here are some of the possible remedies.
- State and federal laws bar many kinds of online hoaxes.
 - Causes of action that could be applicable to deepfakes include: defamation, trade libel, false light, violation of the right of publicity, intentional infliction of emotional distress, and negligent infliction of emotional distress.
 - Manipulated media that harm a victim’s commercial activities may also be actionable under widely recognized economic and equitable torts, including: unjust enrichment, unfair and deceptive trade practices, and tortious interference with prospective economic advantage.
 
Federal laws may be applicable if the media misappropriates intellectual property:
- The Lanham Act prohibits the use in commerce, without the consent of the registrant, of any “registered mark in connection with the sale, offering for sale, distribution, or advertising of any goods” in a way that is likely to cause confusion. The Lanham Act also prohibits infringing on unregistered, common-law trademarks.
 - The Copyright Act of 1976, which protects original works of authorship, may provide remedies if the manipulated media is copyrighted. (See the article I co-wrote with Jason C. Chipman and Stephen W. Preston for Pratt’s Privacy & Cybersecurity Law Report.)
 
No court that I know of has weighed in so far on social media platforms’ obligations to label or remove deceptive media that claims to be satirical. As noted above, labelling is relevant to some of the deepfake laws and, as we saw in the Hustler case, to some free-speech law, too. The major concern is “line drawing”—one person’s satire is another’s harmful defamation. In the United States, we’ve tended to err on the side of allowing free speech.
Strategies for Satirists
Generally speaking, I’ve observed that public figures are afforded less protection than private persons, so satirists are on firmer ground when they focus on the former rather than on everyday citizens. Opinions are usually protected, while false claims stated as fact (e.g., that X person did Y thing) are not. Thus, satirists will find themselves in safer territory to the extent that they focus on opinions and not on specific misleading or spurious assertions.


Images of a burning building in India, circulated online to suggest xenophobic attacks in Johannesburg.
Satirical newspapers from around the world. From top to bottom, left to right: USA’s The Onion; Nigeria’s Punocracy; Jordan’s Al Hudood; and Mexico’s El Deforma.