Is It Your Right to Fake? Why Recent Attempts To Regulate The Use Of AI-Generated Content in Elections May Be Unconstitutional

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

Imagine you are an enthusiastic voter ready to go to the polls and make your voice heard. Before you get the chance, however, you receive an unusual phone call. The voice on the line sounds exactly like the candidate you intended to vote for, and, even more strangely, the candidate appears to be telling you not to bother voting. This sounds unlikely. Why would a candidate for office encourage voters not to vote for them? Yet this exact scenario occurred in the run-up to the 2024 New Hampshire Democratic primary, when many New Hampshire voters received a robocall from what sounded like President Biden telling them not to bother voting.

In reality, President Biden was not on the phone telling voters not to vote for him, nor did he record a message encouraging voters not to vote. The call was a deepfake imitating the President’s voice, delivered to voters across New Hampshire. Deepfakes are realistic synthetic representations of people created using machine learning and artificial intelligence (“AI”). They can convincingly fabricate voices and even likenesses, depicting people saying or doing things that never happened. As a result, photographic and audio-recorded evidence is no longer as reliable as it once was. This is especially concerning for elections, as bad actors could convincingly fabricate evidence against political candidates in an attempt to tank their campaigns.

A bipartisan group of senators recently introduced a bill to address these concerns: the Protect Elections from Deceptive AI Act. The proposal prohibits knowingly distributing materially deceptive AI-generated content, including deepfakes, about political candidates with the intent to influence an election or solicit funds. It would permit political candidates to sue for injunctive or other equitable relief to prevent the distribution of the content, as well as to recover damages. Violations would also constitute defamation per se, a legal presumption that the content is inherently defamatory, which makes it considerably easier to prevail in a defamation case.

However, this proposal raises constitutional concerns, particularly under the First Amendment’s protection of free speech. Political speech, which includes discussion of government affairs, elections, and campaigns, is among the most protected categories of speech, and courts are skeptical of government attempts to regulate it. Because the proposal regulates speech based on its content, the relevant test is strict scrutiny: whether the law is a narrowly tailored means of achieving a compelling state interest.

The proposed legislation attempts to comply with the First Amendment by carving out exceptions for parody and satire. News broadcasts and publications are also exempt, provided they clearly disclose that the content being presented is AI-generated or that there are questions about its authenticity. While these exceptions partially alleviate the First Amendment concerns, they are likely not sufficient.

Is Deceptive AI Even Worthy of Protection?

Many people’s initial reaction may be that deceptive AI is undeserving of any legal protection. After all, deepfakes are pure fabrications and detrimental to voters’ interest in a fair electoral process. However, false speech is generally protected under the First Amendment. Look no further than United States v. Alvarez, where the Supreme Court struck down the federal Stolen Valor Act because it criminalized speech based purely on its falsity. There are instances where the truth or falsity of a statement does influence whether it receives protection, but falsity alone is not determinative.

The Court in Alvarez emphasized that while false speech may not be valuable in and of itself, attempts to penalize it may chill all speech, even true speech, because of the broad censorship power the government would wield if falsity alone were sufficient to criminalize speech. People may hesitate to speak the truth if they fear penalties, especially criminal prosecution. That is why most statutes targeting false speech impose additional requirements, such as knowledge of falsity or a showing of cognizable harm, as in defamation and fraud.

As the Supreme Court also recognized in Alvarez, even knowingly false statements can have value. After all, parodies on programs like Saturday Night Live use the likenesses of celebrities, albeit without AI, to entertain or humorously criticize them. Political cartoons routinely depict political figures saying or doing things that never happened in order to criticize them. AI can be used for the same purpose, so lawmakers must be cautious when restricting this channel of speech.

Strict Scrutiny Test

Since the Protect Elections from Deceptive AI Act regulates speech based on its content, a court would most likely apply strict scrutiny in reviewing the statute’s constitutionality. The bill regulates speech on the subject matter of elections and political candidates specifically, and such subject-matter restrictions are judged under strict scrutiny. There is little dispute that protecting the integrity of federal elections is a compelling government interest, and the Supreme Court has upheld some restrictions on even political speech in similar electoral contexts. In Burson v. Freeman, the Court upheld a 100-foot buffer zone around polling places within which political speech was prohibited, finding that protecting the integrity of the vote by preventing undue influence on voters was a compelling government interest and that the 100-foot zone was narrowly tailored to achieve that goal.

In contrast, in Russell v. Lundergan-Grimes the Sixth Circuit struck down a similar law creating a no-political-speech buffer zone. That case involved a 300-foot buffer zone around the polling place, far more extensive than the 100-foot zone the Supreme Court upheld in Burson. The Sixth Circuit noted that the area was nine times larger than what the Supreme Court had permitted and thus could not be considered narrowly tailored without a showing of why the additional size was necessary, a showing the state failed to make.

Courts assessing the constitutionality of the Protect Elections from Deceptive AI Act would likely treat Burson and Russell as relevant precedent, since both cases and the proposed legislation involve attempts to regulate political speech related to an election. Burson stands as an example of what a narrowly tailored restriction on such speech should look like, while Russell illustrates an insufficiently tailored attempt at regulating political speech. To be upheld as constitutional, this proposal needs to be more like Burson and less like Russell.

With these cases in mind, there are major questions about the constitutionality of the proposed bill. While it arguably advances a compelling government interest, a court would likely find it insufficiently narrowly tailored to achieve that interest. One glaring flaw is that the restrictions have no time limit. Candidates can declare their candidacy years in advance; President Trump, for example, declared his candidacy for the 2024 election in November 2022. Under the proposed bill, that creates a nearly two-year period during which one cannot distribute deceptive AI-generated content about a prominent political figure. Some may not view this as cause for concern or may even applaud it. After all, such AI content is likely being used to deceive voters rather than to express sincere political views.

Yet even sincere expressions of political views through AI generation can produce a deceptive effect, as the hypothetical below illustrates. The proposal does not require an intent to deceive, only that the material itself is deceptive; the only intent requirement is intent to influence an election or solicit funds. Almost all expressions of support for a candidate are intended to affect the outcome of an election. People put up signs and social media posts because they want to express their support and possibly help their candidate gain ground in the process. Other intentions may exist, but it is hard to argue that those who express their views publicly do not at least hope to change someone else’s mind or convince an undecided voter to vote for their candidate. If AI-generated content provides an effective avenue to do so, the government must take care not to overstep its bounds when regulating it.

Consider the following hypothetical: someone dissatisfied with President Biden’s border policies uses AI to create realistic images of President Biden at the border personally welcoming migrants and waving them in, then posts the image on social media in early 2023 with the caption “This is Essentially Biden’s Border Policy.” The images are just plausible enough that a reasonable person might believe they are real and that the President is personally at the border waving migrants in. The post might seem to fall under the satire or parody exceptions, but parodies and satires are meant to be obvious by their nature, and satire in particular typically requires a humorous portrayal. Neither appears to be present here. As a result, the Biden campaign could have the post taken down, stifling the speech of a citizen expressing dissatisfaction with the President’s policies more than a year before even the first primary election. Images often express ideas and capture attention better than words, and not everybody is a skilled enough artist to put their ideas into a visual medium. Why, then, should citizens be prohibited from using AI to express their criticism of a politician by making use of the politician’s own image?

To be clear, not all uses of AI to portray or imitate political candidates are equally deserving of protection. The robocall in the New Hampshire primary, after all, had little expressive value and was intended primarily to deceive. But the fact that rights can be abused does not justify blanket restrictions on those rights. Any regulation in this area should be narrowly tailored to prevent only the specific harms of deceiving voters in an election, without infringing on the right to express political views, even in unorthodox ways.

Comparisons with Defamation and Other Existing Law

Since the bill directly references defamation law, it is useful to compare the proposal to defamation law and other existing law. The landmark decision New York Times v. Sullivan established a high burden for public officials, later extended to public figures, to prevail on a defamation claim. Political candidates, and certainly officeholders, are among the most prominent examples of public figures. The “actual malice” standard the Court adopted requires that the defendant knew the statements were false or made them with reckless disregard for the truth, and it applies only to statements capable of verification or falsification. Under the Supreme Court’s defamation precedent, opinions are protected speech and cannot form the basis of a defamation action.

Accordingly, any AI-generated content that expresses pure “opinion” would be exempt even under the Sullivan standard. And even for content that does assert facts, the proposed legislation makes no reference to falsity, only to deception. The distinction may seem semantic, but even a true statement can be deceptive. To emphasize this point, consider another hypothetical: a person critical of President Trump uses AI to generate an image of Donald Trump in a courtroom receiving a guilty verdict, captions it “Trump was found guilty for his crimes related to the election,” and posts it to social media. This is technically true: President Trump did stand trial in a courtroom, and a jury recently found him guilty of falsifying business records, which was allegedly done to increase his chances of winning the 2016 election. However, a reasonable person coming across the post might incorrectly believe he was convicted for his actions related to the 2020 election or the events of January 6, 2021. Because those allegations are more serious, a conviction on them would likely have a greater impact on the 2024 election than the business records case. The post could thus deceive voters into changing their vote based on the mistaken belief that Trump was convicted of a more serious crime, even though it never makes a false statement.

This is not to say that political candidates should be unable to recover in defamation against those who distribute deceptive AI content depicting them before an election. On the contrary, defamation law is, on paper, a good fit for recovery. The problem is that the proposal does more than extend existing defamation law to the distribution of deceptive AI-generated content before elections. Rather, it expands what can be classified as defamation to potentially include expressions of opinion or technically true statements of fact, simply because the content conveying the message was AI-generated. Defamation should turn on the falsity of the content, not on how the content was created.

The existence of laws like those prohibiting defamatory statements raises the question of whether the proposed legislation is needed at all. Carl Szabo, Vice President and Chief Counsel of NetChoice, argues that while some gap-filling legislation may be needed, existing state and federal law is largely sufficient to address the major concerns, especially those pertaining to democracy, without sweeping restrictions on AI innovation. For example, the perpetrator of the New Hampshire robocall is already being prosecuted in New Hampshire on 13 felony counts of voter suppression. Szabo argues that using AI to impersonate a candidate in an attempt to defraud voters could also constitute wire fraud, allowing the federal government to pursue charges as well. The Foundation for Individual Rights and Expression has likewise suggested that theories such as forgery, fraud, and false light may apply. Because the use of such deepfakes in the election context is already subject to various state and federal laws, the proposed legislation may not even be necessary to deter bad actors.

How to Improve the Bill

One critical step toward improving the bill is to make it more narrowly tailored by limiting its effect to a defined window before an election. This is crucial because candidates can declare their candidacy years in advance, and restricting any type of speech, let alone political speech, for a period of potentially years is far from narrowly tailored. In addition, the bill should require that the defendant intend to deceive. A blanket regulation of deceptive content alone is not sufficient, because even deceptive content can have significant expressive value. If new regulations are to be passed, they should target uses like the New Hampshire robocall and avoid sweeping in uses like the two hypotheticals above. Finally, the proposal is more likely to be upheld as constitutional if it narrows its definition of AI-generated media.

A good model for such regulations is California Assembly Bill 972 (AB 972). That bill “prohibits a person, committee, or other entity, within 60 days of an election at which a candidate for elective office will appear on the ballot, from distributing with actual malice materially deceptive audio or visual media of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” AB 972 effectively addresses two of the major concerns discussed above: it establishes a reasonable time frame for when the conduct is prohibited, and it requires that the distributor intend to injure the candidate’s reputation or deceive voters.

By addressing these concerns, AB 972 aligns more closely with Burson than with Russell. The law upheld in Burson pursued a noble objective that required some narrow restrictions on political speech, but the lawmakers acknowledged the importance of the speech being restricted and ensured that their restrictions were as narrow as reasonably possible to achieve that objective. The law struck down in Russell, by contrast, went far beyond what was needed, stifling political speech even on private property without any added justification for such expansive steps. The proposed federal legislation should follow the examples of Burson and AB 972, not Russell.

It is also instructive to compare this bill to other unsuccessful attempts at regulating AI’s impact on elections. The Iowa General Assembly introduced a bill requiring that any media promoting a political candidate or ballot issue that was generated using AI include a disclaimer that AI was used. At first this sounds reasonable, until one considers the proposal’s definition of AI: a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Critics noted that this broad language could encompass even basic photo editing and the use of green screens. Ironically, such a disclaimer requirement could create the false impression that genuine political content is the product of AI, when the goal was to prevent exactly that kind of deception.

Notably, the federal proposal may face a similar problem. While the proposal itself did not originally define AI, the federal government defines AI in 15 U.S.C. § 9401 as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” This is the same definition used in the Iowa proposal, meaning the federal bill could have been susceptible to the same criticism. However, a recent change to the federal proposal likely averts this issue: it now defines deceptive AI-generated media as “the product of artificial intelligence technology that uses machine learning …” The new requirement that the media be the product of AI using machine learning likely narrows the definition enough to avoid many of the Iowa proposal’s negative implications.

However, the proposal does not define how much of a work must be AI-generated in order to qualify: it could be 51%, 100%, or even 1%. This lack of clarity could lead to the same problems as the Iowa proposal. To ensure the proposal is sufficiently narrowly tailored, the drafters should adopt a definition requiring that at least a majority of the work, if not a higher percentage, be the product of AI.

In summary, the bill as it currently stands is likely unconstitutional under the First Amendment. However, its constitutional defects can likely be remedied by limiting the prohibition to a defined window before an election, requiring an intent to deceive, and more clearly defining what it means for something to be the product of AI.

Conclusion

AI is a powerful tool that will change much about our society. It offers great new ways to communicate thoughts and ideas, but with it comes an inherent risk of abuse. Nobody wants voters manipulated or lied to by bad-faith actors using AI tools to fabricate voices and likenesses and undermine democracy. But overreaching measures that infringe on citizens’ right to express themselves freely are not the answer. Lawmakers must approach these issues with a scalpel rather than a chainsaw: legislation must be narrowly tailored to achieve the objective of protecting democracy. Restricting legitimate political speech would not protect democracy; it would destroy it.
