United By AI: The Unusual Alliance Between G7 Leaders, Big Tech, and the Pope

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

Earlier this month, Pope Francis made history as the first Pope to address the G7, a group of leaders representing some of the most powerful and influential countries in the world. Among the issues the Pope addressed was AI. The Pope himself was the victim of AI and deepfakes just last year, when an image of him wearing a white puffy jacket spread online. However, his comments pertained very little to that relatively insignificant incident. Instead, the Pope emphasized that AI must be human-focused so that it does not deprive us of our human dignity.

Strangely, this is not even the first time the Pope has gotten involved with AI. In 2020, the Vatican was among the signatories of the Rome Call for AI Ethics, alongside tech companies like Microsoft and IBM. While the Call does not propose specific policy, it offers broad guidelines and principles on how to develop AI ethically so that it minimizes harm and maximizes benefit to humanity.

 

The Pope’s G7 Meeting

Chief among the concerns the Pope addressed at the G7 Summit was AI’s decision-making. The Pope stated that important decisions must be left to humans and not algorithms, for there are factors to consider that cannot simply be quantified by a numerical value. One such issue is criminal sentencing. Judges are using algorithms to help determine whether a prisoner should be granted home confinement. These algorithms, the Pope says, take into account factors like the severity of the sentence and the prisoner’s behavior, but also their ethnicity, education, and even credit rating. The Pope stressed that humans are not numbers and that these algorithms cannot properly account for a person’s ability to surprise others.

However, the greatest AI decision-making threat the Pope addressed is autonomous weapons: machines that can make a choice to kill a human. While much of the Pope’s address dealt with broad principles, here the Pope made an outright call for specific policy. He called for world leaders to ban the use of autonomous weapons, stating that “No machine should ever choose to take the life of a human being.”

Another point the Pope addressed was AI-generated essays. While these essays seem analytical, the Pope stated that they do not actually develop any new analysis or concepts. Instead, they merely copy existing analysis and compile it into an appealing form. Rather than offering an opportunity for authentic reflection, as education should, the prevalence of these essay-writing algorithms “runs the risk of [education] being reduced to a repetition of notions, which will increasingly be evaluated as unobjectionable, simply because of their constant repetition.”

One final point the Pope addressed was the need for “algor-ethics.” The Pope believes that AI is shaped by the worldview of its creator, and there is often no consensus on the various issues facing our world today. Algor-ethics is a set of global principles meant to mitigate the harmful effects of the biases AI may have acquired through its development.

 

The Rome Call for AI Ethics

In February of 2020, the Rome Call for AI Ethics was signed by the Vatican alongside tech companies like Microsoft and IBM. The Call established three areas of impact and six principles to ensure the ethical, human-focused development of AI. The areas of impact are ethics, education, and rights. The six principles are transparency, inclusion, responsibility, impartiality, reliability, and security/privacy.

The first area of impact is ethics. It starts with a basic recognition that all humans are free, with equal dignity and rights. Overall, the primary goal of AI should be to serve humanity and its progress, not undermine it. This can be achieved through three requirements. “It must include every human being, discriminating against no one; it must have the good of humankind and the good of every human being at its heart; finally, it must be mindful of the complex reality of our ecosystem and be characterized by the way in which it cares for and protects the planet (our “common and shared home”) with a highly sustainable approach, which also includes the use of artificial intelligence in ensuring sustainable food systems in the future.” AI must protect, not exploit, humanity, especially the weak and vulnerable. In addition, one should always know when they are interacting with a machine and not a human.

The next area of impact is education. This impact area calls for working with younger generations for their benefit. This is done through the development of quality, accessible, and non-discriminatory education. These same educational opportunities should be available to the elderly as well, including through offline services during the technological transition. Education must raise awareness about both the opportunities and the issues posed by AI. The goal should be that nobody is left behind and everyone is able to express themselves and contribute to the benefit of humanity.

The final area of impact is rights. This area focuses on ensuring AI development is done in a way that protects the weak and underprivileged, as well as the natural environment. This includes making human rights the core of AI. In particular, a “duty of explanation” should be considered. This means that what goes into the AI’s decision-making, as well as its objective and purpose, should be transparent.

To achieve the objectives outlined in the areas of impact, the Pope calls for “algor-ethics” that abides by six principles. The first is transparency. This ties back to the duty of explanation: the AI’s decision-making process must be clear and knowable. Next is inclusion, meaning that people cannot be excluded or discriminated against by AI, which is part of both the ethics and education areas of impact. Next is responsibility: those who create AI systems are to be accountable for the AI’s outcomes. After that is impartiality, which means AI development must avoid the biases of its developers. Then there is the principle of reliability, which simply means AI should be reliable, for instance by providing accurate information. AI is known to “hallucinate” and make up facts out of thin air, which must be addressed. Finally, there is security/privacy: AI must respect a person’s private information.

 

Conclusion

The Catholic Church is one of the oldest global organizations in the world. Given the considerable impact AI will have on the world, it makes a strange kind of sense for the Vatican to get involved. The Pope’s concerns are well founded and should be considered as AI continues to advance. If humanity is not at the center of AI’s development, we risk AI replacing humanity.

How Broad Can I Be? Why The No AI FRAUD Act’s Scope Dooms It Constitutionally

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

The vast growth of AI over recent years has given rise to new concerns about AI’s use, especially as it pertains to portraying real people saying or doing things that never happened. One such proposal to address these concerns is called the No Artificial Intelligence Fake Replicas and Unauthentic Duplications Act, or the No AI FRAUD Act. The No AI FRAUD Act, despite its noble intentions, is an overly broad proposal with many dangerous implications. The broad scope of the act could classify the device you are reading this on as a “personal cloning service” and subject its maker to $50,000 in liability each time one is sold. Anything from digital drawings you have made to photos you have taken can subject you to $5,000 or more in liability, even if no AI was involved at all. It opens up platforms like YouTube, Facebook, Twitter/X, TikTok, and virtually every other social media platform to liability for simply hosting such content. Finally, in order to escape the glaring First Amendment issues, it calls on courts to balance the First Amendment with the enforcement of this bill while simultaneously suggesting that the bill should not alter how courts apply the First Amendment.

Personal Cloning Service?

Cloning, while a real thing, is more often the subject of science fiction than reality. Most people would probably be shocked to discover they were in possession of a cloning device, but said device may be in their hand or pocket right now. This bill defines a personal cloning service as “an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals.” A digital depiction is defined as a “replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or in part using digital technology.” Finally, the bill defines an individual as “a human being, living or dead.”

Since virtually all cameras are made for the purpose of creating a digital replica of whatever they are viewing when the photo is taken, including individuals, it is possible that any camera, or any device with a camera such as your smartphone, would be classified as a personal cloning service. While the bill requires the depiction to be of “particular, identified individuals,” it is unclear exactly what that means. It could mean an algorithm or other tool that only creates depictions of a few select and specifically named individuals and nobody else. For example, an AI that only produces images of Taylor Swift. However, this interpretation is undermined by a later provision stating “[e]very individual has a property right in their own likeness and voice.” If every individual has the property right, why should someone who was falsely depicted by an AI be excluded just because the AI was broader and not designed to replicate that person specifically? Thus, a more likely interpretation is that the provision refers to any algorithm or other technology that depicts real, identifiable people, rather than creating images of made-up people.

Determining what does and does not qualify as a personal cloning service is important. The bill exposes anyone who distributes or otherwise makes these personal cloning services available to the public without authorization to $50,000 in liability. Keep in mind that the authorization need not come from the government or any government actor, but rather from the allegedly injured party. If the definition is as broad as it seems, any manufacturer of cameras or similar devices may have to get the express permission of every person who could be photographed in order to avoid liability.

This has staggering implications for free speech. Visual media like photos, videos, and digital drawings made with tools like Photoshop are key forms of expression. None of these even require the use of AI to be created. This proposal’s broad definitions do not just prevent the use of AI in depicting people; they essentially prevent the use of any digital media depicting someone. Hopefully you are good at drawing, because that appears to be the most feasible way to get around this.

Your Own Work Is Considered AI, Even If It Is Not

Beyond just the manufacturers of so-called “personal cloning services”, the proposal also opens up almost anyone to thousands of dollars in liability. The proposal states “in the case of an unauthorized publication, performance, distribution, transmission, or other making available of a digital voice replica or digital depiction, five thousand dollars ($5,000) per violation or the actual damages suffered by the injured party or parties as a result of the unauthorized use, plus any profits from the unauthorized use that are attributable to such use and are not taken into account in computing the actual damages.” Combine this with the previously stated definition of a digital depiction and one can reasonably reach the conclusion that merely posting your own photographs, videos, and audio recordings on the internet or even publishing them to print could open you up to liability. 

This liability applies not only to actual depictions like a photograph of a person or a recording of their actual voice, but to any imitation of that person as well, so long as it was created in whole or in part by digital technology. If you use tools like Photoshop to create or even just edit images or drawings, this definition is broad enough to make you liable for thousands of dollars for any person you depict. Going even further, the definition does not require the representation to be of the particular person, just their likeness. That means that even making an image of a character from a movie or TV show could potentially be enough, because that character still has the actor’s likeness. This definition is also broad enough to potentially encompass political cartoons, a form of political speech older than the United States itself.

This bill goes beyond visual media and includes “digital voice replicas,” which the proposal defines as “an audio rendering that is created or altered in whole or in part using digital technology and is fixed in a sound recording or audiovisual work which includes replications, imitations, or approximations of an individual that the individual did not actually perform.” Imagine your favorite comedian does an impression of another individual during one of their shows. They could be held liable for that so long as it is created or altered by digital technology and is recorded. This likely includes anything from voice filters to tools like autotune.

To make matters worse, the bill does not even include an exception for parody or satire, something most intellectual property laws permit as fair use. Even humorous criticism of prominent figures is not protected. Imagine an America where you could not make fun of your politicians or other prominent figures, and they could in fact punish you directly for doing so. That is something you would expect from authoritarian regimes, not a country that is supposed to value free speech and individual liberty.

Even Your Favorite Websites Could Be Liable

Right now, most online platforms are protected by Section 230 of the Communications Decency Act, commonly referred to as just Section 230. This means that platforms like YouTube and TikTok cannot be held liable for the content distributed on their platforms by third parties. There are some notable exceptions to Section 230, however; the most relevant here is the exception for intellectual property violations posted on their platforms. This is relevant because the No AI FRAUD Act classifies the right of publicity as an intellectual property right. That means that not only do you risk liability, as do the manufacturers of “personal cloning services,” but even the platforms the content is posted on are liable.

Section 230 provides essential protection to online platforms, allowing them to moderate as much or as little content as they wish without risking liability. Section 230 is thus essential for protecting speech online because it allows platforms to err on the side of more speech rather than more censorship. Platforms are still permitted to moderate their content as they see fit, but rarely face liability for what is posted on them. However, because this proposal opens up all platforms to liability for violations, platforms would have to crack down and err on the side of caution instead. As a result, even speech that is not otherwise prohibited by the bill would be at risk, because in edge cases platforms will likely choose to censor rather than permit so as to not be liable.

Attempting to Uphold The First Amendment By Violating It

The proposal tries to escape the clear First Amendment issues by expressly permitting the First Amendment to be used as a defense. The drafters of the proposal seem to forget, however, that the First Amendment would serve as a defense regardless of whether the bill expressly permits it. Moreover, the so-called “First Amendment defense” described in the proposal is not a First Amendment defense at all. Instead, it asks the court to balance the public interest in access with the outlined intellectual property concerns.

Despite what the proposal suggests, fundamental rights are not something that should merely be balanced against other interests. Our Constitution demands significantly more when fundamental rights are restricted. One common constitutional test for content-based restrictions on speech, including subject-matter restrictions, is known as strict scrutiny. Strict scrutiny requires that restrictions be narrowly tailored in furtherance of a compelling government interest. There is reasonable debate about whether restricting the depiction of individuals counts as a subject-matter restriction, but considering it pertains to the actual substance of the speech, it is more likely than not a subject-matter restriction.

The next question, then, is whether the bill is narrowly tailored in furtherance of a compelling government interest. As this analysis has repeatedly shown, nothing about this proposal is narrowly tailored. It appears to apply to everything from cell phones to cameras to Photoshop. The proposal even lacks a clear compelling government interest. There are references to scenarios such as fake celebrity endorsements, which may implicate a compelling interest in protecting consumers, but surely that cannot serve as the compelling government interest for restrictions not pertaining to celebrities at all.

Another potential avenue to salvage this bill is arguing that it is a reasonable and content-neutral time, place, and manner restriction with ample alternative channels of communication. For the sake of argument, we will assume that this restriction is content neutral. Ward v. Rock Against Racism found that for a restriction on speech to be reasonable under the time, place, and manner test, it must be “the least intrusive upon the freedom of expression as is reasonably necessary to achieve a legitimate purpose of the regulation.” While restricting AI use in replicating likenesses may be a valid manner restriction on speech, blanket regulations on any replica created even partially by digital technology are far from the least intrusive means of achieving the proposal’s goal.

Even if it were reasonable, given how platforms would be forced to react to this proposal, many if not most online platforms may no longer be able to serve as channels not only for these specific types of speech, but even for speech that would narrowly be permissible. Out of fear of facing liability, these channels of communication would inhibit even the permitted speech. Since these restrictions would likely result in the shutting down of many channels of communication, the proposal is likely to fail this test as well.

 

How to Improve the Bill

The most crucial step that can be taken to save this proposal from constitutional challenges is to greatly narrow the definition of a personal cloning service such that it clearly excludes devices like cameras. The definition could be something akin to “an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals, with either no or minimal human intervention beyond the giving of instructions. Any algorithm, software, tool, or other technology, service, or device that merely assists a human decision maker is excluded.” Under this definition, creative works where a human was behind their creation beyond merely inputting instructions would be excluded. A human has to set up the scene or choose where to snap a photograph. A human also makes the decisions on the colors and strokes used when drawing with digital tools. This definition thus provides protection for human-created creative works that merely use digital technology as a tool to that end.

Another key improvement to this bill would be either to not classify this right as an intellectual property right or to expressly not hold platforms liable for hosting violating content. Even under the proposed definition, it is not always obvious just by looking at a piece of content whether it should be taken down. The whole point of a lot of AI-generated content is to look real. If social media platforms have to distinguish between real photos and AI-generated ones or risk liability, they will likely err on the side of caution and stifle more speech than is necessary.

One final key improvement would be to provide a satire and parody exception. Such exceptions are common in IP law, but are even more essential in this proposal. The likeness of real people is often the subject of, or at least used in, parody or satire. Think of shows like Saturday Night Live, which regularly depict prominent figures in humorous ways, often in a mocking manner. Given that satire and parody are often used to criticize prominent figures, including and especially politicians, it is arguably more essential to have a parody and satire exception here than in most forms of IP law. This proposal deals with potential political speech more than most other forms of IP do.

Conclusion

Some may view this criticism as hyperbolic. Surely the definitions are not actually meant to be as broad as a plain reading may imply. Even if they were, surely no court would actually uphold the law in such a way. Hopefully that sentiment is fully correct. However, why even give the option? Why make these scenarios that could potentially chill free speech a possibility when there is no good reason to even desire this outcome? Why subject people who are merely expressing themselves through a digital medium to litigation and the stress and expenses that come with it just to vindicate their rights? The fact that courts are unlikely to allow it does not mean they should have the option to allow it at all, nor that speakers should be temporarily chilled while litigation is pending. Congress should thus either reject the bill or narrow it greatly to avoid overstepping its bounds.

Is It Your Right to Fake? Why Recent Attempts To Regulate The Use Of AI-Generated Content in Elections May Be Unconstitutional

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

Imagine you are an enthusiastic voter ready to go to the polls and make your voice heard. However, before you get the chance, you receive an unusual phone call. The voice on the call sounds exactly like the candidate you wanted to vote for. Even more strangely, the candidate appears to be telling you not to even bother voting. This sounds unlikely. Why would a candidate for office encourage voters not to vote for them? This exact scenario occurred in the run-up to the 2024 New Hampshire Democratic primary. Many New Hampshire voters received a robocall from what sounded like President Biden telling them not to bother voting.

In reality, President Biden was not on the phone telling voters not to vote for him, nor did he record a message encouraging voters not to vote. This call was a deepfake imitating the President’s voice that was delivered to many voters across New Hampshire. Deepfakes are synthetic and realistic representations of people created using machine learning and artificial intelligence (“AI”). Deepfakes can be used to convincingly fabricate voices and even likenesses to depict people saying or doing things that never happened. As a result, photographic and audio-recorded evidence is no longer as reliable as it may have once been. This is especially concerning for elections as bad actors could convincingly fabricate evidence against political candidates in an attempt to tank their campaigns. 

A bipartisan group of senators recently introduced a bill to combat these concerns. The proposal is called the Protect Elections from Deceptive AI Act. The proposal prohibits knowingly distributing materially deceptive AI-generated content, including deepfakes, about political candidates with the intent to influence an election or solicit funds. The proposal would permit political candidates to sue for an injunction or other equitable relief to prevent the distribution of the content, as well as to recover damages. Violations of the proposal would also be considered defamation per se, which makes it considerably easier to prevail in a defamation case because the proposed legislation creates a legal presumption that the content is inherently defamatory.

However, this proposal raises constitutional concerns, especially pertaining to First Amendment free speech rights. Political speech is among the most protected categories of speech, and courts are skeptical of government attempts to regulate it. Political speech includes discussing government affairs, elections, and campaigns. The relevant test for this proposal is whether it is a narrowly tailored means of achieving a compelling state interest, also known as the strict scrutiny standard. This standard generally applies when the government attempts to regulate the content of speech.

The proposed legislation attempts to comply with the First Amendment by carving out exceptions for parody and satire. In addition, news broadcasts and publications are also exempt, but must clearly disclose that the content being presented is AI-generated or that there are questions about its authenticity. While these exceptions partially alleviate the First Amendment concerns, they are likely not sufficient.

Is Deceptive AI Even Worthy of Protection?

Many people’s initial reaction may be that deceptive AI is undeserving of any legal protection. After all, deepfakes are pure fabrications and detrimental to a voter’s right to a fair electoral process. However, false speech is generally granted protection under the First Amendment. Look no further than United States v. Alvarez, where the Supreme Court struck down the federal Stolen Valor Act because it criminalized speech based purely on its falsity. There are instances where the truth or falsity of a statement does influence whether it receives protection, but falsity alone is not determinative.

The Court in Alvarez emphasized that false speech may not be valuable in and of itself, but that attempts to penalize false speech may chill all speech, even true speech, given the broad censorship power the government would be granted if falsity alone were sufficient to criminalize speech. People may hesitate to speak the truth if they fear penalties, especially criminal prosecution. That is why most statutes pertaining to false speech have other requirements, such as knowledge of falsity or a showing of cognizable harm, as with defamation and fraud.

As the Supreme Court also recognized in Alvarez, even knowingly false statements can have value. After all, shows like Saturday Night Live use the likenesses of celebrities, albeit without AI, for comedy or to humorously criticize them. Political cartoons that depict political figures saying or doing things that never happened are commonly used to criticize such figures. AI can be used for the same purpose, so lawmakers must be cautious when restricting this channel of speech.

Strict Scrutiny Test

Since the Protect Elections from Deceptive AI Act regulates speech based on its content, a court would most likely apply strict scrutiny in reviewing the constitutionality of the statute. The bill attempts to regulate speech pertaining to the subject matter of an election and political candidates specifically, and these types of subject-matter restrictions are judged under strict scrutiny. There is little dispute that protecting the integrity of federal elections is a compelling government interest. Thus, the Supreme Court has upheld some restrictions on even political speech in similar electoral contexts. In Burson v. Freeman, the Court permitted a 100-foot buffer zone around polling stations where political speech was prohibited. The Court found that protecting the integrity of the vote by preventing undue influence on voters was a compelling government interest and that the 100-foot buffer zone was narrowly tailored to achieve that goal.

In contrast with Burson, in Russell v. Lundergan-Grimes the Sixth Circuit struck down a similar law that created a no-political-speech buffer zone. That case involved a buffer zone of 300 feet around the polling station, much more extensive than the 100-foot buffer zone that the Supreme Court upheld in Burson. The Sixth Circuit noted that the area covered was nine times larger than what the Supreme Court had permitted (a 300-foot radius encloses nine times the area of a 100-foot radius) and thus could not be considered narrowly tailored without a showing of why the additional size was necessary, which the state had failed to make.

Burson and Russell would likely be considered relevant precedent by courts assessing the constitutionality of the Protect Elections from Deceptive AI Act. These cases and the proposed legislation all involve attempts to regulate political speech related to an election. Burson also stands as a good example of what a narrowly tailored restriction on such speech should look like, while Russell serves as a good example of an insufficiently tailored attempt at regulating political speech. To be upheld as constitutional, this proposal needs to be more like Burson and less like Russell.

With these cases in mind, there are major questions about the constitutionality of the proposed bill. While it arguably advances a compelling government interest, it is likely to be held insufficiently narrowly tailored to achieve that interest. One glaring flaw with this legislation is that there is no limited time frame for the restrictions. Candidates can declare their candidacy years in advance. For example, President Trump declared his presidential candidacy for the 2024 election in November of 2022. Under the proposed bill, that creates an almost two-year period during which one cannot distribute deceptive AI-generated content about a prominent political figure. Some may not view this as a cause for concern or may even applaud it. After all, the AI content is likely being used to deceive voters rather than express sincere political views.

Yet even sincere expressions of political views through AI generation can produce a deceptive effect, as the hypothetical below illustrates. The proposal does not require an intent to deceive, only that the material itself is deceptive. The only intent requirement is to either influence an election or solicit funds. Almost all expressions of support for a candidate are intended to impact the outcome of an election. People put up signs and social media posts because they want to express their support and possibly help their candidate garner support in the process. Other intentions may exist, but it is hard to argue that those who express their views publicly do not at least hope they may change someone else’s views or convince an undecided voter to vote for their candidate. If AI-generated content provides an effective avenue to do so, the government must be careful when regulating not to overstep its boundaries.

Consider the following hypothetical: someone who is dissatisfied with President Biden’s border policies uses AI to create realistic images of President Biden at the border personally welcoming migrants and waving them in, and then posts that image on social media in early 2023 with the caption “This is Essentially Biden’s Border Policy.” The images are just plausible enough that a reasonable person may believe they are real and that the President is personally at the border waving migrants in. This could fall under the satire or parody exceptions, but parodies and satires are meant to be obvious by their nature, and satire specifically often requires portrayal in a humorous way. Neither appears to be present in this hypothetical. As a result, the Biden campaign could have the post taken down, stifling the speech of a citizen who was expressing his or her dissatisfaction with the President’s policies more than a year before even the first primary election. Images often express ideas and capture attention better than words, and not everybody is a good enough artist to put their ideas into a visual medium. Why, then, should citizens be prohibited from using AI to express their criticism of a politician by making use of images of the politicians themselves?

To be clear, not all uses of AI to portray or imitate political candidates are equally deserving of protection. After all, the robocall used in the New Hampshire primary had little expressive value and was intended more to deceive. However, the fact that rights can be abused does not justify blanket restrictions on those rights. Any regulation in this area should be as narrowly tailored as possible to prevent only the specific harms related to deceiving voters in an election and not infringe on the right to express political views, even in an unorthodox way.

 

Comparisons with Defamation and Other Existing Law

Since the bill directly references defamation law, it is appropriate and useful to compare the proposal to defamation law and other comparable laws. The landmark decision New York Times v. Sullivan established a high burden for public figures to prevail in a defamation claim. Political candidates, and certainly office holders, are among the most prominent examples of public figures. The standard the Court adopted requires that the defendant have known their statements were false or made the statements with reckless disregard for the truth. This standard applies only to statements that are capable of verification or falsification. Under the Supreme Court’s defamation precedent, opinions are protected speech and cannot form the basis of a defamation action.

Accordingly, any AI-generated content that expresses purely “opinions” would be exempt even under the Sullivan standard. Even assuming the content does express falsities, the proposed legislation makes no reference to falsity, only deception. This seems semantic, but even a true statement can be deceptive. To emphasize this point, consider another hypothetical: a person who is critical of President Trump uses AI to generate an image of Donald Trump in a courtroom receiving a guilty verdict with the caption “Trump was found guilty for his crimes related to the election” and then posts it to social media. This is technically true because President Trump was on trial in a courtroom and a jury did recently find him guilty of falsifying business records, which was allegedly done to increase his chance of winning the 2016 election. However, a reasonable person coming across the post may incorrectly believe that he was convicted for his actions related to the 2020 election or the events of January 6, 2021. Since those allegations are more serious, a conviction for them would likely have a greater impact on the 2024 election than the business records case. The post could thus deceive voters into changing their vote based on the mistaken belief that Trump was convicted of a more serious crime, despite never containing a false statement.

This is not to say that political candidates should not be able to recover in a defamation case against those who distribute deceptive AI content depicting them before an election. On the contrary, defamation law is a good fit for recovery on paper. The problem is the proposal does more than just extend existing defamation law to the distribution of deceptive AI-generated content before elections. Rather, the proposal extends what could be classified as defamation to potentially include the expression of opinions or technically true statements of fact just because the content conveying the message was AI-generated. Defamation should be based on the falsity of the content, not how the content was created.

The existence of laws like those prohibiting defamatory statements raises questions about whether the proposed legislation is needed at all. Carl Szabo, Vice President and Chief Counsel of NetChoice, argues that while some legislation may be needed as a gap filler, existing state and federal law is largely sufficient to cover the major concerns, especially those pertaining to democracy, without sweeping restrictions on AI innovation. For example, the New Hampshire robocall incident is already being prosecuted in New Hampshire as 13 felony counts of voter suppression. Szabo argues that using AI to impersonate a candidate in an attempt to defraud voters could also constitute wire fraud, allowing the federal government to pursue charges as well. The Foundation for Individual Rights and Expression has also suggested that claims like forgery, fraud, and false light may be applicable. Because the use of such deepfakes in the election context is already subject to regulation under various state and federal laws, the proposed legislation may not even be necessary to deter bad actors.

How to Improve the Bill

One critical step to improve the bill is to render it more narrowly tailored by adding a time frame relative to an election during which it takes effect. This is crucial because candidates can declare their candidacy far in advance of an election. Restricting any type of speech for a period of potentially years is far from narrowly tailored, especially when the speech at issue is political speech. In addition, the bill should require the defendant to have the intent to deceive. A blanket regulation on deceptive content alone is not sufficient because even deceptive content can have significant expressive value. If new regulations are to be passed, they should target uses like the New Hampshire robocalls and avoid sweeping in uses like the two hypotheticals discussed above. Finally, the proposal is more likely to be upheld as constitutional if it narrows its definition of AI-generated media.

A good model to base these regulations on is California Assembly Bill 972 (AB 972). This bill “prohibits a person, committee, or other entity, within 60 days of an election at which a candidate for elective office will appear on the ballot, from distributing with actual malice materially deceptive audio or visual media of the candidate with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate.” AB 972 effectively addresses two of the major concerns discussed above. It establishes a reasonable time frame for when the actions are prohibited and implements a requirement that the distributor have the intent to injure the candidate’s reputation or to deceive voters.

By addressing these concerns, AB 972 is more closely aligned with Burson than Russell. Burson involved a noble objective that required some narrow restrictions on political speech. However, the lawmakers in Burson acknowledged the importance of the political speech being restricted, so they ensured that their restrictions were as narrow as reasonably possible to achieve that objective. Russell, on the other hand, involved lawmakers going far beyond what was needed and stifling political speech even on private property, with no added justification for such sweeping steps. The proposed federal legislation should follow the examples of Burson and AB 972, not Russell.

It is also important to compare this bill to other unsuccessful attempts at regulating AI’s impact on elections. The Iowa General Assembly introduced a bill requiring that any media promoting a political candidate or a ballot issue that uses AI in its generation include a disclaimer that AI was used. At first, this sounds reasonable, until one considers the definition of AI-generated content used in the proposal. The definition used is as follows: “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Critics of the proposal noted that this broad language can include even basic photo editing and the use of green screens. Ironically, this can create the false impression that genuine political content is the product of AI, when the goal was to prevent the exact opposite.

Notably, the federal proposal may face a similar problem. While the proposal itself did not define AI, the federal government defines AI in 15 USC § 9401 as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” This is the exact same definition used in the Iowa proposal, meaning it could have been susceptible to the same criticism. However, a recent change to the federal proposal likely helps avert this issue. Now the federal proposal defines deceptive AI-generated media as “the product of artificial intelligence technology that uses machine learning ….” The new requirement that the media be the product of AI using machine learning likely narrows the definition enough to avoid many of the negative implications of the Iowa proposal.

However, the proposal does not define how much of a work must be AI-generated in order to qualify. It could be 51%, 100%, or even 1%. This lack of clarity may lead to the same negative implications the Iowa proposal had. To ensure the proposal is sufficiently narrowly tailored, the drafters should adopt a definition that requires at least a majority of the work to be the product of AI, if not an even higher percentage.

In summary, the bill as it currently stands is likely unconstitutional under the First Amendment. However, the constitutional issues can likely be remedied by limiting the time frame during which the prohibition takes effect relative to an election, requiring an intent to deceive, and more clearly defining what it means for something to be the product of AI.

Conclusion

AI is a powerful tool that will change a lot about our society. It provides great new ways to communicate thoughts and ideas. With it comes the inherent risk of abuse. Nobody wants voters to be manipulated or lied to by bad-faith actors using AI tools to fabricate voices and likenesses to undermine democracy. But overreaching measures that infringe on citizens’ right to express themselves freely are not the answer. Lawmakers must approach these issues with a scalpel rather than a chainsaw. Legislation must be narrowly tailored to achieve the objective of protecting democracy. Restricting legitimate political speech would not protect democracy; it would destroy it.
