How Broad Can I Be? Why the No AI FRAUD Act’s Scope Dooms It Constitutionally

By George Valases

for the Deepfakes & Democracy Initiative of EthicalTech@GW

The vast growth of AI in recent years has given rise to new concerns about its use, especially as it pertains to portraying real people saying or doing things that never happened. One proposal to address these concerns is the No Artificial Intelligence Fake Replicas and Unauthentic Duplications Act, or the No AI FRAUD Act. The No AI FRAUD Act, despite its noble intentions, is an overly broad proposal with many dangerous implications. The act’s broad scope could classify the device you are reading this on as a “personal cloning service” and subject its maker to $50,000 in liability each time one is sold. Anything from digital drawings you have made to photos you have taken could subject you to $5,000 or more in liability, even if no AI was involved at all. The act opens up platforms like YouTube, Facebook, Twitter/X, TikTok, and virtually every other social media platform to liability for simply hosting such content. Finally, in order to escape its glaring First Amendment issues, it calls on courts to balance the First Amendment against the enforcement of the bill while simultaneously suggesting that the bill should not alter how courts apply the First Amendment.

Personal Cloning Service?

Cloning, while real, is more often the stuff of science fiction than of everyday life. Most people would probably be shocked to discover they were in possession of a cloning device, but that device may be in their hand or pocket right now. The bill defines a personal cloning service as “an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals.” A digital depiction is defined as a “replica, imitation, or approximation of the likeness of an individual that is created or altered in whole or in part using digital technology.” Finally, the bill defines an individual as “a human being, living or dead.”

Since virtually all cameras are made for the purpose of creating a digital replica of whatever they are viewing when a photo is taken, including individuals, any camera, or any device with a camera such as your smartphone, could plausibly be classified as a personal cloning service. While the bill requires the depiction to be of “particular, identified individuals,” it is unclear exactly what that means. It could mean an algorithm or other tool that only creates depictions of a few select, specifically named individuals and nobody else: for example, an AI that only produces images of Taylor Swift. However, this interpretation is undermined by a later provision stating that “[e]very individual has a property right in their own likeness and voice.” If every individual has this property right, why should someone who was falsely depicted by an AI be excluded just because the AI was broader and not designed to replicate that person specifically? Thus, the more likely interpretation is that the provision covers any algorithm or other technology that depicts real, identifiable people, rather than only those that create images of made-up people.

Determining what does and does not qualify as a personal cloning service is important. The bill exposes anyone who distributes or otherwise makes available to the public these personal cloning services without authorization to $50,000 in liability. Keep in mind that the authorization need not come from the government or any government actor, but rather from the allegedly injured party. If the definition is as broad as it seems, any manufacturer of cameras or similar devices may have to get the express permission of every person who could be photographed in order to avoid liability.

This would have a chilling effect on free speech. Visual media like photos, videos, and digital drawings made with tools such as Photoshop are key forms of expression, and none of them require AI to be created. The proposal’s broad definitions do not just restrict the use of AI in depicting people; they effectively restrict the use of any digital medium to depict someone. Hopefully you are good at drawing by hand, because that appears to be the most feasible way to get around this.

Your Own Work Is Considered AI, Even If It Is Not

Beyond just the manufacturers of so-called “personal cloning services,” the proposal also exposes almost anyone to thousands of dollars in liability. The proposal provides for, “in the case of an unauthorized publication, performance, distribution, transmission, or other making available of a digital voice replica or digital depiction, five thousand dollars ($5,000) per violation or the actual damages suffered by the injured party or parties as a result of the unauthorized use, plus any profits from the unauthorized use that are attributable to such use and are not taken into account in computing the actual damages.” Combine this with the previously stated definition of a digital depiction, and one can reasonably conclude that merely posting your own photographs, videos, and audio recordings on the internet, or even publishing them in print, could expose you to liability. At $5,000 per violation, even a handful of unauthorized posts could add up to tens of thousands of dollars.

This liability attaches not only to actual depictions, like a photograph of a person or a recording of their actual voice, but to any imitation of that person as well, so long as it was created in whole or in part by digital technology. If you use tools like Photoshop to create, or even just edit, images or drawings, the definition is broad enough to make you liable for thousands of dollars to any person you depict. Going further, the definition does not require the representation to be of the particular person, just of their likeness. That means even an image of a character from a movie or TV show could potentially qualify, because the character still bears the actor’s likeness. The definition is also broad enough to potentially encompass political cartoons, a form of political speech older than the United States itself.

This bill goes beyond visual media and covers audio as well, defining a “digital voice replica” as “an audio rendering that is created or altered in whole or in part using digital technology and is fixed in a sound recording or audiovisual work which includes replications, imitations, or approximations of an individual that the individual did not actually perform.” Imagine your favorite comedian does an impression of another person during one of their shows. They could be held liable for that impression so long as it is created or altered by digital technology and is fixed in a recording, which likely includes anything from voice filters to tools like Auto-Tune.

To make matters worse, the bill does not include an exception for parody or satire, something even most intellectual property laws permit through doctrines like fair use. Even humorous criticism of prominent figures is not protected. Imagine an America where you could not make fun of your politicians or other prominent figures, and where they could in fact punish you directly for doing so. That is something you would expect from authoritarian regimes, not a country that is supposed to value free speech and individual liberty.

Even Your Favorite Websites Could Be Liable

Right now, most online platforms are protected by Section 230 of the Communications Decency Act, commonly referred to as just Section 230. This means that platforms like YouTube and TikTok cannot be held liable for content distributed on their platforms by third parties. There are some notable exceptions to Section 230, however, and the most important one here is for intellectual property violations posted on their platforms. This matters because the No AI FRAUD Act classifies the right of publicity as an intellectual property right. That means that not only do you risk liability, as do the manufacturers of “personal cloning services,” but even the platforms the content is posted on are liable.

Section 230 provides essential protection to online platforms, allowing them to moderate as much or as little content as they wish without risking liability. Section 230 is thus essential for protecting speech online because it allows platforms to err on the side of more speech rather than more censorship. Platforms are still permitted to moderate their content however they see fit, but they can rarely face liability for what appears on them. Because this proposal opens all platforms up to liability for violations, however, platforms would have to crack down and err on the side of caution instead. As a result, even speech that is not otherwise prohibited by the bill would be at risk, because in edge cases platforms will likely choose to censor rather than permit so as not to be liable.

Attempting to Uphold the First Amendment by Violating It

The proposal tries to escape these clear First Amendment issues by expressly providing that the First Amendment may be used as a defense. The drafters seem to forget, however, that the First Amendment would serve as a defense regardless of whether the bill expressly permits it. Moreover, the so-called “First Amendment defense” described in the proposal is not a First Amendment defense at all. Instead, it asks the court to balance the public interest in access against the outlined intellectual property concerns.

Despite what the proposal suggests, fundamental rights are not something to be merely balanced against other interests. Our Constitution demands significantly more before fundamental rights may be restricted. The common constitutional test for content-based restrictions on speech, including subject-matter restrictions, is strict scrutiny, which requires that a restriction be narrowly tailored in furtherance of a compelling government interest. There is reasonable debate about whether restricting the depiction of individuals counts as a subject-matter restriction, but considering that it pertains to the actual substance of the speech, it is more likely than not one.

The next question, then, is whether the bill is narrowly tailored in furtherance of a compelling government interest. As this assessment has repeatedly shown, nothing about this proposal is narrowly tailored; it appears to apply to everything from cell phones, to cameras, to Photoshop. The proposal even lacks a clear compelling government interest. There are references to scenarios such as fake celebrity endorsements, and protecting consumers from those may be a compelling interest, but surely it cannot serve as the compelling interest justifying restrictions that do not pertain to celebrities at all.

Another potential avenue to salvage the bill is to argue that it is a reasonable, content-neutral time, place, and manner restriction that leaves open ample alternative channels of communication. For the sake of argument, assume the restriction is content neutral. Under Ward v. Rock Against Racism, a time, place, and manner restriction need not be the least restrictive means available, but it may not “burden substantially more speech than is necessary to further the government’s legitimate interests.” While restricting the use of AI to replicate likenesses might be a valid manner restriction on speech, a blanket regulation of any replica created even partially with digital technology burdens far more speech than is necessary to achieve the proposal’s goal.

Even if the restriction itself were reasonable, platforms’ predictable reaction to this proposal means that many, if not most, online platforms could no longer serve as channels for not only the targeted speech but even speech the bill would narrowly permit. Out of fear of liability, the remaining channels of communication would inhibit even permitted speech. Since these restrictions would likely shut down many channels of communication, the bill is likely to fail this requirement as well.


How to Improve the Bill

The most crucial step to save this proposal from constitutional challenges is to greatly narrow the definition of a personal cloning service so that it clearly excludes devices like cameras. The definition could be something akin to “an algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals, with either no or minimal human intervention beyond the giving of instructions. Any algorithm, software, tool, or other technology, service, or device that merely assists a human decision maker is excluded.” Under this definition, creative works where a human was behind the creation beyond merely inputting instructions would be excluded. A human has to set up the scene or choose where to snap a photograph. A human likewise decides the colors and strokes used when drawing with digital tools. This definition thus protects human-created works that merely use digital technology as a tool to that end.

Another key improvement would be either to not classify this right as an intellectual property right or to expressly exempt platforms from liability for hosting violating content. Even under the narrowed definition proposed above, it is not always obvious just by looking at a piece of content whether it should be taken down; the whole point of much AI-generated content is to look real. If social media platforms have to distinguish between real photos and AI-generated ones at the risk of liability, they will likely err on the side of caution and stifle more speech than necessary.

One final key improvement would be to provide a satire and parody exception. Such exceptions are common in IP law, but they are even more essential in this proposal, because a person’s likeness is often the very subject of, or at least a vehicle for, parody and satire. Think of shows like Saturday Night Live, which regularly depicts prominent figures in humorous, often mocking, ways. Given that satire and parody are so often used to criticize prominent figures, including and especially politicians, a parody and satire exception is arguably more essential here than in most forms of IP law. This proposal touches political speech more than most other areas of IP do.

Conclusion

Some may view this criticism as hyperbolic. Surely the definitions are not actually meant to be as broad as a plain reading implies, and even if they were, surely no court would actually apply them that way. Hopefully that sentiment is correct. But why even give the option? Why make scenarios that could chill free speech a possibility when there is no good reason to desire that outcome? Why subject people who are merely expressing themselves through a digital medium to litigation, and the stress and expense that come with it, just to vindicate their rights? The fact that courts are unlikely to allow it does not mean they should have the option to allow it at all, nor that speakers should be chilled while the litigation is pending. Congress should thus either reject the bill or narrow it greatly to avoid overstepping its constitutional boundaries.
