The Daily Iowan

The independent newspaper of the University of Iowa community since 1868

Opinion | AI deepfake apps need to be held legally responsible for creating sexually explicit content.

Iowa passed bills criminalizing the making and distribution of sexually explicit deepfakes, especially images of minors. However, action also needs to be taken against the tech companies whose apps help users make exploitative deepfakes and against the platforms that promote such programs.

This month, Iowa legislators passed preventive laws to criminalize deepfakes.

Senate File 2243, passed on March 6, makes it illegal to create or distribute deepfakes that depict people, fully or partially nude, engaging in sexual acts. House File 2240, passed on March 25, forbids the production of sexually explicit deepfakes of minors. However, there is still more to be done to hold all parties accountable, including the platforms on which these deepfakes are spread.

Those who create such media can be charged with an aggravated misdemeanor for renderings of adults or a felony for renderings of a minor. While a federal law already criminalizes the distribution of sexually explicit images of minors, including deepfakes of identifiable minors, the newly passed Iowa bills add deepfake pornography to the assault code so it can also be prosecuted as harassment.

According to a 2022 article from Interesting Engineering, deepfake technology has been around since 2014, and since then, people have used this artificial intelligence to sexually exploit celebrities, politicians, and even children across the country.

Last November, a student at a New Jersey high school created deepfake pornographic photos of 30 female classmates and uploaded them to the internet, according to an article from ABC. No state law was able to protect these victims, leaving them feeling helpless. However, one of the victims is suing the perpetrator.

According to an article from NBC, this February, five eighth-grade students at Beverly Vista Middle School in Beverly Hills, California, made sexually explicit deepfakes of 16 classmates, resulting in their expulsion. At the time, there was no California state law against deepfakes of children.

In December 2023, two students at Pinecrest Cove Preparatory Academy in Florida were suspended for producing AI-generated nude images of dozens of their classmates, according to an article from the New York Post. On March 20 of this year, Florida saw another case of AI being used for exploitative purposes: a third-grade science teacher was caught using yearbook photos of students to generate child pornography.

With the tap of a single button on a phone, rendering deepfakes has become incredibly easy, making the technology accessible even to teenagers and children.

Our state and federal governments need to take preventive legal action against the companies that create artificial intelligence apps allowing people to make exploitative content, as well as against the platforms that endorse them.

According to a March 5 article from NBC, Meta, the parent company of Facebook and Instagram, removed ads for the artificial intelligence app Perky AI from its platforms. Perky AI advertises its ability to undress women using artificial intelligence. The app was also removed from Apple's App Store, but those who have already downloaded it can still use it.

Apple's App Store prohibits apps that include pornography, as well as defamatory content meant to humiliate or target a group or individual. Unfortunately, these rules were not enforced until the damage was already done. These ads should never have been on these social media platforms, especially where children can see them.

Companies that allow sexual deepfakes to be made or shared on their platforms need to be held legally accountable by the state, not merely asked to remove the content. The very purpose of a sexually explicit deepfake is to create sexual content of someone, even a child, without their consent. Anyone who creates or shares a deepfake image of a child, identifiable or not, should also face a felony charge because this behavior is predatory and exploitative.

Section 230 of the Communications Decency Act of 1996 states that internet platforms hosting third-party content are not responsible for what those third parties post, which makes it nearly impossible to hold any of these social media or AI companies accountable. However, legislators should outlaw the marketing of apps that people use to make deepfakes, which is possible because Section 230 does not include such marketing in its protections.

The plague of deepfakes needs to be addressed immediately, as the very nature of deepfakes is to mislead, exploit, and cause harm to others. How many more children are going to be victimized before we set firm legal barriers against AI-generated child pornography?

Although Iowa has criminalized the making and distribution of sexually explicit deepfakes of minors, there is little law enforcement can do once those images have already been disseminated across multiple internet platforms. Legislators must make it illegal for social media platforms to promote deepfake generators and for companies to market programs that produce sexually exploitative deepfakes.

About the Contributor
Natalie Nye
Natalie Nye, Opinions Columnist
(she/her/hers)
Natalie Nye is a fourth-year journalism and mass communication student with a minor in art at the University of Iowa. She is an opinions columnist at The Daily Iowan and a freelance writer for Little Village magazine. She also has her own blog, called A Very Public Blog.