Sexually explicit, AI-generated images of Taylor Swift at football games circulated on social media platforms such as X, Reddit and Facebook on Jan. 24, and social media companies are partly culpable. Swift's prominence has brought the issue of AI-generated photos to the forefront of the news, as the scandal underscores just how vulnerable even the most influential people are to deepfakes.
AI tools can create deepfakes: photorealistic images generated by entering a prompt. Texas has laws that criminalize deepfakes, but there is no federal equivalent. If we don't push Congress to pass federal policy or platforms like X to adopt stricter guidelines, everyday internet users will continue to be able to create nonconsensual, humiliating photos of others.
With the current polarization between our leading political parties, few measures or incentives prevent political candidates from creating damaging, explicit or misleading images of their opponents. For example, Reuters reported that former Republican presidential candidate Ron DeSantis used deepfake technology to falsify a video of former President Donald Trump kissing Anthony Fauci, the chief medical advisor who endorsed COVID-19 vaccines and masks. Candidates can easily spread rumors or falsify photos, which can mislead voters or reduce the number of people competing in elections. This could disrupt free elections at the national, state and local levels by increasing how many voters cast ballots based on outright false information.
If AI can be used to humiliate an international celebrity, it can be used to hurt other innocent people. Imagine that after a bad breakup, a scorned ex creates humiliating photos and sends them to their former partner's family or employer. There are already many incidents of ex-partners recording and uploading intimate videos without their former partner's consent. As AI technology develops and grows more commonplace, those violations and instances of sexual harassment are likely to increase. Rates of sexual harassment, particularly against women, are already high, and the college-age group is hit especially hard. Deepfakes add another layer to this issue that must be addressed to better protect people's right to privacy and their freedom from fraud and sexual harassment.
Certain members of Congress were alarmed that this happened to someone as powerful and influential as Taylor Swift; they have realized that if it can happen to her, it can happen to anyone. Reality Defender, a cybersecurity company focused on detecting AI-generated content, determined with 90% confidence that the images of Swift were created through AI diffusion technology, which is available on more than 100,000 platforms and public models. Fortunately, the issue has caught the eye of senators.
On Jan. 30, a bipartisan group of U.S. senators introduced the Defiance Act, a bill that would allow victims depicted in nude or sexually explicit forgeries to seek civil penalties against individuals who produced or distributed those images. What is unique about this bill is that it was sponsored by two Democrats and two Republicans, a unity that is rare in Congress. Citizens and politicians from both parties should join forces to pass legislation that regulates deepfakes nationally.
You may not believe forgery is a big problem, but many congressional members recognize the lasting damage it can have on victims. Sen. Josh Hawley (R-MO) has stated that "Innocent people have a right to…hold perpetrators accountable in court. This bill could make that a reality." With little legal protection in states outside of Texas, deepfakes will increase rates of blackmail and sexual harassment.
After the images were uploaded, X eventually suspended certain culpable accounts and temporarily blocked Taylor Swift's name from coming up in searches on the platform. But by then the photos had already been viewed millions of times, showing that corporate action alone is not enough to quell deepfakes' harm or virality.
We need to address deepfakes now. The Defiance Act was recently introduced, but if we don't push for it while the scandal is fresh, Congress' attention will shift to other issues. The new level of danger AI makes possible is why students should stand up and voice support for bills that hold culprits liable. UTD students can call or email their representatives and ask them to vote in support of the Defiance Act.