By Cecilia D’Anastasio and Davey Alba
When fans of Kaitlyn Siragusa, a popular 29-year-old internet personality known as Amouranth, want to watch her play video games, they will subscribe for $5 a month to her channel on Amazon.com Inc.’s Twitch. When they want to watch her perform adult content, they’ll subscribe for $15 a month for access to her explicit OnlyFans page.
And when they want to watch her do things she is not doing and has never done, for free, they’ll search on Google for so-called “deepfakes” — videos made with artificial intelligence that fabricate a lifelike simulation of a sexual act featuring the face of a real woman.
Siragusa, a frequent target of deepfake creators, said that each time her staff finds something new on the search engine, they file a complaint with Google and fill out a form requesting that the link be delisted, a time- and energy-draining process. “The problem,” Siragusa said, “is that it’s a constant battle.”
During the recent AI boom, the creation of nonconsensual pornographic deepfakes has surged, with the number of videos increasing ninefold since 2019, according to research from independent analyst Genevieve Oh. Nearly 150,000 videos, which have received 3.8 billion views in total, appeared across 30 sites in May 2023, according to Oh’s analysis. Some of the sites offer libraries of deepfake programming, featuring the faces of celebrities like Emma Watson or Taylor Swift grafted onto the bodies of porn performers. Others offer paying clients the opportunity to “nudify” women they know, such as classmates or colleagues.
Some of the biggest names in technology, including Alphabet Inc.’s Google, Amazon, X, and Microsoft Corp., own tools and platforms that abet the recent surge in deepfake porn. Google, for instance, is the main traffic driver to widely used deepfake sites, while users of X, formerly known as Twitter, regularly circulate deepfaked content. Amazon, Cloudflare and Microsoft’s GitHub provide crucial hosting services for these sites.
For the targets of deepfake porn who would like to hold someone accountable for the resulting economic or emotional damage, there are no easy solutions. No federal law currently criminalizes the creation or sharing of nonconsensual deepfake porn in the US. In recent years, 13 states have passed legislation targeting such content, resulting in a patchwork of civil and criminal statutes that have proven difficult to enforce, according to Matthew Ferraro, an attorney at WilmerHale LLP. To date, no one in the US has been prosecuted for creating AI-generated nonconsensual sexualized content, according to Ferraro’s research. As a result, victims like Siragusa are mostly left to fend for themselves.
“People are always posting new videos,” Siragusa said. “Seeing yourself in porn you did not consent to feels gross on a scummy, emotional, human level.”
Recently, however, a growing contingent of tech policy lawyers, academics and victims who oppose the production of deepfake pornography have begun exploring a new tack to address the problem. To attract users, make money and stay up and running, deepfake websites rely on an extensive network of tech products and services, many of which are provided by big, publicly traded companies. While such transactional, online services tend to be well protected legally in the US, opponents of the deepfakes industry see its reliance on these services from press-sensitive tech giants as a potential vulnerability. Increasingly, they are appealing directly to the tech companies — and pressuring them publicly — to delist and de-platform harmful AI-generated content.
“The industry has to take the lead and do some self-governance,” said Brandie Nonnecke, a founding director of the CITRIS Policy Lab who specializes in tech policy. Along with others who study deepfakes, Nonnecke has argued that there should be a check on whether an individual has approved the use of their face, or given rights to their name and likeness.
Victims’ best hope for justice, she said, is for tech companies to “grow a conscience.”
Among other goals, activists want search engines and social media networks to do more to curtail the spread of deepfakes. At the moment, any internet user who types a well-known woman’s name into Google Search alongside the word “deepfake” may be served up dozens of links to deepfake websites. Between July 2020 and July 2023, monthly traffic to the top 20 deepfake sites increased 285%, according to data from web analytics company Similarweb, with Google the single largest driver of traffic. In July, search engines directed 248,000 visits every day to the most popular site, Mrdeepfakes.com — and 25.2 million visits, in total, to the top five sites. Similarweb estimates that Google Search accounts for 79% of global search traffic.
Nonnecke said Google should do more “due diligence to create an environment where, if someone searches for something horrible, horrible results don’t pop up immediately in the feed.” For her part, Siragusa said that Google should “ban the search results for deepfakes” entirely.
In response, Google said that like any search engine, it indexes content that exists on the web. “But we actively design our ranking systems to avoid shocking people with unexpected harmful or explicit content they don’t want to see,” spokesperson Ned Adriance said. The company said it has developed protections to help people affected by involuntary fake pornography, including that people can request the removal of pages about them that include the content.
“As this space evolves, we’re actively working to add more safeguards to help protect people,” Adriance said.
Activists would also like social media networks to do more. X already has policies in place prohibiting synthetic and manipulated media. Even so, such content regularly circulates among its users. Three hashtags for deepfaked video and imagery are tweeted dozens of times every day, according to data from Dataminr, a company that monitors social media for breaking news. Between the first and second quarter of 2023, the quantity of tweets from eight hashtags associated with this content increased 25% to 31,400 tweets, according to the data.
X did not respond to a request for comment.
Deepfake websites also rely on big tech companies to provide them with basic web infrastructure. According to a Bloomberg review, 13 of the top 20 deepfake websites are currently using web hosting services from Cloudflare Inc. to stay online. Amazon provides web hosting services for three popular deepfaking tools listed on several websites, including Deepswap.ai. Past public pressure campaigns have successfully convinced web services companies, including Cloudflare, to stop working with controversial sites, ranging from 8chan to Kiwi Farms. Advocates hope that stepped-up pressure against companies hosting deepfake porn sites and tools might achieve a similar outcome.
Cloudflare did not respond to a request for comment. An Amazon Web Services spokesperson pointed to the company’s terms of service, which disallow illegal or harmful content, and said the company asks people who see such material to report it.
Recently, the tools used to create deepfakes have grown both more powerful and more accessible. Photorealistic face-swapping images can be generated on demand using tools like Stable Diffusion, a model made by Stability AI. Because the model is open-source, any developer can download and tweak the code for myriad purposes — including creating realistic adult pornography. Web forums catering to deepfake pornography creators are full of people trading tips on how to create such imagery using an earlier release of Stability AI’s model.
Emad Mostaque, CEO of Stability AI, called such misuse “deeply regrettable” and referred to the forums as “abhorrent.” Stability has put some guardrails in place, he said, including prohibiting porn from being used in the training data for the AI model.
“What bad actors do with any open source code can’t be controlled, however there is a lot more that can be done to identify and criminalize this activity,” Mostaque said via email. “The community of AI developers as well as infrastructure partners that support this industry need to play their part in mitigating the risks of AI being misused and causing harm.”
Hany Farid, a professor at the University of California at Berkeley, said that the makers of technology tools and services should specifically disallow deepfake materials in their terms of service.
“We have to start thinking differently about the responsibilities of technologists developing the tools in the first place,” Farid said.
While many of the apps recommended by creators and users of deepfake pornography websites are web-based, some are readily available in the mobile storefronts operated by Apple Inc. and Alphabet Inc.’s Google. Four of these mobile apps have received between one and 100 million downloads in the Google Play store. One, FaceMagic, has displayed ads on porn websites, according to a report in VICE.
Henry Ajder, a deepfakes researcher, said that apps frequently used to target women online are often marketed innocuously as tools for AI photo animation or photo-enhancing. “It’s an extensive trend that easy-to-use tools you can get on your phone are directly related to more private individuals, everyday women, being targeted,” he said.
FaceMagic did not respond to a request for comment. Apple said it tries to ensure the trust and safety of its users and that under its guidelines, services which end up being used primarily for consuming or distributing pornographic content are strictly prohibited from its app store. Google said that apps attempting to threaten or exploit people in a sexual manner aren’t allowed under its developer policies.
Users of Mrdeepfakes.com recommend DeepFaceLab, an AI-powered tool hosted on Microsoft Corp.’s GitHub, for creating nonconsensual pornographic content. The cloud-based platform for software development also currently offers several other tools frequently recommended on deepfake websites and forums, including one that until mid-August showed a woman naked from the chest up, her face swapped with another woman’s. That app has received nearly 20,000 “stars” on GitHub. Its developers removed the video and discontinued the project this month after Bloomberg reached out for comment.
A GitHub spokesperson said the company condemns “using GitHub to post sexually obscene content,” and the company’s policies for users prohibit this activity. The spokesperson added that the company conducts “some proactive screening for such content, in addition to actively investigating abuse reports,” and that GitHub takes action “where content violates our terms.”
Bloomberg analyzed hundreds of crypto wallets associated with deepfake creators, who apparently make money by selling access to libraries of videos, through donations, or by charging clients for customized content. These wallets regularly receive hundred-dollar transactions, potentially from paying customers. Forum users who create deepfakes recommend web-based tools that accept payments via mainstream processors, including PayPal Holdings Inc., Mastercard Inc. and Visa Inc. — another potential point of pressure for activists looking to stanch the flow of deepfakes.
Mastercard spokesperson Seth Eisen said the company’s standards do not permit nonconsensual activity, including such deepfake content. Spokespeople from PayPal and Visa did not provide comment.
Until mid-August, membership platform Patreon supported payment for one of the largest nudifying tools, which accepted over $12,500 every month from Patreon subscribers. Patreon suspended the account after Bloomberg reached out for comment.
Patreon spokesperson Laurent Crenshaw said the company has “zero tolerance for pages that feature non-consensual intimate imagery, as well as for pages that encourage others to create non-consensual intimate imagery.” Crenshaw added that the company is reviewing its policies “as AI continues to disrupt many areas of the creator economy.”
Carrie Goldberg, an attorney whose practice includes cases involving the nonconsensual sharing of sexual materials, said that ultimately it’s the tech platforms that hold sway over the impact of deepfake pornography on its victims.
“As technology has infused every aspect of our life, we’ve concurrently made it more difficult to hold anybody responsible when that same technology hurts us,” Goldberg said.