Google Bans AI-Generated Porn in Ad Policy

In a significant move to combat the proliferation of deepfake pornography, Google has announced a comprehensive update to its advertising policies, explicitly prohibiting the promotion of websites and apps that generate AI-created sexually explicit content. The new policy, set to take effect on May 30, 2024, aims to address the growing concerns surrounding the misuse of AI technology in the adult entertainment industry.

The Rise of AI-Generated Porn

The rapid advancements in artificial intelligence have led to the emergence of AI-generated pornography, where algorithms are used to create customized adult content based on user preferences. While this technology has the potential to revolutionize the adult entertainment industry, it has also raised serious ethical and legal concerns.

According to a 2019 study by Sensity AI (formerly Deeptrace AI), a staggering 96% of the 14,678 deepfake videos found online were categorized as non-consensual pornography. As of December 2020, the number of deepfake videos had skyrocketed to over 85,000, with the count doubling every six months.

| Year | Number of Deepfake Videos |
|------|---------------------------|
| 2018 | 7,964                     |
| 2019 | 14,678                    |
| 2020 | 85,047                    |

Source: Sensity AI
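
As a rough, back-of-the-envelope check on that growth claim, the short Python sketch below derives the implied doubling time from the two counts in the table. The 18-month gap between the mid-2019 study and the December 2020 count is an assumption made here for illustration, not a figure from the Sensity reports.

```python
import math

# Illustrative arithmetic only: how fast would the deepfake count have to double
# to grow from the 2019 figure to the December 2020 figure?
count_2019 = 14_678     # videos identified in the 2019 Sensity study
count_2020 = 85_047     # videos reported as of December 2020
months_between = 18     # assumed gap between the two counts (mid-2019 to Dec 2020)

growth_factor = count_2020 / count_2019
implied_doubling_months = months_between / math.log2(growth_factor)
print(f"Implied doubling time: {implied_doubling_months:.1f} months")
# Prints roughly 7 months -- broadly consistent with the "doubling every
# six months" trend cited above.
```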

Google’s Response: Updating Ad Policies

Google’s updated advertising policy now explicitly prohibits the promotion of “synthetic content that has been altered or generated to be sexually explicit or contain nudity.” This includes websites and applications that provide tools or instructions for creating deepfake pornography.

Michael Aciman, a Google spokesperson, confirmed the update, stating, “This update is to explicitly prohibit advertisements for services that offer to create deepfake pornography or synthetic nude content.”

The tech giant will employ a combination of human reviews and automated systems to enforce the new policy, promptly removing any violating ads. In 2023 alone, Google reported removing over 1.8 billion ads for violating its sexual content policies, according to its annual Ads Safety Report.

Challenges and Limitations

While Google’s policy update is a step in the right direction, challenges remain in effectively combating the spread of AI-generated pornography. Some apps have circumvented earlier restrictions by presenting themselves as non-sexual in Google ads and app store listings while promoting their explicit capabilities elsewhere.

Moreover, the lack of comprehensive federal legislation addressing deepfake pornography has left victims with limited legal recourse. Although some states, such as Virginia and California, have enacted laws restricting non-consensual deepfake pornography, a unified national approach has yet to be established.

The Need for Collaborative Efforts

Combating the misuse of AI technology in the adult entertainment industry requires a multi-faceted approach involving collaboration between tech companies, lawmakers, and society as a whole. While Google’s ad policy update is a significant move, it is only one piece of the puzzle.

Lawmakers must work towards introducing comprehensive legislation that provides clear guidelines and penalties for the creation and distribution of non-consensual deepfake pornography. Additionally, educational initiatives are necessary to raise awareness about the potential harms of AI-generated content and promote responsible use of the technology.

Tech companies, too, have a crucial role to play in developing robust detection and removal mechanisms for deepfake content. Collaboration between industry leaders can lead to the establishment of industry-wide standards and best practices for handling AI-generated pornography.
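
As one concrete illustration of what a removal mechanism can look like, the sketch below uses perceptual hashing (via the open-source Pillow and ImageHash Python libraries) to match newly uploaded images against a list of content already confirmed as violating. This is a minimal sketch of one well-known technique, not a description of any particular company’s system; the directory paths and distance threshold are placeholder assumptions.

```python
# Minimal sketch: flag uploads that perceptually match known violating images.
# Requires: pip install Pillow ImageHash. Paths and threshold are placeholders.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # maximum hash distance still treated as a match (tunable)


def build_takedown_index(takedown_dir: str) -> list[imagehash.ImageHash]:
    """Hash every image already confirmed as violating content."""
    return [imagehash.phash(Image.open(p)) for p in Path(takedown_dir).glob("*.jpg")]


def should_remove(upload_path: str, index: list[imagehash.ImageHash]) -> bool:
    """Return True if the upload is perceptually close to any known violating image."""
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known < HAMMING_THRESHOLD for known in index)


if __name__ == "__main__":
    index = build_takedown_index("takedown_list/")           # placeholder directory
    print(should_remove("incoming/new_upload.jpg", index))   # True if a near-match is found
```

Hash matching only catches re-uploads of material that has already been identified; spotting newly generated deepfakes requires dedicated detection models, which is precisely where the industry-wide standards and shared best practices mentioned above become important.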

Conclusion

Google’s decision to ban the promotion of AI-generated porn through its advertising platform is a commendable step towards addressing the growing concerns surrounding deepfake technology. As AI continues to advance at an unprecedented pace, it is crucial for society to remain vigilant and proactive in mitigating the potential harms associated with its misuse.

By fostering a collaborative approach involving tech companies, lawmakers, and the public, we can work towards creating a safer online environment that respects individual privacy and consent. Only through concerted efforts can we harness the power of AI for good while preventing its exploitation for malicious purposes.
