Demand for apps and websites that use artificial intelligence (AI) to digitally undress people in photos is surging, raising ethical and legal concerns, according to recent research.
In September, approximately 24 million people visited platforms offering undressing services, according to Graphika, a social network analysis company. Researchers found that links advertising these “nudify” apps on social media platforms such as X and Reddit rose by more than 2,400% over the course of the year.
These applications use AI to alter photos so that the subject appears nude. They are part of a broader rise in non-consensual pornography, often referred to as deepfake pornography, driven by advances in AI technology. The source images are frequently taken from social media without the subject’s knowledge or consent, raising serious legal and ethical challenges.
The surge in popularity coincides with the release of open source diffusion models, which allow developers to generate far more realistic images than earlier tools. Santiago Lakatos, an analyst at Graphika, noted that these deepfakes are markedly more convincing than their predecessors.
Despite the concerns, some of these services, priced at $9.99 per month, claim sizable user bases, with one website advertising more than a thousand users per day. Some of the apps also use aggressive language in their marketing, potentially encouraging harassment.
Sponsored content promoting undressing apps has appeared on major social media platforms, including X and Reddit. Google has removed ads that violate its ban on sexually explicit content, while some platforms have not responded to requests for comment.
Privacy experts, including Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, express growing concern over how easily deepfake software can be accessed and used. Non-consensual deepfakes are increasingly created by ordinary people targeting everyday victims, including high school and college students.
Although no federal law specifically prohibits the creation of deepfake pornography, US law does criminalize generating such images of minors. Recent legal action, including the prosecution of a North Carolina child psychiatrist, underscores the severity of these offenses.
In response to the rising threat, TikTok and Meta Platforms Inc. have begun blocking keywords associated with searches for undressing apps, signaling broader industry awareness of the harm these applications can cause.
Conclusion
The rapid growth of AI-driven undressing apps raises serious privacy and ethical concerns. The surge in non-consensual deepfake content underscores the need for regulation and industry cooperation to address abuse. While platforms like TikTok and Meta have begun blocking related keywords, the absence of a specific federal law leaves victims with limited recourse and makes legislative action urgent. Preventing the harms of AI-driven image manipulation will require legal safeguards alongside continued vigilance as these technologies evolve.