Recently, users of Microsoft Bing's Image Creator, which harnesses the power of OpenAI's DALL-E, demonstrated a rather unsettling reality: the model can effortlessly generate content that, most would argue, it shouldn't be able to.
The spectrum of generated content ranged from the innocuous, such as beloved characters like Mario, Goofy, and SpongeBob, to highly controversial and disturbing scenarios like Mickey Mouse wielding an AR-15, Disney characters masquerading as Abu Ghraib guards, and Lego characters plotting nefarious deeds while brandishing weapons.
i wonder how disney feels about microsoft building a bing app that generates ai pics of mickey mouse hijacking a plane on 9/11 pic.twitter.com/Y61Ag19J3D
— 𖤐 Sage 𖤐 🏳️⚧️ (@Transgenderista) October 5, 2023
Meta, in its own foray into AI, introduced a feature that lets users generate stickers with AI, resulting in bizarre images like Waluigi holding a gun, Mickey Mouse brandishing a bloody knife, and even Justin Trudeau in a compromising situation.
The Ethics Quandary of AI
At first glance, some of these AI-generated images may appear humorous or harmless, but they raise questions about the boundaries and responsibilities of tech companies. Stella Biderman, a researcher at EleutherAI, points out that the key issue lies in assessing who, if anyone, is harmed by such content.
While providing non-photorealistic stickers, like a peculiar depiction of Karl Marx, may seem innocuous to those actively seeking them, the scenario changes when individuals who are not looking for violent or explicit content are repeatedly exposed to it. Moreover, the generation of photorealistic imagery that could be weaponized as revenge porn poses a severe ethical concern.
Increasing Revenge Porn and Deepfake Nudes
Between July 2020 and July 2023, monthly traffic to the top 20 deepfake sites increased by a staggering 285%, with Google being the single largest driver of traffic. Mrdeepfakes.com alone received 25.2 million visits in total.
The story takes a darker turn when we delve into the world of the infamous internet forum, 4chan. Here, a concerted effort is underway to utilize these AI tools to mass-produce racist images.
In a coordinated trolling campaign reported by 404 Media, propaganda is being generated for amusement, albeit at the expense of decency and respect.
Offensive images created with Bing's Image Creator, including the depiction of a group of Black men chasing a white woman, have managed to circumvent the tool's content filters through simple adjustments in text prompts.
The Skeptics and Their Stand
While some individuals in the tech sphere, such as Elon Musk and investor Mike Solana, dismiss these concerns as exaggerated or fabricated by journalists, there is an undeniable element of truth to the worries.
Racists and extremists will invariably exploit any available tools to propagate their ideologies. However, it is equally imperative for tech companies to recognize their responsibility in ensuring that the tools they release are equipped with adequate guardrails.
Artificial Intelligence: Ethical Concerns and Hollow Promises
The realm of AI safety and ethics is one that big tech companies often pay lip service to. They boast of having sizable teams dedicated to these issues, yet the AI tools they unleash into the world appear to lack even rudimentary defenses against misuse.
It's noteworthy that Microsoft recently laid off its entire ethics and society team despite maintaining an Office of Responsible AI and an "advisory committee" for AI ethics. Their responses to media inquiries have invariably boiled down to a familiar refrain: "We know, but we're working on it, we promise."
In a statement to Motherboard, a Microsoft spokesperson claimed to have nearly 350 individuals working on responsible AI, with just over a third dedicated to it full-time. The remainder carry responsible AI responsibilities as a core part of their roles.
The spokesperson emphasized ongoing efforts to implement a range of guardrails and filters to ensure a positive and safe user experience with the Bing Image Creator.
Meta, the parent company of Facebook, echoes a similar sentiment when responding to media requests regarding its AI tools. Its boilerplate statement acknowledges that generative AI systems may produce inaccurate or inappropriate outputs and promises continuous improvement based on user feedback.
Notably absent is any specific commentary on their AI safety practices concerning the stickers generated on their Messenger app.
The Plight of Creatives and Copyright
Beyond the concerns of offensive content, there lies another looming issue: the potential exploitation of creative work. Authors, musicians, and visual artists have raised vehement objections to AI tools trained on data indiscriminately scraped from the internet. Tech giants like Meta, Google, X, and Zoom have already changed their privacy policies to allow public and user data to be used to train their AIs.
This includes original and copyrighted works used without the authors' consent. The misuse of AI to replicate their creations has become a significant point of contention, leading to disputes and, in some instances, lawsuits against the companies behind these tools.
As the debate rages on and tech giants grapple with the challenges posed by AI-generated content, one fact remains evident: even if these systems are patched to prevent the creation of disturbing and controversial images, AI companies will find themselves in a perpetual cat-and-mouse game with those seeking to misuse their technology.
Crafting safeguards that account for every conceivable definition of "unwanted" or "unsafe" content proves to be an elusive goal. Stella Biderman highlights the fundamental issue: "These 'general purpose' models cannot be made safe because there is no single consistent notion of safety across all application contexts." What might be considered safe in a primary-school education application, for example, may not align with safety standards in other contexts.
The Imperative of Responsible Innovation
Despite the hurdles and skepticism, it's undeniable that AI and its creative potential continue to evolve. While the promise of AI-generated content is compelling, it comes with a weighty responsibility. Tech companies must balance innovation with ethics, ensuring that safeguards are not just an afterthought but a fundamental aspect of AI development.
As we navigate this uncharted territory, it becomes increasingly apparent that the AI landscape is still a work in progress. What remains to be seen is whether companies can rise to the occasion, implement effective safeguards, and foster an environment where AI-generated content truly benefits society without compromising its values.
The reliance on indiscriminately scraped training data, including copyrighted works used without authorization, has already sparked legal battles, such as those involving Hollywood writers' and actors' unions, who have raised concerns about worker exploitation.
AI Watermarking as a Solution?
Watermarking has emerged as a promising strategy in response to the challenges posed by AI-generated content. Just as physical watermarks authenticate documents and banknotes, digital watermarks embedded in AI-generated images are meant to identify them as machine-made.
But recent research, including studies by Soheil Feizi, a University of Maryland computer science professor, has revealed vulnerabilities in watermarking. Feizi's work focuses on "low perturbation" watermarks, which are invisible to the naked eye.
Feizi's research demonstrates how attackers can remove watermarks, add them to human-generated images, and trigger false positives.
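To see why such low-perturbation watermarks are fragile, consider the minimal, self-contained Python sketch below. Everything in it is a simplified hypothetical for illustration, not Feizi's test subject or any vendor's actual scheme: the "watermark" is an imperceptible ±2 shift applied to every pixel, detection is a simple correlation test against a keyed pattern, and a crude blur is enough to wash the signal out.

```python
import numpy as np

KEY = 42  # hypothetical secret key shared by embedder and detector


def watermark_pattern(shape):
    """Derive a fixed +/-1 pseudo-random pattern from the secret key."""
    return np.random.default_rng(KEY).choice([-1.0, 1.0], size=shape)


def embed(image, strength=2.0):
    """Low-perturbation watermark: shift each pixel by +/-strength,
    far below what the naked eye can perceive on a 0-255 scale."""
    return np.clip(image + strength * watermark_pattern(image.shape), 0, 255)


def detect(image):
    """Correlate the image against the key pattern: watermarked images
    score near `strength`, clean images score near zero."""
    pattern = watermark_pattern(image.shape)
    return float(np.mean((image - image.mean()) * pattern))


def blur(image):
    """Crude 3x3 mean filter -- the kind of 'attack' that erases a
    high-frequency watermark while barely changing the picture."""
    acc = np.zeros_like(image, dtype=float)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            acc += np.roll(np.roll(image, dx, axis=0), dy, axis=1)
    return acc / 9.0


# Demo on a synthetic 256x256 grayscale "image".
original = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(float)
marked = embed(original)

print(f"clean image score:   {detect(original):+.2f}")      # approx  0.0 -> no watermark
print(f"marked image score:  {detect(marked):+.2f}")        # approx +2.0 -> watermark found
print(f"blurred mark score:  {detect(blur(marked)):+.2f}")  # approx +0.2 -> watermark erased
```

Deployed watermarking schemes are far more sophisticated than this toy, but Feizi's findings point to the same underlying tension: a perturbation small enough to be invisible is also small enough to be erased, or forged onto an image that was never AI-generated.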
The Verdict on Watermarking AI Content
The flaws in watermarking haven't prevented tech giants from presenting it as a solution. However, experts in AI detection, such as Ben Colman, CEO of AI-detection startup Reality Defender, are skeptical.
They argue that watermarking often falls short in real-world applications, as it can be easily faked, removed, or ignored. Companies like Undetectable have even sprung up to provide watermark-removal services.
Watermarking, while imperfect, can still play a role in AI detection when combined with other technologies. Hany Farid, a professor at the UC Berkeley School of Information, emphasizes that watermarking alone is unlikely to be enough.
He suggests that a combination of strategies will make it harder for malicious actors to produce convincing fakes.
Can tech giants keep up with AI's rapid evolution and stay ahead in this digital cat-and-mouse game? And what are they willing to do to win it?