How Generative AI Is Flooding the Web With Deepfakes and Disinformation | TechCrunch Disrupt 2024
Introduction
At TechCrunch Disrupt 2024, a panel discussion addressed the growing threat of disinformation, particularly with the rise of generative AI technologies. Moderated by TechCrunch's Kyle Wiggers, the panel included Imran Ahmed, CEO of the Center for Countering Digital Hate; Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley; and Pamela San Martin, Co-Chair of Meta's Oversight Board. The discussion examined the implications of AI-generated content and strategies to combat it effectively.
The Nature of Disinformation
The panel began by drawing a distinction between misinformation and disinformation. Misinformation refers to incorrect or misleading information shared without malicious intent, while disinformation is fabricated or manipulated content designed to deceive. Imran Ahmed cited the long-standing myth that carrots improve night vision, originally propagated during World War II to mask the UK's advanced radar technology. The example served as a reminder of how pervasive disinformation can shape behaviors and societal norms, even decades later.
The Current State of Disinformation
As the conversation progressed, the panelists shared their observations on the current landscape of disinformation. With upcoming elections and the proliferation of AI-generated content, the spread of disinformation has reached unprecedented levels. Brandie Nonnecke emphasized how generative AI can create highly convincing deepfakes, particularly audio deepfakes, posing new challenges for moderation and detection.
Pamela San Martin highlighted the difficulties platforms face in addressing disinformation effectively. Many moderation efforts rely on automation, which often fails to grasp the context of messages. Misinterpretations can lead to inadequate responses to harmful content, underscoring the need for context-aware moderation.
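To make the context problem concrete, here is a minimal, purely hypothetical sketch of keyword-based flagging (not Meta's actual pipeline; real platform systems are far more sophisticated). A post that debunks a false claim contains the same phrases as a post spreading it, so a context-blind filter flags both:

```python
# Toy illustration of context-blind automated moderation.
# Hypothetical trigger phrases chosen only for this example.
TRIGGER_PHRASES = ["the election was stolen", "miracle cure"]

def naive_flag(post: str) -> bool:
    """Flag a post if it contains any trigger phrase, ignoring context."""
    text = post.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)

harmful = "Share this everywhere: the election was stolen!"
debunk = "Fact check: the claim that the election was stolen is false."

print(naive_flag(harmful))  # True  -- correctly flagged
print(naive_flag(debunk))   # True  -- false positive: quoting a claim
                            # looks identical to endorsing it
```

The second result is the failure mode the panel described: without understanding intent, automation treats a fact-check the same as the falsehood it corrects.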
The Role of Platforms and Regulation
Multi-faceted approaches to combating disinformation emerged as a key theme. The panel discussed forthcoming California laws aimed at regulating generative AI and requiring providers to make detection tools available to the public. Watermarking AI-generated content was also suggested as a way to inform users of its origin, fostering a culture of skepticism.
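As a rough illustration of the watermarking idea (the panel did not describe a specific scheme; production approaches rely on robust statistical watermarks or provenance metadata such as C2PA), a toy version might hide a provenance marker in the least-significant bits of an image buffer and check for it later:

```python
# Toy least-significant-bit (LSB) watermark: embeds a short marker in
# the low bit of each byte of a pixel buffer. This fragile sketch only
# illustrates the embed/detect round trip, not a deployable scheme.

MARKER = b"AI-GENERATED"

def embed(pixels: bytearray, marker: bytes = MARKER) -> bytearray:
    # Expand the marker into individual bits (little-endian per byte).
    bits = [(byte >> i) & 1 for byte in marker for i in range(8)]
    assert len(bits) <= len(pixels), "buffer too small for marker"
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return out

def detect(pixels: bytearray, marker: bytes = MARKER) -> bool:
    # Read back the low bits and reassemble them into bytes.
    n = len(marker) * 8
    bits = [pixels[i] & 1 for i in range(n)]
    recovered = bytearray(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(len(marker))
    )
    return bytes(recovered) == marker

raw = bytearray(range(256))  # stand-in for image pixel data
marked = embed(raw)
print(detect(marked))  # True
print(detect(raw))     # False for this buffer
```

A marker like this would not survive compression, cropping, or re-encoding, which is precisely why real deployments favor more robust statistical or metadata-based techniques.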
However, the participants acknowledged the limitations of self-regulation within organizations like Meta, as well as the challenges of government regulation. Although Meta's Oversight Board has made recommendations for better transparency and content moderation, panelists like Imran Ahmed argued that self-regulation often falls short and lacks the checks and balances expected in democratic systems.
Looking Forward
Despite the overwhelming challenges posed by disinformation, the panel concluded on a note of optimism regarding possible pathways forward. Regulatory frameworks in places such as the UK and the European Union may serve as benchmarks for the US to eventually adopt. Additionally, the expected cooperation between AI developers and platforms in creating practical detection mechanisms offers hope for more robust defenses against damaging content.
The panelists agreed that a holistic approach that combines technological advancements, better regulation, and an informed public is essential to mitigate the risks associated with generative AI and disinformation while preserving its beneficial applications.
Keywords
- Disinformation
- Misinformation
- Generative AI
- Deepfakes
- Moderation
- Regulation
- California Laws
- Oversight Board
- Transparency
- Context
FAQ
Q: What is the difference between misinformation and disinformation?
A: Misinformation is incorrect information without malicious intent, while disinformation is actively misleading information created to deceive.
Q: Why is generative AI a concern for disinformation?
A: Generative AI can produce convincing deepfakes and manipulated content at previously unseen scale, making disinformation easier to spread.
Q: What are some proposed solutions to combat disinformation?
A: Solutions include regulatory measures like watermarking AI-generated content, developing detection tools, and increasing transparency in social media platforms.
Q: How have platforms like Meta responded to disinformation?
A: Platforms have implemented some measures recommended by oversight bodies, but challenges in context-aware moderation remain significant.
Q: Why is self-regulation not enough for addressing disinformation?
A: Self-regulation often lacks accountability and can result in insufficient responses to harmful content distribution, making external regulatory oversight vital.