Experts Call for More Regulation on Deepfakes
Feb 21, 2024
3 min read
by CryptoPolitan

In an unprecedented move, leading figures in the artificial intelligence (AI) community, including pioneering researcher Yoshua Bengio, have issued a call for tighter regulation of the production of deepfakes. This collective, comprising experts and executives across various sectors, has articulated its concerns through an open letter organized by Andrew Critch, a notable AI researcher at UC Berkeley. The letter, “Disrupting the Deepfake Supply Chain,” outlines a strategy for legislative and regulatory measures to mitigate the risks posed by these convincingly realistic yet synthetic creations.

Urgent call for regulation

Deepfakes (synthetic images, audio, and video generated by AI) have reached a level of sophistication where distinguishing them from authentic human-generated content is increasingly difficult. This advancement has raised alarms over their potential misuse in sexual exploitation, fraud, and political misinformation. “Given the rapid progress of AI technologies, making deepfakes more accessible, it is imperative to establish safeguards,” the signatories emphasized. Their recommendations for regulatory action include criminalizing deepfake content that exploits children outright, imposing criminal penalties on those knowingly involved in creating or disseminating harmful deepfakes, and requiring AI companies to ensure their technologies do not facilitate the production of such content.

Broad coalition for action

Over 400 individuals from fields as diverse as academia, entertainment, and politics have lent their support to the letter, showcasing the widespread concern over the issue. Notable signatories include Steven Pinker, professor of psychology at Harvard; Joy Buolamwini, founder of the Algorithmic Justice League; two former presidents of Estonia; and researchers affiliated with Google DeepMind and OpenAI. This broad coalition underscores the gravity of the situation and the collective resolve to seek solutions.

The growing concern over AI’s impact

The regulatory scrutiny on AI systems has intensified, especially following the introduction of ChatGPT by OpenAI, which demonstrated the potential of AI to mimic human-like interactions. This development, along with other advancements in AI, has sparked a series of warnings from high-profile figures about the technology’s risks. A notable instance is a letter endorsed by Elon Musk last year, advocating for a temporary halt in the advancement of AI technologies beyond the capabilities of OpenAI’s GPT-4 model. Such calls for caution reflect the growing consensus on the need to balance AI innovation with societal safeguards.

Recommendations for a safer future

The letter proposes a multifaceted approach to regulate deepfakes, emphasizing the need for legal frameworks that can adapt to the pace of AI innovation. By criminalizing the most egregious forms of deepfakes and holding creators and disseminators accountable, the signatories argue for a proactive stance against the technology’s misuse. Moreover, they advocate for AI companies to play a pivotal role in preventing the generation of harmful content, suggesting a shared responsibility in safeguarding the public.

In conclusion, the call for more stringent regulation of deepfakes by leading AI experts and industry figures marks a critical juncture in the ongoing dialogue about the ethical use of AI technologies. As the capabilities of AI continue to evolve, the collective action outlined in “Disrupting the Deepfake Supply Chain” offers a roadmap for mitigating the risks associated with these advancements. By aligning the efforts of policymakers, industry leaders, and the broader community, there is a hopeful path forward in ensuring AI serves the greater good while minimizing its potential for harm.
