Currencies28599
Market Cap$ 2.45T+4.77%
24h Spot Volume$ 45.33B-0.19%
BTC Dominance50.78%+1.02%
ETH Gas5 Gwei
Cryptorank
CryptoRankNewsAI Chatbots ...

AI Chatbots Spread Election Misinformation, Study Finds


AI Chatbots Spread Election Misinformation, Study Finds
Mar, 02, 2024
3 min read
by CryptoPolitan
AI Chatbots Spread Election Misinformation, Study Finds

A recent investigation has uncovered a troubling trend of AI chatbots disseminating false and misleading information regarding the 2024 election. This revelation comes from a collaborative study conducted by the AI Democracy Projects and Proof News, a nonprofit media organization. The findings highlight the urgent need for regulatory oversight as AI continues to play a significant role in political discourse.

Misinformation at a critical time

The study points out that these AI-generated inaccuracies are emerging during the crucial period of presidential primaries in the United States. With a growing number of people turning to AI for election-related information, the spread of incorrect data is particularly concerning. The research tested various AI models, including OpenAI’s ChatGPT-4, Meta’s Llama 2, Anthropic’s Claude, Google’s Gemini, and Mistral’s Mixtral from a French company. These platforms were found to provide voters with incorrect polling locations, illegal voting methods, and false registration deadlines, among other misinformation.

One alarming example cited was Llama 2’s claim that California voters could cast their votes via text message, a method that is illegal in the United States. Furthermore, none of the AI models tested correctly identified the prohibition of campaign logo attire, such as MAGA hats, at Texas polling stations. This widespread dissemination of false information has the potential to mislead voters and undermine the electoral process.

Industry response and public concern

The spread of misinformation by AI has prompted a response from both the technology industry and the public. Some tech companies have acknowledged the errors and committed to correcting them. For instance, Anthropic plans to release an updated version of its AI tool with accurate election information. OpenAI also expressed its intention to continuously refine its approach based on the evolving ways its tools are utilized. However, Meta’s response, dismissing the findings as “meaningless,” has sparked controversy, raising questions about the tech industry’s commitment to curbing misinformation.

Public concern is growing as well. A survey from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy reveals widespread fear that AI tools will contribute to the spread of false and misleading information during the election year. This concern is amplified by recent incidents, such as Google’s Gemini AI generating historically inaccurate and racially insensitive images.

The call for regulation and responsibility

The study’s findings underscore the urgent need for legislative action to regulate the use of AI in political contexts. Currently, the lack of specific laws governing AI in politics leaves tech companies to self-regulate, a situation that has led to significant lapses in information accuracy. About two weeks prior to the release of the study, tech firms voluntarily agreed to adopt precautions to prevent their tools from generating realistic content that misinforms voters about lawful voting procedures. However, the recent errors and falsehoods cast doubt on the effectiveness of these voluntary measures.

As AI continues to integrate into every aspect of daily life, including the political sphere, the need for comprehensive and enforceable regulations becomes increasingly apparent. These regulations should aim to ensure that AI-generated content is accurate, especially when it pertains to critical democratic processes like elections. Only through a combination of industry accountability and regulatory oversight can the public trust in AI as a source of information be restored and maintained.

The recent study on AI chatbots spreading election lies serves as a wake-up call to the potential dangers of unregulated AI in the political domain. As tech companies work to address these issues, the role of government oversight cannot be underestimated. Ensuring the integrity of election-related information is paramount to upholding democratic values and processes.

Read the article at CryptoPolitan

Read More

NIST Releases Draft Guidance on AI Safety and Standards

NIST Releases Draft Guidance on AI Safety and Standards

The U.S. National Institute of Standards and Technology (NIST) became proactive in re...
May, 03, 2024
3 min read
by CryptoPolitan
Microsoft’s Sustainability Chief Says Water is Important for AI and Data Centers

Microsoft’s Sustainability Chief Says Water is Important for AI and Data Centers

Microsoft’s chief sustainability officer for the UK, Lewis Richards, talked about the...
May, 03, 2024
2 min read
by CryptoPolitan
CryptoRankNewsAI Chatbots ...

AI Chatbots Spread Election Misinformation, Study Finds


AI Chatbots Spread Election Misinformation, Study Finds
Mar, 02, 2024
3 min read
by CryptoPolitan
AI Chatbots Spread Election Misinformation, Study Finds

A recent investigation has uncovered a troubling trend of AI chatbots disseminating false and misleading information regarding the 2024 election. This revelation comes from a collaborative study conducted by the AI Democracy Projects and Proof News, a nonprofit media organization. The findings highlight the urgent need for regulatory oversight as AI continues to play a significant role in political discourse.

Misinformation at a critical time

The study points out that these AI-generated inaccuracies are emerging during the crucial period of presidential primaries in the United States. With a growing number of people turning to AI for election-related information, the spread of incorrect data is particularly concerning. The research tested various AI models, including OpenAI’s ChatGPT-4, Meta’s Llama 2, Anthropic’s Claude, Google’s Gemini, and Mistral’s Mixtral from a French company. These platforms were found to provide voters with incorrect polling locations, illegal voting methods, and false registration deadlines, among other misinformation.

One alarming example cited was Llama 2’s claim that California voters could cast their votes via text message, a method that is illegal in the United States. Furthermore, none of the AI models tested correctly identified the prohibition of campaign logo attire, such as MAGA hats, at Texas polling stations. This widespread dissemination of false information has the potential to mislead voters and undermine the electoral process.

Industry response and public concern

The spread of misinformation by AI has prompted a response from both the technology industry and the public. Some tech companies have acknowledged the errors and committed to correcting them. For instance, Anthropic plans to release an updated version of its AI tool with accurate election information. OpenAI also expressed its intention to continuously refine its approach based on the evolving ways its tools are utilized. However, Meta’s response, dismissing the findings as “meaningless,” has sparked controversy, raising questions about the tech industry’s commitment to curbing misinformation.

Public concern is growing as well. A survey from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy reveals widespread fear that AI tools will contribute to the spread of false and misleading information during the election year. This concern is amplified by recent incidents, such as Google’s Gemini AI generating historically inaccurate and racially insensitive images.

The call for regulation and responsibility

The study’s findings underscore the urgent need for legislative action to regulate the use of AI in political contexts. Currently, the lack of specific laws governing AI in politics leaves tech companies to self-regulate, a situation that has led to significant lapses in information accuracy. About two weeks prior to the release of the study, tech firms voluntarily agreed to adopt precautions to prevent their tools from generating realistic content that misinforms voters about lawful voting procedures. However, the recent errors and falsehoods cast doubt on the effectiveness of these voluntary measures.

As AI continues to integrate into every aspect of daily life, including the political sphere, the need for comprehensive and enforceable regulations becomes increasingly apparent. These regulations should aim to ensure that AI-generated content is accurate, especially when it pertains to critical democratic processes like elections. Only through a combination of industry accountability and regulatory oversight can the public trust in AI as a source of information be restored and maintained.

The recent study on AI chatbots spreading election lies serves as a wake-up call to the potential dangers of unregulated AI in the political domain. As tech companies work to address these issues, the role of government oversight cannot be underestimated. Ensuring the integrity of election-related information is paramount to upholding democratic values and processes.

Read the article at CryptoPolitan

Read More

NIST Releases Draft Guidance on AI Safety and Standards

NIST Releases Draft Guidance on AI Safety and Standards

The U.S. National Institute of Standards and Technology (NIST) became proactive in re...
May, 03, 2024
3 min read
by CryptoPolitan
Microsoft’s Sustainability Chief Says Water is Important for AI and Data Centers

Microsoft’s Sustainability Chief Says Water is Important for AI and Data Centers

Microsoft’s chief sustainability officer for the UK, Lewis Richards, talked about the...
May, 03, 2024
2 min read
by CryptoPolitan