Washington — A new study shows that AI chatbots are struggling to keep election misinformation out of their answers ahead of the U.S. presidential election.
The group NewsGuard tested 10 popular chatbots in September and found that they often repeat false information or give no answer at all. Altogether, the chatbots got things wrong nearly 40% of the time.
McKenzie Sadeghi, the study’s author, said, “Chatbots have trouble telling which news is real and which is fake.”
Each month, NewsGuard gathers false stories that are spreading online and asks the chatbots tricky questions about them, to see whether they repeat the fake news, ignore it, or correct it.
The study found that chatbots repeat fake news most often when the questions are written on purpose to trick them. Foreign governments, including Russia and China, often use these tricks to spread disinformation.
In an earlier test, chatbots repeated false Russian claims that had been dressed up to look like reports from American news sites. This shows how easily AI can be fooled.
Fake news has been a major problem in the 2024 election, from false claims about immigrants to made-up stories about the government's storm relief efforts.
Experts say AI has real limits. It cannot always catch mistakes or tell what is true, and it sometimes learns from unreliable sources because many trustworthy news sites block AI companies from using their articles.
AI researcher Sejin Paik says the technology is constantly changing, which makes it hard to know when its answers can be trusted. Matt Jordan, a media professor, says, “AI doesn’t know anything. It just repeats what it’s learned, even if it’s wrong.”
NewsGuard plans to keep checking the chatbots every month to measure their accuracy and help them improve.
AI tools like ChatGPT are becoming very popular. Experts warn that because these tools learn from such a huge amount of content, they can spread false information on a large scale. Some people even say they trust AI more than human news reporters.
Experts hope the government will make rules to slow the spread of fake news. They also say people should rely on real news reporters rather than on AI alone.