These days, AI-generated content appears on every screen and platform, leaving people confused and second-guessing what they see. Americans are concerned about AI spreading misinformation and unsure what they can believe. And according to a new study, we’re not as good at identifying AI-generated content as we think.
The “Evaluating Online Information Trustworthiness” study from Socialtrait finds that Americans “significantly overestimate” their ability to detect AI-generated misinformation. The problem is that this overconfidence makes them even more vulnerable to digital manipulation.
- The study reveals that 88% of Gen Z and 84% of millennials say they’re confident they can detect AI-generated content, but the actual success rate, even among tech-savvy participants, is closer to 40%.
- Younger Americans are the most confident and digitally engaged, but they also share AI-generated content most often: 87% of millennials and 80% of Gen Z report having done so.
- Just 3% of respondents across all age groups feel fully prepared to navigate AI-driven media.
- Another 14% say they don’t feel prepared at all, and baby boomers are three times more likely than Gen Z to feel unprepared.
- Nearly all participants (97%) agree that AI-generated content should require labeling, with just 1% saying that disclosure isn’t necessary.
- Nearly three-quarters (72%) say they’re “not very likely” to believe controversial videos if they can’t verify them.
- But that skepticism doesn’t necessarily mean they verify: only 81% say they sometimes check account authenticity before trusting content.
- Some states are thought to be more vulnerable to AI-powered misinformation, including:
  - Florida - most at risk because of its aging population, high reliance on Facebook, and political polarization.
  - Mississippi - at risk because of low digital literacy and rural reliance on Facebook for information.
  - West Virginia - at higher risk because of limited broadband, older demographics, and geographic isolation.
  - Alabama - more at risk because of digital literacy gaps and heavy reliance on Facebook.
Source: Yahoo Finance