
Gauging Global Consumers' Fear About Artificial Intelligence

Understanding consumer anxiety within and across markets to guide corporate strategy
May 09, 2025 at 9:00 am UTC

Key Takeaways

  • Fear of new technologies like artificial intelligence can hinder their diffusion. Differences in public threat perceptions around AI can speed or delay consumers’ adoption of AI and AI-based products, and also play into regulatory momentum and geopolitical competition surrounding AI.

  • Measuring how much global consumers view AI as a threat can help inform companies’ understanding of the risks and opportunities they’ll face in key global markets.

  • A comparative fear gauge reveals that consumers in developed countries generally say AI is more of a threat than adults in developing nations, with geopolitical AI rivals China and the United States at opposite ends of the spectrum.

  • It also reveals an inverse relationship between public trust in technology companies and fear of AI, with countries where consumers have lower tech industry trust showing higher readings on the AI fear gauge. Trust in the technology industry has also declined throughout 2024-25, particularly in developed markets, potentially playing into consumers’ outsized concerns about AI in those markets.

  • Going forward, companies can leverage demographic profiles of more fearful consumers to market AI products in targeted ways, such as by emphasizing transparency in high-fear markets and subgroups, showcasing innovation in low-fear regions, playing up ease over sophistication for older and rural audiences, and carefully managing communications about AI efficiency to maintain consumer trust and favorable regulatory conditions.

Fear of new technologies has a major influence on consumer adoption. Other factors play a role, such as ease of adoption and a technology’s necessity for professional success, but perceived risk is a primary element in determining whether individuals accept new technologies like artificial intelligence. On the employment front, higher threat perceptions among the public can be associated with support for policies to prevent or remediate job loss caused by AI: Concerns over automation can increase popular support for redistributive policies like compensation and more generous unemployment benefits. Higher fear of AI is also associated with public support for more government regulation of the technology, although other factors like trust in government can moderate this relationship.

AI advances are also increasingly being couched in geopolitical terms. U.S.-China competition in AI is characterized as a kind of arms race that will determine the shape of global power politics in the coming years and decades. Understanding which people are most and least fearful of AI, both within countries (as consumers and as voters) and at the national level, is therefore key for policymakers, employers, and businesses deploying AI in consumer-facing products.

Gauging fear

Morning Consult’s AI fear gauge uses monthly survey data gathered in 19 key markets to assess how strongly global consumers perceive artificial intelligence as a threat. In each market, we ask respondents whether artificial intelligence is a major or minor threat to their country, or not a threat at all. We then transform responses to this question into a single weighted average to see how a given country’s threat perceptions compare with those in other countries.
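To make the math concrete, here is a minimal sketch of how such a weighted average could be computed. The weights (1.0 for “major threat,” 0.5 for “minor threat,” 0 for “not a threat”) and the response shares are illustrative assumptions, not Morning Consult’s published scoring:

```python
# Illustrative sketch of a weighted-average "fear gauge" from survey shares.
# Weights and response shares below are assumptions for demonstration only.
# "Don't know / no opinion" responses are excluded, as in the published methodology.

# Hypothetical response shares for one market
responses = {
    "major threat": 0.42,
    "minor threat": 0.31,
    "not a threat": 0.15,
    "don't know/no opinion": 0.12,
}

# Assumed numeric weights per response option
weights = {"major threat": 1.0, "minor threat": 0.5, "not a threat": 0.0}

def fear_gauge(shares: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of threat responses, omitting 'don't know/no opinion'."""
    opinion_total = sum(share for opt, share in shares.items() if opt in weights)
    weighted_sum = sum(weights[opt] * share for opt, share in shares.items() if opt in weights)
    return weighted_sum / opinion_total

print(round(fear_gauge(responses, weights), 3))  # 0.653 for the shares above
```

A higher value on this kind of scale would indicate that a larger share of opinion-holding respondents view AI as a major threat.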

Looking across markets, consumers in many highly developed countries like the United States register a high level of perceived threat from AI, while many developing countries sit lower on the gauge. This may stem from perceptions that large language models (LLMs) and the AI tools built on top of them will primarily automate away white-collar jobs, which form a larger portion of the workforce in highly developed countries. There are exceptions, though: South Africa has one of the highest readings on the fear gauge, while South Korea and Japan are relatively low compared with developed markets in Europe and North America.

 

Consumers in developed markets generally feel more threatened by AI

AI fear gauge readings for each of the following countries, with red indicating AI is viewed as more of a threat within that country
Fear gauge values reflect weighted averages of responses to a survey question asking respondents about how much they view AI as a threat to their country. Response options include major threat, minor threat, not a threat, don’t know/no opinion. The latter category is omitted from the average.
Survey conducted March 24-April 21 among roughly 1,000 adults per country, each with a margin of error of up to +/-3 percentage points.

China, one of the world’s largest consumer markets, stands out as having the second-lowest level of fear in our dataset. While the wording of the question asking about national threat levels could be interpreted in a way that elicits more confident or positive attitudes from Chinese respondents, this result reinforces other research that has noted the high level of adoption of and enthusiasm for LLMs in China. The United States, meanwhile, clocks the second-highest level of concern about AI as a national threat.

U.S. adults are much more likely than Chinese consumers to see AI as a threat

Fear gauge values reflect weighted averages of responses to a survey question asking respondents about how much they view AI as a threat to their country. Response options include major threat, minor threat, not a threat, don't know/no opinion. The latter category is omitted from the average.
Survey conducted March 24-April 21 among roughly 1,000 adults each in the United States and China, with a margin of error of +/-3 percentage points.

Low trust in the technology industry coincides with higher AI angst

Recent academic research shows that trust in AI companies is negatively correlated with support for more regulation of the sector: People who trust the industry less (and government more) are more likely to favor heavy-handed approaches to regulating the technology. We see this dynamic at the national level, where countries with higher readings on the AI fear gauge tend to be the markets where consumers report lower trust in the technology industry writ large. This poses policy risk for AI companies, and for others building key products that rely on AI, in markets where trust is low.

Lower trust in the tech industry is associated with higher population-level angst about AI

AI fear gauge readings (x-axis) vs. net trust in the technology industry (y-axis)
Fear gauge values reflect weighted averages of responses to a survey question asking respondents about how much they view AI as a threat to their country. Response options include major threat, minor threat, not a threat, don’t know/no opinion. The latter category is omitted from the average. “Net trust” is the share of adults in each country who said they trust each category “a lot” or “some” minus the share who said “a little” or “not at all.”
Points represent pooled values from surveys conducted monthly from January to March 2025 among roughly 1,000 adults per country, for a combined margin of error of up to +/-2 percentage points.
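As a point of reference, the “net trust” measure described above is simple arithmetic on the response shares. A minimal sketch with hypothetical shares (not actual survey results):

```python
# Net trust = share saying "a lot" or "some" minus share saying "a little" or "not at all".
# The shares below are hypothetical placeholders, not survey results.
trust_shares = {"a lot": 0.18, "some": 0.37, "a little": 0.28, "not at all": 0.12}

net_trust = (trust_shares["a lot"] + trust_shares["some"]) - (
    trust_shares["a little"] + trust_shares["not at all"]
)
print(f"{net_trust:+.0%}")  # +15% for the placeholder shares above
```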

Public trust in the technology industry has been declining for the last year and change, with the drop steepest in developed markets. Controversy over perceived censorship, privacy issues, monopolistic practices, and prominent tech CEOs’ involvement in politics may all play into the erosion.

 

Trust in the tech industry has been steadily declining over 2024 and 2025

Net trust of the tech industry averaged across emerging and developed markets
Surveys conducted monthly among roughly 1,000 adults per country, each with a margin of error of up to +/-3 percentage points. Trend lines derive from simple averages. Emerging markets include Argentina, Brazil, China, India, Mexico, Russia, South Africa, South Korea, Turkey and Nigeria. Developed markets include Australia, Canada, France, Germany, Italy, Japan, Spain, the United Kingdom and the United States.
“Net trust” is the share of adults in each country who said they trust each category “a lot” or “some” minus the share who said “a little” or “not at all.”

Who is most concerned about AI as a threat?

Despite these cross-country differences, the demographic groups that perceive AI as a threat are more similar than not within countries. In both the United States and China, for example, older generations and rural respondents are more wary, though the generational differences are starker in the United States. Some of the decline in trust, therefore, could be due purely to aging populations. Interestingly, higher-income Americans are also more likely to perceive AI as a threat, likely due to concerns about automation (see footnote on chart below). Ironically, higher-income Americans are also more likely to be AI super users, according to Morning Consult data, suggesting that using the technology may not prevent some users from also viewing it as a threat. Political views play a role as well, with Democrats more worried than either Republicans or Independents.

The demographics of AI fear often — but not always — overlap across countries

The demographic profile of those who say that AI is a “major threat” to their country
The gap between the dots and the bars shows whether each group is over- or underrepresented among those who say AI is a major threat. If the bar is taller than the dot, that demographic composes a larger share of those who say AI is a major threat, and vice versa.
Demographic profiles derive from surveys conducted monthly from January to March 2025 among roughly 1,000 adults per country, for a combined topline margin of error of up to +/-2 percentage points.

Navigating fear: How businesses can use data on AI angst to tailor their approach

This kind of demographic profile of those who express trepidation about AI can help guide businesses incorporating AI into their public-facing products, and companies deploying AI-powered products should tailor their market approach based on these demographic and geographic insights. In high-fear markets, including information about transparency and human oversight in AI features is likely to bear fruit, while in lower-fear markets, and for products targeting younger and urban consumers, highlighting cutting-edge capabilities makes more sense. When targeting older, rural, or left-leaning consumers, messaging focused on ease of use and tangible benefits rather than technological sophistication is more likely to win them over.

Most critically, companies should prepare comprehensive communication strategies for potential efficiency-driven workforce changes. The fact that high-income U.S. consumers are more apt to view AI as a threat does not mean that they will revolt against AI-powered product features, but it does mean that communication around any layoffs due to AI efficiency gains must be carefully managed. These perceptions in turn will influence regulatory support and consumer trust, creating either a virtuous or vicious cycle for public support of AI adoption in coming years. 

Sonnet Frisbie
Deputy Head of Political Intelligence

Sonnet Frisbie is the deputy head of political intelligence and leads Morning Consult’s geopolitical risk offering for Europe, the Middle East and Africa. Prior to joining Morning Consult, Sonnet spent over a decade at the U.S. State Department specializing in issues at the intersection of economics, commerce and political risk in Iraq, Central Europe and sub-Saharan Africa. She holds an MPP from the University of Chicago.

Follow her on Twitter @sonnetfrisbie. Interested in connecting with Sonnet to discuss her analysis or for a media engagement or speaking opportunity? Email [email protected].

We want to hear from you. Reach out to this author or your Morning Consult team with any questions or comments.