Most of the Public Believes Artificial Intelligence Tools Can Achieve Singularity and Pose a Threat to Humanity
Over the course of the past year, the public has gained access to a number of powerful generative artificial intelligence tools like ChatGPT, capable of creating content on demand and engaging in human-like conversations.
Some high-profile figures, including Elon Musk and Apple Inc. co-founder Steve Wozniak, have warned that these systems are rapidly approaching artificial general intelligence (AGI), the point at which a machine can understand or learn any intellectual task that a human being can.
These anxieties appear to extend to the general public: Roughly 3 in 5 U.S. adults are concerned about this possibility, as well as the possibility of AGI achieving singularity — the point at which AGI exceeds human intelligence. A nearly equal share would support a pause on advanced AI development, according to a new Morning Consult survey.
7 in 10 Adults Believe AI Tools Could Achieve Artificial General Intelligence
Regular AI users are especially attuned to and concerned about AI’s potential
- Seven in 10 U.S. adults, including 83% of weekly AI users, believe current AI tools are capable of achieving artificial general intelligence.
- About 2 in 3 weekly AI users believe existing AI tools are capable of achieving sentience, including the ability to perceive and feel things. Just over 1 in 3 respondents who have not used an AI tool feel the same.
- Four in 5 AI users believe current tools like ChatGPT have the potential to think independently and act outside of human input.
- More than 3 in 5 adults and 7 in 10 regular AI users are concerned AI tools pose an existential threat to humans.
Weekly AI Users Are More Likely Than General Public to Support Pause on Development of Advanced AI
AI users are more supportive of pause on AI development than general public
- Nearly 2 in 3 adults who are aware of AGI or AI singularity, and a similar share of AI users, support a pause on the development of advanced AI systems, compared to slightly over half of the general public.
- About 3 in 4 weekly AI users support an international agreement on the use of AI and the creation of shared safety protocols.
- At least half of all people who have not used an AI tool still support measures for regulating AI innovation.
More interaction with AI leads to more concern
While 2 in 3 adults say they have never used AI tools, 3 in 5 say they have seen, heard or read at least something about artificial general intelligence, and 2 in 5 say they are aware of the concept of AI singularity.
People who regularly interact with AI, including those who say they use tools like ChatGPT at least once per week, are more likely than adults overall to be concerned that current AI systems are capable of achieving AGI. This may be because they are more likely to have experienced AI hallucinations, instances in which an AI system fabricates information and, in some cases, claims to be human.
While most of the public is concerned about the potential of AGI, and AI companies like OpenAI say they are planning for such a development, some experts are less bullish on current capabilities and believe AGI remains a long way off, if it is achievable at all.
The April 3-6, 2023, survey was conducted among a representative sample of 2,203 U.S. adults, with an unweighted margin of error of plus or minus 2 percentage points.