
The AI Regulations Backed by Both Democrats and Republicans

Amid fears of misuse, the creation of a new agency to properly regulate AI has bipartisan support, but regulation should not be so broad as to hinder innovation, writes tech analyst Jordan Marlatt
June 14, 2023 at 5:00 am UTC

Key Takeaways

  • Regulating artificial intelligence is one of the few areas both parties can agree on: Both Democrats (57%) and Republicans (50%) support heavily regulating the development of AI technologies and even creating a new agency for this purpose.

  • Specifically, U.S. adults, including Democrats and Republicans alike, support instituting requirements for companies to label AI creations and banning the use of AI in political ads, among other measures.

  • The potential for AI to do harm is real, but advanced machine learning models can also facilitate scientific breakthroughs, so regulation should target specific applications and help mitigate potential job loss rather than impose blanket bans.


When Sam Altman, CEO of ChatGPT developer OpenAI, testified on the oversight of artificial intelligence development last month, he said in no uncertain terms that “regulation of AI is essential.” He even went so far as to recommend licensing and registration requirements for certain AI models, a stark contrast to the tech industry’s historically combative stance toward government regulation.

This shift in tone reflects an important moment in tech, one that many in the industry consider the most significant advancement in the field since Web 2.0, if not earlier. AI may be exciting, but its applications have as much potential for ill as for good.

Morning Consult data shows that the mood around AI among U.S. consumers reflects a similar tension between excitement and fear. More than half of respondents in a recent survey (52%) agreed that AI is the future of technology, down from 60% in March, while just 26% agreed that society is ready for the emergence of AI. This rapidly evolving technology is still attracting significant consumer interest, but it is also giving rise to concerns about how AI might be used, which in turn is fueling demand for more regulation.

Bipartisan demand for regulatory action on AI

More than half of U.S. adults (53%), including 57% of Democrats and 50% of Republicans, agree that the development of AI technologies should be heavily regulated by the government. Growing bipartisan support for regulating AI may be best illustrated by staunch conservatives such as Sen. Lindsey Graham of South Carolina likening AI to a nuclear reactor and backing an expansion of government in the form of a new agency tasked with regulating the technology.

This latter idea, recommended by Altman during his congressional testimony, also has bipartisan public support. More than half of U.S. adults (54%), including 65% of Democrats and 50% of Republicans, are in favor of creating a new federal agency responsible for regulating AI development. Likely feeding this sentiment is the relatively low trust people place in existing institutions, whether government, private or third-party, to regulate AI properly.

Trust in Existing Institutions to Effectively Regulate AI Is Low

How much respondents said they trust the following to ensure that innovations in AI are made responsibly and ethically:
Survey conducted May 24-26, 2023, among a representative sample of 2,198 U.S. adults, with an unweighted margin of error of +/-2 percentage points. Figures may not add up to 100% due to rounding.
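
The “+/-2 percentage points” in this note follows from the standard maximum margin-of-error formula for a simple random sample. As a minimal sketch, assuming the conventional 95% confidence level (z = 1.96) and the worst-case proportion p = 0.5, neither of which is stated in the note itself:

    import math

    n = 2198   # sample size, from the survey note
    z = 1.96   # z-score for a 95% confidence level (assumed convention)
    p = 0.5    # worst-case proportion, which maximizes the error term

    # Maximum margin of error for a simple random sample of size n
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"+/-{moe * 100:.1f} percentage points")  # prints +/-2.1

The result rounds to the roughly 2-point figure cited throughout these survey notes.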

With all that being said, one of the trickiest questions about regulating AI is what, specifically, should be regulated, since the technology’s rapid evolution makes it a moving target. AI is a catch-all term for a range of technologies, from large language models to facial recognition systems, that are relatively harmless on their own; the potential damage lies in how they are applied. The facial recognition algorithms that let users apply fun social media filters or suggest tags in photos rely on technology similar to what authoritarian governments use to monitor citizens; the generative AI that can recommend and create Thanksgiving recipes can also be used to spread misinformation; and so on.

Consumers support labeling AI creations, banning AI in political ads

There are a number of actions that regulators can take to mitigate the potentially negative impacts of AI, from requiring government-issued licenses for AI development to limiting the types of data that AI models are trained on. Morning Consult tested more than a dozen proposals, and each one has bipartisan support. Highest on the list is requiring companies to disclose when they use AI in a product or service, followed by requiring content generated by AI to be labeled as such.

Proposals to Regulate Specific Facets of AI Development Have Bipartisan Support

Shares of respondents who “strongly” or “somewhat” support the following regulations on AI:
Survey conducted May 24-26, 2023, among a representative sample of 2,198 U.S. adults, with an unweighted margin of error of +/-2 percentage points.

One particularly concerning application of AI is its use in political ads, which both Democrats and Republicans agree should be banned.

Take, for example, the potential to automate misinformation. In 2018, a former employee of consulting firm Cambridge Analytica disclosed that the company had used psychographic profiles built from the harvested Facebook data of millions of users to create ads designed to assist Donald Trump’s 2016 presidential bid, among other political campaigns. At the time, the operation seemed large and sophisticated, but with today’s generative AI, such an operation could be automated and, if financed by super PAC money, largely untraceable.

Generative AI is particularly adept at creating content from simple prompts, including content tailored to individuals based on publicly available profiles or voter database records. With the right instructions and parameters, and millions of dollars, automated targeted political ads could very well be a hallmark of the 2024 presidential election.

Naturally, the potential for AI to exacerbate misinformation is vast, barring proper regulation.

Broad regulation could also dampen innovation

On the other side of the coin, AI models have the potential to be greatly beneficial. Consumers’ concerns are rooted in fear of the worst future AI could bring, and a strong push to rein in AI development could hamper promising areas of innovation. In science especially, advances in machine learning models (colloquially called AI) are helping unearth new discoveries at a rapid clip.

One positive application grounded in the here and now is the use of advanced machine learning models to solve hugely consequential and complex problems in medicine. Models such as DeepMind’s AlphaFold are solving a 50-year-old protein-folding problem in as little as half an hour, revolutionizing research in the field and allowing for a greater understanding of genetic diseases. In this case, AI is not just a tool but potentially the key to finding the underlying causes of diseases and creating cures that have eluded scientists for decades.

These applications are not lost on people. Currently, AI applications in health care are among the most interesting forward-looking technologies to consumers, according to Morning Consult tracking. This is also an area where most U.S. adults — Democrats and Republicans alike — feel that genetic and medical data should be used to help train models, albeit with regulation. Gen Z adults and millennials in particular feel that training models using this data should be not only allowed, but unregulated as well.

Consumers Largely Agree That Medical Data Should Be Used to Train AI Models

Shares of respondents who said using genetic and medical data to train AI models to develop treatments or cures for diseases should be allowed without restrictions, allowed but regulated, or banned entirely:
Survey conducted May 24-26, 2023, among a representative sample of 2,198 U.S. adults, with an unweighted margin of error of +/-2 percentage points. Figures may not add up to 100% due to rounding.

Fear and concern about hypothetical negative repercussions of AI could put at risk the positive impact these advanced models might achieve. So far, companies such as OpenAI that develop some of the most advanced AI models have been outspoken about the technology’s risks and the need for regulation (rightfully so), and many in Silicon Valley have been making a concerted effort not to overhype the technology.

But in the debate over the extent to which AI should be regulated, lawmakers and companies alike should be cognizant of leaving room for innovation, particularly in the areas of science and research.

Developments in AI are driving support for upskilling programs, universal basic income

There’s no way of knowing for sure just how AI will impact our lives, from the most positive outcomes to the most negative. But somewhere in the middle lies the most realistic interim risk: the threat to people’s jobs. As AI models become more complex and reliable, and the financial windfalls of further automation become obvious, many people employed in industries where generative AI can thrive are likely to suffer. To this point, consumers say they support instituting programs to upskill people affected by job loss, and there is bipartisan support for increasing funding for federal programs that would safeguard people from job loss caused by AI applications.

Federal Programs to Help Mitigate Job Loss Due to AI Have Bipartisan Support

Shares of respondents who would “strongly” or “somewhat” support the following if AI eliminated certain jobs or careers:
Survey conducted May 24-26, 2023, among a representative sample of 2,198 U.S. adults, with an unweighted margin of error of +/-2 percentage points.

More divisive is the issue of universal basic income, the idea that people receive guaranteed cash payments with no strings attached. Similar to how the Alaska Permanent Fund redistributes a portion of the state’s oil revenues to its residents, there could be a future in which companies that develop or use AI models are taxed to fund payments to people whose jobs are displaced by AI. More than half of U.S. adults (55%) support this idea, including 65% of Democrats. Support is weakest among Republicans but still stands at a notable 45%. The proposition of funding universal basic income with taxpayer dollars rather than an industry tax, however, is less popular.

The momentum of advancements in AI is translating into an urgency in the regulatory world not seen with prior tech trends such as cryptocurrency, Web3 or the metaverse. Government has historically been slow to respond to advancements in tech, but the speed at which AI is being developed and its potentially huge ramifications are driving bipartisan consensus for regulation among consumers and policymakers alike. The technology is bound to outpace regulatory efforts, so the key to effective regulation will lie in mitigating short- and medium-term harms such as job loss while enabling AI’s very real potential to help tackle problems we can’t solve on our own.

Jordan Marlatt
Lead Tech Analyst

Jordan Marlatt previously worked at Morning Consult as a lead tech analyst.
