MITRE previously collaborated with Harris Poll on a study of AI trends in November 2022 to identify public concerns that should be addressed, areas for increased investment, and potential for industry, government, and academia collaboration.
The survey found a significant trust gap, with most U.S. adults expressing reservations about AI for uses such as government benefits and healthcare. Only 39% believe AI is safe and secure, and 78% worry that AI can be used for malicious intent. The survey also found a desire for more work on AI assurance, as well as for government regulation of AI technologies.
The November 2022 survey was conducted prior to widespread use of generative AI tools such as ChatGPT. We conducted a second survey in July 2023 that aimed to examine changes in attitudes on some of the same questions, as well as to address the use of chatbots and other AI tools.
- 85% support a nationwide effort across government, industry, and academia to make AI safe and secure.
- 85% want industry to transparently share AI assurance practices before bringing products equipped with AI technology to market.
- 81% believe industry should invest more in AI assurance measures, up 11 points from November 2022.
- 52% of employed U.S. adults are concerned about AI replacing them in their job.
- 39% believe today’s AI technology is safe and secure, down 9 points from November 2022.
MITRE-Harris Poll Survey on AI Trends
While the public has started to benefit from new AI capabilities such as ChatGPT, we’ve all watched as chatbots have spread political disinformation and shared dangerous medical advice. And we’ve seen the government announce an investigation into a leading AI company’s data collection practices.
Strengthening existing government regulation and increasing public and private investment in AI assurance can play a critical role in addressing these concerns. It's also important for industry to collaborate on improvements that benefit everyone.
AI assurance—a lifecycle process that provides justified confidence in an AI system to operate as intended without unacceptable risks—can play a pivotal role in addressing Americans’ concerns with AI so that the transformational potential of AI can be realized for social good.
Eighty-five percent of U.S. adults who were surveyed indicated that making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia. They want companies to transparently share their AI assurance practices before and after bringing products with AI capabilities to market.
We can’t take all the risk out of AI—or other technologies—but we can help individuals and organizations make informed decisions about the costs and benefits of adopting AI solutions.
Addressing accountability for harmful actions will also be an essential aspect of AI regulation. More than 75% of U.S. adults expressed concern that those using AI may not be held accountable for their actions. When humans use AI tools in malicious or criminal online behavior such as cyberattacks or disinformation campaigns, those people should remain responsible for their AI-augmented actions. As we do today with cybercrime, we need to prevent, defend against, remediate, and attribute these actions at the larger scale that AI technologies will enable.
Making AI safe and secure for public use needs to be a nationwide effort across industry, government, and academia.
As AI becomes increasingly prevalent in everyday life, there is a particularly noteworthy generational split between excitement about AI's potential benefits and concern about its potential risks.
- 46% of U.S. adults are excited about potential benefits.
- 54% of U.S. adults are concerned about potential risks.
- Younger generations (57% of Gen Z and 62% of millennials) are more excited about the potential benefits.
- 70% of baby boomers are more concerned about the potential risks.
What MITRE is Doing
MITRE works closely with industry and government to capture threats to AI-enabled systems and document associated adversary tactics, techniques, and procedures in the MITRE Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS™) framework.
We’re pioneering AI assurance practices by building a state-of-the-art AI assurance lab as a national resource. We’re working with NATO on the effective and secure adoption of AI technologies. As part of a team led by Carnegie Mellon University, we’re leading research on the adoption of AI-assisted decision-making tools by people and organizations who make consequential decisions. We’re also applying AI to help empower early intervention for service members at risk of entering a complicated disability system. And we’ve collaborated with Microsoft to release a free tool that enables security teams without deep AI expertise to prepare for attacks on machine learning systems.
We recently proposed a sensible regulatory framework for AI security. And we’re collaborating with the Coalition for Health AI, which MITRE co-leads, to address the quickly evolving landscape of health AI tools by outlining specific recommendations to increase trustworthiness within the healthcare community.
By working together, we can enable responsible pioneering in AI for the benefit of society.
This survey was conducted by The Harris Poll on behalf of MITRE via the Harris On Demand omnibus product.
- Sample size: n=2,063
- Qualification Criteria: U.S. residents, adults ages 18+
- Mode: Online survey
- Weighting: Data weighted to ensure results are projectable to U.S. adults ages 18+
- Field Dates: July 13–17, 2023
- In tables and charts: Percentages may not add up to 100% due to weighting, computer rounding, and/or the acceptance of multiple responses.
In some cases, 2023 data are compared with data collected in November 2022. That survey was conducted via Harris on Demand November 3-7, 2022. Where applicable, statistically significant differences at the 95% confidence level between the two surveys are noted.
Please contact email@example.com for questions or attribution.