A new national poll by Quinnipiac University, conducted in partnership with its School of Computing & Engineering and School of Business, reveals a significant shift in American attitudes toward artificial intelligence. As AI becomes increasingly integrated into daily life, a growing number of Americans see more harm than good in its impact on their personal lives and on education, while opinions on its influence in healthcare are divided. Trust in AI-generated information remains low despite rising adoption. The poll also found that a slight majority of adults feel AI is developing faster than they anticipated, prompting more concern than excitement. These concerns are especially evident in public perceptions of AI's role in the workforce, political processes, military applications, and the establishment of AI data centers within communities.
The poll highlights a noticeable increase in the use of AI tools for various activities since April 2025. Fifty-one percent of Americans now use AI for researching topics, a significant rise from 37% previously. Other common uses include writing (28%), school or work projects (27%, up from 24%), and analyzing data (27%, up from 17%). Image creation with AI has also seen a jump to 24% from 16%. Additionally, 20% use AI for medical advice, 15% for personal advice, and 5% for companionship. Reflecting this broader adoption, the percentage of Americans who reported never using AI tools has decreased from 33% to 27%.
Despite the increasing adoption of AI tools, public trust in AI-generated information remains consistently low. The poll indicates that 76% of Americans have limited trust, believing they can trust AI 'hardly ever' (27%) or 'only some of the time' (49%). A mere 21% express higher trust, finding AI-generated information reliable 'most of the time' (18%) or 'almost all of the time' (3%). This finding is largely unchanged from a similar poll in April 2025. Chetan Jaiswal, Ph.D., from Quinnipiac University's Department of Computing, emphasized this contradiction, stating, "The contradiction between use and trust of AI is striking. Americans are clearly adopting AI, but they are doing so with deep hesitation, not deep trust."
The poll reveals a prevailing sentiment of concern over excitement regarding AI. Only 35% of Americans express excitement (6% very excited, 29% somewhat excited), while a substantial 62% report being unexcited (29% not so excited, 33% not excited at all). At the same time, a significant 80% of respondents are concerned about AI, with 38% 'very concerned' and 42% 'somewhat concerned'. This high level of concern is consistent across all surveyed age demographics, including Gen Z, Millennials, Gen X, Baby Boomers, and the Silent Generation, indicating a broad societal apprehension toward AI's advancement.
Americans generally perceive the development of artificial intelligence as rapid, with 51% stating that the pace of AI advancement is moving faster than they had anticipated. Thirty-eight percent believe it is progressing about as fast as expected, while a smaller segment, 8%, feels it is not moving as quickly as they thought it would.
Public perception of AI's societal impact has shifted negatively. Fifty-five percent of Americans now believe AI will cause more harm than good in their day-to-day lives, an increase from 44% in April 2025, while only 34% foresee more benefits. Regarding education, nearly two-thirds (64%) think AI will be more harmful than good, compared to 54% in the previous poll, with 27% expecting more good. For healthcare, opinions are closely divided, with 45% predicting more harm and 43% more good. Despite these concerns, the reported current impact of AI on daily life remains relatively stable since April 2025, with 21% feeling a 'lot' of impact, 29% 'some,' 30% 'only a little,' and 17% 'not at all'.
In a hypothetical scenario where an AI tool is proven more accurate than a human in reading medical scans, a vast majority of Americans (81%) would still prefer a combination of both AI and human expertise. Only 14% would rely solely on a human, and a mere 3% would trust AI exclusively. Brian O'Neill, Ph.D., an Associate Professor of Computer Science at Quinnipiac University, noted that this preference for human involvement, even when AI is more accurate, underscores the prevailing lack of trust in AI observed throughout the poll, suggesting a desire for a 'second opinion' from a human being.
A significant 70% of Americans anticipate that advancements in AI will lead to a decrease in job opportunities, a notable rise from 56% in April 2025, with only 7% expecting an increase. This pessimism about job market impact is most pronounced among Gen Z, where 81% believe AI will cut jobs. This view is shared across various employment sectors, with 71% of white-collar and 73% of blue-collar workers expecting a decline in job opportunities. Despite this broader concern, only 30% of employed Americans are personally concerned that AI might make their own jobs obsolete, though this marks an increase from 21% in April 2025. Tamilla Triantoro, Ph.D., from Quinnipiac University's School of Business, observed, "Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions," and added that people are more willing to predict a tough market than to envision themselves as personally affected.
The poll found that a strong majority of Americans, 80%, would be unwilling to work in a job where their direct supervisor was an AI program responsible for assigning tasks and setting schedules. Only 15% expressed willingness to accept such a role.
Public demand for greater transparency and regulation of AI is evident. Seventy-six percent of Americans believe businesses are not adequately transparent about their AI usage, a sentiment largely consistent with the April 2025 poll. Similarly, 74% think the government is not doing enough to regulate AI, an increase from 69% previously. Dr. Chetan Jaiswal summarized the findings by stating, "Americans are not rejecting AI outright, but they are sending a warning. Too much uncertainty, too little trust, too little regulation, and too much fear about jobs."
Americans hold mixed, often cautious, views on AI's military applications. A slight majority (51%) oppose the military using AI to select military targets, with a significant generational divide: Gen Z strongly opposes this (69%), while a plurality of the Silent Generation supports it (47%). On the use of AI in surveillance for security purposes, opinions are split (45% support, 44% oppose), though Gen Z again stands out with clear opposition (58%) to AI surveillance. Dr. Brian O'Neill noted these responses reflect public doubts about the control and development of AI, with younger generations particularly skeptical of its military uses.
Regarding the use of AI-generated images or audio in political advertisements, Americans are divided on the appropriate government response. Forty-five percent believe the U.S. federal government should mandate disclosure of such AI use, while 38% advocate for a complete ban on AI-generated content in political ads. A smaller group, 11%, thinks the government should not regulate this aspect at all.
The construction of AI data centers in local communities faces significant public opposition, with 65% of Americans against it. Key reasons for opposition include concerns over electricity costs (72%), water use (64%), and noise (41%). Conversely, those who support such developments primarily cite potential job creation (77%), increased tax revenue (53%), and the prospect of establishing a tech hub (47%) as their motivations.
A majority of Americans (56%) express confidence in their ability to distinguish authentic videos or recordings from those generated by AI (18% very confident, 38% somewhat confident). Despite this reported confidence, a notable 28% of Americans admit to having shared a video that they later discovered was AI-generated, highlighting the difficulty of identifying deepfakes in practice.