Gen Z's Excitement About AI Dropped 14 Points in One Year. The Industry's Response: Spend $200 Billion More.
Four independent polling organizations now confirm the same pattern: Americans are using AI more and trusting it less. An original analysis of the sentiment-adoption gap reveals why the industry's $200 billion capital expenditure plan may be building infrastructure for a product the public is learning to resent.
Fourteen percentage points. That is how far Gen Z's excitement about artificial intelligence fell in a single year, according to a Gallup survey conducted February 24 to March 4, 2026, with 1,572 Americans aged 14 to 29. Excitement dropped from 36 percent to 22 percent while anger rose nine points to 31 percent, yet usage stayed flat at 51 percent weekly. The generation that was supposed to be AI-native is becoming AI-hostile, and not a single major AI company has acknowledged the problem publicly.
Gallup is not alone in documenting this shift. On March 30, 2026, Quinnipiac University released a national poll of 1,717 adults: 55 percent said AI will do more harm than good in daily life, 80 percent expressed concern, and only 21 percent trust AI-generated information most or almost all of the time. In September 2025, Pew Research found that concern about AI in daily life had risen to 53 percent, up from 38 percent in late 2022, and by November, Pew confirmed the concern was now bipartisan. In January 2026, Edelman's annual Trust Barometer placed AI-related distrust within a broader collapse of institutional trust, with 70 percent of respondents across 28 countries unwilling or hesitant to trust anyone with different values.
On April 13, venture capitalist Chamath Palihapitiya, former VP of User Growth at Facebook who scaled the platform from 50 million to 700 million users, quote-tweeted the Gallup findings with a blunt assessment: "If the leadership in the AI movement doesn't step up quickly, organize around the right 'go to market' and create incentives to align everyone, this will be a generational fumble. It is, sadly, happening before our eyes."
He is right about the fumble. He may be underestimating how far it has already gone.
The Sentiment-Adoption Gap
Nobody in the AI industry is tracking the number that may matter most. Call it the sentiment-adoption gap: the distance between how many people use AI and how many people feel good about using it.
Quinnipiac's data lets you calculate the gap directly: 51 percent of Americans now report using AI for at least one activity (research, writing, work tasks, or data analysis), but only 21 percent trust AI-generated information most or almost all of the time. That 30-point chasm between usage and trust has no precedent in recent technology adoption. For context, social media in its most controversial period, circa 2018 to 2020, had a usage-trust gap closer to 15 points, according to Pew's tracking.
AI's gap is double that, and it is widening. Quinnipiac found that 27 percent of Americans volunteered that they have never used AI tools, down from 33 percent a year earlier, which means more people are trying AI while fewer are coming away impressed.
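For readers who want to check the arithmetic, here is a minimal sketch using only the topline percentages cited above; the function and variable names are illustrative, not the pollsters' own.

```python
# Sentiment-adoption gap: the percentage-point distance between how many
# people use a technology and how many trust it. Toplines are the ones
# cited in this piece; names are illustrative.

def sentiment_adoption_gap(usage_pct: float, trust_pct: float) -> float:
    """Gap in percentage points between reported usage and reported trust."""
    return usage_pct - trust_pct

ai_usage, ai_trust = 51.0, 21.0   # Quinnipiac, March 2026 toplines
ai_gap = sentiment_adoption_gap(ai_usage, ai_trust)

SOCIAL_MEDIA_GAP = 15.0           # Pew's circa-2018-2020 social media benchmark

print(f"AI usage-trust gap: {ai_gap:.0f} points")                   # 30 points
print(f"Social media at its worst: ~{SOCIAL_MEDIA_GAP:.0f} points") # ~15 points
print(f"Ratio: {ai_gap / SOCIAL_MEDIA_GAP:.1f}x")                   # 2.0x
```

The same two-number subtraction works for any poll that asks both a usage question and a trust question, which is what makes the gap trackable across organizations.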
Gallup's data adds a generational twist of the knife: among Gen Z, the cohort with the highest AI adoption rates, even daily users have not become more positive over the past year. Usage is sticky while enthusiasm erodes beneath it. People are using AI the way commuters use a highway through a construction zone: because they feel there is no alternative, not because they enjoy the drive.
What the Industry Hears vs. What the Data Says
AI companies report engagement metrics that look spectacular. OpenAI hit 500 million monthly active users by late 2025. Anthropic reached a $30 billion annual revenue run rate by March 2026, the fastest enterprise revenue ramp in history. Meta's AI assistant is embedded in WhatsApp, Instagram, and Messenger, reaching billions.
These numbers are real, but they are also misleading, because engagement is not endorsement. Quinnipiac found that 70 percent of Americans believe AI will reduce jobs, with Gen Z the most pessimistic generation on this question. Half of Gen Z students told Gallup they believe they will need AI in their future jobs and their education, yet a majority simultaneously believes AI may come at a cost to learning. That is forced adoption of a distrusted technology, and the psychological literature on coerced adoption suggests the resentment compounds over time rather than fading with familiarity.
There is a historical parallel that should unsettle anyone making long-term capital commitments to AI infrastructure. Tobacco companies tracked per capita cigarette sales as their primary metric throughout the 1950s and 1960s, and sales kept climbing even as the Surgeon General's report, public health campaigns, and declining favorability numbers piled up: roughly a 15-year lag between sentiment turning and behavior changing. By the time per capita consumption peaked in 1963, the regulatory and legal machinery that would eventually cost the industry $246 billion in the 1998 Master Settlement Agreement was already in motion. Engagement was never the leading indicator; sentiment always was.
The $200 Billion Question
In 2025, the five largest AI investors (Alphabet, Amazon, Meta, Microsoft, and Apple) collectively committed over $200 billion in capital expenditures, primarily for AI infrastructure: data centers, custom chips, and model training. Goldman Sachs called it the largest single-year capital expenditure cycle in technology history, larger than the 1990s fiber-optic buildout, larger than the smartphone supply chain ramp. Figures for 2026 are tracking even higher.
All that spending assumes a particular future: that AI adoption will deepen, that enterprise customers will embed AI into core workflows, that consumers will develop habits they cannot easily abandon, and that willingness to pay will follow. But the sentiment data suggests a different trajectory. What if adoption plateaus not because the technology fails, but because the public decides it does not want what the technology offers?
McKinsey's March 2026 report on AI trust in the agentic era warned that "as AI systems take on greater autonomy, making recommendations, triggering actions, and interacting with other systems, the consequences of failure grow materially." Their framing is corporate. Translated to plain English: the AI industry is making its products more autonomous at the exact moment the public is becoming less comfortable with AI autonomy.
Who Is Actually Turning Against AI?
Not who you would expect. Pew's bipartisan finding from November 2025 broke the convenient narrative that AI skepticism is a partisan or demographic phenomenon: Republicans and Democrats are now equally concerned about AI's role in daily life, though they diverge on regulation. Concern is high across every age cohort in Quinnipiac's data: Gen Z at 78 percent, Millennials at 81 percent, Gen X at 79 percent, Boomers at 82 percent. Edelman found insularity cutting across income, gender, and age in 28 countries. This is not a fringe. It is a consensus that spans demographics, ideologies, and borders.
What makes this dangerous for the industry is speed. In Pew's tracking, concern about AI in daily life went from 38 percent in December 2022 to 52 percent in August 2023, and it has held at 53 percent as of September 2025: a 15-point climb in under a year that shows no sign of reverting. Gallup's Gen Z data shows the sharpest single-year decline in any age cohort's technology enthusiasm since Gallup began tracking, and Quinnipiac's "more harm than good" finding at 55 percent represents a working majority that now holds a negative view of AI's impact on their lives.
Working majorities have consequences, and those consequences tend to be regulatory.
The Go-to-Market Failure
Palihapitiya's framing of this as a go-to-market problem is precise. The technology is not the problem; GPT-4, Claude, and Gemini are genuinely useful tools. But how AI has been marketed, deployed, and discussed has created a trust deficit that utility alone cannot overcome.
Three specific failures stand out. First, the job displacement narrative ran unchecked: Klarna eliminated 3,104 customer service positions and announced it publicly, Shopify's CEO told employees not to request headcount before proving an AI system could not do the work, and IBM paused hiring for roles AI could replace. Each announcement generated headlines reinforcing a single frame: AI is coming for your job. Individual companies' cost-cutting announcements became the dominant public narrative about what AI is for.
Second, the safety debate was captured by extremists on both sides: existential-risk fixation on one end, "move fast and break things" dismissiveness on the other. The practical middle, where people worry about misinformation, privacy, bias, and job loss, went unrepresented by any major industry voice, leaving most Americans' actual concerns about AI unaddressed by the people building it.
Third, and perhaps most damaging, the products themselves trained users to distrust them. AI hallucinations became a punchline, lawyers were sanctioned for citing fake cases generated by ChatGPT, and Google's AI Overview told people to add glue to pizza. Every viral failure reinforced a simple and devastating heuristic: AI is confident and unreliable, a combination uniquely corrosive to trust.
What the Honest Limitations Look Like
This analysis relies on four independent surveys with different methodologies, timeframes, and sample compositions. The Gallup and Quinnipiac polls use probability-based panels, the gold standard for public opinion research, but tracking sentiment across different organizations introduces methodological noise, so the specific gap calculations (the 30-point usage-trust gap, the 14-point excitement decline) should be understood as directional indicators rather than precise measurements. The tobacco analogy has obvious limits: AI is not physically addictive, and the causal pathway from sentiment to regulation is less direct; the correlation between declining sentiment and future regulation is historical pattern-matching, not prediction.
Capital expenditure numbers represent commitments, not spent cash, and companies can and do adjust planned spending when market conditions shift. Usage data from AI companies is self-reported and not independently audited.
The Strongest Case Against This Thesis
The strongest counter-argument is that sentiment does not determine adoption trajectories for infrastructure technologies. People distrusted the electrical grid, commercial aviation, and the early internet, and adoption proceeded regardless because utility was too large to resist. Under this model, the current sentiment dip is a normal feature of the technology adoption S-curve: initial hype collapses, disillusionment follows, sustained adoption climbs regardless of feelings. Gartner's hype cycle model explicitly predicts this pattern.
If that analogy holds, $200 billion in infrastructure spending is correctly timed. Companies building through the trough dominate the plateau of productivity. And the sentiment data, however alarming, is noise.
Here is the counter to that counter: electricity and aviation did not require user trust to function; a suspicious passenger still arrives at the destination. AI products, by contrast, require users to delegate judgment, trust outputs, and act on recommendations. A user who does not trust an AI assistant ignores suggestions, double-checks outputs, and adds friction that destroys the efficiency gains AI is supposed to deliver. That makes trust not a nice-to-have but a core product feature, without which the entire value proposition collapses.
The Bottom Line
The AI industry is building the most expensive infrastructure in technology history for a product that a growing majority views with suspicion. Four independent polling organizations, using different methodologies across different populations, converge on one conclusion: Americans are using AI more and trusting it less. Chamath Palihapitiya called it a generational fumble. What the data shows might be something worse: a structural mismatch between what the industry is selling and what the public is willing to buy.
If you work in AI: the engagement metrics on your dashboard are not measuring what you think they are measuring, because usage without trust is not adoption but compliance, and compliance has a shelf life.
If you make investment decisions: watch the Quinnipiac and Gallup trust tracking numbers the way you would watch same-store sales for a retailer. If the 21 percent trust figure does not climb by Q4, the $200 billion in committed capital expenditure is building capacity for a market that may not arrive on schedule.
If you are in policy: a bipartisan 80 percent concern figure is the kind of number that produces legislation. The window between "this is concerning" and "we need to regulate this" is closing. The AI industry has roughly 12 to 18 months to change the narrative before the regulatory framework calcifies around the current one.