Of all the ways AI could disrupt the education sector, the most interesting to me is how it will force a reckoning with the intellectual vs. anti-intellectual tendencies on American campuses. Is embracing AI the next great intellectual leap or does it represent the triumph of practical, anti-intellectual efficiency?
While the AI debate between proponents and “hand-wringers” rages along the usual lines, I’m keeping an eye on the eventual collision with the broader campus mission of “belonging” with its focus on student comfort, “experiences,” and completion. AI’s arrival puts this mission into direct conflict with the university’s traditional intellectual purpose. The result will be a necessary reckoning over what higher education is for, likely ending in a widening division between elites and everyone else.
I’m drawing here on Richard Hofstadter’s Anti-Intellectualism in American Life (1963), which is still the best exploration of the recurring strain of anti-intellectual umbrage in American culture.
Intellect, for Hofstadter, “is the critical, creative, and contemplative side of mind,” apart from simple intelligence. Americans are fine with intelligence but suspicious of intellect.
Anti-intellectualism, with its “resentment and suspicion of the life of the mind,” is deeply embedded in the national psyche, with roots in non-hierarchical evangelical Christianity, the valorization of the “self-made man,” and a general egalitarianism that embraces the “common man” over the Harvard man. This chronic American tension may not know which way is up when it comes to AI.
That is, AI is dividing us into multiple camps: early adopters, skeptics/doubters, doomers/doomsayers, dynamists, effective accelerationists, de-growthers, safety-aligners, human-centered critics/designers, open-source advocates, optimists, regulators, and “normies,” not to mention charlatans and snake oil salesmen.
Drawing on Hofstadter’s framework, the intellectuals (leaning toward abstraction, critical inquiry, skepticism, contemplation, and a devotion to the “life of the mind”) would be the human-centered critics, AI safety & alignment researchers, skeptics & doubters, and de-growthers.
The anti-intellectuals (marked by suspicion of critique, a demand for immediate practical results, and a zealous, quasi-religious faith that dismisses nuanced thought) would be the effective accelerationists (e/acc) and dynamists, as well as the charlatans & snake oil salesmen.
Doomers are hard to categorize. Their concern about existential risk from a superintelligence is abstract and intellectual, but their style can be emotional and paranoid, with an anti-intellectual apocalyptic certainty that dismisses anyone who doesn’t share their urgency. Open-source advocates want to democratize knowledge and prevent monopolies (a solidly intellectual, enlightenment position) but can sometimes veer populist and anti-elitist, dismissing safety concerns as the “gatekeeping” of a few “experts” in their “ivory towers.” Adopters, optimists & “normies” can be intellectual (using AI as a tool for research and creativity) or fundamentally practical and anti-intellectual (using AI to bypass learning, cheat on assignments, and avoid the struggle of critical thought).
The regulators and policymakers are providing the battleground. A regulatory process driven by careful evidence, expert testimony, and long-term thinking would be intellectual. One driven by populist fear, corporate lobbying, and a desire for simple, immediate fixes would be anti-intellectual.
What does this mean for higher ed?
The campus debate is basically hand-wringers versus proponents.
The hand-wringers would claim the mantle of intellectuals – they are not expressing fear but embodying an intellectual tradition, seeing AI as a threat to academic integrity, pedagogical quality, and humanistic values. They are simply questioning the uncritical adoption of a powerful new technology and resisting the hype; defending the life of the mind; championing intellect over intelligence.
The AI proponents would retort that the hand-wringers are classic anti-intellectuals: Luddites, afraid of progress; elitist “ivory tower” academics, out of touch with the “real world” where efficiency and new skills are needed; and throwing up obstacles to a more practical, democratized, and efficient future for education.
Both sides in the campus AI debate claim the intellectual high ground, accusing the other of blindness. But this battle is being waged on terrain already captured by a different ideology: the pedagogy of “belonging.” This institutional focus on reducing friction and ensuring student comfort creates the perfect ecosystem for a technology that offers ultimate convenience. It is no surprise, then, that students have embraced AI chatbots with an ease that baffles many professors. This combination of an institutional bias toward accommodation and a technology of radical ease is what makes the old conflict between the life of the mind and the demands of practical life particularly urgent. It is the catalyst that will force a choice.
What is becoming apparent is that while intellectuals and anti-intellectuals have co-existed in higher education for a long time, it may finally be time to go their separate ways. The reality we are facing is a rapid, irreversible shift in the educational production function. The cost of producing mediocre-to-good prose, standard analysis, and passing-grade problem-solving has collapsed to near zero.
The majority of institutions – public universities and less-resourced private colleges – already see they have no choice but to integrate AI at every level. AI proponents will win by default here. These institutions will use AI to deliver instruction at lower cost and offer more efficient credentialing. The value proposition is efficiency and scale.
The hand-wringers who survive will land at artisanal or boutique institutions featuring small, Socratic seminars, intense faculty mentoring, and “AI-proof” assessments (e.g., oral exams, closed-book handwritten tests, high-stakes live debates). They will offer an expensive, “human-centric” experience, where intellectualism is the core part of their brand identity. Their graduates will be able to think, write, and perform without a machine co-pilot.
But the small seminar model is not a viable path for a public system whose core mission has become open access and completion. The truth is that a significant portion of undergraduate education is non-intellectual, formulaic labor that primarily involves hand-holding (not hand-wringing) at scale. To soften the blow of scaled-up education and to ensure completion, public systems have doubled down on a “belonging” approach that prioritizes resources for support, emotional comfort, and “experiences” over rigorous academic content. The object of the hand-wringers’ umbrage is the student preference for AI chatbots over office hours.
Lessons of the past
Richard Hofstadter found the social accommodation and workforce-preparation impulse so significant that he devoted an entire chapter of Anti-Intellectualism in American Life to its strangest and most potent historical example: “The Road to Life Adjustment.” The life adjustment movement is the quintessential case study of the anti-intellectual mindset. Its twelve-year trajectory, from widespread adoption to swift abandonment after the Sputnik launch, offers a powerful, cautionary parallel for our own AI-disrupted moment.
Just how deeply entrenched is the social impulse in higher ed? Consider that even if the current administration is successful in dismantling the entire DEI architecture in American universities, the underlying commitment to “belonging” as a pedagogical method would likely endure.
The life adjustment curriculum was launched in 1945 by “the father of vocational education,” Charles Prosser, who believed schools were failing the majority of students: they were receiving neither adequate college preparation nor practical vocational “workforce” training. These students desperately needed “life adjustment” to fit into society. The central idea was that education should be “functional.” Typical courses included “Learning to Work,” “School and Life Planning,” and “Preparation for Marriage.” Instead of traditional science, students took courses like “Consumer Chemistry,” examining household chemicals and product safety.
While the professional education community immediately embraced the idea, the results provoked devastating criticism. Arthur Bestor’s Educational Wastelands (1953) claimed professional educators had “lowered the aims of American public schools” by retreating from real disciplines, science in particular. Admiral Hyman Rickover, struggling to find personnel for his nuclear submarine program, blasted the entire method. He was joined by others in the defense industry in denouncing the lack of science education. Military leaders complained that recruits lacked basic mathematical skills needed for technical positions. Then came Sputnik and the NDEA.
But while life adjustment lasted only twelve years (1945-1957), its anti-intellectual spirit can be discerned in every subsequent effort to foreground belonging over learning.
AI is disrupting the comfortable middle ground where most of higher education has operated, making its implicit trade-offs explicit. The pedagogy of “belonging,” like “Life Adjustment,” serves a practical, accommodating function. When you have walk-in advising, you don’t notice the loss of faculty mentors. When you have affinity groups, you don’t notice that all of your courses and assignments are beginning to look the same.
Ultimately, AI acts as a great clarifying agent, realigning the university along more honest lines, accelerating the separation of institutions committed to academic rigor, defending the irreplaceable work of the human mind, from those focused on the efficient delivery of workforce credentials and social adjustment.
This impending schism in higher ed sets the stage for the endgame envisioned by AI’s most extreme voices. The accelerationists, who dream of a future of human-machine synthesis and radical capability, will find their workforce in the graduates of the efficient, AI-integrated university. This is the system that can produce millions of operators at scale, trained to leverage technology for maximum output. Conversely, the doomers and safety advocates, who fear a loss of human control and judgment, can now look to the “artisanal” university as a sanctuary responsible for cultivating the ethical and critical faculties necessary to govern, align, or simply survive the intelligence we are building. Higher ed will bifurcate in the AI era to become the engine for both competing visions of humanity’s future.