The Canary in the Classroom
A new Stanford AI study exposes the growing gap between education and employment
As all the headlines shout today, a new Stanford study shows that AI is behind the struggles of young people to find jobs. “Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence,” by Erik Brynjolfsson, Bharat Chandar, and Ruyu Chen, draws on payroll data from ADP covering millions of workers across tens of thousands of firms to argue that “early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks.” In contrast, employment for more experienced workers in the same occupations (software developers, customer service representatives) has remained stable or continued to grow, as has employment in less AI-exposed fields like nursing.
The study and the media want to see this as a story of AI displacement. I think it is more complicated, and yes, higher education practices are partly to blame.
The study dates this stagnation to 2022, which caught my eye from the higher ed perspective. These are young people disrupted by the pandemic, whose lives and experiences are more online than any previous cohort’s. I recall in May 2022 teaching a half dozen college seniors how to shake hands when walking into a job interview (firmly, for two and a half shakes), alarmed that the online career platform Handshake did not teach this skill. It isn’t really a dean’s job, but somebody had to do it, and better late than never, just before the diploma.
The Stanford study also differentiates between jobs that AI “automates” and jobs that it “augments,” noting that employment is still growing in jobs where workers use tools like ChatGPT or Claude to augment their work. This is interesting; more below.
But broadly, the Stanford study finds the 22-25 age group is vulnerable across sectors. The authors conclude that theirs is “early, large-scale evidence consistent with the hypothesis that the AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labor market” (1).
Maybe. While I see a solid quantitative analysis of who is being affected by AI, I don’t see enough focus on the how or why. The paper makes no mention of the rise of online learning, automated hiring platforms, or AI-driven recruitment tools. This seems to me a significant oversight.
Let’s start with education. As the study puts it, “by nature of the model training process, AI replaces codified knowledge, the ‘book-learning’ that forms the core of formal education. AI may be less capable of replacing tacit knowledge, the idiosyncratic tips and tricks that accumulate with experience” (4-5). That is, young workers “supply relatively more codified knowledge than tacit knowledge” and therefore “may face greater task replacement from AI in exposed occupations” (5). The authors’ phrase ‘book-learning’ maps uncomfortably well onto the impersonal online learning that has become dominant for the 22-25 cohort. If your experience and learning are largely online, you are going to have a harder time being hired as a person by a person.
I’ve been writing about this from the curriculum delivery side, arguing that online education (especially general education) is making students more substitutable by AI. The problem is exacerbated by AI automation of the hiring process: students can go through college and onto a job platform without a human ever vouching for their qualities and abilities, without someone making a phone call to say “yes, this young person will show up, be responsible, learn from you,” which is what an employer really needs.
What we’re seeing here is a “last mile” problem. Tacit knowledge, judgment, and thoughtfulness are not easily measured by AI. A human who has worked with a young person knows whether that person has apprehended (or improved on) the “idiosyncratic tips and tricks that accumulate with experience”:
What I’m calling “the last mile” here is the last 5-15% of exactitude or certainty in making a choice from data: thinking beyond what an algorithm or quantifiable data set indicates, when you need something extra to assure yourself you are making the right choice. It’s what the numbers don’t tell you. It’s what you hear when you get on the phone to check a reference. There are other terms for this (the human factor, the human element), but those terms don’t capture the distance between what metrics give you and what you need to make a decision.
The authors don’t use the phrase “last mile,” but that’s what they’re talking about. Codified, classroom-based skills (the online courses and coding bootcamps marketed to this age group) proved the most vulnerable, and 22-25 year olds felt it more than anyone. Higher education has been selling students “skills” at which AI will perform better. Students who complete their education online have demonstrated mastery of codified knowledge, but without the human vouching that once helped them get a job afterward. There is no one to carry these learners across the final threshold into employment, no expert human to translate their skills into trust and opportunity.
To be sure, there are digitally native forms of vouching: a GitHub portfolio, a polished LinkedIn profile with peer endorsements, contributions to open-source projects. These can signal tacit skills and judgment. But we all know a phone call from a trust network is best for a last mile decision. Everyone should read Talent, Tyler Cowen and Daniel Gross’s 2022 book on identifying it.
The bottom line is that when ChatGPT burst on the scene in late 2022, the post-pandemic hiring ecosystem of remote work and automated hiring platforms was already causing problems. The relative lack of human guidance for these young people may explain why the employment effects persist. These workers could be guided into other positions, as has happened with previous technological disruptions. But who will guide them?
To return to the automation vs. augmentation issue: the Stanford study suggests that what determines employment is not AI per se but the character of its deployment. When AI automates, it removes both tasks and the entry points through which humans once demonstrated competence. When AI augments, young workers may still gain a foothold if someone can vouch for them. A talented 25-year-old can be a better worker with Claude, but you need to see the talent to hire the worker.
Consider the traditional pathway into a first job, whether entry-level, administrative, or professional work. A professor recommends a standout student to a former colleague. A manager spots potential in a junior employee’s approach to a problem. A mentor advocates for someone’s promotion based on observed growth over time. This vouching is all about sustained human interaction over time and subjective evaluation. (Some of the best prestige TV, such as The Wire or The Sopranos, is about this.) An AI hiring platform does not care how a young person performs under pressure.
This seems to be what the Stanford study at once sees and misses. Their results “are not driven solely by computer occupations or by occupations susceptible to remote work and outsourcing” (4). The effects they are seeing are happening across the board. The problem is a breakdown in the social infrastructure that traditionally connected education to employment.
The study acknowledges this in calling for further research to model and test these predictions. I would argue the focus should be on online education as well as on changes in hiring processes, the last mile where human judgment traditionally mattered most. Their data show who is being displaced but not whether those workers can successfully transition to other roles or industries.
From my experience in higher education, the human infrastructure for recognizing and developing talent is absolutely essential for future employment. The Stanford study reveals a troubling employment pattern, yes. But if we accept that the problem extends beyond simple task automation, then the next task is examining what digitization has done to the pathways into those jobs. If higher education continues to push online courses, and if entry-level workers cannot gain the tacit knowledge that comes from proving themselves on the job (sometimes simply by showing up), then how will anyone know who is worth the risk to hire?
Today’s twenty-five-year-olds are tomorrow’s mentors and managers; how will they learn who and how to guide and trust if the pipeline of tacit knowledge transfer has been broken? The new normal of online education, remote work, and algorithmic hiring has created a cohort of young workers with thinner human networks to see their talent and translate their potential into opportunity. Unless we find ways to bridge the gap between codified learning and practical opportunity, we risk creating a lost generation of workers with nobody to vouch for them.



To restate the argument and push it a step further: the Stanford study reveals a 13 percent relative decline in employment since 2022 for early-career workers (ages 22-25) in AI-exposed occupations, but AI displacement is not the only cause. The deeper issue is the weakening of higher education’s core promise: colleges should produce people who are not just knowledgeable but educated in a way that connects aesthetically and socially.
Being educated is, in part, perceptual: in professional settings, others make an aesthetic judgment about it. Its markers include intellectual depth, clear reasoning, and sound judgment, qualities that signal trust and potential, something like the intangible polish that turns a contributor into a leader. It includes the ability to combine ideas, handle uncertainty, and inspire confidence through presence and communication. In high-stakes settings today, this aesthetic is essential for success. Even as AI makes technical tasks commonplace, the “educated person” aesthetic endures.
This ideal is under attack. Critics call traditional education elitist or outdated and claim that being a technician is enough in an AI world. They point to colleges’ real flaws: high costs and impersonal online courses that favor book learning over lived experience, over the tacit knowledge, social networks, and human endorsements that matter. The pandemic shift to remote education made young workers more replaceable by AI because it removed the social elements, professor recommendations, face-to-face teamwork, hands-on challenges, that built the educated aesthetic. Colleges deserve criticism for failing to produce such people reliably in this new environment. But that does not mean we should end the system.
We must push back against the idea that technicians alone matter and instead reinvent higher education for the AI era. Colleges must ensure graduates can communicate at the application layer, the active space where human ideas meet AI tools. This goes beyond using an LLM to create content; it means making AI part of your thinking, the way reading and writing marked educated people in the past.
In earlier times, literacy meant mastering the sentence for exactness, the paragraph for flow, and the essay for persuasive argument; these tools let people express complex ideas and sway others. While those remain key skills, especially for graduate students, “educated persons” also need practical knowledge of tools like GitHub for team-based updates and changes, an AI IDE like Cursor for fluid human-AI building, and a serverless platform like Vercel for quick, large-scale deployment. These are not extra facts or skills. They are the basics of “being educated” in the new economy.
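To make “communicating at the application layer” concrete, here is a minimal sketch of the kind of small artifact a graduate should be able to read, revise, and ship. It follows Vercel’s Node.js serverless convention; the file path, endpoint, and greeting logic are my own illustrative assumptions, not anything from the study or the essay above.

```typescript
// api/greet.ts: a minimal Vercel serverless function (illustrative sketch;
// the endpoint and its behavior are hypothetical examples).
import type { VercelRequest, VercelResponse } from '@vercel/node';

export default function handler(req: VercelRequest, res: VercelResponse) {
  // Read an optional ?name= query parameter, defaulting to a generic greeting.
  const name = typeof req.query.name === 'string' ? req.query.name : 'world';

  // Respond with JSON: the kind of small, legible artifact a student might
  // draft in an AI IDE, review with teammates on GitHub, and deploy on Vercel.
  res.status(200).json({ message: `Hello, ${name}` });
}
```

Pushing a file like this to a GitHub repository connected to Vercel deploys it automatically. The point is not the code’s sophistication; it is that reading, revising, and shipping such artifacts, with AI assistance, is becoming part of baseline literacy.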