Your LLM needs a grandmother
A response to Acemoglu, Kong, and Ozdaglar
Last month Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar released an NBER working paper titled “AI, Human Cognition and Knowledge Collapse.” The phrase “knowledge collapse” is already circulating in AI policy discussions, which makes the model’s assumptions worth scrutinizing. The paper is organized around identical cohorts making a single equilibrium effort choice per period, without overlapping generations or preference heterogeneity. Evolutionary anthropology identified the missing agent thirty years ago, and including it would change the paper’s core stability results.
The new paper models how agentic AI improves individual decisions while destroying the collective knowledge those decisions depend on. Good decisions require both general knowledge and context-specific knowledge. Human effort produces both jointly, a property the authors call economies of scope in learning (p. 3). Agentic AI substitutes for the context-specific component, which is the component that privately motivates effort. When effort falls, the general-knowledge byproduct falls with it. The common state drifts, nobody replenishes the signal, and the stock of public precision can converge to zero. The authors themselves cite evidence that large language models “reduce the creativity of users, especially among younger users” (p. 3). The word “younger” passes without comment, though it invites a question: what are the older users doing differently, and does their effort respond to the same incentives?
The model treats all human learners as identical single-period decision-makers whose incentive to exert effort depends entirely on the private return from context-specific knowledge (p. 6). General knowledge is a byproduct of individually motivated learning. No agent has its production as a primary function. The population is homogeneous in age, expertise, and motivation. There are no agents who have spent decades accumulating domain-specific judgment. There are no agents whose productive years are over, whose entire effort goes toward ensuring the quality and survival of existing knowledge. In the language of evolutionary anthropology, there are no grandmother agents, no demographic cohort providing a public-knowledge floor independent of novice learning incentives.
The missing agent
Human evolution built one candidate stabilizer against knowledge collapse: overlapping generations with post-reproductive specialists whose payoffs run through dependents, so the flow of general knowledge does not vanish when the private return to last-mile learning falls. Most readers have someone like this in their life, if not in their models. Grandmothers supply an endogenous, demographic source of public precision that keeps the public-signal input positive even when novices’ effort is near zero, shifting the transition map away from the knife-edge at X = 0. In the terms of the paper, their contribution corresponds to adding an additive term to the public-signal precision, or adding a second agent type whose equilibrium effort remains positive at low X.
In “Grandmothers and the evolution of human longevity” (2003), Kristen Hawkes and colleagues evaluate the role of the post-reproductive provisioner whose effort is directed at ensuring the survival of dependents and the quality of their lives, drawing on expertise accumulated over decades. In every human population studied, including hunter-gatherer societies with the highest mortality rates, roughly a third of adult women are past childbearing age and remain economically productive (Hawkes and Coxworth 2013, p. 295). This demographic structure drove the evolution of human longevity, extended juvenile dependence, and the cooperative rearing that makes cultural transmission possible. Translated into the Acemoglu framework, the grandmother produces general knowledge as a primary activity, motivated by payoffs that run through dependents and apprentices and by reputational and professional incentives. Agentic AI crowds out effort by substituting for the private return on context-specific knowledge. The grandmother is not working for that return. The mechanism that displaces younger agents’ learning does not reach her.
Acemoglu’s paper argues that when agentic AI reduces the marginal return to effort, everyone reduces effort together, and general knowledge depreciates with nobody whose role it is to replenish it. The authors almost see this, noting that in medicine, “high professional standards, such as minimum years of medical training, ensure such a baseline” (p. 17). They model this as a constraint on the cost parameter. They do not model it as a different type of agent, one whose effort function is structurally decoupled from the incentive shifts that agentic AI introduces. Their own related-literature section cites Ide (2025), who notes that “AI is still in its infancy,” on how automating entry-level tasks “hampers the intergenerational transmission of tacit knowledge, typically taking place within firms via novice-expert interactions” (p. 5). The novice-expert interaction is the closest analogue in the paper’s cited literature to the intergenerational transfer mechanism Hawkes describes. The authors could have taken that intergenerational point seriously and conceded that their model omits a class of actors whose contribution to general knowledge does not mechanically track the marginal return to last-mile learning.
The authors already show that an inflow independent of human learning eliminates the zero-knowledge endpoint (Proposition 15, p. 31). The grandmother is a demographic and institutional version of the same mathematical object. The difference is that the grandmother’s contribution is a demographic fact about human populations, not an engineered AI process, and the signal she produces is informed by decades of independent domain judgment. The knowledge-collapse steady state is locally stable when effort is highly elastic (p. 17), because agents sharply reduce effort as incentives weaken.
As the authors emphasize in their effort-separability extension, local stability near collapse is determined by the strength of public learning near X = 0, meaning how quickly the public-learning input vanishes as effort goes to zero. In the paper’s model, next period’s stock of general knowledge depends on current aggregate effort feeding into a public signal, which is then degraded by the random-walk drift of the common state (equation 3, p. 11). When effort collapses, the signal collapses with it. A population with even a small fraction of grandmother-type agents (they do not even need to be standing at the edge of the cliff) would alter the boundary condition by adding a public-knowledge input that does not vanish as novice effort shrinks. In the resulting modified transition map, X = 0 is no longer a fixed point, so the “trap” structure is altered rather than merely stabilized. The grandmother does for general knowledge what the paper’s aggregation parameter does: she raises resilience. She does it through dedicated expertise rather than better pooling of identical signals.
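The fixed-point logic can be made concrete with a stylized transition map. The sketch below is not the paper’s model: the functional forms, the effort rule, and every parameter (the elasticity, the depreciation rate, the inflow `g`) are illustrative assumptions of mine. It only demonstrates the boundary condition: when effort vanishes at X = 0 the map has a fixed point at zero, and a constant grandmother-type inflow removes it.

```python
# Stylized transition map for a public knowledge stock X in [0, 1].
# All functional forms and parameter values are illustrative assumptions,
# not taken from the Acemoglu-Kong-Ozdaglar paper.

def effort(X, elasticity=3.0):
    """Novice effort, which vanishes as the incentive to learn vanishes."""
    return X ** elasticity

def step(X, g=0.0, depreciation=0.5):
    """One period: effort feeds the public signal, drift erodes the stock.

    g is a constant inflow from grandmother-type agents, a demographic
    analogue of the paper's learning-independent inflow (Proposition 15).
    """
    signal = min(effort(X) + g, 1.0)
    return (1 - depreciation) * X + depreciation * signal

def iterate(X0, g, periods=200):
    """Run the map forward from X0 for a fixed number of periods."""
    X = X0
    for _ in range(periods):
        X = step(X, g)
    return X

# Without the inflow, X = 0 is a fixed point, and a low starting
# stock collapses toward it.
assert step(0.0, g=0.0) == 0.0
assert iterate(0.10, g=0.0) < 1e-6

# With even a small inflow, X = 0 is no longer a fixed point: the map
# pushes the stock up from zero and it settles at a positive floor.
assert step(0.0, g=0.05) > 0.0
assert iterate(0.10, g=0.05) > 0.04
```

The point of the toy is structural, not quantitative: no choice of elasticity or depreciation rate restores a zero fixed point once `g` is positive, which is the sense in which the grandmother alters the trap rather than merely stabilizing it.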
The paper’s proposed solution is “garbling,” deliberately degrading agentic recommendations to preserve learning incentives. Phase one “fully suppresses agentic recommendations, forcing agents to rely on their own learning” (p. 28). Phase two caps AI precision at the welfare-maximizing level. The policy lever is to manipulate effective precision so that equilibrium effort rebuilds the stock, then hold precision at the long-run optimum. The grandmother achieves the same stabilization upstream, through provisioning rather than throttling. She does not require suppressing AI capability.
The grandmother is an agent in every sense the paper uses and one it does not: an economic agent with a distinct payoff function, a demographic agent whose effort is decoupled from novice incentives, and, if the architecture were designed for it, an AI agent whose action space is the knowledge base itself. The question is whether AI development has already been structured around the absence of such agents.
Is it already too late?
Your LLM needs a grandmother. Grandmothers should have been part of RLHF from the beginning, alongside the young gig workers applying rubrics far away. The absence of grandmother agents from the formal model mirrors their absence from the actual AI pipeline. Constitutional AI substitutes written principles for embodied expert judgment; that substitution is the structural problem, not a workaround for it. The people who build these systems, train them, evaluate their outputs, and theorize about their effects are overwhelmingly young. The demographic fact about who is in the room when decisions are made is the same demographic fact that is missing from the model.
A grandmother function in AI architecture would intervene upstream of the user’s last-mile decision. It would be a sustained, domain-specific audit of model outputs over time, conducted by senior practitioners with the authority and incentives to turn detected failures into updated standards, documentation, and training signals. The role does not yet exist because the architecture was not designed for it. Designing for it would mean creating positions — in medicine, law, engineering, education — where decades of accumulated judgment are the qualification, where the work is evaluating and correcting AI-generated knowledge rather than consuming it, and where the output feeds directly into the systems that produce recommendations for everyone else.
Acemoglu, Kong, and Ozdaglar have formalized the fragility of systems in which general knowledge is produced solely as a byproduct of individually motivated learning. If “knowledge collapse” becomes a policy keyword, it should carry an implication the paper does not yet reach: the response includes building age-structured and expertise-structured infrastructure around AI systems, so that public knowledge is produced as a primary activity by people with long memory and deep domain judgment. The human species did not solve the provisioning problem by garbling the food supply to force children to forage.