When you write about higher education, there’s always a rock to turn over and there’s always a scandal under the rock. Here’s one: standard textbooks (including digital editions) for introductory science courses are anywhere from 8 to 20 years out of date. Here’s another: this fact is widely known and accepted by science educators.
In the humanities, outdated textbooks provoke outrage, op-eds, tumultuous school board meetings, strong positions, legislation. So I wondered why nobody protests old science books at the high school or college level. Meanwhile, the U.S. is falling behind globally in science innovation. Just 40% of U.S. students who intend to major in STEM actually complete the degree, a startling attrition rate that begins in first-year science classes. Are expensive, decades-old teaching materials that don’t excite students with new discoveries the problem? It’s a question of national importance as competitors like China have made the integration of frontier science a strategic priority from middle school onward.
The reason you will not find a definitive link between outdated textbooks and poor STEM outcomes is that the public conversation is focused almost entirely on cost, not content. Major studies compare high-cost commercial textbooks to free Open Educational Resources (OER) without noticing that both have a “strikingly similar design” and contain identical content. There’s no attention to teaching scientific breakthroughs, even when discoveries are in the news. Nobody has studied the national costs of our reliance on an outdated, scale-driven model for science education as a matter of global competitiveness.
As a humanist I am fascinated by this gap in knowledge and I can make the case that outdated science textbooks are a scandal with evidence from the industry’s own practices.
The textbook industry vs. science news
The science textbook industry is a broken oligopoly, or perhaps cartel. Its (profitable, messy, inefficient) business model relies on high prices and superficial revisions designed to kill the used book market. Free open textbooks are all versions of their expensive commercial counterparts. The conversation among experts is about disruption through cost cutting (or eliminating textbooks) not adding value by integrating new discoveries to static texts. As a result, institutional policies and state legislation focus almost exclusively on affordability and price transparency.
In a nutshell: science textbook companies have been offering the illusion of newness every few years, charging higher and higher prices, with the result that the science textbook scandal seems only to be about cost. Nobody looks at static content.
Meanwhile even when cost is solved, introductory courses are failing to excite students and attrition rates in STEM remain high. The pedagogical focus shifts to “active learning” or “belonging” rather than to the excitement of new scientific discoveries. Well-intentioned reforms like the Next Generation Science Standards (NGSS) aim to shift students from learning facts to “doing” science, but the result is teaching known principles not exploring the cutting edge.
While textbooks provide a necessary “spine” of fundamental knowledge (to meet core competency outcomes), the burden of keeping courses current falls on overworked faculty, many of whom are adjuncts with heavy teaching loads and no support for curriculum development. Crucially, there are no systemic incentives – no carrots or sticks – to reward this effort. No accrediting agency rates an institution for how hard it works to keep current in science. No state higher ed appropriations formula links funding to ensuring the latest scientific discoveries are taught. U.S. News & World Report does not rank colleges on how up-to-date their introductory science curriculum is.
In short, science textbook stagnation is everyone’s fault and nobody’s.
AI is poised to jump-start science education by bypassing the entire chain of intermediaries. It can analyze the latest scientific papers directly, delivering the excitement of discovery to students without the filter of textbook publishers or even the simplified summary of popular science websites.
But AI cannot solve a problem the science education community won’t see. As long as the focus remains on cost and the pedagogy of “belonging,” the bigger scandal of content stagnation will go unexamined. What, then, will force universities to cut the textbook industry cord?
What is missing from college introductory science courses?
Physics course materials often lag the furthest behind, following a century-old progression from mechanics through electromagnetism. Textbooks still teach semiclassical models, like the Bohr model, that faculty who have the time spend “unteaching” to their students. Condensed matter physics is almost never discussed in undergraduate physics courses and textbooks. Despite a push from the American Association of Physics Teachers (AAPT) nearly a decade ago to integrate computational physics into curricula, progress has been slow and uneven.
Standard chemistry materials, even in top universities, still follow the general-organic-physical pathway. While advocates have succeeded in integrating “green chemistry” into introductory courses, transformative fields like nanotechnology are almost entirely absent.
Biology shows a similar decades-long lag in integrating major discoveries. Recombinant DNA techniques developed in the 1970s didn’t appear in undergraduate textbooks until the 1990s. The Human Genome Project (completed in 2003) is taught in genetics courses but not widely integrated. CRISPR technology and synthetic biology are not part of undergraduate foundation courses at most universities. There have been a variety of update efforts (some now over a decade old), but when the field is moving so fast that there’s fighting over what to update first, the answer seems to be not to update anything for new students, saving discoveries for upper level major courses. The result is a significant delay between breakthrough and textbook.
You see the same lag across disciplines and institutions, from top-ranked research universities to community colleges. Why? It is cheaper to package and deliver what is already known than to invest in the frontier.
Outdated science as business model
The acceptance of outdated science is rooted in the 20th-century scaling of higher education. After WWII, the GI Bill and a push for “general education” created a massive market for standardized textbooks. Publishers like Pearson and McGraw Hill ramped up to meet this demand. The post-Sputnik National Defense Education Act (NDEA) funded textbooks as a key technology for national security, underwriting development and revision cycles to meet critical needs.
But this model had a built-in tolerance for delay. Even with NDEA funding, science textbooks in the 1960s and 70s couldn’t keep pace with discoveries in space exploration – rocketry, satellite orbits, radio astronomy, and celestial mechanics – or DNA sequencing or computer technology. While outdated history texts provoked outrage and protests, the lag in science has been accepted as a necessary byproduct of mass education. Outside of battles over Intelligent Design there has been no public outcry demanding that introductory science reflect the frontier.
The other part of textbook history is at the state level. Twenty years ago, at Millsaps College, I heard a talk by the Civil Rights activist Ed King about growing up in Mississippi in the 1940s. He described a painful evolution in his thinking, going from being proud of himself, as a youngster, for being careful with his textbooks: not destroying or vandalizing pages, as some of his classmates did, but “keeping them clean,” knowing they would be passed along to the “colored schools” (he used the language of the era) in the poorer school districts in a few years. It took a long time, King said, too long, to come to the understanding that what he thought was virtue was no virtue at all, accepting the status quo. And yet, he said, he was glad he took care with his schoolbooks.
King’s story is about how a tolerance for “good enough” becomes systemic. The acceptance of passing lesser materials to some communities seems to have created a culture where a lag in knowledge delivery is still deemed acceptable.
This legacy of tolerance for the outdated persists, which we should all find very odd. Today, a science textbook can be seven years old and still be acceptable for transfer credit in the University of California system. This policy, meant to ensure some level of currency, is meaningless when publishers issue superficially updated editions with the appearance of newness. At the same time, affordability regulations like California’s push for free, open-source textbooks focus on cost, not content. Currency is not even a secondary concern.
Vast scholarship exists on the social costs of outdated history textbooks. In humanities fields, the “textbook wars” over how to portray slavery or gender have produced legislation and public outcry. Yet a parallel body of research on outdated science texts is conspicuously absent.
What is actually going on?
Why the absence of raised student and parent voices over the years, as course materials have lagged further and further behind exciting scientific discoveries? Why has it not mattered that universities keep teaching yesterday’s findings at scale without telling students that there’s work to be done at the frontier and how to do it?
Bruno Latour’s concept of “black boxing” offers one answer. Science education often presents knowledge as a black box – a set of established facts, stripped of the messy process of discovery. By hiding the controversies, failed experiments, and networks of researchers that create knowledge, the curriculum obscures the details and processes of scientific advancement. When science is taught as static facts, a delay in updating those facts seems trivial.
No one has studied the national security cost of teaching from obsolete materials, even as other nations make integrating frontier science a strategic priority.
Imagine a study comparing two groups of students. One learns from current materials, discovering the names of today’s leading researchers and institutions. The other learns from outdated texts, studying obsolete instruments and theories. The first group is given a map to the networks of modern science; the second is taught a history of a world that no longer exists. Knowing the path and entryway into the communities where science actually happens matters.
Teaching from outdated science books might offer unintentional benefits if students in the second group were told they were given old information while others are receiving new. The students in the second group might approach their books with a healthy skepticism and seek new information elsewhere. They might be excited at what they read on the internet and disdainful of the textbook lag. They might be more likely to go into science careers. In short, the circulation of “old” books could be healthy, if it provokes students and teachers to engage with where and how new knowledge is produced and always to ask questions.
Rigorous scholarship on the policies of systematically giving some school districts outdated textbooks could have told us what the architects of these policies thought about (or refused to think about) science knowledge networks and future science careers. We might have data on the comparative longitudinal effects of teaching outdated science in certain communities for decades, at a time when scientific discoveries were occurring at a rapid pace.
Instead, we are all in the second group now, getting out of date materials. It has to end.
AI will make this scandal impossible to ignore
Universities can no longer defend a business model where students pay tuition for a human to deliver information that an AI can personalize for each learner instantaneously. Once AI can deliver the entire general education curriculum, the university’s value proposition must shift from introducing students to knowledge to guiding students at the frontier of knowledge. The value of a university education will be the opportunity to discuss Plato with a renowned professor and to do experiments in a lab with a faculty mentor.
Teaching at the frontier is expensive. It is the opposite of the scalable, one-size-fits-all model that defines general education. Faculty are already too burdened by state-mandated learning outcomes and assessments that measure absorption of old knowledge. They would prefer to engage students in the messy, uncertain, and exhilarating process of creating new knowledge.
An undergraduate education should be organized around three questions: What do we know? How do we know it? And what remains unknown? This pedagogy cannot be done at scale. It requires direct engagement with primary sources, raw data, and research methodologies. Even now, as new studies praise AI for its potential to promote equity in science education, they repeat the old mistake of ignoring the outdatedness of the science being taught.
Ultimately, using AI for science education is an opportunity both to update stagnant content and to radically change the structure of general education. AI is forcing a re-evaluation of what constitutes valuable learning in an age of instantaneous information. The textbook era needs to end.
If U.S. universities are to remain competitive, students must work at the frontier. It is the only part of education that cannot be automated, and the only thing worth the price of tuition.
[1] The NSF has paid attention to inequities in the STEM workforce. New studies address social inequities, showing that AI-driven platforms can “promot[e] inclusivity and equity in science education,” but the studies don’t mention outdated science.
Old physicist turned engineer here. You aren't really going to understand the issues they are considering at the frontier before you understand and master the basics. And it takes a long time to learn the basics. Now I think that some of the fundamentals can be taught better using more modern techniques - the Gibbs/Heaviside formulation of electromagnetism is probably better handled via a geometric calculus formulation - which also better prepares the student for dealing with spin issues in Quantum Mechanics, but you need to get the student familiar with a lot of areas and techniques before you can start handling the research issues. My experience is that some seminar classes for the best undergraduate students may be approaching such areas in their senior year; typically students start approaching the research areas in or after their second year in graduate school. You could certainly give some very interesting classes overviewing what is going on in the research areas, but that is either additional classes or slows down the development of subject mastery.
There were several approaches to modernizing the introductory curriculum in physics in the 60's - the Berkeley physics curriculum and the Feynman lectures come to mind. The Feynman lectures were and are excellent, and some of the Berkeley books were excellent - but in retrospect both were instructional failures - experience showed that only the brightest students could handle them.
I attrited out of math for mathematicians in one semester - I was interested in using math, not doing math. I got a good grade, but I hated it. I still ended up taking a LOT of math classes, but I was not interested in proofs for proofs' sake. I have read comments from physics professors that they seem to be running a giant filter. I think the attrition rate when I was studying physics as an undergraduate was in excess of 95%.
I would note that being interested in a subject is necessary but not nearly sufficient. Some fields inherently require substantially above normal intelligence and may require other characteristics as well. My Junior mathematical physics text book noted that an understanding and facility with the math was necessary but not sufficient, as an understanding / intuition of the behavior of the physical systems being modeled was also necessary. Now working the problem sets and discussing solution approaches helped develop that understanding and intuition, but you absolutely had to have the necessary ability.
In the past few decades physics students have had mathematical assistance from dedicated algebraic manipulation software - Maple, Mathematica, ... This helps, but you still have to acquire an understanding of the subject matter, the tools, and the techniques to use the software effectively.
Further to this, I’d add the observation in a book I read by a mathematician - either Marcus du Sautoy or Eugenia Cheng - about the difference between those students who went on to become mathematicians vs maths teachers.
That there comes a point, for a mathematician, where you are more excited by the things not proven (the ABC conjecture, the Riemann hypothesis, Generalised Moonshine) than in solving equations.
The conjecture was that a lot of maths (and science) teachers are the people who did very well at the puzzle-solving of school level maths and want to get back there.
They aren’t much interested in their field post-1800.
I suspect the same is true in the sciences