35 Comments

I agree that some adjustments are needed, particularly with traditional assignments, but lecture courses have been replaceable for a long time. Schools - especially elite schools - are providing signaling and networking.

---

I thought of this same idea after sharing it with a friend. How long have college courses been freely available at the public library? AI is a lot more accessible, but people are also lazy. They need nudges, and college could be the nudge. We could look at the MOOCs: they have been free, or very cheap, for a solid decade, and last time I checked most of their users already have graduate degrees.

With that said, I mostly agree, Prof. Robbins.

I thought most college degrees were rip-offs before AI. At some point parents and students aren't going to pay or go into debt to signal that they can use ChatGPT.

---

Wonderful work here. Forward-thinking and prescient. I just recorded my introductory lecture to my students and offered my own policy for working with AI, as my institution has not yet crafted one. In it I argued for the value of LLMs as research tools and structural aids but demanded that students resist the temptation to pass their output off as their own work, citing as primary benefits the value of critical thinking and strong writing, both in their future scholarship and their lives.

When I try to justify my own skill set, knowledge, or role in an institutional framework, I find myself turning to a Socratic inquiry, first brought to my attention by Harold Bloom: where is wisdom to be found? Certainly AGI prompts knowledge accumulation, logistical arguments, and rhetoric. But how do we trust it in the realm of wisdom? As I’m principally concerned with the psychology of creativity, there is an inherent embodiment issue. Aesthetics can be taught, but not why something is of the moment, or indeed salient to any given situation. How does AGI handle intuition?

Today’s knowledge institutions are top-heavy behemoths with little oversight and bloated administration budgets. We seem to be in an era of resistance to the swiftly oncoming, subtle knife of artificial intelligence and the obsolescence of the bureaucrat. As you rightly point out, there is an inherent redundancy to rote regurgitation. Though I’m sure you don’t suggest that the AGI inform the artist what to paint, what to sculpt, how to write, or for whom to make music.

As Nietzsche wrote in Dawn of Day, there’s a time for theater, when a people’s collective imagination becomes so weak that it must have its stories portrayed to it. But at all other times, it is too blunt an instrument. There’s not enough in it of dream or bird flight. However, I fear we are in that time now.

The time of AGI is also the time of the shaman’s rise, the artist’s power, and the poet’s pure vision. I have seen LLM poems that cut a neat caper, but they lack honesty for the simple reason that they are an amalgamation of billions of words with no organizing experience. A robot can squeeze orange juice for you, but it will never taste it.

---

More deans need to be part of the conversation. So far I'm the only one, and I'm an ex-dean!

---

Your analysis of how AGI might reshape academic research raises important points, though I find several of your assertions problematic.

First, your claim that "AGI systems launching now can reason, learn, and solve problems across all domains, at or above human level" appears to contradict the consensus among AI researchers themselves, who broadly agree that true AGI has not yet been achieved.

Second, while you suggest that professors who cannot surpass AGI in generating new knowledge may become obsolete, this overlooks the fundamental reality of higher education: most institutions prioritize teaching over research. Consider that despite the widespread availability of high-quality recorded lectures online, students continue to enroll in traditional colleges and attend live classes in significant numbers. This persistence of in-person education—even when potentially superior alternatives are freely available—raises intriguing questions about the nature of effective teaching that predate any AGI considerations.

Additionally, your argument would benefit from clearly distinguishing between research universities and other types of higher education institutions. There's a common tendency among research faculty to universalize their experience, assuming all professors focus primarily on research and all students are traditional full-time undergraduates living on campus. This oversimplification undermines what could otherwise be a more nuanced analysis of AGI's potential impact on different educational contexts.

---

This is the conversation I want to have, yes. My context is watching my former institution, Sonoma State (where I was a dean for four years), implode financially, with 46 faculty, some full professors, laid off. The whole philosophy department! SSU is not an R1 institution; mostly it's teaching. A week later, the entire system contracts with OpenAI for all students and faculty (500K) to get access to good models. Did these faculty have the opportunity to make the case for their own very real value? No, they did not, and it pains me. My point is that faculty expertise does matter; it matters more than ever. It may not matter to administrators.

---

A really engaging discussion! One of the problems I keep trying to work through is how students will learn the content and skills necessary to get to "the last mile" effectively. The problem, as I see it, is that in order to use AI effectively you need to possess a lot of basic foundational information. It is hard to ask the right questions without that. In a future world where AGI asks those questions for us, it would be impossible for us to do anything with an output that we don't understand. But teaching basic knowledge and the mechanics behind research seems really inefficient and slow in a world where AI can complete the tasks we would assign for training in seconds.

---

Yes this is the conversation we need to be having. The way I see it, faculty need to be nose-deep in AI in order to see the last mile and explain what AI can't (yet) do. The challenge I'm seeing is getting faculty to wade into AI.

---

The pace of change is really challenging. By the time I've figured out the limits of the current models and the implications for my courses, new models and tools are out and I'm starting all over.

---

Yikes!

"In the humanities, it’s scholars working with newly discovered primary sources, untranslated manuscripts, or archaeological evidence not yet processed by AI, as well as those creating fundamentally new interpretive frameworks that transcend AGI’s pattern-recognition capacities."

Let me offer an example of a traditional framework for literary analysis that seems to transcend current LLMs. As for AGI, I simply don't know what that means, not in any robust way, nor does anyone else as far as I can tell. Sure, if and when we produce an AI with these certainly by-no-means-rocket-science capabilities, that AI will be able to analyze texts to see whether or not they exhibit ring-form organization. But it's not clear to me that we know how to create such a beast.

Ring composition is a traditional idea, one mostly employed in classics and Biblical studies. I became aware of it through correspondence with the late Mary Douglas early in the millennium. She was using it to analyze biblical texts, but also Tristram Shandy. I set out to find ring composition in more modern texts and quickly found it. It's in the 1954 Japanese film Gojira (Godzilla in English). Obama's eulogy for Clementa Pinckney exhibits ring composition, and so does Joseph Conrad's Heart of Darkness (one of the most popular texts in college lit courses).

It's quite possible that most of Shakespeare's plays are ring compositions. About five years ago James Ryan published a book arguing that no fewer than 28 of them are. Given Shakespeare's importance, that would seem to be an important question. His book has not, as far as I know, received a full review. No one has checked his claim. I haven't done it because of the opportunity cost: I'd have to read each play and conduct my own analysis. That would take me a month or more, and there's no place I could publish the result.

Finding ring-form is tricky. In my experience, there's nothing about these texts that announces they're ring-form. I sorta find these structures by accident. There's something in a text that grabs my attention, so I start poking around. Eventually I create a table where each row corresponds to a section of the text. These tables can easily extend over three or four or more typed pages. The first column is simply an identification number. The second column indicates where that section of text is located (page numbers, or timings for films). The third column contains a short account of what happens in that section. There's often a fourth column I use for various notes. It's by analyzing such tables that I am able to find ring composition.
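
For concreteness, here is a minimal sketch of that analysis table as a small Python structure. The four-column layout (ID, location, summary, notes) comes straight from the description above; the `Section` name, the sample rows, and the pairing helper are hypothetical illustrations, not anyone's actual tooling. The helper only lines up the outermost sections inward, which is the mechanical part; judging whether each pair really echoes the other remains the analyst's craft.

```python
from dataclasses import dataclass

@dataclass
class Section:
    """One row of the analysis table: a contiguous chunk of the text."""
    id: int           # column 1: identification number
    location: str     # column 2: page numbers, or timings for a film
    summary: str      # column 3: short account of what happens
    notes: str = ""   # column 4: miscellaneous observations

# Hypothetical rows; a real table can run to several typed pages.
table = [
    Section(1, "pp. 1-12",  "Narrator opens the frame story"),
    Section(2, "pp. 13-30", "Journey outward begins"),
    Section(3, "pp. 31-55", "Central crisis / turning point"),
    Section(4, "pp. 56-70", "Journey back, echoing section 2"),
    Section(5, "pp. 71-80", "Return to the frame, echoing section 1"),
]

def candidate_ring_pairs(sections):
    """Pair the outermost rows inward (1 with n, 2 with n-1, ...).
    In an A B C ... X ... C' B' A' ring each pair should echo the
    other; whether it actually does is a human judgment call."""
    n = len(sections)
    pairs = [(sections[i], sections[n - 1 - i]) for i in range(n // 2)]
    center = sections[n // 2] if n % 2 else None
    return pairs, center

pairs, center = candidate_ring_pairs(table)
for a, b in pairs:
    print(f"compare section {a.id} ({a.summary!r}) with section {b.id} ({b.summary!r})")
if center:
    print(f"center X: section {center.id} ({center.summary!r})")
```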

As I've said, doing this is not rocket science. It doesn't require a high degree of arcane abstract reasoning. It does require attention to detail, and it requires you to look back and forth and around and through the analysis table, comparing and contrasting this with that and the other thing, time and again, over and over. It's not easy, but it's not theory either. It's craft.

Anyhow, I ended up discussing this with Claude 3.5. Claude remarked: "This raises interesting questions about LLMs and ring composition. If these structures are subtle enough that even human scholars find them 'by accident,' how might they be encoded in the model's representations? The transformer architecture might capture such patterns implicitly through attention mechanisms operating at different scales, but detecting them deliberately would require new analytical approaches." That's a very interesting issue. That's why I have my doubts about the capabilities of the (currently) mythic AGIs.

The discovery process has two requirements that are tricky: 1) you have to be able to treat language as an object for inspection and analysis, and 2) you have to be able to move back and forth 'over' a text. LLMs have some capacity for the first; they can tell you the parts of speech of words in a sentence, for example. But to conduct ring-form analysis you have to be able to think of various ways to classify a chunk of text whose boundaries may not be obvious. That requires you to juggle a number of ways of thinking about chunks of text, of classifying them, to see what 'clicks.' I'm not at all sure LLMs can do that. And this business of moving back and forth over a table is a bit like keeping track of partial results in many-digit arithmetic. We know that's difficult for LLMs, and the difficulty lies in the architecture.

For these reasons it is by no means obvious to me that we're going to have an AI that can do this anytime soon. Believe me, I wish it weren't so. I think the question is an important one. If most of Shakespeare's plays are ring-forms, we need to know that. What other texts are ring forms? Beyond that, what other formal patterns are there? We don't know. We don't have a standard inventory. I'd love to have computer help on this. But...

Anyhow, here's where you can find my complete conversation with Claude on these matters: https://www.academia.edu/127480159/From_LLM_mechanisms_to_ring_composition_A_conversation_with_Claude_3_5_Working_Paper

---

AI excels at pattern recognition. If current LLMs cannot yet detect ring composition, then additional fine-tuning on a corpus of labelled ring-composition examples (the text plus your annotated tables) should be sufficient for them to learn it.
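
A minimal sketch of what assembling that labeled corpus might look like, assuming a generic prompt/completion JSONL layout of the kind many fine-tuning pipelines accept; the field names and the sample entry are illustrative, not any particular vendor's schema, and the reply below raises real doubts about how feasible building such a corpus would be.

```python
import json

# Each training example pairs a complete text with the analyst's annotated
# table and a ring/no-ring label. Schema is illustrative only.
examples = [
    {
        "title": "Heart of Darkness",
        "full_text": "...",  # the entire text would go here
        "is_ring": True,
        "table": [
            {"id": 1, "location": "part 1", "summary": "Frame: aboard the Nellie on the Thames"},
            # ... one row per section, mirroring the human analyst's table ...
        ],
    },
]

# Write prompt/completion pairs, one JSON record per line.
with open("ring_corpus.jsonl", "w") as f:
    for ex in examples:
        record = {
            "prompt": "Analyze the following text for ring composition:\n\n" + ex["full_text"],
            "completion": json.dumps({"is_ring": ex["is_ring"], "table": ex["table"]}),
        }
        f.write(json.dumps(record) + "\n")
```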

---

I think you should read Claude's comment to me again and think about it carefully. You might even try reading the paper. There's a world of difference between 'registering' a pattern implicitly and identifying it deliberately and explicitly.

I've thought about this long and hard. I'm not holding my breath. Coming up with that labeled corpus is more easily said than done. We're not talking about annotating sentences or labeling pictures. It would take at least a day to do a single Shakespeare text, and they're relatively short. A whole novel? That depends. Two, three, four days, a week? This is not something you can get Mechanical Turk workers to do. And we've got the issue of the size of the context window. How many labeled examples do we need? 100? Even then it is by no means obvious to me that an LLM could do the job.

And forget about movies. It's going to be a while before we have a device that can watch a feature film and make sense of it.

---

"If most of Shakespeare's plays are ring-forms, we need to know that." Really? We do we need to know that? If somebody (or some computer program) says "Yes, they are," what then?

---

You mean you don't care how these texts are constructed? You don't care how they work in the mind?

I know perfectly well the profession doesn't care about form. Oh, "formalism" is an important term. But its meaning is a matter of dispute, and anyhow it seems mostly a philosophical stance about meaning: a stance that says literary meaning is special and independent of context, so we can interpret texts without regard to context. Whatever it is, formalist criticism isn't a commitment to describing and analyzing the formal properties of texts – except, perhaps, and only perhaps, for rhyme and meter in poetry.

---

Sorry, I meant to ask *why* we needed to know that thing.

You seemed to be taking seriously the commission to find things AI can't do (yet), when AI cannot do education at all, as I understand it.

My own development might have been enhanced by more time, say, learning to play a musical instrument.

---

And music is all about form. If you study classical counterpoint you'll learn techniques for taking a small melodic shape and extending it into an elaborate melody by transforming it in various ways. So take a melodic shape (such as the five-note motif in Spielberg's "Close Encounters") and turn it upside down. That's called, you guessed it, inversion. Now run it backwards; that's the retrograde. If you turn it upside down and run it backwards you get a retrograde inversion. And if you follow a melody with its retrograde inversion, you get what I've been calling a ring-form, thus: A, B, C ... X ... C', B', A'.
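
A quick sketch of those transformations in code, treating a motif as a list of pitches (the MIDI-style numbers and the sample motif are assumptions for illustration; the Close Encounters motif itself is not reproduced). Inversion flips each interval around the first note, retrograde reverses the order, and motif + retrograde inversion yields the mirrored, ring-like shape just described.

```python
def invert(motif):
    """Flip each interval around the first pitch (melodic inversion)."""
    first = motif[0]
    return [first - (p - first) for p in motif]

def retrograde(motif):
    """Play the motif backwards."""
    return list(reversed(motif))

def retrograde_inversion(motif):
    """Upside down and backwards."""
    return retrograde(invert(motif))

# A hypothetical four-note motif as MIDI pitch numbers (C4 = 60).
motif = [60, 62, 64, 67]            # A  B  C  D
print(invert(motif))                # [60, 58, 56, 53]
print(retrograde(motif))            # [67, 64, 62, 60]
print(retrograde_inversion(motif))  # [53, 56, 58, 60]

# Motif followed by its retrograde inversion: the mirrored
# A B C D | D' C' B' A' ring shape.
ring = motif + retrograde_inversion(motif)
print(ring)                         # [60, 62, 64, 67, 53, 56, 58, 60]
```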

---

Spot on! And here is a controversial take.

The US educational system is absolutely world-class, simply the BEST. As such, like any leading incumbent, especially one as dominant in its sphere as US Higher Ed is, we will have the hardest time changing.

The resistance to change will be substantial.

Higher Ed today (unlike when I was in grad school in 1990) is not worth it. A disciplined, self-directed person can get a non-credentialed, world-class education for a tenth of the cost of a college degree in the US, even at an affordable state school.

Non-US educational institutions will have the opportunity to catch up and potentially overtake the US crown jewel, unless we demonstrate agility.

For example, consider this new institution in India: https://plaksha.edu.in/about-us

From what I understand, it places very low emphasis on traditional lectures and a huge emphasis on interdisciplinary, hands-on, project-based learning.

---

This is the link I intended to include earlier: https://buffalo.app.box.com/v/AI-Without-Fear

---

Reading now.

---

AGI does not, as yet, exist. And there are arguments to the effect that it never will exist. (E.g.: AI is universally bad at knowing when to chime in during a conversation.)

But more importantly, even if it did exist, we would not know where its limits are.

---

This seems to be focused pretty completely on how AGI changes the human capital development function of a university. But higher education's value also has a significant signaling component, even if one doesn't buy Bryan Caplan's claim that signaling drives the majority of the private monetary return to education. Beyond the obvious difficulty introduced in telling whether students have engaged in AI enabled cheating to get their valuable credentials, how might AGI change the signaling process?

---

This is a really important conversation, one I noticed on X that Bryan Caplan isn't having yet. In the scenario I set out, the "best" faculty are the ones who know many miles more than AI/AGI, so working with those faculty is worth paying for and worth signaling. "I worked with this or that distinguished professor" is something said in graduate school now, but not as an undergraduate. Undergrads SHOULD want to study with the best. So that may be a consequence.

---

Perhaps seminars should be saved? Or classes devoted to exercising critical-thinking and introspective-reflection muscles. The “true” value of college for me was learning how to think well (philosophically, critically, strategically, etc.), take on other perspectives, argue and debate and value things, and learn how to deploy my academic interests to create something socially useful or professionally relevant.

We can say that AGI can be a Socratic tutor, and that’s true, but as I see it, we need to build an academic and instructional architecture in which professors and TAs can motivate curious engagement with those tutors. To some degree, even the passive form of the seminar forced me to pay attention and learn from my peers. That in-person, full-body classroom, and assignments like analytical essays, forced me to improve my thinking skills.

Grad school (an MPA, finished this past May) taught me to improve how I work with sources, to prove everything that I said, and to translate it into practicable professional insights and implementations. It was mostly workshops for team-building to accomplish projects at lower stakes and with more mentorship than corporate work. I think these types of dialogue-based workshops will also require the type of “critical thinking professor” I am talking about. But maybe a PhD is not required to serve that role, and a “human tutor and teacher” is more appropriate even at the collegiate level.

I look forward to reading more!

---

Yes, 100%. As I have said to my friend Zena Hitz (https://x.com/zenahitz), who teaches at St. John's College, her institution is future-proof (or in this case AGI-proof) for all the reasons you say. I've been writing mostly about state-mandated general education requirements, which are poorly delivered across the country and yet may cost students $40K. It's unconscionable to me.

---

The irony here is that this post was most likely generated by AI.

---

I definitely used ChatGPT to spell-check, yes! It wanted me to change my tone to be more optimistic. I declined.

---

I’m a bit confused by how this coheres with earlier blog posts, which were much more optimistic about AI. Only a month or two ago you wrote about secretaries and their importance. What changed? Technological improvements? Because the technological improvements we’re seeing now seemed predictable a few months ago, at least based on what I was reading and listening to at the time.

I will also say that, assuming a sizeable space is preserved for humans as students, teachers, and researchers—and I don’t think it’s at all guaranteed that there will be space for this, particularly because institutional transformation will (I would guess) take some time and AI systems are improving rapidly—most of the humans who are left will be outliers on traits like intelligence, conscientiousness, and maybe introversion (if their job ends up consisting mostly of interacting with machines, and assuming such interactions can’t be adjusted to tap into the parts of our brains that respond to and need human social interaction). Right now, there’s a large (and woefully too late) cultural conversation occurring about the gulf between university grads and high school grads, about the role of universities in supercharging inequality, etc. I could see this becoming much worse.

---

Thank you for your excellent comment & readership. I'm still optimistic that humans have an essential place in the "last mile" that AI/AGI can't do. I'm getting increasingly frustrated that university leaders are not even mentioning AI in their marketing to tuition-paying students. So this post is for them!

---

Even before AI, the model of higher ed was in need of adjustment for the internet. The current model remains based on a time when one needed to be physically in a library and classroom, sometimes waiting on physical books from interlibrary loan via snail mail. Having to type papers manually and make corrections was time-consuming and laborious. For over 175 years the system has remained the same. I had many professors who, back in the day, did nothing more than occupy space.

---

You make it sound like the student is there to acquire a basket of knowledge, sort of like a basket of fruit. If the AGI has all the knowledge, there is no need to acquire it at all.

I think that the goal of a college education has more to do with knowledge *transfer*. Learning to apply what you learn in one setting to a slightly different setting. If humans no longer need to do that, then a lot more becomes obsolete than lecture courses.

---

I take it this is satire, like *A Modest Proposal,* showing the absurd conclusion of a belief that education is "information transfer."

---

Or the beginning of a dystopian vision of the future? Or a utopian one, within some perspectives? In any case, at least two questions always nag at me in conversations about AI and education.

One is about the way our species is highly relational, even for the most technical of topics. It may matter a great deal that we talk with one another about knowledge, that we negotiate knowledge amongst ourselves, that we have the sense that "we" know things rather than that "I" know things based on interactions with Claude or its successors. No relational approach to education can happen with AI yet, and perhaps ever.

The other is that becoming the people who pose the questions requires a working-through of the foundations that I'm not sure can happen in an information-transfer approach to knowledge. When you don't construct the knowledge yourself, when you are told things, you can't work with them in the way you would need to in order to move forward. In my research-mentoring life, all of my students knew how to compute basic statistics and draw the correct inferences; you can learn that from textbooks. Relatively few understood "how" the statistical machinery worked, but those students were capable of innovation and of pushing things forward in a whole different way. And to bring that back full circle in this reply: those students spent a lot of time hanging out with other stats-motivated people with varied levels of expertise, making sense of things in a constructive, rather than information-transfer, way.

Maybe there are visions where AGI can help in that constructive process in a relational way, but for that to be true requires exquisite capacities for reading human emotional and cognitive signals and adapting to and engaging with them. We humans already have abilities that let us do that (not necessarily all in the identical way, given neurodiversity).

---

Yes, yes, and yes! My next post is about how faculty are truly hamstrung by the "learning outcomes" cage that does not allow them to teach in human ways, leaning on their expertise, experience, and humanity. If AGI and a wholly restructured university allow human faculty to be human, I am all for it.

---

This is quite an eye-opening analysis.
