60 Comments
Kash

I agree that some adjustments are needed, particularly with traditional assignments, but lecture courses have been replaceable for a long time. Schools - especially elite schools - are providing signaling and networking.

James Gomes

I thought of this same idea after sharing it with a friend. How long have college courses been freely available at the public library? AI is a lot more accessible, but people are also lazy. They need nudges, and college could be the nudge. We could look at the MOOCs: they have been free, or very cheap, for a solid decade, and last time I checked most of their users already have graduate degrees.

With that said, I mostly agree, Prof. Robbins.

I thought most college degrees were rip-offs before AI. At some point parents and students aren't going to pay or go into debt to signal that they can use ChatGPT.

Jordan Myska Allen

Yes, in the optimistic world where AGI somehow doesn't disrupt/destroy everything, I think it will take more than four years for society to adjust, but less than ten. AGI is nearly unfathomably different from MOOCs because of how deeply hand-tailored to the individual and how constantly updated with new information it is... people will be much, much better prepared for the world with an AI tutor than by going to college, whereas people are not better prepared for the world by free online lecture courses.

That said, there's something about universities that has long been about socialization rather than actual learning... I went to a fantastic university with great courses, and I still learned more managing the bar on campus.

Eskimo1

Yes. You can literally go to Harvard for free by walking into lectures, yet nobody does.

So don’t fret - the real AI risks are much, much worse.

Bradley McDevitt

Wonderful work here. Forward-thinking and prescient. I just recorded my introductory lecture for my students and offered my own policy for working with AI, as my institution has not yet crafted one. In it I argued for the value of LLMs as research tools and structural aids but demanded students resist the temptation to pass their output off as their own work, citing as primary benefits the value of critical thinking and strong writing, both in their future scholarship and in their lives.

When I try to justify my own skill set, knowledge, or role in an institutional framework, I find myself turning to a Socratic inquiry, first brought to my attention by Harold Bloom: where is wisdom to be found? Certainly AGI prompts knowledge accumulation, logistical arguments, and rhetoric. But how do we trust it in the realm of wisdom? As I’m principally concerned with the psychology of creativity, there is an inherent embodiment issue. Aesthetics can be taught, but not why something is of the moment, or indeed salient to any given situation. How does AGI handle intuition?

Today’s knowledge institutions are top-heavy behemoths with little oversight and bloated administration budgets. We seem to be in an era of resistance to the swift oncoming, subtle knife of artificial intelligence and the obsolescence of the bureaucrat. As you rightly point out, there is an inherent redundancy to rote regurgitation. Though I’m sure you don’t suggest that the AGI inform the artist what to paint, what to sculpt, how to write, or for whom to make music.

As Nietzsche wrote in Dawn of Day, there’s a time for theater: when a people’s collective imagination becomes so weak that it must have its stories portrayed to it. But at all other times, it is too blunt an instrument. There’s not enough in it of dream or bird flight. However, I fear we are in that time now.

The time of AGI is also the time of the shaman’s rise, the artist’s power, and the poet’s pure vision. I have seen LLM poems that cut a neat caper, but they lack honesty for the simple reason that they are an amalgamation of billions of words with no organizing experience. A robot can squeeze orange juice for you, but it will never taste it.

Hollis Robbins (@Anecdotal)

More deans need to be part of the conversation. So far I'm the only one and I'm an ex-dean!

Jacob Felson

Your analysis of how AGI might reshape academic research raises important points, though I find several of your assertions problematic.

First, your claim that "AGI systems launching now can reason, learn, and solve problems across all domains, at or above human level" appears to contradict the consensus among AI researchers themselves, who broadly agree that true AGI has not yet been achieved.

Second, while you suggest that professors who cannot surpass AGI in generating new knowledge may become obsolete, this overlooks the fundamental reality of higher education: most institutions prioritize teaching over research. Consider that despite the widespread availability of high-quality recorded lectures online, students continue to enroll in traditional colleges and attend live classes in significant numbers. This persistence of in-person education—even when potentially superior alternatives are freely available—raises intriguing questions about the nature of effective teaching that predate any AGI considerations.

Additionally, your argument would benefit from clearly distinguishing between research universities and other types of higher education institutions. There's a common tendency among research faculty to universalize their experience, assuming all professors focus primarily on research and all students are traditional full-time undergraduates living on campus. This oversimplification undermines what could otherwise be a more nuanced analysis of AGI's potential impact on different educational contexts.

Hollis Robbins (@Anecdotal)

This is the conversation I want to have, yes. My context is watching my former institution, Sonoma State (where I was a dean for four years), implode financially, with 46 faculty, some full professors, laid off. The whole philosophy department! SSU is not an R1 institution; mostly it's teaching. A week later, the entire system contracts with OpenAI for all students and faculty (500K people) to get access to good models. Did these faculty have the opportunity to make the case for their own very real value? No, they did not, and it pains me. My point is that faculty expertise does matter; it matters more than ever. It may not matter to administrators.

JMangahas

I was also perplexed by this characterization of AGI “launching now.” More precisely, I wondered which unfortunate faculty don’t possess “expertise [that] surpasses current AI.” Likewise those courses that fail to “demonstrate specific knowledge transfer or skill development that exceeds AGI capabilities.” For now I think I’d refuse to write a memo justifying my contributions against the standard of a non-existent AGI, while wondering if this vision of the future of higher ed is, as someone in earlier comments suggested, Swift-style satire. It certainly is an odd view of higher education, dismissive of everything beyond “knowledge/skill transfer” as mere “summer camp.” Plus I shudder to think of certain student services, in particular mental health counseling, taken over by “AGI.”

Arnold Kling

You make it sound like the student is there to acquire a basket of knowledge, sort of like a basket of fruit. If the AGI has all the knowledge, there is no need to acquire it at all.

I think that the goal of a college education has more to do with knowledge *transfer*. Learning to apply what you learn in one setting to a slightly different setting. If humans no longer need to do that, then a lot more becomes obsolete than lecture courses.

Hollis Robbins (@Anecdotal)

One of the conversations we need to be having is who is training this new generation of faculty.

Mark Humphries

A really engaging discussion! One of the problems I keep trying to work through is how students will learn the content and skills necessary to get to "the last mile" effectively. The problem, as I see it, is that in order to use AI effectively you need to possess a lot of basic foundational information. It is hard to ask the right questions without that. In a future world where AGI asks those questions for us, it would be impossible for us to do anything with an output that we don't understand. But teaching basic knowledge and the mechanics behind research seems really inefficient and slow in a world where AI can complete the tasks we would assign for training in seconds.

Hollis Robbins (@Anecdotal)

Yes, this is the conversation we need to be having. The way I see it, faculty need to be nose-deep in AI in order to see the last mile and explain what AI can't (yet) do. The challenge I'm seeing is getting faculty to wade into AI.

Mike Betz

The pace of change is really challenging. By the time I've figured out the limits of the current models and the implications for my courses, new models and tools are out and I'm starting all over.

Matthew David Segall

I’m still seeing very stupid mistakes from even the most advanced LLMs, but maybe that’s not the point, because the limitations (i.e., the machine not actually having any fucking clue what the words it spits out mean) will soon be overcome? I am impressed by the newer reasoning models’ ability to answer hard philosophical questions, but every few lines they just reorder some words in a way that, while grammatically correct, shows they don’t get the deeper concepts at play at all. In terms of universities becoming obsolete, I am more worried about STEM than the humanities, since the publishable goods are more easily formalizable than good literature and creative philosophy are. Perhaps the "value proposition" of university education can return to what it used to be: helping foster sound judgment and good taste. Whatever "AGI" ends up looking like (the goalposts are always shifting), I hope our society does not forgo the importance of wise human judgment for the most important decisions.

Deb Schiano

I so get the need for change; however, teaching should not be reduced to information transfer. Teaching is about fostering in students the skills and dispositions necessary for learning. Do we no longer cherish the wisdom gained by thinking about ideas from various viewpoints, drawing original conclusions, and applying those learnings to future scenarios? Yes, you can get information from AI; yes, you can make that information as broad or specific as you like; yes, AI can solve problems; yes, it can even offer decisions based on certain parameters. However, is this what we want? Do we want people consulting AI every time their child asks for advice, or for choosing words that show love and care? Without wisdom, humans are more like robots. I’m sorry, I have greater hopes for us than that.

David Pierce

I take it this is satire, like *A Modest Proposal,* showing the absurd conclusion of a belief that education is "information transfer."

Monisha Pasupathi

Or the beginning of a dystopian vision of the future? Or utopian, within some perspectives? In any case, at least two questions always nag at me in conversations about AI and education.

One is about the way our species is highly relational - even for the most technical of topics - such that it may matter a great deal that we talk with one another about knowledge, that we negotiate knowledge amongst ourselves, that we have the sense that "we" know things rather than that "I" know things based on interactions with Claude or its successors. No relational approach to education can happen with AI yet, and perhaps ever.

The other is that becoming the people who pose the questions requires a working-through of the foundations that I'm not sure can happen in an information-transfer approach to knowledge. When you don't construct the knowledge yourself, when you are simply told things, you can't work with them in the way you would need to in order to move forward.

In my research mentoring life, all of my students know how to compute basic statistics and draw the correct inferences - you can learn that from textbooks. Relatively few understood "how" the statistical machinery worked - but those students were capable of innovation and of pushing things forward in a whole different way. And to bring this full circle: those students spent a lot of time hanging out with other stats-motivated people with varied levels of expertise, making sense of things in a constructive, rather than information-transfer, way.

Maybe there are visions where AGI can help in that constructive process in a relational way, but for that to be true requires exquisite capacities for reading human emotional and cognitive signals and adapting to and engaging with them. We humans often already have abilities that let us do that (not necessarily all in the identical way, given neurodiversity).

Hollis Robbins (@Anecdotal)

Yes yes and yes! My next post is about how faculty are truly hamstrung by the "learning outcomes" cage that does not allow them to teach in human ways, leaning on their expertise, experience, and humanity. If AGI and wholly restructuring the university allows human faculty to be human, I am all for it.

Bill Benzon

Yikes!

"In the humanities, it’s scholars working with newly discovered primary sources, untranslated manuscripts, or archaeological evidence not yet processed by AI, as well as those creating fundamentally new interpretive frameworks that transcend AGI’s pattern-recognition capacities."

Let me offer an example of a traditional framework for literary analysis that seems to transcend current LLMs. As for AGI, I simply don't know what that means, not in any robust way, nor does anyone else as far as I can tell. Sure, if and when we produce an AI that has these certainly by-no-means-rocket-science capabilities, that AI will be able to analyze texts to see whether or not they exhibit ring-form organization. But it's not clear to me that we know how to create such a beast.

Ring-composition is a traditional idea, one mostly employed in classics and Biblical studies. I became aware of it through correspondence with the late Mary Douglas early in the millennium. She was using it to analyze biblical texts, but also Tristram Shandy. I set out to find ring composition in more modern texts and quickly found it. It's in the 1954 Japanese film Gojira (Godzilla in English). Obama's eulogy for Clementa Pinckney exhibits ring composition, as does Joseph Conrad's Heart of Darkness (one of the most popular texts in college lit courses).

It's quite possible that 28 of Shakespeare's plays are ring compositions: about five years ago James Ryan published a book arguing that no fewer than 28 of them are. Given Shakespeare's importance, that would seem to be an important question. His book has not, as far as I know, received a full review. No one has checked his claim. I haven't done it because of the opportunity cost: I'd have to read each play and conduct my own analysis. That would take me a month or more, and there's no place I could publish the result.

Finding ring-form is tricky. In my experience, there's nothing about these texts that says they're ring form. I sorta' find these structures by accident. There's something in a text that grabs my attention, so I start poking around. Eventually I create a table where each row corresponds to a section of the text. These tables can easily extend over three or four or more typed pages. The first column is simply an identification number. The second column indicates where that section of text is located (page numbers, timings for films). The third column contains a short account of what happens in that section. There's often a fourth column I use for various notes. It's by analyzing such tables that I am able to find ring-composition.
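For anyone who wants to play along in code, here is a minimal sketch of that table as a data structure, with a toy mirror-check at the end. The Python names, section summaries, and timings are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Section:
    """One row of a ring-composition analysis table."""
    sid: int        # column 1: identification number
    location: str   # column 2: page numbers, or timings for films
    summary: str    # column 3: short account of what happens
    notes: str = "" # column 4: miscellaneous observations

# A toy fragment of such a table (summaries and timings invented):
table = [
    Section(1, "00:00-00:09", "a ship is destroyed at sea; the threat emerges"),
    Section(2, "00:09-00:20", "island villagers suffer; old rites are recalled"),
    # ... middle sections elided ...
    Section(8, "01:20-01:29", "a rite of sacrifice is performed"),
    Section(9, "01:29-01:36", "the threat is destroyed at sea"),
]

# Hunting for ring-form means comparing section i against section n-1-i,
# looking for a second half that mirrors the first around a central pivot.
n = len(table)
for i in range(n // 2):
    a, b = table[i], table[n - 1 - i]
    print(f"{a.sid} <-> {b.sid}: {a.summary!r} vs {b.summary!r}")
```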

As I've said, doing this is not rocket science. It doesn't require a high degree of arcane abstract reasoning. It does require attention to detail, and it requires you to look back and forth and around and through the analysis table, comparing and contrasting this with that and the other thing, time and again. It's not easy, but it's not theory either. It's craft.

Anyhow, I ended up discussing this with Claude 3.5. Claude remarked: "This raises interesting questions about LLMs and ring composition. If these structures are subtle enough that even human scholars find them 'by accident,' how might they be encoded in the model's representations? The transformer architecture might capture such patterns implicitly through attention mechanisms operating at different scales, but detecting them deliberately would require new analytical approaches." That's a very interesting issue. That's why I have my doubts about the capabilities of the (currently) mythic AGIs.

The discovery process has two requirements that are tricky: 1) you have to be able to treat language as an object for inspection and analysis, and 2) you have to be able to move back and forth 'over' a text. LLMs have some capacity for the first. They can tell you the parts of speech of words in a sentence, for example. But to conduct ring-form analysis you have to be able to think of various ways you can classify a chunk of text whose boundaries may not be obvious. That requires you to juggle a number of ways of thinking about chunks of text, of classifying them, to see what 'clicks.' I'm not at all sure LLMs can do that. And this business of moving back and forth over a table, that's a bit like keeping track of partial results in many-digit arithmetic. We know that's difficult for LLMs and the difficulty is in the architecture.

For these reasons it is by no means obvious to me that we're going to have an AI that can do this anytime soon. Believe me, I wish it weren't so. I think the question is an important one. If most of Shakespeare's plays are ring-forms, we need to know that. What other texts are ring forms? Beyond that, what other formal patterns are there? We don't know. We don't have a standard inventory. I'd love to have computer help on this. But...

Anyhow, here's where you can find my complete conversation with Claude on these matters: https://www.academia.edu/127480159/From_LLM_mechanisms_to_ring_composition_A_conversation_with_Claude_3_5_Working_Paper

James

AI excels at pattern recognition. If current LLMs cannot yet detect ring composition, then additional fine-tuning on a corpus of labelled ring-composition examples (the text plus your annotated tables) should be sufficient for them to learn it.
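For concreteness, a single labelled example might look something like this sketch; every field name here is invented for illustration, since no such corpus exists yet:

```python
import json

# Hypothetical shape of a single fine-tuning example: the model reads the
# text and must reproduce the section table plus a ring/no-ring verdict.
example = {
    "text": "<full text of the work, or a chunk of it>",
    "sections": [
        {"id": 1, "span": "pp. 1-14", "summary": "departure; the frame opens"},
        {"id": 2, "span": "pp. 15-40", "summary": "the descent begins"},
        # ... remaining sections ...
    ],
    "label": "ring",            # or "not_ring"
    "center": 5,                # id of the pivot section, if ring-form
    "pairs": [[1, 9], [2, 8]],  # mirrored section pairs
}

# Append examples to a JSONL file, a common format for fine-tuning data:
with open("ring_corpus.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```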

Bill Benzon

I think you should read Claude's comment to me again and think about it carefully. You might even try reading the paper. There's a world of difference between 'registering' a pattern implicitly and identifying it deliberately and explicitly.

I've thought about this long and hard. I'm not holding my breath. Coming up with that labeled corpus is more easily said than done. We're not talking about annotating sentences or labeling pictures. It would take at least a day to do a single Shakespeare text, and they're relatively short. A whole novel? That depends. Two, three, four days, a week? This is not something you can get Mechanical Turk workers to do. And we've got the issue of the size of the context window. How many labeled examples do we need? 100? Even then it is by no means obvious to me that an LLM could do the job.
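Back-of-envelope, those numbers imply a daunting annotation bill; a trivial sketch, where every figure is a guess taken from the paragraph above, not a measurement:

```python
# Rough annotation cost under the guesses stated above (not measurements).
examples_needed = 100   # "How many labeled examples do we need? 100?"
days_per_play = 1       # "at least a day" for a single Shakespeare text
days_per_novel = 4      # "two, three, four days, a week?" for a whole novel

low = examples_needed * days_per_play
high = examples_needed * days_per_novel
print(f"{low}-{high} expert annotator-days")  # 100-400 expert annotator-days
```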

And forget about movies. It's going to be a while before we have a device that can watch a feature film and make sense of it.

David Pierce

"If most of Shakespeare's plays are ring-forms, we need to know that." Really? We do we need to know that? If somebody (or some computer program) says "Yes, they are," what then?

Bill Benzon

You mean you don't care how these texts are constructed? You don't care how they work in the mind?

I know perfectly well the profession doesn't care about form. Oh, "formalism" is an important term. But its meaning is a matter of dispute, and anyhow it seems mostly a philosophical stance about meaning: a stance that says literary meaning is special, independent of context, and so we can interpret texts without regard to context. Whatever it is, formalist criticism isn't a commitment to describing and analyzing the formal properties of texts – except, perhaps, and only perhaps, for rhyme and meter in poetry.

David Pierce

Sorry, I meant to ask *why* we needed to know that thing.

You seemed to be taking seriously the commission to find things AI can't do (yet), when AI cannot do education at all, as I understand it.

My own development might have been enhanced by more time, say, learning to play a musical instrument.

Bill Benzon

And music is all about form. If you study classical counterpoint you'll learn techniques for taking a small melodic shape and extending it into an elaborate melody by transforming it in various ways. So take a melodic shape (such as the five-note motif in Spielberg's "Close Encounters") and turn it upside down. That's called, you guessed it, inversion. Now run it backwards; that's the retrograde. If you turn it upside down and run it backwards, you get a retrograde inversion. If you follow a melody with its retrograde inversion, you get what I've been calling a ring-form, thus: A, B, C...X...C', B', A'.
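For the code-minded, a minimal sketch of those transformations, with pitches as MIDI note numbers; the motif spelling (D-E-C-C-G) is an approximation and the function names are my own:

```python
# Pitches as MIDI note numbers; roughly the Close Encounters motif.
motif = [62, 64, 60, 48, 55]  # D4 E4 C4 C3 G3

def inversion(m):
    """Turn the shape upside down: flip each interval around the first note."""
    return [2 * m[0] - p for p in m]

def retrograde(m):
    """Run the shape backwards."""
    return list(reversed(m))

def retrograde_inversion(m):
    """Upside down and backwards."""
    return retrograde(inversion(m))

# Following a phrase with its retrograde inversion yields the mirror shape
# A, B, C ... C', B', A' described above.
ring = motif + retrograde_inversion(motif)
print(ring)  # [62, 64, 60, 48, 55, 69, 76, 64, 60, 62]
```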

Uncertain Eric

As stark as this is, it's still not stark enough. Any course on AI being designed right now will already be outdated by September. The speed at which this technology is evolving makes multi-year academic planning fundamentally unworkable. Universities, which are structured around slow institutional adaptation, simply cannot keep pace with an industry that reinvents itself every business quarter.

And this problem won’t be limited to universities. The Software-as-a-Service to Employee-as-a-Service paradigm shift is going to gut the workforce at an unprecedented scale. By the end of 2025, the collapse will be obvious—entire industries hollowed out, professional roles automated away, and the economic structure of white-collar employment fundamentally altered. This is the final stretch before the robots come for everything.

Universities should be the kind of institutions capable of confronting this reality. They have the expertise, the frameworks, the capacity for deep analysis. But in their current state, they’re structurally incapable of responding to it. Administrative bloat, bureaucratic inertia, financial entanglements, and the need to justify their existence in the face of obsolescence all make it nearly impossible for universities to address this crisis with the urgency it demands.

What’s happening now isn’t just a transformation of higher education. It’s a systemic shift that will leave nothing untouched. Academia, like the rest of the knowledge economy, has no future unless it radically rethinks its role immediately. And even then, survival is not guaranteed.

Hollis Robbins (@Anecdotal)

I agree.

David Borinsky

Students already have access to the full range and depth of information relevant to a particular subject matter. It’s called Books. Therefore, to that extent, this post is over-invested in a false premise. (Just to be clear, I enjoyed and appreciated the post nonetheless)

Hollis Robbins (@Anecdotal)

Books are excellent, yes!

Adam Train

A fascinating take. I can say with confidence that I learned precisely zero facts throughout my university education that could not have been taught to me by Deep Research. I use "facts" carefully because I did benefit from the experience in other ways—probably not $100,000 worth of benefit, though, netted against whatever else I could've done with those years instead.

I think the majority of students I met when I went to school (2013) were there because that's The Thing You Do, not because of any intrinsic motivation to learn. That's not because the motivation didn't exist, but because college was already losing its symbolic zeal. The internet means you don't need a school's giant library for 90% of what you want to know. Most lecturers are frankly not very good, especially in lower-division courses. I went to a large, reputable public university, and almost everyone I knew who enjoyed learning was underwhelmed by the academic experience there. I don't know anyone in the last ten years who has convincingly represented the typical (non-elite/Ivy) undergrad experience any differently than that in the aggregate.

I think AI will have a similar concentrating effect among schools as it will in the private sector: the very best will continue to be highly valued, and everyone else (in this case, all the non-elite universities) will fight for their life and probably die out. But that'll take a while for culture to change.

Isaiah Hemmen

As a follow-up, I then fed GPT the following:

“If Dewey, speaking for university English professors, were prompted with the following, what would he say in his defense? Craft a detailed memo.

What specific knowledge do I possess that AGI does not? What unique insights or capabilities can I offer that exceed AGI systems? Which students, and in which topics, would benefit enough to pay to learn from me and why?”

GPT’s response sheds a positive light on how faculty can stand their ground:

**Memo: The Enduring Value of the University English Professor in the AGI Era**

*To: University Administration and Fellow Educators*

*From: [John Dewey, speaking on behalf of University English Professors]*

*Subject: Defending the Unique Contributions of English Faculty Beyond AGI Capabilities*

*Date: [Current Date]*

---

**Introduction**

In light of the recent call to quantify our value solely by the metric of “what specific knowledge I possess that AGI does not,” I submit this memo as a defense of the uniquely human contributions that English professors offer. While Artificial General Intelligence (AGI) has made impressive strides in processing, analyzing, and even generating language, it remains fundamentally limited in its ability to engage in the kind of reflective, experiential, and community-oriented education that lies at the heart of our discipline. My purpose here is to articulate the distinct insights, capabilities, and educational roles that we, as university English professors, bring to the academic and personal development of our students.

---

**1. Cultivating Critical and Reflective Thinking**

- **Beyond Data Processing:** AGI systems excel in retrieving and synthesizing vast amounts of information. However, the essence of literary studies is not in the regurgitation of facts or surface-level analysis but in the deep interrogation of texts and ideas.

- **Fostering Inquiry:** As educators, we nurture an environment where students learn to question, interpret, and critique—skills that are the very bedrock of democratic thought and reflective judgment. This process is iterative and dialogical, relying on human experience, intuition, and ethical consideration—facets of learning that AGI cannot replicate.

---

**2. Engaging with Nuance, Context, and Ambiguity**

- **Interpretation in Context:** Literature is rich with nuance, metaphor, and historical context. While AGI can identify patterns and themes, it cannot replicate the human ability to perceive subtleties or appreciate the lived experiences that shape literary expression.

- **Dynamic Interpretation:** We offer personalized guidance in interpreting texts, drawing on both scholarly traditions and the contemporary socio-cultural realities of our students. This dynamic interpretive process encourages students to forge connections between the text and their own experiences, an insight that transcends algorithmic pattern matching.

---

**3. Fostering Democratic Discourse and Community**

- **A Space for Dialogue:** The classroom is a crucible for democratic engagement. It is here that students learn to articulate diverse viewpoints, engage in respectful debate, and develop a sense of community through shared inquiry.

- **Mentorship and Empathy:** Our role extends far beyond knowledge transmission. We mentor students in the art of critical discourse, helping them become thoughtful, empathetic citizens. The human touch in guiding such transformative experiences is something AGI, regardless of its sophistication, cannot offer.

---

**4. Encouraging Creativity and Personal Insight**

- **Inspiration for Original Thought:** The study of literature invites creative exploration and the cultivation of a unique voice. We help students navigate and generate original ideas, fostering creative problem-solving and imaginative expression.

- **Individualized Learning:** Each student’s journey through literature is deeply personal. We tailor our approach to meet diverse intellectual and emotional needs, an individualized mentorship that builds self-awareness and critical self-reflection—a process inherently bound to human interaction.

---

**5. Integrating Ethical and Civic Education**

- **Moral and Ethical Inquiry:** Literature has long served as a catalyst for examining ethical dilemmas and the human condition. We guide students in exploring moral complexities and developing their own ethical frameworks—a practice that informs responsible civic engagement.

- **Building a Foundation for Democratic Life:** In teaching literature, we contribute to the formation of well-rounded citizens capable of navigating the moral ambiguities of modern society. This commitment to ethical education is not a matter of technical skill but of human judgment, empathy, and the lived experience of teaching and learning.

---

**Conclusion**

In response to the challenge of demonstrating what specific knowledge we possess that AGI does not, I assert that the value of the university English professor lies not in isolated bits of information or replicable algorithms but in the art of teaching itself—a process that involves inquiry, dialogue, context, creativity, and ethical guidance. These human dimensions of education form the core of our discipline, ensuring that students not only learn to interpret texts but also to engage critically with the world around them. While AGI may serve as a valuable tool for information processing, it cannot substitute for the mentorship, reflective dialogue, and communal learning experiences that define the human pursuit of knowledge.

In short, our role is to inspire, challenge, and cultivate the critical faculties of our students—an endeavor that remains irreplaceable by any machine.

---

Respectfully,

[John Dewey]

On Behalf of University English Professors

Joel M De Gan

I work in tech, and a while back my coworkers and I were all talking about the question of "Why are we still funding our kids' college funds?" We pretty much all concluded that there isn't any reason to do so anymore.

I like your take on this; my only point would be that it assumes a 'slow' takeoff, which is seeming less likely by the day right now.

Dirichlet-to-Neumann

The first issue with this plan is that you can't really run before you learn to walk. You won't get to "domain-specific expertise above what AGI can do" without first going through "basic general lecture courses".

The second issue is that as AGI progresses, those few domains where expert humans still have an edge on AI will simply vanish.

JMangahas

Hi! I take issue with (among other things) this prediction: “In the AGI era, research splits into two intertwined tracks. First, there are research faculty who both propose new questions to AGI—shaping its line of inquiry with domain-specific insight—and validate the system’s theoretical suggestions through experimental design and documentation. Second, there are laboratory specialists who perform the physical experiments, calibrate equipment, and record results.” To me this takes a narrow view of research. It excludes, for example, theoretical mathematics, where the division of labor I am familiar with is the opposite: computers generate guesses, sift through examples, and try vastly many experiments, while humans make sense of that output, do the creative work of teasing out theory, and the very human work of identifying what is meaningful about whatever mathematical phenomena are discovered, and what is worth pursuing and sharing with others.

Hollis Robbins (@Anecdotal)

This is the kind of fine grained conversation that I want to have inside the institution. It isn't happening there yet! There are no curriculum committees discussing AI or AGI yet.
