Sounds just dystopian enough.
This is a bold and fascinating vision, and one that exposes just how fragile the current scaffolding of higher ed is, especially in systems like CSU where standardization has already stripped so much meaning from teaching roles.
But I also worry: if we move to AI-led dialogue and prompt-based assessment for Gen Ed, will we finally confront the fact that our systems haven’t taught students how to think, ask, or challenge in the first place?
I've been writing about some of this on my own Substack—how education, behavior, and societal systems often reinforce surface-level outputs over deep engagement. In a world where AI can deliver the "content," the real question becomes: what kind of learners—and citizens—are we shaping?
Thank you -- this is the conversation we need to be having. I'll go look at your posts now...
I’m not sure Harvard would survive this solution. There is pressure to run an educational organization like a business.
But even if we assume that Harvard can resist the temptation of using low-cost AI instead of humans to teach, the only reason to do that is if humans are actually better teachers. (I believe they are.) That means thousands of students will be getting a lesser education by attending schools that adopt AI learning.
And in K-12 education? With the enormous pressure to at least slow rising costs, public schools will be sent down this path. And it will be poorly implemented in many states.
Great piece. As a Sac State grad who started in the JC system (Sierra College) I find this scenario fascinating. I can’t imagine the large systems actually doing this (too many entrenched interests). But I can imagine smaller, more entrepreneurial systems and independent colleges doing it. The future’s going to be wild.
When Silicon Valley professionals start sending their own kids to AI gen-ed centers, it will be credible. Not until.
As I said when I restacked this, I love the provocation. Skeptics of AGI, like me, get fussy about how AGI is such a vague term. And we keep warning about automating elements of education that matter.
It would be simpler if we all just agreed that AGI had arrived and moved on to more interesting questions. And rather than see ChatGPT as a threat to education, let's just automate the parts that don't really matter much, like learning outcomes assessment.
I still want to write the definitive "how we actually do this" essay with you, given your wealth of experience. Or nudge you to write it. The automation details will matter -- I'm more presuming than anything else.
This is a brilliant piece, Hollis. Thanks for putting your perspective out into the world. To show you just how far it reached—I came across it through Rob Berger’s newsletter on retirement planning and personal finance. That’s the power of interesting ideas! Please consider writing the definitive how-to guide on doing this. It has the potential to spark a whole new kind of conversation, one we really need!
Thank you Kevin! I would love to see the newsletter -- would it be possible for you to forward to Hollisrobb@aol.com? I would be grateful!
Will do!
Got your email thank you and I just subscribed to Adam Mastroianni — thank you for that!
I'm skeptical that automation, once it starts, can be contained. So maybe better not to start. I'm also hesitant to adopt an oracular frame. Too much the skeptic of what large AI models can do and of my own capacity to anticipate the future.
They’ll need to ensure that the person signed up for the course is the one taking it. And that they’re not just feeding Grok’s answers into ChatGPT’s prompts.
Exactly. This assumes a student is having a genuine dialogue and doing some kind of actual work to produce the step-by-step output, which of course can be easily outsourced.
This open access article responds to many of Hollis' points: https://link.springer.com/article/10.1007/s13347-025-00883-8
Very nice piece.
Do you have specific ideas on how staff could refocus on teaching that allows students to be more upwardly mobile? What might that look like?
I linked to a piece about LaGuardia community college -- that is the best idea I've seen.