Why not a small panel of experts instead of one czar?
1) an ML prof with expertise who worked through the "AI Winter"
2) a prof who can think with a business mindset - someone with expertise in growth, scale, and sustainability
3) a prof from an ed background with experience K through 16 (do those exist? sorry, career HS educator here) with specialization in UbD (Understanding by Design) and UDL (Universal Design for Learning)
4) a prof from psych/neuro background
5) a prof with theology and philosophy credentials
The university system, moving confidently into the future, will eventually see the benefits of AI and use it to develop the appropriate role and necessary tasks of an AI Czar. Then, in the always inevitable budget constraints brought on by creating new roles in the institution, the fearless leaders will understand AI can better handle the complicated requirements of an AI Czar, and replace she/he/them with AI. Problem solved.
Resistance is futile.
Czar for subsidiarity :)
As an “AI czar,” my portfolio is quite broad. The main focus is providing a bridge between faculty and administration on AI issues. This allows me to contribute to policy discussions, assist in the design of the AI strategic plan, and hold listening sessions with departments. But it’s also largely project driven. Last semester I led the creation of a free AI certificate that helped put our university on the map (nearly 50k enrollees across 150 countries!). This semester I’m working on designing a workflow for procuring new AI tools. I also give a ton of talks in the community (20 so far this year, with 10 more between now and September 2026). The thing that makes an AI czar successful isn’t actually technical knowledge, it’s how well they collaborate across campus.
Check out what Meta is doing with bottom-up, decentralized AI experimentation.
I still say that the number one opportunity for AI in higher education is to replace administrators, and strangely I never hear people in AI czar positions suggest this.
Great note, and excellently comprehensive. The Cornell post ends well: AI on tap.
The AI czar should be using aigents to handle each of these tasks without needing more headcount, demonstrating the promise of getting through paperwork and communications work in minutes instead of hours.
It’s the czar’s success that will lead others to want to copy it, which creates the motivation (important, but not emphasized here) to learn how to harness aigents to assist in getting work done. More and faster, yet usually better.
Thanks for posting this. As always, it contains illuminating insights and important suggestions about AI and higher ed. I am currently in the CAO role at a small private religious university, which has a different educational mission in many respects from a large research university and less emphasis on research and knowledge production—though not none. How does AI fit with formative models of education, if it does at all?
At the same time, a coherent approach to AI in teaching and learning is an urgent need. My specific question concerns the (apparent) assumption that all fields have “known knowns” that are best conveyed through some AI technology, and, relatedly, there is an optimal use for AI in every discipline in connection with those known knowns. Are those things obviously true? I have read your essays touching on general studies courses and AI, but what about disciplines like philosophy, theology, and other humanities fields? Is it possible that in some instances optimal AI is not using it at all?
Absolutely! But if students are using it, faculty should know how it works.
I certainly agree with that. Thanks again for your work in this area. It is very helpful and I am continually forwarding your essays to colleagues wrestling with the same issues.
For what it’s worth, Mays Business School at Texas A&M has an Assistant Dean for Artificial Intelligence and a dissertation competition and business pitch competition with an AI focus.
Can you write the job description for every top AI strategy hire at all large institutions? This was excellent. I will say, coming away from reading it, that this seems like an impossible job to competently execute, simply by virtue of how large the problems are, how quickly they are evolving, and human finitude.