What should a university AI czar do?
Advice for a flourishing future
A provost at a state flagship university (not my own) asked how I would think about a job description for an AI czar, someone who would oversee AI strategy across the institution. We are both thinking deeply about changes ahead, wondering about the role of AI in the sciences and the humanities. I’ve known this provost a long time; she reads my Substack. She is super smart.
A recent Chronicle of Higher Education piece, “The Rise – and Fall? – of the AI Czar,” gives a good glimpse of what other universities have been doing. Interviews are filled with phrases like “comprehensive roadmap,” “visioning task force,” “ethical guidelines,” and “leadership frameworks,” suggesting that the AI czar’s role at most institutions is “managing” AI. GMU’s AI czar appointed a task force and writes a Substack, with some LLM tells, about how things are going. And so on. The “key driver” for adopting AI seems to be “saving money.” As the Chronicle headline suggests, success stories are rare. More recently, the first UNC Chapel Hill AI czar departed for another role after four months. The University of Colorado is facing faculty, student, and staff unrest over its “rollout.”
Admittedly, universities are under tremendous financial and political pressure even without a revolutionary knowledge technology to contend with. And you can’t conceive of what an “AI czar” should do until you articulate what you think a university is for. For large state universities operating under the efficient education bureaucracy model, the goal seems to be AI-enhanced optimization: more students, faster completion, better workforce pipelines, measurable gains in operational workflows, improved retention and graduation metrics, and more micro-credentials and certificates completed. Mental health services will use AI chatbots. Career services will use AI to match the keywords on every graduate’s resume to the keywords on ghost job ads. In this vision, everything stays the same, except that AI will make the machinery that has been hollowing out American higher ed for forty years grind even faster.
With some exceptions (like Cornell’s new AI strategist), AI czar roles are organized around either metrics (AI literacy, workforce pipelines, adoption strategy) or technology (compute, vendor relations, platform deployment). I hear lawmakers say things like “AI fluency should become a graduation requirement.” I have no idea what they mean. Neither do they. But this is the kind of flailing I see as higher ed leaders treat AI as something to be “adopted” and “implemented” rather than understood.
I think the top priority for a university should be knowledge: its creation, transmission, and conservation. The AI czar position I imagined, accordingly, would be designed to make extraordinary things happen. Put the right AI tool in the hands of experts, across every discipline of a research university, and support students and faculty working together to produce discoveries at a speed and scale that no university has ever achieved. A university that organizes itself this way could become the most productive knowledge-creation and knowledge-transmission institution in history.
AI can make colleges and universities immensely more productive, which is different from making them more efficient. Making execution cheaper does not necessarily improve knowledge production. If the goal is for the university to create and transmit more knowledge with AI than it did before, that means expert faculty spending more time on the hard problems. Let AI deliver known-known content so that expert time is freed for supervising students in the work that requires someone who knows where the knowledge frontier is. That means students produce work that has an audience beyond a general education instructor. A student who has documented anomalies in field data, cataloged an unprocessed collection, or validated a model’s output against real-world evidence has done work that compounds.
AI can help institutions leverage the knowledge they have. Every university sits on proprietary knowledge assets: unpublished research, oral histories, special collections, locally maintained datasets, institutional memory. These assets become more valuable precisely because they are the knowledge AI systems do not contain. AI can help build institutional capacity, across every division, to judge whether a model’s output is right. In some ways the AI czar’s job as I see it is to build the institutional capacity to evaluate what parts of the university should keep running at all.
I don’t know how much good a knowledge-first AI czar can do in a metrics-first institution. But there are universities that are thinking differently, encouraging faculty and staff to reach for the frontiers of knowledge, boldly and bravely.
The AI czar role I sent my friend (attached below) is unlike anything else in the landscape. It covers every form of AI a research university uses, well beyond the usual LLMs and chatbots to predictive systems, scientific machine learning, computer vision, clinical decision support, and optimization engines. The role includes tracking “institutional decay,” a continuous, institution-wide diagnostic of where the gap between how systems were designed and how people are working has grown large enough to matter, reported to the provost as measurements, not impressions. The role requires that the medical enterprise’s experience with verification, evidence standards, and human oversight, where the consequences of error are highest and the discipline of checking is most mature, circulate to every other division of the university. And the role must be designed to extend to the next frontier, which is autonomous agents, because a position built for today’s AI will be obsolete before the first annual review.
This provost is, as far as I can tell, asking questions no other institution is asking. That is how institutional capacity gets built, and that is how her university will flourish.
*************************************************
AI Czar Position Summary
A new senior advisor/academic officer who understands the full range of AI use across a research university: large language models (LLMs), predictive systems, scientific machine-learning models, computer vision, optimization systems, recommendation engines, clinical decision-support tools, and other forms of model-based inference used in teaching, research, medicine, administration, and public service. Each of these works differently, fails differently, and requires different knowledge to use well.
The core responsibility for the AI advisor is ensuring clarity over how model output becomes university action. When does a generated answer become a public claim, a hiring screen, a grade, a comment on student work, a research result, a patient note, a forecast, or a budget recommendation? What kinds of checking, documentation, oversight, and human responsibility are required before that happens?
The AI advisor builds the institutional conditions under which the university will learn how to use AI productively in service of the institutional mission: where it helps, where it fails, what kinds of evidence it produces, what kinds of labor it displaces or creates, what kinds of errors it makes, and when it should not be used at all. The role is responsible for building the infrastructure through which operational AI knowledge develops and circulates across the whole university: academic affairs, research, the medical enterprise, the library system, enrollment management, student affairs, administration, and finance.
No existing office can absorb this charge. The CIO manages platforms and security. Faculty committees govern curriculum. The VP for Research coordinates the research enterprise. No one owns the problem of ensuring that people across every division develop the operational knowledge required to use AI well in their specific domains. Making execution cheaper does not make knowledge production better.
The absence of a designated AI role produces pushback, as other institutions show: where AI adoption is treated as a technology problem rather than a knowledge problem, vendor contracts are signed without knowledge infrastructure, faculty governance is bypassed, adoption stalls, and the underlying competence gap remains exactly where it was.
Core Responsibilities
1. Build the University’s AI Knowledge Infrastructure
Establish durable ways for the campus to learn from practice: fellows programs, short-term working groups, consultative clinics, shared documentation, and model-testing support.
Design regular, public, case-based demonstrations of what AI failures look like across the university’s domains and how the people who caught them knew what to look for. The model is closer to medical grand rounds than to a campus-wide email about responsible use.
Map the operational AI expertise that already exists across the institution but is scattered, informal, and invisible to the organizational chart. The AI advisor builds the structures—forums, working groups, consultative pairings—that turn this scattered practical knowledge into institutional capacity and connect it to the theoretical-knowledge holders who can explain why the failures happen.
Work with the library, IT, research computing, centers for teaching and learning, institutional research, and the medical enterprise to create secure environments for experimentation, documentation, provenance, and reproducibility.
Help the university distinguish between local experimentation, shared infrastructure, and campus-wide deployment—three different things that require different levels of support, governance, and institutional commitment.
2. Lead Institutional Design for AI in Teaching and Learning
AI can simulate competence in the subject being taught: it can produce essays, solve problem sets, generate code, and summarize research in ways that are difficult to distinguish from student work. The central pedagogical challenge is that the tool performs the very tasks through which students develop understanding. The AI advisor helps faculty and departments rethink assignments, assessment, feedback, and learning objectives in light of this reality.
Support discipline-specific course policies rather than one campus rule for every field, because what AI means for a writing seminar, an organic chemistry lab, a clinical rotation, and a data-science capstone is four different problems. Build the faculty development infrastructure—departmental consultation, cross-disciplinary exchange, domain-specific workshops—that enables this rethinking to happen at scale, led by faculty who understand their disciplines.
Develop campus capacity in AI literacy, understood as the ability to ask what a model was trained on, what kind of output it produces, what errors it makes, and what checking is required before that output counts as evidence of learning.
Pay particular attention to places where model output becomes an institutional judgment: feedback, tutoring, grading, proctoring, academic-integrity processes, and recommendations. These are the points where the gap between using a tool that produces fluent output and understanding what that output contains has the highest institutional stakes.
Protect faculty judgment, student privacy, accessibility, and the integrity of evaluation.
3. Lead Institutional Design for AI in Research, Scholarship, and Creative Work
AI is transforming the questions fields can ask, the methods available to answer them, and the scale at which evidence can be gathered and analyzed. A computational chemist’s relationship to AI is different from a medical imaging researcher’s, which is different from a computational linguist’s: the models differ, the validation standards differ, the reproducibility requirements differ, and the ways results can mislead differ. The AI advisor supports the responsible use of AI across sciences, social sciences, humanities, arts, engineering, agriculture, and medicine with this variation in view.
Help establish discipline-appropriate norms for model selection, data quality, validation, reproducibility, benchmark choice, uncertainty, recordkeeping, authorship, citation, intellectual property, and research integrity.
Pay particular attention to the point at which model output becomes a result, a figure, a claim, or a finding—the moment where institutional reputation and scholarly integrity are at stake.
Work with the offices of research, sponsored programs, compliance, the IRB, export control, the library, and research computing so that researchers can use models without guessing their way through data, compute, security, or reporting questions. Build the training and consultation infrastructure that lets researchers develop AI competence specific to their methods and standards of evidence.
Actively support efforts to secure external research funding and philanthropic gifts for societal-scale AI solutions. Represent the university in high-stakes regional infrastructure consortia to provide the campus with access to the significant compute resources that drive AI discovery and scale research.
4. Build Capacity for AI in Medicine and the Health Professions
Work with the health system and the schools of medicine, nursing, pharmacy, public health, and allied health to evaluate clinical, operational, and educational uses of AI.
Ensure that when models affect patient care, scheduling, triage, imaging, documentation, or clinical education, there are clear standards for validation, auditability, human oversight, bias testing, and responsibility.
Navigate the regulatory and liability landscape specific to clinical AI, including FDA pathways for software as a medical device, the validation standards that clinical deployment requires, and the ongoing monitoring obligations that follow.
Help connect research uses of AI in biomedicine with the real constraints of clinical practice, patient safety, and public trust.
Ensure that lessons learned from clinical validation—where the consequences of error are highest and the discipline of verification is most mature—circulate to the rest of the university. The medical enterprise’s experience with auditability, evidence standards, and human oversight is an institutional asset that can inform how every other division approaches AI verification.
5. Improve Administrative and Student-Facing Uses
Help units evaluate AI use in advising, enrollment, communications, finance, HR, procurement, student services, and other administrative settings. Enrollment management increasingly depends on predictive modeling. Student affairs encounters AI through mental health applications, advising platforms, and student-facing chatbots. Finance and budgeting use optimization tools. Each of these carries its own data governance requirements and accountability structures.
Require that serious deployments state the problem they are solving, the evidence that the tool works, the likely error costs, who can appeal or override a result, what staff training is needed, and what the university will do if the system fails.
Drive enterprise AI literacy and change management, including training staff on AI capabilities, limitations, and safe use. Introduce new work practices and upskilling programs to address automation anxiety and to overcome the resistance that arises when tools are perceived as imposed mandates.
Recognize that a first-year undergraduate, a doctoral candidate, a medical student, and a professional-school student have fundamentally different relationships to AI tools. The common educational problem across all levels is the gap between using a tool that produces fluent output and understanding what that output contains—whether it is accurate, where it is incomplete, what assumptions it encodes, and when it fails. Build curricular and co-curricular infrastructure that develops verification skills appropriate to each level.
Recognize the library’s distinctive position as the university’s knowledge infrastructure—the place where information is organized, retrieved, and evaluated—and the fact that AI changes every dimension of that work. The library is a central partner in building the university’s AI literacy, information-evaluation capacity, and documentation practices.
Favor uses that improve service, access, and understanding over uses that merely increase monitoring or offload judgment onto opaque systems.
6. Build Shared Governance Around AI
Ensure that faculty, staff, and students are involved before systems are purchased or scaled, especially when those systems affect teaching, employment, student status, research workflows, or patient care.
Translate across technical and nontechnical communities so that decisions are made by the people who understand the work, the people who do the work, and the people affected by the work.
Work through existing governance where possible and create temporary working groups where necessary, rather than building a large permanent bureaucracy.
7. Track Institutional Decay and Prevent the Exploitation Trap
Every curriculum, assessment, and administrative workflow was designed for people working without AI. The people inside those systems are no longer working without AI, and they are adapting faster than the systems can be redesigned. The AI advisor runs a continuous, institution-wide diagnostic of where the distance between design assumptions and actual practice has become large enough to matter, and reports the findings to the Provost and relevant division heads as measurements.
Ensure the institution uses AI to do new work rather than defaulting to familiar work done faster. Without deliberate effort, AI will be used to write compliance language faster, generate slides, and produce summaries, because there is no infrastructure connecting the person who has an unexplored research question or an uncatalogued collection with the person who understands what AI tools could do with it. The AI advisor designs that infrastructure.
Identify the university’s proprietary knowledge assets—special collections, archival material, locally maintained datasets, unpublished research, institutional memory—that become more valuable as AI becomes more prevalent, because they are precisely the knowledge that AI systems do not contain. Initiate the projects that surface, organize, and make these assets usable, pairing human expertise and supervised student labor with AI tools so that the archive becomes a working site of discovery rather than a warehouse.
8. Direct Enterprise Data Governance and Strategy
Implement and oversee enterprise processes for data quality monitoring, issue remediation, and continuous improvement.
Ensure the accuracy, completeness, timeliness, and reliability of institutional data used for analytics, AI models, and clinical decision support tools, recognizing that AI strategy relies entirely on a robust data foundation.
9. Advise the Provost and President on Institutional Priorities
Recommend targeted investments in people, training, compute, data stewardship, library capacity, research support, and evaluation infrastructure.
Help the institution distinguish between areas where it needs local capacity, areas where shared procurement makes sense, and areas where it should say no.
Determine what the institution needs based on operational knowledge developing across divisions, and then procure accordingly. The sequence matters: the alternative—signing a vendor contract and working backward to adoption—produces faculty resistance, governance failures, and expensive tools that sit unused because no one built the knowledge infrastructure required to make them useful. Technology procurement is a downstream function of this role.
Manage and consolidate enterprise-wide AI software licensing to curb the scattered adoption of AI tools on campus, avoiding duplicated development and spending, inconsistent standards, and the preventable missteps associated with shadow AI.
Represent the university in state, federal, and inter-institutional conversations about AI where the university’s academic, civic, and public missions are at stake.
Recognize that the literacy standard rises as the technology moves. The current challenge is large language models and AI-assisted code generation. The next challenge is autonomous agents—tools that act on a user’s behalf, execute tasks, and make decisions the user did not authorize. The role must be designed to extend to each frontier as it reaches the institution.