“In sooth, I know not why I am so sad.”
Last night I attended a seminar at Harvard on “On Generative AI ‘Reading’ Shakespeare.” The paper, by a Renaissance scholar from a nearby college, was structured around an essay written by ChatGPT on The Merchant of Venice (1598), specifically on the character of Antonio, “the titular merchant of Venice,” as the LLM put it. The speaker offered a very smart critique of the ChatGPT-generated essay, paragraph by paragraph, pointing out the LLM’s distinctive preference for spatial metaphors, lists of three, the word “center,” and the “not x but y” sentence structure. She showed how much “better” several paragraphs generated by the pro-model ChatGPT 5 were, with more “nuances” about Antonio’s “complexities.”
Every teacher should be close-reading ChatGPT output every day before breakfast. They should have been doing it since 2023. But I was glad to see it done with such intelligence in a Harvard seminar room in 2025, even as I was dismayed by much of the discussion, beginning with the speaker being introduced via a series of hallucination-filled, LLM-produced bios (to general mirth) and the weary attitude of “why should I have to spend my time on this?”
The discussion that followed was more interesting than the presentation. One scholar noted that AI-generated work is better than many student papers she’d received over the years. Another noted that we’d trained the AI on our own jargony criticism, so the mediocrity the speaker identified was mediocrity we had taught it. A third mentioned that he used AI for wholly different purposes, counting frames per second in a film course, and that that’s the sort of thing we should be focusing on. A fourth said she doesn’t grade writing anymore, just content. Another asked whether professors should assign papers at all anymore. There was, it seemed, no consensus on what to think or say or do about ChatGPT.
Only at the end, as people were putting on their coats, were existential questions asked: is any of what humanities scholars do still viable? Has the entire edifice of take-home essays and research papers been rendered obsolete? Is it over for us?
Walking out down the marble steps, I thought about Venice, specifically John Ruskin’s The Stones of Venice (1851-1853), a study of a city that had been in decline for three and a half centuries. Ruskin describes the gorgeous architecture of Venetian Gothic, the Doge’s Palace, the Ca’ d’Oro, the carved capitals and window tracery being badly restored and disappearing. He wanted to record in meticulous detail the architectural and moral significance of the city before it was gone entirely.
Ruskin doesn’t really dwell on how Venice peaked, architecturally, just as things were about to fall apart. The Ca’ d’Oro was completed in 1437 and the Doge’s Palace in 1442. Christopher Columbus was born in 1451. His voyage in 1492 turned all eyes from the Mediterranean to the Atlantic. Vasco da Gama rounded Africa and reached India in 1498, opening up a new and swifter route for spices. By the early 1500s, the new all-sea routes to the East had bypassed Venice’s long-held monopoly on high-value trade, and its entire economic model was being superseded.
What must it have been like to be a merchant in Venice then, focused on shipping routes and tariffs, just as the first reports were trickling in about da Gama and Columbus? I wondered if I would have seen what was coming, noticed that the old ways weren’t working quite as well as they used to, that the numbers were looking slightly worse each year, that ships had to travel farther, that there would be troubling competition from faraway ports.
I wondered, drawing on critics of The Merchant of Venice such as Walter Cohen and Steve Mentz, what Antonio might have known, or what Shakespeare, writing in 1598, understood about Venice’s decline. Antonio opens the play with “In sooth, I know not why I am so sad,” relatable under the circumstances, a melancholy sense that his “ventures” were gambles in a failing system. Shylock, that cold assessor of risk, surely understood the new geography. He would have known that “argosies” bound “to Tripolis... to the Indies... at Mexico... for England” (Act 1, Scene 3) were sailing on routes far beyond Venetian control, making them much more volatile collateral than they had been a century before.
We were a room full of Venetian merchants last night, whether most of the attendees recognized it or not.
Over 92% of students now use generative AI for coursework. It’s later than you think. The current higher ed model, built around content delivery at a standard pace, around semester-long courses and take-home essays and the Carnegie Unit, was designed for a world where individual cognitive labor was the only way to process information and demonstrate learning. That world no longer exists.
What we have now are institutions that still look like universities, even as the business model crumbles: their online divisions, the ones that subsidize the Collegiate Gothic and the brick buildings, will not withstand AI and will be the first to collapse.
Ruskin was writing about a dead civilization, explicitly. Venice by the 1850s was still functioning but impoverished, living off tourism and past glory. The republic had been dissolved by Napoleon in 1797. Everyone knew Venice was finished. Ruskin’s task was to document what had been beautiful about it before the last physical traces disappeared.
There are a lot of faculty and leaders inside higher education who don’t yet have clarity about AI. The buildings are still standing. The endowments are still large. Students still show up and pay tuition. The credentials still have market value. It’s possible to sit in a Harvard seminar room and believe that everything is fundamentally fine, that professors just need to adjust their assignments to account for AI. Like most people in Venice in 1500, most people inside successful elite universities don’t see the future yet, because everything looks and feels normal.
Ruskin had three and a half centuries of Venetian decline to document. Higher education has perhaps ten years.
And yet, maybe there is hope. This morning I asked ChatGPT (the free model that most students use) whether a student should mention Columbus in an essay on The Merchant of Venice. It said no. It explained that he doesn’t appear in the play and is not “directly relevant” to the play’s main themes (justice, mercy) as represented in its training data.
LLMs are built to synthesize and reproduce what has already been said and written. Newer models are better at reasoning, seeing, and forging new connections, with the help of scholars. The ChatGPT model this morning was a not-very-thoughtful Venetian merchant in 1500, perfectly optimizing its analysis of the known Adriatic shipping routes. It cannot see the Atlantic opening up, because that “new” connection isn’t a statistically probable path in its data.
This is the “last mile” I have been advocating for: expert human thought and scholarship that exceeds AI capability. The mentorship in frontier research, the hands-on training with physical equipment and archival materials AI cannot access, the imaginative leaps AI would not make. That is what we must preserve.



What you described here — the Harvard seminar, the paragraph-by-paragraph critique of a ChatGPT-generated essay on Merchant of Venice — marks an inflection point that’s been approaching for some time: the moment when academia begins to treat AI not as a novelty, but as a textual presence that must be understood, interpreted, and responsibly framed.
There is a second development happening in parallel — quieter, but just as consequential.
A small number of us have been building what you might call the “moral infrastructure” for AI: the frameworks that define how humans and AI are supposed to interact, read each other, and create meaning together. We’re not trying to teach machines morality. We’re giving people a structure so the human side of the exchange remains intact, traceable, and accountable.
That effort eventually became something we call The Faust Baseline — a discipline for AI-assisted writing and reasoning that treats every AI output as a collaborative artifact rather than an opaque generation. It requires transparency, a steady tone, clear moral grounding, and a readable chain of intent. Not guardrails. Not restrictions. A shared language for human + AI co-work.
What you’re pointing to in your post — scholars dissecting an LLM’s interpretations, mapping its narrative choices, questioning the emotional stance of its voice — is the other half of the same need. If AI is going to appear in classrooms, seminar tables, and research workflows, then someone has to provide a framework for how to read it. Not just how to critique it, but how to understand what’s actually happening inside the exchange between a human mind and a generative system.
That’s the work we’ve been doing: building the structure for responsible, interpretable, morally-anchored AI conversation so that the academic world isn’t left improvising its own rules one seminar at a time.
Your post captures the emerging frontier exactly:
AI is a text that must be read with discipline, not awe or fear.
Our work takes that further:
AI as a collaborator whose output must be contextualized, timestamped, morally framed, and understood as part of a shared authorship — never as a replacement for human insight.
Both movements are part of the same shift. And it’s encouraging to see academic voices beginning to map the terrain, because that is where the public understanding will ultimately take shape.
Brilliant & scathing - sadness can also make one smile :)