When "regime change" entered the American lexicon in the early 2000s, many noted how language was being used to sanitize political disruption. By transforming the active process of overthrowing a government into a clean nominal phrase, intervention sounded as natural as changing clothes. Two decades later, we’re confronting a new transformative force in AI and we must pay close attention to language that converts agency into abstract nouns that hide who is doing what to whom and to what end. An increase of AI-produced nouns may betoken an existential threat. Verbs will free and protect us.
Noam Chomsky's 1970 paper "Remarks on Nominalization" – it's dense, so feel free to ChatGPT it – gives us the tools to see how these linguistic sleights-of-hand work by systematically erasing the very questions we most need to ask. He begins with the two major approaches: the Transformationalist Hypothesis, which holds that nominalizations (like "John's refusal of the offer") are derived by transformational rules from underlying sentence structures (like "John refused the offer"), and the Lexicalist Hypothesis, which holds that many nominalizations are simply listed in the lexicon (the mental dictionary) as separate items rather than derived. Chomsky argues for the Lexicalist Hypothesis, showing that while gerundive nominals (like "John's refusing the offer") can be explained through transformations, derived nominals (like "John's refusal of the offer") have properties that make more sense if they're treated as separate lexical entries.
Derived nominals have idiosyncratic meanings, are less productive (you can't freely form them from just any verb), have an internal structure different from that of sentences, and relate to their corresponding verbs in ways that are often irregular and unpredictable.
Recall "securitization" from the 2008 financial crisis. The term starts with "secure" (a verb or adjective), adds the "-ize" suffix to create the dubious verb "securitize" (already a questionable transformation), and then adds "-ation" to create a derived nominal. This double derivation obscures crucial semantic relationships: What is being made secure? Who is doing the securing? For whom? The nominalized term does not want you to ask.
"Securitization of mortgage debt" sounded more palatable than "bundling risky home loans together and selling them as safe investments." The term "securitization" created an artificial technical abstraction that masked ethically significant actions (given the valence of the term “secure”). Nominalizations make potentially questionable processes sound inevitable and technically neutral.
Look at the noun phrase "consistent delivery of course material" from the recent story about a UCLA comp lit course using AI. Who is doing what to whom? "Delivery" presumably means teaching, but the entire story suggests that teachers and teaching aren't really necessary, just consistent "leading," "helping," and "delivering" of materials that were the result of a "course creation process" involving "materials development." It's simple: no subject matter expertise needed!
Busy human brains find nominalizations tricky and often glide over them. To read a phrase like "ensure consistent delivery of course materials," one must recognize that "delivery" is really the verb "deliver" in noun form; grasp that it is being used abstractly; notice that some unspecified consistency problem (what is it?) apparently needs to be addressed; and then work out what concrete action is actually being described (who or what is doing the ensuring?) and who is in charge. With a clear verb like "deliver," one can directly map the action onto subject and object: a teacher delivers, that is, teaches, course materials.
That whole UCLA story is a master class in evasive nominalization. Chomsky's paper gives us the vocabulary to name the move: agency is hidden, and active processes that many might want to question are transformed into seemingly objective, abstract nouns that resist ethical scrutiny. "Delivery" is opaque in exactly the way "securitization" is: it conceals power relations and choices.
When you have nominalizations on your mind, you start seeing them everywhere. "Core competency leveraging." "Strategic alignment facilitation." "Female liquidization."
In higher ed, a phrase like "operational excellence" similarly obscures both agency and concrete meaning. The transformation from a clear call to action like "let's try to operate better" into a nominal form creates what Chomsky would identify as an artificial derived nominal, one that combines a derived adjective ("operate" → "operational") with a nominalized quality ("excellent" → "excellence") in a way that strains normal English lexical patterns. Semantic relationships that are present in the verbal forms disappear! "Operational excellence" severs any connection to who is operating what, or to what concrete attributes might constitute excellence, creating an abstraction that mystifies rather than clarifies.
In essence, evasive nouns don't just violate style preferences; they create zombie phrases that appear superficially grammatical while being semantically vacant, or worse, serve as linguistic sleight-of-hand to mask actual operations that may be far from excellent.
With the rise of LLMs, we need to think hard about grammar and agency.
Nominalizations are designed to sound more precise and technical, even as they create more ambiguity and demand more processing to recover the basic action being described. Right now, with guardrails on LLMs and safety watchers watching, only humans are inclined to take a clear instruction ("improve how we work") and deliberately make it harder to understand ("workflow enhancement methodology implementation").
But the fact that LLMs handle nominalizations more easily than humans do may also signal something concerning about their relationship to meaning and truth. Unlike humans, who stumble over these abstract constructions, LLMs process them smoothly, because they're dealing with statistical patterns rather than trying to map language onto real-world actions and responsibilities. Their training data likely includes a large corpus of corporate, academic, and bureaucratic text in which nominalizations are common, and they will reproduce those patterns as readily as any others, treating language as patterns and transformations, somewhat as Chomsky describes.
If the dominant pattern in a certain genre of text – maddening business-consultant memos, say – involves the liberal, deliberate use of evasive nominalizations, LLMs will generate similar nominalizations as pattern transformations. What incentive does an LLM have to generate more direct, active language? To use verbs?
More importantly, LLMs lack the natural human intuition that Chomsky describes, the gut feeling that certain nominalizations just sound bonkers. But they could be programmed to recognize and avoid them, and not to coin insidious new ones, and it might be worth our time to do so, to program a preference for verbs. A toy sketch of what such a detector might look like follows.
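As a thought experiment, here is a minimal Python sketch. Everything in it is an assumption for illustration: the suffix list, the verb-recovery rules, and the flag_nominalizations helper are crude heuristics I've made up, not any real library's API or a serious account of English morphology.

```python
import re

# A minimal sketch of a nominalization detector, using crude suffix
# heuristics. The suffix list and verb-recovery rules below are
# illustrative assumptions, not a complete account of English morphology.
SUFFIX_RULES = [
    ("ization", "ize"),     # securitization -> securitize
    ("ification", "ify"),   # mystification -> mystify
    ("mentation", "ment"),  # implementation -> implement
    ("ation", "ate"),       # facilitation -> facilitate (rough)
    ("ment", ""),           # enhancement -> enhance
    ("ance", ""),           # avoidance -> avoid (rough)
    ("ery", "er"),          # delivery -> deliver
]

def flag_nominalizations(text: str) -> list[tuple[str, str]]:
    """Return (nominal, guessed_verb) pairs for suspect derived nominals."""
    flagged = []
    for word in re.findall(r"[A-Za-z]+", text):
        lower = word.lower()
        for suffix, verb_ending in SUFFIX_RULES:
            # Require a plausible stem so short words like "nation"
            # or "very" don't trigger false alarms.
            if lower.endswith(suffix) and len(lower) >= len(suffix) + 4:
                flagged.append((word, lower[: len(lower) - len(suffix)] + verb_ending))
                break
    return flagged

if __name__ == "__main__":
    memo = "Workflow enhancement methodology implementation ensures consistent delivery."
    for nominal, verb in flag_nominalizations(memo):
        print(f"{nominal!r} hides the verb '{verb}': who is doing it, and to whom?")
```

A real preference for verbs would need a morphological lexicon and syntactic context rather than suffix-chopping, but even this toy catches "workflow enhancement methodology implementation" red-handed.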
Program LLMs to use verbs
In Chomsky's terms, noun-to-verb transformations in English are generally "productive" and feel immediately acceptable ("I'll email you," "Let's bookmark this"). Some seem artificial ("let's calendar the Zoom meeting," "I'll PowerPoint that"). But usually the slide into verbs is much more fun, as we all Zoom (in verb form), Teams, Instagram, and now ChatGPT regularly.1
While nominalization obscures agency, verbalization can clarify who is doing what to whom. When "calendar" becomes a verb, there is a clear agent (us), an action (calendaring), and an object (the meeting). When someone says, "Let's action this," it might sound odd, but at least we know who is taking action. A phrase like "the actioning of initiatives," however, obscures both responsibility and concrete next steps.
Note that the trend toward nominalization in business and politics is top-down, while verbalization comes from the bottom up. Tech workers' coinage of verbs like "to Slack" or "to GitHub" reflects a culture of direct action and responsibility, while management's preference for terms like "communication platform utilization" reflects a desire to abstract away from specific actions and actors.
The relationship between language and agency has never been more critical than in our emerging AI future. As we've seen, nominalizations serve to obscure responsibility and concrete action, converting dynamic verbs into static nouns that resist ethical scrutiny. While humans deploy these linguistic transformations strategically, often to maintain power or deflect accountability, artificial intelligence may adopt them as default patterns, learning from the vast corpus of bureaucratic and technical writing that favors such constructions. This linguistic evolution could presage a more fundamental shift: as AI systems become more sophisticated, their facility with nominalizations might enable them to obscure their own agency and decision-making processes. In the AI era, verbs might be our flying cars: the technological promise we really need, keeping agency clear and actions accountable in a world of increasingly abstract intelligence.
Elegant and insightful, thank you – I will use "Clauding" as the verb :-)
The expression of profound appreciation for this essay and the gratitude extended for its sharing cannot be overstated. Comprehensive prioritization of efforts toward the maximization of resistance to identified tendencies is imperative.
Best,
G.A.