Discussion about this post

Justin Reidy

Does the book address any “bottom-up” root causes of the lack of alignment between tech ambition and national self-interest?

Arguably, there hasn’t been a decrease in educational emphasis on universals, only a shift to universals that the authors don’t like (cosmopolitanism, equity over freedom, “critical” thought generally).

But this shift in top-down educational emphasis has been paired with a fraying of bottom-up social cohesion: the Bowling Alone diagnosis, which, when combined with a rejection of traditional values, results in an inevitable “what’s in it for me” bottom line.

Put another way, education and culture have torn down the entire concept of “nationhood” while daily life has dismantled group-based social cohesion. It’s hard to prioritize your country when you don’t even prioritize your neighborhood.

Bill Benzon

I started reading this, Hollis, and got impatient about a quarter to a third of the way in, so I did what I often do in these situations: I skipped all the way to the end to see where this is going. “Yet in calling for a renewed technological republic built on ownership and cultural cohesion, Karp and Zamiska leave a crucial question unanswered: what role will the humanities, the disciplines that cultivate ‘truth, beauty, and the good life,’ play in this reimagined future? If shared culture, language, and storytelling are as essential to national solidarity as the authors argue, then those who teach these traditions deserve more than a footnote in their vision.” That’s all I need. I am quite willing to assume that you are a competent reader of this book, and that you rummaged around between the lines looking for at least some scraps of awareness. As far as I can tell, the people who build this technology, who fund it, who rhapsodize about how wonderful it is, and who natter on about the need to build are narrowly educated people who don’t know what they don’t know and are proud of it.

My standard analogy for this situation, crude as it is, is that the current AI enterprise is like a 19th-century whaling voyage where the captain and crew know all there is to know about their ship. They can get more speed out of it than any other crew under any conditions; they can tack into the wind; they can turn it, if not on a dime, at least on a $50 gold piece. If whaling were about racing, they’d win. But whaling isn’t about racing, it’s about killing whales. To do that you have to understand how whales behave, and you have to understand the waters in which the whales live. On those matters, this captain and crew are profoundly ignorant; they haven’t even sailed around Cape Horn.*

That’s the AI industry these days.

I got interested in the computational view of mind decades ago. Why? Because I set out to do a structuralist analysis of “Kubla Khan” and couldn’t make it work. I ended up with an analysis that didn’t look like any structuralist analysis I’d ever seen, nor any other kind of literary analysis. The poem was structured like a pair of matryoshka dolls; it looked like a pair of nested loops. https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_

I ended up writing a dissertation that was as much a quasi-technical exercise in computational linguistics as in literary theory. I chose one of Shakespeare’s best-known sonnets, 129, “The Expense of Spirit,” as my example, and published my analysis in the 100th-anniversary issue of MLN as “Cognitive Networks and Literary Semantics”: https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics. That represents a serious attempt to come up with a computational analysis of a profound and deeply disturbing human experience: compulsive sexuality.

The current crew will tell you, I’m sure, that that represents old technology, symbolic technology, which has been rendered obsolete by machine learning. Guess what? David Hays (my teacher and mentor) and I both knew that symbolic technology was not fully up to the job, that it had to be grounded in something else. And we were working on something else at the time, but meanwhile we did what we could with the tools we had. My point is that in order to conduct the analysis I had to spend as much time thinking about human behavior and language as I did about the technical devices of knowledge representation. Whatever success I may have had in that work, I paid for it in thinking about the human mind.

The current regime is quite different. They don’t have to think about the human mind at all. If Claude is capable of writing decent prose, well, that didn’t cost the folks at Anthropic anything. They got it for nothing. And so that’s the value they place on the human mind. For them I’m afraid “truth, beauty, and the good life” are just empty words they trot out for the hype. Theirs is an Orwellian technology. They’re stuck on the wrong side of 1984.

*As I’m sure you know, Marc Andreessen likes to use whaling as a precedent for venture capital. Out of curiosity, I did a little digging and found an article by Barbara L. Coffee in the International Journal of Maritime History, “The nineteenth-century US whaling industry: Where is the risk premium? New materials facilitate updated view.” https://journals.sagepub.com/doi/abs/10.1177/08438714211013537 It’s quite interesting. Those whaling captains kept good records, and those records have been preserved. After examining the records of 11,257 voyages taken between 1800 and 1899, Coffee concluded: “During the nineteenth century, US government bonds, a risk-free asset, returned an average of 4.6%; whaling, a risky asset, returned a mean of 4.7%. This shows 0.1% as the risk premium for whaling over US government bonds.” What are the chances that current investment in LLMs will do better? Oh, there will be some success, but averaged across the whole industry and over the long term?
