Does the book address any “bottoms up” root causes of the lack of alignment between tech ambition and national self-interest?
Arguably there hasn’t been a decrease in educational emphasis on universals, only a shift to universals that the authors don’t like (cosmopolitanism, equity over freedom, “critical” thought generally).
But this shift in top-down educational emphasis has been paired with a fraying of bottom-up social cohesion, the Bowling Alone diagnosis, which, when combined with a rejection of traditional values, results in an inevitable “what’s in it for me” bottom line.
Put another way, education and culture have torn down the entire concept of “nationhood” while daily life has dismantled group-based social cohesion. It’s hard to prioritize your country when you don’t even prioritize your neighborhood.
These are the questions! I kept circling back to where the systemic problem lies. The authors don't seem to think in systems, whether bottom up or top down. There is very little "agency" that I could discern. But so fascinating!
I started reading this, Hollis, and started getting impatient about a quarter to a third of the way in, so I did what I often do in these situations. I skipped all the way to the end to see where this is going. “Yet in calling for a renewed technological republic built on ownership and cultural cohesion, Karp and Zamiska leave a crucial question unanswered: what role will the humanities, the disciplines that cultivate ‘truth, beauty, and the good life,’ play in this reimagined future? If shared culture, language, and storytelling are as essential to national solidarity as the authors argue, then those who teach these traditions deserve more than a footnote in their vision.” That’s all I need. I am quite willing to assume that you are a competent reader of this book and so you rummaged around between the lines looking for at least some scraps of awareness. As far as I can tell, the people who build this technology, who fund it, who rhapsodize about how wonderful it is, and who natter on about the need to build, they’re narrowly educated people who don’t know what they don’t know and are proud of it.
My standard analogy for this situation, crude as it is, is that the current AI enterprise is like a 19th century whaling voyage where the captain and crew know all there is to know about their ship. They can get more speed out of it than any other crew, under any conditions, they can tack into the wind, they can turn it, if not on a dime, at least on a $50 gold piece. If whaling were about racing, they’d win. But whaling isn’t about racing, it’s about killing whales. To do that you have to understand how whales behave, and you have to understand the waters in which the whales live. On those matters, this captain and crew are profoundly ignorant; they haven’t even sailed around Cape Horn.*
That’s the AI industry these days.
I got interested in the computational view of mind decades ago. Why? Because I set out to do a structuralist analysis of “Kubla Khan” and couldn’t make it work. I ended up with an analysis that didn’t look like any structuralist analysis I’d ever seen, nor any other kind of literary analysis. The poem was structured like a pair of matryoshka dolls; it looked like a pair of nested loops. https://www.academia.edu/8155602/Articulate_Vision_A_Structuralist_Reading_of_Kubla_Khan_
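For readers who want the “nested loops” image made concrete, here is a minimal sketch, assuming only the structure described above: two movements, each enclosing smaller parts matryoshka-style, so a pair of nested loops walks the whole poem. The segment labels are hypothetical placeholders, not the actual segmentation in the linked paper.

```python
# Toy illustration of a matryoshka / nested-loop structure.
# Labels are invented placeholders, not the analysis in the paper.
poem = {
    "movement 1": ["part 1.1", "part 1.2", "part 1.3"],
    "movement 2": ["part 2.1", "part 2.2"],
}

for movement, parts in poem.items():   # outer loop: the two movements
    for part in parts:                 # inner loop: parts nested inside each
        print(movement, "->", part)
```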
I ended up writing a dissertation which was as much a quasi-technical exercise in computational linguistics as in literary theory. I chose one of Shakespeare’s best-known sonnets, 129, “The Expense of Spirit,” as my example, and published my analysis in the 100th anniversary issue of MLN: Cognitive Networks and Literary Semantics, https://www.academia.edu/235111/Cognitive_Networks_and_Literary_Semantics. That represents a serious attempt to come up with a computational analysis of a profound and deeply disturbing human experience, compulsive sexuality.
The current crew will tell you, I’m sure, that that represents old technology, symbolic technology, which has been rendered obsolete by machine learning. Guess what? David Hays (my teacher and mentor) and I both knew that symbolic technology was not fully up to the job, that it had to be grounded in something else. And we were working on something else at the time, but meanwhile we did what we could with the tools we had. My point is that in order to conduct the analysis I had to spend as much time thinking about human behavior and language as I did about the technical devices of knowledge representation. Whatever success I may have had in that work, I paid for it in thinking about the human mind.
The current regime is quite different. They don’t have to think about the human mind at all. If Claude is capable of writing decent prose, well, that didn’t cost the folks at Anthropic anything. They got it for nothing. And so that’s the value they place on the human mind. For them I’m afraid “truth, beauty, and the good life” are just empty words they trot out for the hype. Theirs is an Orwellian technology. They’re stuck on the wrong side of 1984.
*As I’m sure you know, Marc Andreessen likes to use whaling as a precedent for venture capital. Out of curiosity, I did a little digging and found an article by Barbara L. Coffee in the International Journal of Maritime History, “The nineteenth-century US whaling industry: Where is the risk premium? New materials facilitate updated view.” https://journals.sagepub.com/doi/abs/10.1177/08438714211013537 It’s quite interesting. Those whaling captains kept good records, and those records have been preserved. After examining the records of 11,257 voyages taken between 1800 and 1899, Coffee concluded: “During the nineteenth century, US government bonds, a risk-free asset, returned an average of 4.6%; whaling, a risky asset, returned a mean of 4.7%. This shows 0.1% as the risk premium for whaling over US government bonds.” What are the chances that current investment in LLMs will do better? Oh, there will be some success, but averaged across the whole industry and over the long term?
I’ve been thinking a bit about the history of AI and how it led to the current situation, where it is decoupled from any attempt to understand the human mind.
AI began as an attempt to simulate the human mind. The people who did the work also thought about the mind. AI work on chess led to psychological investigation of how humans played chess. The most commercially successful early AI programs were so-called expert systems, from the 60s on into the 80s. To develop such a system you would ask human experts to think through problems out loud so you could record their thoughts. The recordings would then be transcribed. This developed into a systematic methodology called “protocol analysis.” My point is simple: this AI work was closely linked to work on human thinking.
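To make the expert-system pattern concrete, here is a toy sketch of the kind of system the paragraph describes: if-then rules distilled from experts’ think-aloud protocols, matched against observed findings. The rules and findings are invented placeholders, not drawn from any real system.

```python
# Minimal rule-based "expert system" sketch: each rule pairs a set of
# required conditions with a conclusion, mimicking rules elicited from
# protocol analysis. All content here is a hypothetical placeholder.
RULES = [
    ({"fever", "cough"}, "suspect influenza"),
    ({"fever", "stiff neck"}, "suspect meningitis; refer urgently"),
    ({"wheezing"}, "consider asthma"),
]

def conclusions(findings):
    """Fire every rule whose conditions are all present among the findings."""
    return [verdict for conditions, verdict in RULES if conditions <= findings]

print(conclusions({"fever", "cough", "fatigue"}))  # -> ['suspect influenza']
```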
The big breakthrough in machine learning came in 2012 with a machine vision system called AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton (https://en.wikipedia.org/wiki/AlexNet). It was based on something called a convolutional neural network. CNNs are built around convolution, an operation at the heart of Fourier analysis, which had been used in understanding the visual system going back to the late 1960s. So, at this point the technical basis of the artificial system remained in touch with the study of human perception.
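To spell out the link asserted here: the core operation in a CNN layer is convolution, and the standard convolution theorem ties that operation directly to the Fourier methods vision scientists were using. For reference:

```latex
% Convolution theorem: convolution in the signal domain corresponds to
% pointwise multiplication in the Fourier domain.
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau,
\qquad
\mathcal{F}\{f * g\} = \mathcal{F}\{f\} \cdot \mathcal{F}\{g\}
```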
That changed with the development of GPTs. The technical basis of those systems had nothing to do with models of language and cognition. With GPT-3 things exploded. Its language capacity was far beyond anything else that had been done. The field quickly figured out that it could improve performance simply by scaling up. The enterprise was now effectively decoupled from any attempt to understand the human mind. Improvements in system performance were NOT linked to deeper understanding of language and cognition.
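For a concrete sense of what “improve performance simply by scaling up” meant: the widely cited scaling-law measurements (Kaplan et al., 2020) fit test loss as a smooth power law in model size, data, and compute, with no model of language anywhere in the formula. A sketch of the parameter-count form, with fitted constants quoted from memory, so treat them as approximate:

```latex
% Empirical scaling law (Kaplan et al., 2020), parameter-count form:
% loss L falls as a power law in non-embedding parameters N.
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N},
\qquad \alpha_N \approx 0.076, \; N_c \approx 8.8 \times 10^{13}
```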
Of course, no one’s happy that the inner workings of LLMs are mysterious. It makes so-called “alignment” a hellish problem. At the same time, the fact of that mystery makes it easy to imagine whatever you wish about the technology. Thus the black-box nature of these systems is convenient for the generation of hype. You can imagine future capacities to be whatever you will. Reality is not going to get in your way, at least not now, in the present.
I'm struck that the diagnosis in the book seems to be that cosmopolitanism and an allegiance to markets shaped by consumer capitalism are the problem, and Palantir is the solution. I would say that cosmopolitanism in the form of a curriculum that includes all those you name-check, along with the critiques within that tradition of Anna Julia Cooper, W.E.B. Du Bois, Edward Said, and Gayatri Chakravorty Spivak, would be a powerful foundation for thinking about what could have been (and still could be) a powerful combination of a Silicon Valley committed to not being evil and a United States committed to exporting democracy in addition to consumer culture.
Building a better rifle is a terrible thing to aim for unless it is firmly within the framework of an empire for liberty. So far, the American project has failed miserably on the "for Liberty" part. I want that to change, but the prospects are pretty dim.
Yes, I went into so much detail so this would be clear for those who wouldn't invest in it. And yes, the evasions about the positive aspects of cosmopolitanism (plus a total misreading of Appiah) gave me lots and lots of pause. Still, the central importance of the humanities to the argument, even if operationalizing the instruction was way too vague, made me want to take the book seriously.
Fair...if they are willing to think about the humanities in terms of ongoing critique and dialogue, they are within the circle. I worry about the tendencies in Silicon Valley and the academy to shut down critique and dialogue in favor of winning. What is to be won is seldom fully articulated, and when it is, ugh.
I’d say there’s also a tendency to pick from philosophy, history, or literature the pieces that promote your market approach, while leaving much of the story on the cutting room floor.
"The argument is that Silicon Valley, even while its very existence was made possible by the military industrial complex post WWII and the creation of the internet, has created a culture without national pride, that doesn’t believe in war, that doesn’t have values, and that doesn’t know for what it stands. Here’s the key paragraph:"
Silicon Valley barely depended on the military-industrial complex or the internet. TCP/IP was not the first networking protocol, or even a particularly large breakthrough. It was an improvement over things like X.25, and quite an obvious one, which would have been created by someone out there if the government hadn't already, in the same way that people created Linux, CSS, and BitTorrent. There were online services like Prestel and Minitel that ran on X.25, one of many competing networking protocols; France's Minitel took off on X.25 before the internet did. The period of growth of the internet from the mid-90s has more to do with how cheap hardware was becoming and the expiry of the RSA algorithm patent than with government action.