The announcement of OpenAI's new o1 reasoning model lets consumers choose between different kinds of artificial intelligence – those that can reason and those that know. Which kind of intelligence do you need? Do you need to solve a complex math problem, or to explain the decline in chalkboard purchases in the 21st century?
What is intelligence anyway? Consider the following question:
Tree, saw, axe, hammer. Which one doesn’t belong?
Here is ChatGPT 4o Mini:
In that list, the one that doesn’t belong is "tree." The other items—saw, axe, and hammer—are tools used for working with wood, while the tree is the object that you work on with those tools.[1]
Here is Anthropic Claude 3.5 Sonnet:
Hammer. The other three items (tree, saw, axe) are all directly related to cutting or felling trees:
· A tree is the object being cut
· A saw is used to cut trees
· An axe is used to chop trees
A hammer, on the other hand…belongs to a different category of tools used mainly in construction or carpentry.[2]
Here is Microsoft Copilot, which uses GPT-4:
The odd one out is tree. Here’s why:
· Tree is a natural object.
· Saw, axe, and hammer are all tools used for cutting or shaping wood.[3]
Which do you want? Reasoning or knowledge? Reasoning is useful. I would like to know about the relationship of hammers to cutting, so it is good to know there are options.
Mostly, though, I would like to have a thing I could hold in my hand, perhaps the size of a Magic 8 Ball or an apple (or a big strawberry), that I could ask anything. How would my life be different? How would my life be different as an academic?
Outside of academia, there is already marveling at what AI knows and how it reasons, at least in tech quarters. Inside academia, there is more interest in what AI doesn’t know and how it fails at reasoning.
Most scholars value the time we’ve spent in archives and special collections, delighting in unexpected finds, piecing together scraps of information to come up with something new and exciting. They would ask an AI apple all sorts of things, checking whether it knows the things they spent years looking for, or whether it could have reasoned its way to them. Every scholar who discovered or published something new would ask about it. But if the AI apple didn’t acknowledge the source of the information or discovery, the scholar would be irritated. If the AI apple credited another scholar, there would be fury.
I would ask an AI apple to address all sorts of mysteries and unknown questions in my field. I’d ask it about lost issues of historic newspapers or unknown authors. I would ask about famous mysteries. Could it tell me more about the Antikythera mechanism or the Voynich manuscript than what we currently know? Could it say exactly how the Great Pyramid of Giza was constructed, how the Nazca Lines were created, where the Stonehenge stones came from and how, what happened to the Indus Valley civilization, or what the Phaistos Disc is?
Scientists may want to know useful things, like new alloys or new proteins or new crystals. Engineers may want to ask it for new designs for bridges, rocket ships, heart valves. Here in Salt Lake City it could offer solutions to the problem of the Great Salt Lake drying up. It could plan a pipeline from the Pacific to the lake, detailing the route and the costs of materials and of buying rights-of-way. Maybe it would tell us whether cell phone use really caused depression, or whether drinking a gallon of water a day really kept us healthy.
But what change would actually come to the world? How soon would the better bridges be built? Would zoning boards still hold power? Would it change NIMBY resistance? And how would a knowledge machine change what we do inside academia, how we research and teach?
I noted in a previous post that currently there is no interface between humans and an information source beyond the five human senses: eyes, ears, nose, tongue, skin. Human beings must pay attention to an information source and absorb the data. Nothing has yet been invented that transmits data directly to the human brain without the requirement that the human be paying attention.
A few emerging technologies and methods for learning that extend beyond traditional visual and auditory means include haptic feedback technologies, augmented and virtual reality, sensory substitution devices that convert information from one sense to another (visual to auditory), wearable technology, neurofeedback, cognitive training, and some very early olfactory and gustatory learning approaches.
But until there are brain-computer interfaces (BCIs), direct neural inputs that can deliver information to the brain and test absorption and ‘learning,’ people are going to need to pay attention to information, whether or not delivered by an AI. We’re going to need to engage with AI and LLMs the way we engage with books, the internet, and human teachers. To those who pay attention go the spoils.
Am I ahead or behind the conversation about how AI will change the way we teach and learn? All of the celebrators of new models are already paying attention. The ones who are not are not. We are not yet Picard on the Enterprise asking the computer for information and hearing the voice of Majel Barrett answering immediately and factually.
The image above is the Owl of Minerva flying at dusk with Eve’s apple in its beak. Maybe a year from now more people will want it to be OpenAI’s strawberry. When I asked Claude for some more literary engagements with knowledge, it suggested Christopher Marlowe’s Dr. Faustus (1592), good, and Mark Twain’s A Connecticut Yankee in King Arthur’s Court (1889), also good. When I prodded it, it suggested Alfred Hitchcock’s The Man Who Knew Too Much (1956). Now that’s funny.
[1] When prompted for another answer, ChatGPT 4o replies: Another possible answer could be that the "hammer" doesn't belong because it's primarily used for driving nails and other tasks, whereas the saw, axe, and tree are all directly related to cutting or working with wood. The hammer is a bit different in its primary function.
[2] When prompted for another answer, Claude replies: While "hammer" is the most obvious answer, you're right to ask about other possibilities. There could be alternative interpretations:
1. Tree: It's the only living thing and not a tool, unlike the others.
2. Saw: It's the only tool that typically has teeth or a serrated edge, while the others don't.
3. Axe: It's the only single-piece tool (assuming a traditional axe). The saw has a blade and handle, the hammer has a head and handle, and the tree has many parts.
[3] When prompted for another answer, Copilot replies: Yes, there is another way to look at it! The odd one out could also be hammer. Here’s why:
· Saw, axe, and tree are all related to cutting or being cut.
· Hammer is not used for cutting.
The 1956 version of "The Man Who Knew Too Much" is actually a remake of a film of the same name that Hitchcock made in 1934. So: more laughs?
"Nothing has yet been invented that transmits data directly to the human brain without the requirement that the human be paying attention."
Why brain rather than mind? Why attention rather than consciousness?
The brain-as-computer metaphor is seductive and almost impossible to avoid, but it results in a theory of mind in which there is no "mind" or "thought," just data and its retrieval... which might be another term for "memory"... which is another dimension of the mind undergoing data-overdosed atrophy.