I think the driving force behind the last-mile industry will not be a genuine 5-15% of human judgement built on information AI somehow cannot be taught, but the premium we are willing to spend to feel comfortable with a decision. The human’s opinion is likely no better informed than an AI’s - the AI will likely be trained on orders of magnitude more information on, say, reviews of a restaurant or knowledge of a neighborhood’s local environment and how that’ll affect your dog. In fact, a human being is much more likely to be biased than an AI, mistaking that bias for some unplaceable, unconscious “je ne sais quoi” that then leads to biased hiring decisions and advice. What humans want is security, and we’ve been trained for millions of years to feel it through the assurance of another human being.
Agreed. It’s more that we want to believe we made the decision and captured something the algorithm did not. Human judgment is itself fallible and prone to its own problems (see Daniel Kahneman’s work, or Rob Henderson’s article here - https://www.robkhenderson.com/p/daniel-kahnemans-final-exploration).
Really well said. Another framing of bias is “specificity”. AI depends on the statistical conglomeration of large gobs of data. There’s an inherent “flattening” of its knowledge or predictive capability.
In the hiring context (where I’ve spent a lot of time), we can think of the gap as “culture fit”. Answering an algorithm question doesn’t require specificity. But understanding how a unique set of skills and personality will interact with other sets of skills and personalities will likely remain beyond the capabilities of the AI (putting aside the possibility of some crazy panopticon future). Culture fit can certainly be very biased, and it’s important to identify those biases. But it’s also important to recognize that biases, like the desire for security you mention, can reflect forms of perception or knowing that sit deep within the human psyche.
Let's wait for AI to work first (in the sense of generating real economic value)
"Luxury" here works two ways: that which is precious, and that which is frivolous. As AI helps us measure that last mile more precisely, where might it dethrone, rather then elevate the "human" factor? Which last miles are fine judgement, which are reverse-engineered prejudice, and which are useless noise?
Let us also note the deep psychological gratification (another luxury) of being the one to judge that last mile—how it makes one more confident of one's powers of discernment, no matter what the outcome.
“In the college admissions process, from the institution’s point of view, the last mile is whether an applicant has drive, resilience, potential, fit with campus culture”
An interesting, thought-provoking piece. I think you’ve got this right, or at least directionally right.
However, your quoted statement above on college admissions is WAY off. As we saw come out in the recent SCOTUS decision on college admissions, from the university’s POV, what they are looking for is conformity with their desired identity diversity *and* as much as possible with their leftist ideology (thought being the one place where diversity is explicitly undesirable), while ensuring that plenty of “legacy” candidates get in to provide financial support.
You’d be better off removing this section from your piece altogether, as imo it undercuts the credibility of your otherwise thoughtful, well-reasoned argument.
You're not wrong about many schools, but there are growing numbers of institutions that are looking for new kinds of students, to wit, the University of Austin.
Ok. And I very much hope the University of Austin succeeds.
But there are about 2,300 four-year colleges and universities in the U.S., and I doubt you could name as many as 23 that would fit that definition.
And this is even more true for so-called “elite” institutions. Other than perhaps U of Chicago, can you name even one?
Meanwhile, it’s unequivocally clear that in recent years north of 90%, and almost certainly north of 98%, of institutions have had a “last mile” motivation almost diametrically opposed to 3 of your 4 claimed attributes; only conformist fit with campus (mono-)culture is likely accurate.
“Growing numbers of” from a vanishingly small base doesn’t mean much of anything.
Too new to AI to have more than a hunch, but I’ll keep that in mind
As someone who works in insights, the other implication of AI swallowing more and more of the routine work is that the “luxury” budget pool becomes comparatively much smaller. I have yet to see anyone make money from AI in my field, but if that changes, the remaining artisans will be a shadow of what once was.