19 Comments
Daniel Kalish

I think the driving force behind the last-mile industry will not be a genuine 5-15% of human judgement drawn from information AI somehow cannot be taught, but the premium we are willing to spend to feel comfortable with a decision. The human’s opinion is likely no better informed than an AI’s - the AI will likely be trained on orders of magnitude more information about, say, reviews of a restaurant, or a neighborhood’s local environment and how that will affect your dog. In fact, a human being is much more likely to be biased than an AI, mistaking bias for some unplaceable, unconscious “je ne sais quoi” - the same instinct that leads to biased hiring decisions and advice. What humans want is security, and we’ve been trained for millions of years to feel it through the assurance of another human being.

Justin Reidy

Really well said. Another framing of bias is “specificity”. AI depends on the statistical conglomeration of large gobs of data. There’s an inherent “flattening” of its knowledge or predictive capability.

In the hiring context (where I’ve spent a lot of time), we can think of the gap as “culture fit”. Answering an algorithm question doesn’t need specificity. But understanding how a unique set of skills and personality will interact with other sets of skills and personality will likely be beyond the capabilities of the AI (putting aside the possibilities of some crazy panopticon future). Culture fit can certainly be very biased, and it’s important to identify those biases. But it’s also important to recognize that biases, like the desire for security you mention, can reflect forms of perception or knowing that sit deep within the human psyche.

Winston Smith London Oceania

It always comes down to our gut feeling in the end though, doesn't it? That's the true last mile.

Adham Bishr

Agreed. It’s more that we want to believe we made the decision and captured something the algorithm did not. Human judgment is itself fallible and prone to its own problems (see Daniel Kahneman’s work, or Rob Henderson’s article here - https://www.robkhenderson.com/p/daniel-kahnemans-final-exploration).

Winston Smith London Oceania

AI will likely have difficulty distinguishing a biased restaurant review from an honest one. Humans will always make biased decisions in the end - no matter what "advice" we might get from either AI or an "expert". We nearly always end up going with our gut instincts.

Boogaloo

Let's wait for AI to work first (in the sense of generating real economic value).

Doubtline

"Luxury" here works two ways: that which is precious, and that which is frivolous. As AI helps us measure that last mile more precisely, where might it dethrone, rather than elevate, the "human" factor? Which last miles are fine judgement, which are reverse-engineered prejudice, and which are useless noise?

Let us also note the deep psychological gratification (another luxury) of being the one to judge that last mile: it makes one more confident of one's powers of discernment, no matter what the outcome.

Winston Smith London Oceania

"...that which is precious, and that which is frivolous".

And in some cases, that which is pernicious.

Pritesh

Thanks, this is a very sharp articulation.

Curiosity Sparks Learning

This is an intriguing article, with examples I'd not thought of before. I'll be pondering where the human element will flourish in a world organized by AI algorithms and AI agents. The last mile is where you spend your time, for that 'special' element. Sadly, I agree that the majority will not pursue it, as they will learn to just go along with what the AI offers. Really, isn't that already happening, especially as more personal tasks are being handed over to an AI that will never pursue 'the last mile'? What a loss to local communities, and ultimately, the breakdown of community services.

Daniel Welch

One very apparent example of this is music playlists. An early (pre-LLM) application of AI, all the major music services do a reasonable job of compiling playlists from a few seed tracks or a general musical "feel." But they almost always fail to deliver like a true tastemaker, often including tracks that fit the algo but don't actually fit the "vibe," or totally missing artists and tracks that fit the vibe but were overlooked by the algo.

Musical vibes are highly nuanced and often personal, and a Spotify playlist can get you 85% of the way there, but it takes human intuition and taste to make a playlist truly exceptional. I think this analogy is useful for thinking about how LLMs will work more broadly across industries and applications.

Andy G

“In the college admissions process, from the institution’s point of view, the last mile is whether an applicant has drive, resilience, potential, fit with campus culture”

An interesting, thought-provoking piece. I think you’ve got this right, or at least directionally right.

However, your quoted statement above on college admissions is WAY off. As we saw come out in the recent SCOTUS decision on college admissions, from the university’s POV, what they are looking for is conformity with their desired identity diversity *and* as much as possible with their leftist ideology (thought being the one place where diversity is explicitly undesirable), while ensuring that plenty of “legacy” candidates get in to provide financial support.

You’d be better off removing this section from your piece altogether, as imo it undercuts the credibility of your otherwise thoughtful, well-reasoned argument.

Hollis Robbins

You're not wrong about many schools, but there are growing numbers of institutions that are looking for new kinds of students - to wit, the University of Austin.

Andy G

Ok. And I very much hope the University of Austin succeeds.

But there are about 2,300 four-year colleges and universities in the U.S., and I doubt you could name as many as 23 that would fit that definition.

And this is even more true for so-called “elite” institutions. Other than perhaps U of Chicago, can you name even one?

Meanwhile, it’s unequivocally clear that north of 90%, and almost certainly north of 98%, have in recent years had “last mile” motivations almost diametrically opposed to three of your four claimed attributes; only conformist fit with campus (mono-)culture is likely accurate.

“Growing numbers of” from a vanishingly small base doesn’t mean much of anything.

Anders

Too new to AI to have more than a hunch, but I’ll keep that in mind.

Ross Denton

As someone who works in insights, the other implication of AI swallowing more and more of the routine work is that the “luxury” budget pool becomes comparatively much smaller. I have yet to see anyone make money from AI in my field, but if they do, the remaining artisans will be a shadow of what they once were.

Curiosity Sparks Learning

And that will be incredibly tragic.

lucille robbins

Was what God told Abram a Last Mile invitation?

Winston Smith London Oceania

AI is all about quantitative analysis. Judgement is about qualitative analysis - more often than not, a gut instinct, an intuitive category as you put it.

I for one prefer to rely on my own judgement rather than pay some "expert" for it. Luxury good? The luxury of not thinking and feeling for oneself? No thanks.