While we worry about AI replacing human judgment, the real story may be how AI is creating a market for that judgment as a luxury good, available only to those who can pay for the “last mile” of human insight. What do I mean by this?
The challenge of delivering mail from the post office to each home, or a signal from a communication hub to each individual end user, is known as a “last mile” problem. In the paper newspaper era, the paper boy was the solution to the last mile problem, hawking papers on street corners or delivering them house by house in the early morning before school. The postal carrier is a solution to the last mile problem. DoorDash is a solution to the last mile problem in the food business.
Tree trunks and branches are another way to think about it, though the term “twigs” doesn’t really capture the sense of distance that “last mile” does. Laying that last bit of cable to each home, stringing that last stretch of telephone wire, or digging that last run of sewer line is expensive.
What I’m calling “the last mile” here is the last 5-15% of exactitude or certainty in making a choice from data: the thinking that goes beyond what an algorithm or a quantifiable data set indicates, the something extra you need to assure yourself you are making the right choice. It’s what the numbers don’t tell you. It’s what you hear when you get on the phone to check a reference. There are other terms for this, such as the human factor or the human element, but they don’t get at the distance between what metrics give you and what you need to make a decision.
Scale leaves us with this last mile of uncertainty. As AI does more and more of the work of matching humans with products and services (and with other people), the last mile problem is going to become the whole problem.
Imagine you're having the perfect AI-assisted day. Your smart calendar has optimized your meetings. Your robo-advisor has rebalanced your portfolio. Your AI health app has designed your ideal breakfast and workout. Then real life kicks in: the "ideal" audiobook the algorithm found for you depressed you on your drive in. Your top job candidate has perfect metrics but sets off weird alarm bells in the interview. The lunch restaurant with five-star ratings turned out to be empty while the "mediocre" place next door was packed with locals and your business competitors, who clearly knew something your AI assistant didn’t. By 6pm, you're ignoring your AI assistant's urgent suggestion to try the new "trending" cocktail bar and heading to the slightly shabby pub where the bartender remembers your name.
This isn't about AI failing; it's about that crucial gap between data and reality that no algorithm can quite bridge. Even as AI models get better and better, the gaps between data and reality will supply the anecdotes that circulate: the bad date, the awful hotel, the concert you should have gone to, the diagnosis your app missed. The issue isn't that AI assistants get things wrong; it's that they get things almost right, in ways that can be more dangerous than obvious errors. They're missing local knowledge: that messy, contextual, contingent element that often makes all the difference.
Say you need to make a hire. Your AI assistant has sent you a list of names based on a review of resumes and self-assessments, having been programmed to account for unreliability and the tendency people have to present idealized versions of themselves. It may also have factored in additional data: professional assessments, standardized personality test scores, the results of structured interviews. Here you are, looking at the list, perhaps scrolling through the attached files, knowing that you don’t have the whole picture. You want specific things: collaborative style, meeting demeanor, behavior after work or at off-site conventions, a sense of service and “citizenship.”
You want the “last mile” of context-dependent complexity: aspects of a person that resist systemization; the way a person acts and reacts differently in different contexts according to mood and chance and personal conflicts; how timing changes everything; how past experiences shape current responses.[1] You want nuanced interpersonal dynamics. You want to see where the profile is tinted or tainted by comparing it to the candidate as a person. But for legal and privacy reasons this is hard to get. In practice you have to collect this information yourself (in reference calls) and keep the data in your own files. (Who “owns” this sensitive information? Sensitive last mile data should not go back into the algorithm, for the same legal and privacy reasons: discrimination based on cultural differences or personality traits, disputed facts, changes in behavior over time, the risk of creating “untouchable” categories.)
In the AI-assistant world, some truths emerge:
Quantifiable data will get you 85-95% of the way there; algorithms will not quite be able to offer full context and nuance.
The last 5-15% depends on a human assessment of complexity as emergent rather than additive (not more data but rather a fundamentally different kind of information). Indescribable things may fundamentally change meaning.
Paying for "last mile" context is a matter of priorities and values. How nimble are you? Are the risks really risks?
“The last mile” is an intuitive category. When buying a house or choosing an apartment, the last mile is the “feel” of the neighborhood, light quality, neighbor dynamics, street noise. Last mile data is crucial. You pay for it with your time.
In choosing a restaurant, the last mile is ambiance, service style, noise level, “vibe.” But if you dine out often, you collect last mile data with each choice. Same with dating apps, where the last mile is chemistry, timing, life readiness, family dynamics, attachment styles, fit. You don’t have to choose once and that’s that. You can go out on many dates.
In the college admissions process, from the institution’s point of view, the last mile is whether an applicant has drive, resilience, potential, fit with campus culture. From the applicant’s point of view, the last mile factors (beyond cost) may be prestige, location, culture, opportunity.
“Fit,” in hiring or real estate or many other realms, is often a matter of class: recognizing cultural codes, knowing unwritten rules, speaking the “right” language, knowing the “right” people and how to reach them, having read the right books and seen the right movies, presenting oneself appropriately, reading subtle social cues, recognizing institutional cultures and power dynamics. Because class isn’t spoken about as often or categorized as well as other aspects of choice or identity, and because class markers change over time, the AI assistant may not be attuned to fine distinctions.
So here’s an uncomfortable truth: the "last mile" will become the new gated community, where inside the velvet ropes are expert humans with discernment. While AI promises to democratize everything from college admissions to job hunting, this human discernment will become a luxury good. The wealthy parent hires the counselor who "just knows" which college is the right fit. The well-connected job seeker gets the coffee chat that no algorithm can replicate. Even as AI gets better at matching people with opportunities, those crucial final judgment calls — the ones that often make all the difference — will remain hidden behind the velvet rope of privilege and social capital. We're creating a world where AI algorithms serve the majority while human insight becomes the ultimate premium service.
Who would have guessed that the real opportunity in the AI gold rush would be human: a race to monetize the last mile? Executive recruiters will soon proudly advertise themselves as “post-AI” specialists, selling their judgment as the essential complement to algorithmic screening. Real estate agents will rebrand as “neighborhood insight consultants,” offering the kind of granular local knowledge that no database can capture. Instead of eliminating these middlemen, AI is creating a premium market for human expertise. The future belongs not only to those who can build better algorithms, but also to those who can master that crucial space where algorithms end and human judgment begins. The last mile isn't just a gap; it's an economic sector. Brookings gets this a little bit right in focusing on customization as the last mile, but really, it’s the human element.
In the end, the AI revolution won't democratize opportunity — it will simply change who guards the gates, as human judgment becomes the ultimate premium upgrade to algorithmic efficiency.
[1] Tyler Cowen and Daniel Gross devote an entire book to this: Talent: How to Identify Energizers, Creatives, and Winners Around the World.
I think the driving force behind the last-mile industry will not be a genuine 5-15% of human judgment drawn from information AI somehow cannot be taught, but the premium we are willing to pay to feel comfortable with a decision. The human’s opinion is likely no better informed than the AI’s; the AI will likely have been trained on orders of magnitude more information about, say, reviews of a restaurant, or a neighborhood’s local environment and how it will affect your dog. In fact, a human being is much more likely to be biased than an AI, mistaking those biases for some unplaceable, unconscious “je ne sais quoi” that leads to skewed hiring decisions and advice. What humans want is security, and we’ve been trained over millions of years to feel it in the assurance of another human being.
Let's wait for AI to work first (in the sense of generating real economic value).