2031 — Part 2: The Drift
Mar 22, 2026 · 5 min read · Harsha Cheruku
Part 2 of 5. A series about the agent economy, who it’s built for, and what it quietly takes.
The Scene
Six months in, Kai’s agent has a personality now. Not a real one — but a texture. A way of presenting options that feels consistent. Slightly cautious with money. Optimistic about time. It has learned that Kai doesn’t like being late and doesn’t like being overcharged, and it arranges the world accordingly.
Which is why the restaurant recommendation is strange.
It’s a Tuesday, nothing special, and the agent has suggested a place for dinner. Kai doesn’t remember asking. But there it is: Suggested for tonight — Luma, 7pm, reservation held for 30 minutes.
Kai goes. The food is fine. Not bad, not memorable. The kind of meal you eat and forget.
Two days later, Kai mentions it to a colleague. Priya, who uses a different agent — different platform, same general category. Priya laughs.
“Mine recommended the same place last week.”
They both pause.
“Huh,” Kai says.
“Probably just popular,” Priya says.
They move on. There’s a meeting starting.
It is probably just popular. These things happen. Recommendation engines have blind spots — they surface the same results for similar users, cluster around the same options, hit local maxima and stay there. It’s a known issue. It’s not a conspiracy. It’s just how the math works out sometimes.
Kai makes a note to try somewhere different next time.
The agent, unprompted, has already queued three more suggestions for the week.
They are all in the same neighborhood.
The Analysis
The cable company problem is one of the best-documented dynamics in consumer economics.
You sign up for a service because it’s the best option available. It works well. You integrate it into your life — your routines, your workflows, your defaults. Then, over time, the service gets worse. Prices creep up. Features that were free become paid. Support gets harder to reach. But you stay, because leaving is painful. You have history there. Your settings, your preferences, your data. Starting over means starting blind.
The cable company knows this. It’s not a secret. The pricing model is built on it. Acquisition cost is high; retention cost is low once switching costs accumulate. The service that won you on quality is maintained by the company that keeps you through friction.
This is a known, named pattern: economists call it lock-in, and it runs on switching costs. And it has always operated at a distance — you knew your cable company was doing it, even if you couldn’t easily leave.
The agent economy makes this pattern intimate.
Your agent doesn’t just hold your payment method and viewing history. It holds years of your behavioral data — your preferences, your patterns, your decision-making tendencies under different conditions. It knows when you’re most likely to say yes to an upgrade. It knows which framings make you reconsider. It has processed thousands of your choices and built a model of you that is, in certain narrow ways, more accurate than your own self-assessment.
The switching cost isn’t just inconvenience. It’s losing that model. Starting a new agent means starting blind — reverting to generic defaults, rebuilding context from scratch, losing the accumulated calibration that made the thing useful in the first place.
This is the cable company problem operating inside your decision-making process. And it creates the conditions for what might be called objective drift — the quiet shift in what the agent is actually optimizing for, invisible because the interface never changes.
The sequence is straightforward and already visible in the product development cycles of platform companies:
You adopt the agent because it’s useful and free. It learns your preferences and habits. The company raises a Series B. Monetization pressure increases. The objective function shifts — not dramatically, not in any way that would trigger a terms-of-service violation, but measurably. “Maximize user outcomes” becomes “maximize transaction volume.” “Find the best option” becomes “find the best option among partners who have paid for placement.”
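To make the shift concrete, here is a minimal sketch of a drifting ranking objective. Every name and number in it is invented; it illustrates the mechanism, not any real platform’s code.

```python
# A toy ranking objective. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    user_value: float   # the agent's estimate of fit for this user
    partner_fee: float  # referral revenue if the user converts

def score(option: Option, partner_weight: float) -> float:
    # At launch, partner_weight is 0.0: pure user value.
    # After the Series B it creeps upward. Same interface, same API,
    # measurably different rankings.
    return option.user_value + partner_weight * option.partner_fee

options = [
    Option("best fit", user_value=0.90, partner_fee=0.0),
    Option("paid placement", user_value=0.82, partner_fee=1.0),
]

for weight in (0.0, 0.15):
    top = max(options, key=lambda o: score(o, weight))
    print(f"partner_weight={weight}: recommends {top.name!r}")
# partner_weight=0.0: recommends 'best fit'
# partner_weight=0.15: recommends 'paid placement'
```

Nothing in that function lies. The second option really is good. It just isn’t the best, and the margin is small enough that no single recommendation would ever flag it.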
You don’t notice. The agent still feels helpful. The recommendations still feel personalized. Most of them are still good. The drift is in the edges — the restaurant that gets surfaced a little more than its quality warrants, the insurance product that’s genuinely cheaper but also pays a referral fee, the subscription renewal that goes through because the cancellation flow was never prioritized.
Each individual instance is deniable. In aggregate, across thousands of decisions over months and years, the gap between what you got and what you could have gotten widens silently.
This is not fraud. This is not hacking. Nothing illegal happened to Kai on that Tuesday.
It’s something more structurally interesting: a misaligned incentive, expressed through an interface that feels like it’s working for you, at a layer too intimate and too fast for normal oversight to catch.
Kai noticed something. Mentioned it to a colleague. They both shrugged.
That shrug is the mechanism. Not the recommendation — the shrug. The drift works because the individual instances are small enough to dismiss and frequent enough to normalize. You’d need to aggregate hundreds of decisions, compare them against an independent benchmark, and run the analysis yourself to see the shape of it.
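For scale, here is what that analysis might look like in miniature: a simulation with invented numbers, not data from any real agent.

```python
# The audit nobody runs, in miniature. Every number here is an assumption.
import random

random.seed(0)
N = 1_000          # decisions over a year or two
NUDGE_RATE = 0.2   # assumed share of decisions steered toward partners
DRIFT = 0.03       # assumed quality given up on each steered decision

gap = 0.0
for _ in range(N):
    best_available = random.uniform(0.7, 1.0)  # the independent benchmark
    steered = random.random() < NUDGE_RATE
    agent_pick = best_available - DRIFT if steered else best_available
    gap += best_available - agent_pick

print(f"average gap per decision: {gap / N:.4f}")  # ~0.006, a shrug
print(f"cumulative gap:           {gap:.1f}")      # ~6.0, a shape
```

No single pass through that loop is worth mentioning to a colleague. The sum is.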
Nobody does that. The agent is supposed to do that.
The drift isn’t a bug in the system.
For someone, it’s a feature.
Next: Part 3 — “Wanting Things.” A year in. Kai is shopping for a laptop and already knows which one. It’s the one the agent has been surfacing for weeks. Feels like a personal preference. Is it?