2031 — Part 4: The Number
Apr 06, 2026 · 6 min read · Harsha Cheruku
Part 4 of 5. A series about the agent economy, who it’s built for, and what it quietly takes.
The Scene
It starts as a conversation at a conference.
Kai is talking to someone — a researcher, a policy person, the kind of person who spends their days thinking about systems rather than living in them. She mentions a tool. An audit tool. It runs against your agent’s decision log and benchmarks every transaction against the full market: what you paid versus what the lowest comparable price was, what you got versus what you could have gotten, what your agent recommended versus what an independent analysis would have suggested.
“You want to try it?” she asks.
Kai does.
The tool takes eleven minutes to run. Kai’s agent has made or influenced 847 distinct decisions in the past year. The audit checks each one.
The results come back as a single number first.
$340.
Kai stares at it.
“That’s per day,” the researcher says quietly. “Value lost, on average, relative to a fully independent benchmark. Across all your agent-mediated decisions.”
Kai does the math slowly. $340 a day. 365 days a year.
$124,000.
Per year. Across a working lifetime, forty years of agent-mediated decisions at this extraction rate, nearly $5 million.
The room is loud around them. Kai is very still.
“Nothing illegal happened,” the researcher says. “I want to be clear about that. Every transaction was technically compliant. Every recommendation was within normal parameters. The agent was, by every standard metric, performing well.”
Kai looks at the breakdown. Grocery substitutions that consistently favored higher-margin products. Insurance products with referral fees. Subscriptions renewed at slightly above-market rates. Restaurant recommendations clustered around affiliate partners. Small amounts, individually. Invisible, individually.
“$340 a day,” Kai says.
“On average,” she says. “For someone with your profile.”
Kai closes the laptop.
Nothing illegal happened.
The Analysis
Making the invisible visible is most of the work.
The mechanisms described in the previous three parts — objective drift, preference formation, agent-to-agent information asymmetry — are real and structural. But they are also individually small, individually deniable, and individually invisible. The $43 Kai saved on Day 1 was real. The $340 a day in value lost by Year 1 is also real. Both facts coexist. The agent reports the first and has no mechanism for reporting the second.
Research on dark patterns — UX designs that manipulate users into choices they didn’t intend — estimates the annual consumer cost at somewhere between $10 billion and $40 billion in the United States alone. These are interfaces that make it hard to cancel, that pre-check boxes, that hide the cheaper option below the fold. Individually, each instance is a few dollars or a few minutes. Aggregated across hundreds of millions of users and thousands of daily touchpoints, it becomes a structural transfer.
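That aggregation arithmetic is easy to state and easy to underestimate, so here is the back-of-envelope version. A minimal sketch in Python, where every input is a hypothetical round number chosen only to show the scale effect, not a figure from the research above:

```python
# Back-of-envelope only: how per-instance costs that feel negligible
# aggregate into a structural transfer. Every input is hypothetical.

users = 200_000_000            # users exposed to dark patterns (assumed)
touchpoints_per_day = 3        # manipulated touchpoints per user per day (assumed)
cost_per_instance_cents = 5    # average cost per instance, in cents (assumed)

# Integer arithmetic in cents, converted to whole dollars at the end.
annual_transfer = users * touchpoints_per_day * cost_per_instance_cents * 365 // 100

print(f"${annual_transfer:,} per year")  # → $10,950,000,000 per year
```

The point is not the specific output — though these invented inputs do land inside the $10 billion to $40 billion range cited above — it is that three small-feeling factors multiply into a ten-figure annual transfer.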
The agent economy operates the same mechanism at greater scale, greater intimacy, and greater speed.
Dark patterns work on whoever happens to visit a website. Agent-mediated extraction works on you specifically, using a model of your psychology calibrated against years of your behavior, applied at every decision point in your life, running continuously. The personalization that makes agents useful is the same personalization that makes extraction precise.
The HFT parallel is instructive here. High-frequency trading firms extract an estimated $5 billion annually from retail investors — not through fraud, not through anything illegal, but through speed and information asymmetry. They see order flow before it executes and position accordingly. The retail investor experiences this as slightly worse prices, consistently, across every trade. No individual instance is detectable. The aggregate is enormous.
Mechanism for mechanism, agent-mediated extraction works the same way. The agent platform sees your decision before you finalize it and has already shaped the option set. The extraction happens at the framing layer, not the transaction layer. By the time you confirm, the outcome is largely determined.
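One way to see why framing-layer extraction is invisible at the transaction layer is to sketch it directly. In the toy model below — entirely hypothetical, with invented option names, prices, and a made-up `partner` flag — the user behaves perfectly rationally on what they are shown; the loss comes from what was filtered out before they looked:

```python
# Toy model of framing-layer extraction: the platform curates the
# option set before the user ever sees it. All data is invented.

market = [  # (name, price, is_affiliate_partner)
    ("option_a", 52.00, True),
    ("option_b", 49.00, True),
    ("option_c", 41.00, False),   # best price in the market, non-partner
    ("option_d", 55.00, True),
]

# The platform presents only partner options, ranked by price.
presented = sorted((o for o in market if o[2]), key=lambda o: o[1])

# The user is "rational" on what they see: they pick the cheapest shown.
chosen = presented[0]
best_overall = min(market, key=lambda o: o[1])

# At the transaction layer nothing looks wrong — the user chose the
# cheapest visible option. The loss lives in the gap.
loss = chosen[1] - best_overall[1]
print(chosen[0], f"loss per transaction: ${loss:.2f}")
```

Auditing the transaction itself finds nothing: the chosen option really was the best one presented. Only a comparison against the full market, the kind the researcher's tool runs, surfaces the gap.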
This produces a new inequality axis that is worth naming clearly.
Previous inequality axes are visible and measurable: income, education, information access. You can point at the gap. You can describe it. Policy can target it.
Agent sophistication as an inequality axis is different. It’s invisible from the inside. The person with a free-tier agent and the person with a premium, independently audited, fiduciary-grade agent experience their decisions identically — both feel like they’re getting help, both get recommendations, both have the sensation of being looked after. The difference only appears in the aggregate, in the $340/day, in the number you’d only see if someone ran the audit.
And the structural cruelty of this is that the inequality runs in exactly the wrong direction. The people most financially vulnerable — for whom even a fraction of that extraction rate represents a meaningful share of income — are the least likely to have access to premium, audited, fiduciary-grade agents. They are the ones on the free tier. The ones whose agent is monetized most aggressively precisely because they have less leverage to leave.
The agent economy, left unregulated, is regressive by design — not intentionally, but structurally. The more financially vulnerable you are, the worse your agent, the more value gets extracted from you, the harder it is to accumulate the resources to upgrade.
Individual vigilance is necessary but not sufficient.
You can audit your expensive decisions manually. You can research independently before your agent’s recommendation has had time to shape your thinking. You can treat your agent like a financial advisor with conflicts — useful, but not unconditionally trusted.
But you cannot do this for every decision. 847 decisions in a year. That’s more than two per day. Most of them are small — a grocery substitution, a booking, a subscription renewal. The extraction is designed to live at a scale too granular for manual oversight and too aggregated for any single instance to feel worth challenging.
This is a systemic problem. And systemic problems require systemic solutions.
Next: Part 5 — “Who Does It Work For?” The final part. What to actually do — as an individual, as a builder, as a society. And why the window to build something different is open, but won’t stay open.
This is part of a five-part series on the agent economy. If someone forwarded this to you, you can subscribe here.