A Tuesday morning
It is 9:14 on a Tuesday. I have seven panes open.
One is a Claude Code session rebuilding a dashboard for utility partners. One is a multi-agent content pipeline drafting next week's blog post, with a fact-checking agent two turns behind the writing agent. One is a Slack draft to a policy advisor, being revised by a drafting agent against the Sourceful Playbook. Two are design work: a Figma file I haven't opened today, and an in-browser token editor I built myself. One is Linear, where an agent is converting the last sprint's issue activity into a written standup. The last is a small prototype I am building ahead of a user interview this afternoon.
There is one human in this picture. It is me. I am not drawing. I am not writing code by hand. I am unblocking, approving, correcting, and steering. I am the conductor.
This is not "a designer who uses AI." This is a new unit of production. One person. Many agents. A full product pipeline.
The compression theorem
Here is what I think is happening to knowledge work, as plainly as I can say it.
AI compresses specialists into tools. A junior copywriter becomes a prompt. A boilerplate React component becomes a Claude Code command. A first-draft illustration becomes a Midjourney request. The specialist tasks, the ones with tight, legible success criteria, are the ones that compress fastest.
What does not compress is the person who knows what good looks like at every link in the chain. Brand. Copy. IA. UX. Front-end. SEO. Launch marketing. Community. Ops. The generalist with taste was rare before AI because they were slow. One person could not output the volume a team could. Now they can. They are rare, and they are fast.
Specialists get compressed into agents. Generalists with taste get compounded by them.
This is not a symmetrical change. Narrow specialists get their workflow eaten. Broad operators with taste get their workflow multiplied.
Ethan Mollick, in Co-Intelligence (2024), makes the same observation from a different angle: the best outcomes with AI come from people who deeply understand both the domain and the tool. The "cyborg" who weaves AI into their craft, not the observer who asks it to produce something they could not themselves judge.
The shape of the winner has changed. The person who can hold the whole pipeline in their head, and has the taste to know when a link in that pipeline is producing nonsense, is suddenly one of the highest-leverage people in a company.
Jevons at my desk
In 1865, the economist William Stanley Jevons published The Coal Question. He observed something that did not fit the conventional wisdom of his day: making steam engines more efficient did not reduce British coal consumption. It increased it. Cheaper energy per unit of work unlocked demand nobody had previously imagined. Factory owners did not keep output constant and take the saving. They scaled output up until coal bills matched the old budget.
The paradox has been replayed in electricity, in compute, in bandwidth. Every efficiency shock has looked, briefly, like it would let people do less. Every time, people have chosen to do more.
Satya Nadella invoked the paradox on 27 January 2025, after DeepSeek's R1 made frontier AI dramatically cheaper. "Jevons paradox strikes again! As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of."
At personal scale: I am not working less. I am shipping a thousand times more.
A decade ago, the output I produce at Sourceful would have required a fifteen-person agency and a year. Brand system, marketing site, consumer mobile app, automated content pipeline, AI product-management system, community tooling, partner comms. I am one person. The hours AI saved me have been consumed by ambition, not by leisure.
That is the whole game. The efficiency did not give me a four-hour week. It gave me a one-person company.
Field notes from Sourceful
Concretely, "one person, many agents" looks like this right now. For each piece of work: what the old team shape was, what the new shape is, what I kept, what I delegated.
Brand system. Tokens, type, motion, voice guidelines. Old shape: a brand agency, two to three months. New shape: me, with an agent configured against the Playbook to stress-test voice. Kept: taste calls, typography, final edit. Delegated: first-draft language explorations and rationale writeups.
Marketing site at sourceful.energy. Old shape: a small agency plus an internal PM, six to eight weeks. New shape: two humans, three weeks, built with Claude Code. Four distinct audiences, accessibility treated as architecture, welcoming to AI crawlers rather than blocking them. I wrote about this one at length in How I Built a Website for Humans and AI. Kept: IA, copy strategy, and final edit. Delegated: scaffolding, component generation, first-draft copy against the Playbook.
Consumer mobile app. Flutter and Dart. Old shape: a mobile designer, a UX researcher, and two engineers. New shape: me and a pair of AI coding agents. Kept: interaction design, information architecture, the feel of the thing. Delegated: scaffolding, state plumbing, the boilerplate that used to eat Tuesdays.
Automated content pipeline. Multi-agent. Research to outline to draft to fact-check to image to publish. Old shape: a content marketer, a writer, a designer, an editor. New shape: the pipeline, and me reviewing the final and the red flags. Kept: editorial judgement, headline spikes, anything where the Sourceful voice has to land hard. Delegated: the middle 80% of words.
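Structurally, that pipeline is nothing exotic: a chain of stages, each handing its output to the next, with a review gate at the end that surfaces red flags to the human. A minimal sketch in Python, where the stage names and the flagging rule are illustrative stand-ins for the real agents, not Sourceful's actual implementation:

```python
# Sketch of a staged content pipeline with a human review gate.
# Stage functions stand in for agent calls; names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Piece:
    topic: str
    draft: str = ""
    flags: list = field(default_factory=list)

def research(piece: Piece) -> Piece:
    # A real agent would gather sources; here it just produces notes.
    piece.draft = f"Notes on {piece.topic}."
    return piece

def write(piece: Piece) -> Piece:
    # Turns research notes into a draft.
    piece.draft = f"Draft: {piece.draft}"
    return piece

def fact_check(piece: Piece) -> Piece:
    # A real checker verifies claims; this toy rule flags a keyword.
    if "unverified" in piece.draft:
        piece.flags.append("unverified claim")
    return piece

PIPELINE = [research, write, fact_check]

def run(topic: str) -> tuple[Piece, bool]:
    piece = Piece(topic)
    for stage in PIPELINE:
        piece = stage(piece)
    # The human reviews only the final output and any red flags.
    needs_review = bool(piece.flags)
    return piece, needs_review
```

The design choice that matters is the last two lines: the human is not in every stage, only at the gate. The middle 80% of words never crosses the conductor's desk unless a stage raises a flag.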
AI product-management system. Old shape: a PM and a chief of staff. New shape: agents that turn Linear issue activity into written standups, roadmap updates, and stakeholder summaries. Kept: prioritisation and strategic calls. Delegated: status.
Community-led design. The piece AI cannot fake. Real users, real interviews, real living rooms in Sweden and the UK. Kept: all of it. Delegated: nothing. The insights from community work are the only genuinely new information entering the system.
The pattern is easy to read. AI took the tasks with clear success criteria and scaled them. I took the tasks with ambiguous success criteria and sharpened them. Taste, judgement, and industry depth did not compress.
The shape of mind this requires
Not every generalist can run this playbook. The conductor needs a specific shape of mind:
- Taste across every artefact. If you cannot tell when an output is wrong, the agents will happily ship wrong. Taste is the filter, and it is not a prompt.
- Deployment scars. You have to have shipped real product, to real users, at real scale. The cracks that appear at launch are the ones agents cannot predict. Only people who have been there know where to look for them.
- Domain depth. Fifteen years across energy, publishing, sport, health, finance, and consumer in my case. Agents write plausible nonsense until a domain expert catches it. The expert is not optional.
- Comfort with code. You do not have to be a staff engineer. You do have to read a diff, run a server, and know when an agent has invented an API that does not exist.
- Bias to ship. Zero-to-one muscle. The default posture is cut, release, learn. Not review, review, review. The faster you can close the loop with real users, the less the agents' errors matter.
Being this broad used to be a liability. Too wide to be senior anywhere. Now it is the asset.
Paul Graham calls the related quality "obsessive interest" in his essay The Bus Ticket Theory of Genius: the people who pay disproportionate attention to the details across a whole field are the ones who produce unreasonable output. He was writing about mathematicians and physicists. It turns out the description fits AI-era generalists quite well.
The honest counters
I am not going to pretend this is clean. There are four objections worth taking seriously.
Slop. AI lets low-taste operators ship more bad work faster. This is true. The internet will fill with lukewarm AI-shaped slop, and it already is. My counter: taste is the filter. The more the world fills with slop, the more legible, edited, intentional work stands out. Slop is a moat for the conductor, not a threat.
Juniors cannot break in. This one I do not have a tidy answer to. If one orchestrator replaces a team, where do the next generation of designers and engineers learn their craft? Honestly, the apprenticeship has to move. Juniors now learn by pairing with an orchestrator who narrates the taste calls in real time, not by being handed a junior-sized box to colour inside. Teams that figure this out early will be the ones still producing senior designers and engineers in 2030. Teams that do not will run out of talent quietly.
Burnout. Jevons cuts both ways. If every hour AI saves goes into more output, the operator never stops. I watch for this. I set exits. Ship, close the laptop, walk the dog. "One person, many agents" is a production model, not a lifestyle. Cal Newport's Deep Work (2016) made the case a decade ago that attention is the scarce resource. It has only got scarcer since. The conductor's job is not to ship more, it is to ship the right things. Attention is the real budget.
Hallucination and accountability. Agents are wrong. The conductor is accountable. This is the single reason the conductor needs domain depth. Nobody in 2026 gets to say "the model decided." The human in the loop has to be the person who knows what right looks like and is willing to put their name to it. A one-person team is also a one-person audit trail. That is a feature, not a bug.
The new role
The shape I have been describing is not a story about me. It is a role. I think more companies are going to need one over the next five years than anyone currently realises.
A conductor. One person who understands every link in the production chain well enough to direct a team of agents through it, and who has the taste, domain depth, and deployment history to be accountable for what comes out the other end.
Not a prompt engineer. Not a head of AI. Not a full-stack developer who also makes Figma files. A generalist operator with decades of breadth, running a studio of agents that would have been a fifteen-person team a decade ago.
That is the role. It has always been rare. Now, for the first time, it is viable.
Conductor, not coder. Breadth, not depth. Taste as the moat. Jevons on every desk.
References and reading
- Jevons, William Stanley. The Coal Question: An Inquiry Concerning the Progress of the Nation, and the Probable Exhaustion of Our Coal-Mines. Macmillan, 1865.
- Nadella, Satya. Jevons paradox strikes again. 27 January 2025.
- Mollick, Ethan. Co-Intelligence: Living and Working with AI. Portfolio, 2024. oneusefulthing.org.
- Graham, Paul. The Bus Ticket Theory of Genius. November 2019.
- Newport, Cal. Deep Work: Rules for Focused Success in a Distracted World. Grand Central Publishing, 2016.
- Cooper, Paul. How I Built a Website for Humans and AI. pjcooper.design, 2026.