Technology · March 11, 2026 · 9 min read

In the Future, Teams Will Be Generated, Not Hired

Three converging technologies — frontier models, agent harnesses, and context management — are transforming teams from fixed organizational units into dynamically generated infrastructure. The shift is already underway.

#ai-agents #multi-agent-systems #future-of-work #agentic-ai #organizational-transformation

When you need 50 customer service reps for holiday season, you don't hire them full-time. You scale up, then scale down. The staffing agency model has worked this way for decades. Now imagine doing that with entire engineering teams — in minutes, not months.

Three technologies have converged to make this real: frontier models, agent harnesses, and context management. Each alone is insufficient. Together they create something fundamentally new — AI agent teams you can generate on demand, tailored to the problem, and dissolve when the work is done.

This isn't speculation. Gartner projects 40% of enterprise applications will embed AI agents by end of 2026. Multi-agent system inquiries surged 1,445% in a single year. The infrastructure for generated teams isn't being theorized. It's being built right now.

The Convergence That Changes Everything

The reason this moment feels different from previous AI hype cycles is that three independent technology tracks matured at the same time. Any two of the three produce something interesting but limited. All three together produce emergence — capabilities none could create alone.

Frontier Models — Claude, GPT, Gemini — provide reasoning, task decomposition, and code generation. They're the intellectual engine. Without them, agent harnesses have nothing to orchestrate.

Agent Harnesses — Claude Code Agent Teams, Devin, CrewAI, LangGraph — provide orchestration. Tool use, multi-step execution, inter-agent communication. Without them, models are impressive chatbots. Request in, response out. No sustained action.

Context Management — long context windows, RAG, persistent memory, MCP — provides continuity. Without it, agents have amnesia. They lose the thread mid-task and start repeating work they already did ten minutes ago. Context is the actual moat, not the model.

Here's the cleaner way to think about it:

  • LLMs without agent harnesses are chatbots. Impressive, but fundamentally request-response.
  • Agent harnesses without good models produce garbage. Automation of incompetence.
  • Models without context management forget what they learned between steps. Brilliant but amnesiac.

The convergence is what matters. Like how the internet required computing, networking, and protocols simultaneously — none of them alone would have produced what we got.

| Component | Alone | Converged |
| --- | --- | --- |
| Frontier Model | Smart conversation | Strategic reasoning across complex tasks |
| Agent Harness | Rigid workflow automation | Autonomous multi-step execution |
| Context Management | Information retrieval | Institutional memory that persists |
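The three-layer convergence can be sketched as a minimal loop. Everything here is a hypothetical stand-in: `call_model` is a toy rule-based stub rather than any vendor API, `count_words` is the only "tool," and the `memory` list plays the role of the context layer.

```python
def call_model(messages):
    """Stub 'frontier model': decides the next action from the last message."""
    last = messages[-1]["content"]
    if "TODO: count" in last:
        return {"action": "tool", "tool": "count_words", "arg": last.split("|", 1)[1]}
    return {"action": "finish", "answer": last}

# Tool layer: what the harness lets the model actually do.
TOOLS = {"count_words": lambda text: str(len(text.split()))}

def run_agent(task, memory):
    """Harness: loops model -> tool -> model, persisting results in memory."""
    messages = memory + [{"role": "user", "content": task}]
    for _ in range(5):  # bounded multi-step execution
        decision = call_model(messages)
        if decision["action"] == "finish":
            # Context layer: the outcome survives beyond this task.
            memory.append({"role": "assistant", "content": decision["answer"]})
            return decision["answer"]
        result = TOOLS[decision["tool"]](decision["arg"])  # tool use
        messages.append({"role": "user", "content": result})
    raise RuntimeError("step budget exhausted")

memory = []  # persists across tasks, unlike the per-call message list
print(run_agent("TODO: count|one two three", memory))  # → 3
```

Strip any one layer out and the loop degrades exactly as the bullets above describe: no model, no decisions; no harness, no tool calls; no memory, no carry-over between tasks.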

The market reflects this convergence. Multi-agent AI is projected to grow from $7.84 billion in 2025 to $52.62 billion by 2030 — a 46.3% CAGR. That's not hype money. That's infrastructure investment following a proven convergence pattern.

From Fixed Teams to Generated Teams

Think about how team structures have evolved — and where they're heading.

Fixed Human Teams. Roles predefined. Capacity fixed. Hiring cycle measured in months. A team of 8 engineers is a team of 8, whether the sprint needs 3 or 15. You carry the overhead either way.

AI-Assisted Teams. This is where most organizations sit today. AI supports humans, but the team structure stays the same. Copilots make individuals faster. The org chart doesn't change. You still have the same headcount, the same standup, the same sprint planning.

Hybrid Adaptive Teams. This is emerging now. AI agents join the team as needed and scale with demand. Some workflows are fully agent-handled. Humans assign, review, and redirect. The team composition shifts week to week based on what the project actually requires.

Agentic Teams with Human Oversight. This is 2026-2028. One person orchestrates a full team of specialized agents. The human sets direction, approves plans, governs outcomes. Agents handle execution.

The effort distribution tells the story:

In a traditional team, roughly 60% of engineering effort goes to manual code comprehension, 30% to boilerplate coding, and 10% to architecture review. In an agentic model, that inverts: 85% architecture strategy and review, 10% boilerplate coding, 5% manual code reading.

The principle is straightforward. Autonomous execution, human governance. As agent autonomy increases, human effort shifts from doing the work to directing and governing it.

Teams as Elastic Infrastructure

Before the power grid, every factory ran its own generator. Before cloud computing, every company ran its own servers. We're at that same inflection point with teams — the shift from owning fixed capacity to generating it on demand.

The parallels are exact.

On-demand provisioning. Need a security review? Generate a security specialist agent. Need load testing? Spin one up. Done in minutes, not hiring cycles. No recruiter, no interview loop, no onboarding.

Auto-scaling. Project complexity increases? Add more agents. Scope shrinks? Scale down. No severance packages. No awkward conversations.

Pay-per-use. Agents consume resources only when working. No bench time, no idle salaries, no carrying capacity you don't need for six months until the next big project.

Configuration as code. Team compositions defined declaratively, version-controlled, reproducible. "Give me the same migration team template we used on Project X." That sentence becomes literal, not metaphorical.
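What "configuration as code" might look like in practice, as a minimal sketch: the names here (`AgentSpec`, `TeamTemplate`, `provision`, the model identifiers) are illustrative assumptions, not a real harness API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentSpec:
    role: str        # e.g. "security-reviewer"
    model: str       # which frontier model backs this agent
    tools: tuple = ()  # tool permissions granted to this agent

@dataclass(frozen=True)
class TeamTemplate:
    name: str
    agents: tuple

# "The same migration team template we used on Project X" — literally data,
# so it can live in version control and be reproduced on demand.
MIGRATION_TEAM = TeamTemplate(
    name="migration-team-v2",
    agents=(
        AgentSpec("planner", model="frontier-large", tools=("read_repo",)),
        AgentSpec("migrator", model="frontier-large", tools=("read_repo", "write_code")),
        AgentSpec("reviewer", model="frontier-small", tools=("read_repo",)),
    ),
)

def provision(template: TeamTemplate) -> list[str]:
    """Stand-in for the harness call that would actually spin agents up."""
    return [f"{template.name}/{a.role}" for a in template.agents]

print(provision(MIGRATION_TEAM))
```

Because the template is frozen, declarative data, the same composition can be diffed, reviewed, and re-provisioned the way infrastructure-as-code already is.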

The economics are already compelling. AI agents cut labor costs up to 45% while improving efficiency 60-70%. Companies report average ROI of 171%. LLM API prices dropped 80-90% since early 2024 — and they're still falling. Enterprise multi-agent systems cost $150K-$400K to build but pay back in 3-6 months.

Here's the number that should make this concrete: at Anthropic, roughly 90% of Claude Code is written by Claude Code itself. The team building the AI coding tool is increasingly composed of AI agents. This isn't a proof of concept. It's production.

What Becomes Possible

Generated teams expand what organizations can attempt. Not by adding more people, but by changing how teams are formed.

Already happening. Amazon Q Developer modernized thousands of legacy Java applications — work that would have taken human teams years. Nubank achieved 12x efficiency improvement using Devin for multi-million line ETL migrations. Genentech built agent ecosystems on AWS to automate drug discovery workflows, freeing scientists for the kind of breakthrough research that actually requires human intuition.

Emerging now. On-demand AI specialists embedded in teams — generate an accessibility expert, a performance optimizer, or a security auditor only when the sprint needs one. Agentic councils for governance, where compliance, security, and code review agents run continuously, catching issues before they reach production. Long-running autonomous research, where agents explore solution spaces for days or weeks and report findings asynchronously.

On the horizon. Continuous market and scenario simulation. Real-time adaptive supply chain optimization. Cross-functional agent teams that span engineering, design, and operations simultaneously. Agent-led marketing teams assembled for a campaign and dissolved after launch.

57% of organizations already deploy agents for multi-stage workflows. 16% are running cross-functional processes across multiple teams. The question isn't whether this will happen. It's how fast your organization adapts.

The Human Role Shifts — Governance, Not Execution

This is the question everyone asks. What happens to the people?

The honest answer: the role transforms. The engineer of 2026 spends less time writing foundational code and more time orchestrating a dynamic portfolio of AI agents, reusable components, and external services. The role moves from creator to curator. From implementer to governor.

A new skill set is emerging:

  • Prompt engineering becomes team composition design
  • Code review becomes plan approval and output governance
  • Individual contribution becomes agent orchestration
  • Technical execution becomes strategic direction

The data backs this shift. 40% of job roles in Global 2000 companies will directly involve working with AI agents by 2026. 45% of organizations with extensive AI adoption expect reductions in middle management layers — because agents handle the coordination that middle managers traditionally provided.

Governance is becoming a first-class organizational capability. Singapore launched the world's first Model AI Governance Framework for Agentic AI in January 2026. Three-tiered oversight: low-risk agents run freely, medium-risk get enhanced controls, high-risk require human-in-the-loop checkpoints. Other nations are following.
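A minimal sketch of what tiered oversight could look like in code. The tier names echo the framework's three levels, but the policy table and gating rule are illustrative assumptions, not anything taken from the actual regulation.

```python
# Illustrative risk tiers: low runs freely, medium gets enhanced controls,
# high blocks until a human signs off.
RISK_POLICY = {
    "low": "autonomous",
    "medium": "enhanced-controls",
    "high": "human-in-the-loop",
}

def gate(action_risk: str, human_approved: bool = False) -> bool:
    """Return True if an agent action may proceed under the policy."""
    policy = RISK_POLICY[action_risk]
    if policy == "human-in-the-loop":
        return human_approved  # blocked until a person approves
    return True

print(gate("low"))                        # proceeds
print(gate("high"))                       # blocked
print(gate("high", human_approved=True))  # proceeds after sign-off
```

The point of the pattern is that the checkpoint is structural, not advisory: a high-risk action cannot execute until the approval bit is set by a human, which is the kind of pre-execution control that retrospective review can't provide.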

This isn't about replacing humans. It's about elevating human work to where it creates the most value — strategy, judgment, ethical decision-making, creative vision. The teams are generated. The governance is human.

What Still Needs Solving

Not everything is figured out. Pretending otherwise would undermine the thesis.

Technical gaps are real. Context management at scale — maintaining coherent institutional knowledge across hundreds of generated teams — remains hard. Failure modes aren't trivial. When an agent team goes off track, diagnosing what went wrong across multiple autonomous actors is genuinely complex. The infrastructure wasn't designed for this. AI agents have exposed a fundamental gap in cloud architecture: existing cloud was built for stateless production workloads, not the dynamic, exploratory computing that agents require.

Organizational gaps are real. How do you structure compensation when humans orchestrate but don't execute? What does career progression look like when execution is automated? These aren't hypothetical questions. They need answers before most enterprises can adopt this model at scale.

The deployment gap is real. 72% of organizations are testing agentic AI, but only 11% have production deployments. The distance between experimentation and enterprise-grade adoption is wider than the hype suggests.

Governance gaps are real. Accountability when agents make autonomous decisions across multiple systems is an open problem. Agents execute across three enterprise systems in under a second. Human oversight at that speed is retrospective, not real-time. We need new oversight patterns, not just faster versions of old ones.

Acknowledging these gaps doesn't weaken the thesis. It strengthens it. The organizations solving these problems now will define the operating model for the next decade.

The Playbook Won't Be the Same

Three technologies converged. Frontier models, agent harnesses, and context management. The convergence — not any single component — is what makes generated teams viable.

The timeline isn't distant. Gartner says 40% of enterprise apps by end of 2026. Amazon, Nubank, Anthropic, and Genentech are already operating this way. This isn't a 2035 prediction. It's a 2026 reality being adopted unevenly.

If you're building infrastructure: the gap between what agents need and what cloud provides is the biggest opportunity in enterprise tech right now.

If you're leading teams: start experimenting with hybrid workflows. Understand orchestration before you're forced to learn it under pressure.

If you're setting strategy: the organizations that master team generation will have a structural advantage that hiring optimization cannot match. Fixed teams compete with fixed teams. Generated teams compete on a different axis entirely.

We spent a century optimizing how we hire, onboard, and manage human teams. The next decade will be about mastering how we generate, orchestrate, and govern AI ones. The playbook won't be the same — and that's the point.