03.02.2026 09:46
Author: Viacheslav Vasipenok

Sequoia Capital Declares: 2026 — This Is AGI


In a bold January 2026 essay titled "2026: This Is AGI," Sequoia Capital partners Pat Grady and Sonya Huang argue that we have already entered the era of Artificial General Intelligence — not in the narrow, philosophical sense of human-like consciousness, but in the pragmatic, functional sense that matters most for the real world: the ability to "figure things out" autonomously.

The core thesis is simple yet provocative: AGI isn't about mimicking human reasoning perfectly. It's about systems that can independently tackle complex, open-ended tasks without constant hand-holding, step-by-step prompting, or human intervention. Sequoia sees this breakthrough manifesting most clearly in long-horizon agents — AI systems capable of sustained, multi-step work over extended periods, correcting errors, backtracking from dead ends, and persisting toward a goal until it is achieved.

As the authors put it: "AGI is the ability to figure things out. That’s it." And in 2026, they declare, long-horizon agents are functionally AGI, with coding agents firing the first shot and many more domains soon to follow.


The Three Ingredients That Made "Figuring It Out" Possible

Sequoia breaks down the emergence of this capability into three sequential, compounding layers:

1. Pre-training (Baseline Knowledge)

The foundation was laid in 2022 with the ChatGPT moment. Large language models absorbed an enormous corpus of human knowledge and developed strong basic language competence. This gave AI the raw "what" — facts, patterns, and domain understanding — but not yet the "how" to apply it independently over time.

2. Inference-time Compute (Reasoning Over Knowledge)

The next leap came in late 2024 with OpenAI's o1 series and similar reasoning models. These systems were trained to "think longer" before responding — allocating more compute during inference to chain thoughts, explore alternatives, and self-correct. This introduced depth and deliberation, turning static knowledge into dynamic problem-solving.

3. Iteration / Agent Loops (Long-Horizon Agents)

The most recent and decisive breakthrough arrived in the final weeks of 2025 / early 2026, exemplified by tools like Claude Code and other advanced coding agents. These systems cross a critical threshold: they can now plan, use tools, maintain state/memory, take actions in the world, evaluate outcomes, and loop through multiple attempts until success. They don't just answer questions — they do work autonomously for tens of minutes (and increasingly longer), navigating ambiguity, forming hypotheses, testing them, hitting walls, and pivoting.

The authors emphasize that this third layer — iteration over time — is what transforms capable reasoners into generally intelligent agents. Humans excel at long-horizon tasks because we can sustain attention, remember context, and adapt; until recently, AI could not. Now it can.
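The plan-act-evaluate-retry cycle described above can be sketched schematically. This is a minimal illustration, not Sequoia's or any lab's actual implementation; the `plan`, `act`, and `evaluate` helpers are hypothetical stand-ins for a real model and tool environment:

```python
import random

def plan(goal, memory):
    """Ask the model for the next action, given the goal and past attempts."""
    return {"tool": "search", "query": goal, "attempt": len(memory)}

def act(action):
    """Execute the action with a tool; may fail or return a partial result."""
    return {"ok": random.random() > 0.5, "data": f"result of {action['tool']}"}

def evaluate(goal, outcome):
    """Check whether the outcome actually satisfies the goal."""
    return outcome["ok"]

def agent_loop(goal, max_steps=10):
    memory = []  # persistent state carried across attempts
    for _ in range(max_steps):
        action = plan(goal, memory)
        outcome = act(action)
        memory.append((action, outcome))  # dead ends are remembered too
        if evaluate(goal, outcome):
            return outcome  # goal achieved
        # otherwise loop: the next plan() sees what failed and can pivot
    return None  # gave up within the step budget
```

The key property is the third layer the authors describe: failure does not end the run; it feeds back into the next planning step.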


A Striking Example: Autonomous Recruiting in 31 Minutes

To illustrate, Sequoia walks through a real-world recruiting task delegated to an agent:

A founder asks for help finding a Developer Relations lead who is technically deep enough to earn engineers' respect yet engaging and fun on social media (especially Twitter/X), ideally targeting platform/dev-tool companies.

The agent independently:

  • Searches LinkedIn for relevant titles ("Developer Advocate," etc.) at companies like Datadog, Temporal, LangChain.
  • Filters for strong signals: YouTube conference talks with high engagement (50+ likes/views).
  • Cross-references with active Twitter accounts showing real followings and interaction.
  • Narrows to a shortlist by checking recent activity drops (potential disengagement).
  • Researches further, rules out candidates (e.g., recently took new roles or raised funding).
  • Identifies one strong match: a senior DevRel at a Series D company post-layoffs, with relevant expertise, 14k followers, and engineer-appealing memes.
  • Drafts a personalized, sincere outreach email.

Total time: 31 minutes. No scripts, no step-by-step guidance — just a high-level goal and the agent "thinking out loud" through hypotheses, dead ends, and corrections, exactly like a top human recruiter would.

This is not magic; it's the compounding of knowledge + reasoning + persistent iteration.
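The shortlisting steps above amount to a chain of filters over a candidate pool. A schematic sketch — all names, fields, and thresholds here are hypothetical illustrations, not data from the essay:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    talk_likes: int       # engagement on conference talks
    followers: int        # Twitter/X following
    took_new_role: bool   # rule-out signal

def shortlist(pool):
    # Each pass mirrors one of the agent's filtering steps.
    strong_talks = [c for c in pool if c.talk_likes >= 50]
    real_audience = [c for c in strong_talks if c.followers >= 1000]
    available = [c for c in real_audience if not c.took_new_role]
    return sorted(available, key=lambda c: c.followers, reverse=True)

pool = [
    Candidate("A", 120, 14_000, False),
    Candidate("B", 80, 500, False),   # audience too small
    Candidate("C", 200, 9_000, True), # just took a new role
]
print([c.name for c in shortlist(pool)])  # → ['A']
```

What the agent adds over this static pipeline is deciding which filters to apply, in what order, and when a signal (like an activity drop) changes the plan.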


Two Parallel Paths of Technical Progress

Sequoia distinguishes two main vectors driving long-horizon agent capability:

  • Reinforcement Learning (RL): Intrinsic training that teaches models to maintain focus, chain behaviors coherently, and handle long sequences. This is largely scaled by frontier labs (OpenAI, Anthropic, etc.) and includes multi-agent systems, tool use, and reward shaping.
  • Agent Harnesses / Scaffolding: External engineering layers that work around model limitations — long-term memory, state handoff, compaction, guardrails, tool integration, and retry logic. This is where application builders (Manus, Factory’s Droids, Claude Code integrations) are innovating fastest.

Both paths are scaling rapidly, but the harnesses are enabling productization today.
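Two of the harness techniques named above — context compaction and retry logic — can be sketched in a few lines. This is an illustrative toy, not the design of Manus, Factory, or Claude Code:

```python
def compact(history, keep_last=3):
    """Compress old context into a short summary to stay within limits."""
    if len(history) <= keep_last:
        return history
    summary = f"[{len(history) - keep_last} earlier steps summarized]"
    return [summary] + history[-keep_last:]

def run_with_harness(step_fn, max_retries=3):
    """Retry a flaky step, handing compacted state to each attempt."""
    history = []
    for attempt in range(max_retries):
        try:
            result = step_fn(compact(history))
            history.append(f"attempt {attempt}: ok")
            return result
        except RuntimeError as err:
            history.append(f"attempt {attempt}: {err}")  # guardrail log
    raise RuntimeError("step failed after all retries")
```

The point of scaffolding like this is that it improves reliability without touching the model itself — which is why application builders can ship it today while RL scaling continues inside the labs.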


Exponential Trajectory and Bold Projections

Tracking from METR shows the length of tasks agents can reliably complete doubling roughly every 7 months.

Extrapolating:

  • Reliable day-long human-expert tasks by ~2028;
  • Year-long tasks by ~2034;
  • Century-long tasks by ~2037.

Failures remain (hallucinations, context loss, wrong paths), but they are becoming rarer and more fixable. The direction is clear.
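The mechanics of that extrapolation are simple exponential growth. The sketch below assumes (purely for illustration — the essay does not state a starting point) that agents handle roughly 1-hour expert tasks at the start of 2026; the resulting dates depend entirely on that assumption and won't exactly reproduce the essay's figures:

```python
import math

DOUBLING_MONTHS = 7  # METR's observed doubling time for task horizon

def years_until(current_hours: float, target_hours: float) -> float:
    """Years for the task horizon to grow from current to target,
    assuming it keeps doubling every DOUBLING_MONTHS months."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * DOUBLING_MONTHS / 12

# Assumed start: ~1-hour tasks at the beginning of 2026.
day = years_until(1, 8)         # an 8-hour working day
year = years_until(1, 8 * 250)  # ~250 working days
print(f"day-long tasks:  ~{2026 + day:.0f}")
print(f"year-long tasks: ~{2026 + year:.0f}")
```

Under these assumptions, day-long tasks land around 2028, consistent with the essay's projection; the longer horizons are more sensitive to the starting point and the doubling rate holding.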


The Big Shift: From Talkers to Doers

The essay's closing contrast is stark:

  • 2023–2024 AI apps were talkers — sophisticated conversationalists with limited real impact.
  • 2026–2027 AI apps will be doers — autonomous agents that work like tireless colleagues, running in parallel and handling tasks that demand all-day persistence.

This unlocks entirely new business models: founders can productize work that requires sustained attention (e.g., cross-referencing clinical trials, mining customer support tickets). Interfaces will evolve from chat to delegation ("hire an agent"). The ultimate litmus test for AGI, as Sarah Guo suggested: Can you hire it?

Sequoia's verdict is unequivocal:  
"AGI is here, now. Long-horizon agents are functionally AGI, and 2026 will be their year."

The era of chatbots is over. The era of autonomous, outcome-delivering AI has begun.

