26.06.2025 22:14

State of Foundation Models 2025: A Summary of Innovation Endeavors’ Report

In June 2025, Innovation Endeavors, the venture capital firm co-founded by former Google CEO Eric Schmidt, released a comprehensive 126-page report titled State of Foundation Models 2025, authored by Davis Treybig and Eric Schmidt.

The report dives into the transformative landscape of foundation AI models, highlighting their rapid mainstream adoption, technical advancements, economic complexities, and broader societal impacts. Below is a concise summary of the key insights from the report, as shared in posts on X and the accompanying YouTube presentation.


Generative AI Goes Mainstream

2025 marks a pivotal year for generative AI, with adoption reaching unprecedented levels. One in eight workers globally now uses AI tools at least monthly, with 90% of this growth occurring in the past six months. AI applications are generating billions in annual revenue across industries like engineering, design, accounting, and law, fundamentally reshaping workflows and business models.

LLMs Outperform Humans in Complex Tasks

Large Language Models (LLMs) have achieved remarkable milestones, surpassing human performance in specialized domains. They outperform doctors in various diagnostic tasks and solve Olympiad-level geometry problems better than 99% of humans.

A surprising finding: smaller models (e.g., 3B parameters) equipped with advanced reasoning mechanisms can outperform significantly larger models (e.g., 70B parameters) when given time to "think" through problems, highlighting the power of reasoning-focused training.


Exponential Growth in Model Capabilities

Technical metrics for foundation models are scaling at an extraordinary pace. Performance, intelligence, and context windows are growing over 10× annually. Context windows, for instance, have expanded from approximately 8,000 tokens to over one million, while the cost of generating a single token on large models has dropped nearly 1,000-fold in just two years.

The average duration of tasks a model can autonomously complete doubles roughly every seven months, underscoring the rapid evolution of AI capabilities.
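That doubling trend can be sketched as a simple exponential extrapolation (the seven-month doubling period comes from the report; the baseline task length and projection horizon below are illustrative assumptions):

```python
# Project how long a task a model could complete autonomously,
# assuming the capability horizon doubles every 7 months.
def projected_task_minutes(baseline_minutes: float, months_ahead: float,
                           doubling_months: float = 7.0) -> float:
    """Exponential extrapolation: baseline * 2^(months / doubling period)."""
    return baseline_minutes * 2 ** (months_ahead / doubling_months)

# e.g. an illustrative 30-minute task horizon today, projected 21 months out:
print(projected_task_minutes(30, 21))  # 30 * 2^3 = 240.0 minutes
```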


Reasoning Models: Think First, Then Speak

The report emphasizes a new paradigm: “smart models think before they speak.” Models trained with Chain-of-Thought (CoT) reasoning and reinforced through post-training (e.g., reinforcement learning with reward models) are unlocking new scaling pathways. The report suggests that post-training may soon eclipse pre-training in importance, as it enables models to refine their reasoning and adaptability.
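The "think before they speak" pattern can be illustrated with a minimal prompt-and-parse sketch. This is a generic convention, not anything specified in the report: the `<think>` delimiter tags and the parsing helper are illustrative assumptions.

```python
# Illustrative chain-of-thought convention: the model reasons inside
# <think>...</think> tags, and only the text after them is shown to the user.
COT_TEMPLATE = (
    "Reason step by step inside <think>...</think> tags, "
    "then state the final answer on its own line.\n\nQuestion: {question}"
)

def split_reasoning(response: str) -> tuple[str, str]:
    """Separate the hidden reasoning trace from the user-visible answer."""
    if "</think>" in response:
        reasoning, answer = response.split("</think>", 1)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return "", response.strip()

# Example with a canned model response:
raw = "<think>17 + 25 = 42</think>\n42"
reasoning, answer = split_reasoning(raw)
print(answer)  # 42
```

Reward models used in post-training can then score the reasoning trace and the answer separately, which is one way reinforcement learning refines a model's reasoning.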


The Complex Economics of Foundation Models

The economics of foundation models are a mixed bag. Leading players generate hundreds of millions in revenue, but training costs are astronomical: the report cites over $300 million for LLaMA 4, around $100 million for GPT-4, and annual training and data expenses approaching $3 billion for OpenAI.

Models can become outdated in as little as three weeks amid fierce competition, and open-source models now nearly match proprietary ones in performance, intensifying market pressures.

Workforce Transformation

AI is reshaping organizational structures. Roles for narrowly specialized professionals are increasingly absorbed by generalists augmented with AI assistants, while middle management positions are declining.

This shift reflects AI’s ability to automate routine decision-making and enhance productivity across skill levels.


Model Context Protocol (MCP) as the Integration Standard

The Model Context Protocol (MCP) is emerging as a universal standard for integrating AI models with tools like email, design software, and chat platforms. Increasingly, AI systems serve as “clients” for other AI systems, enabling self-configuring CRMs and databases through autonomous agents.
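MCP messages are built on JSON-RPC 2.0. A minimal sketch of the kind of request an MCP client sends to invoke a tool on a server follows; the `send_email` tool name and its arguments are hypothetical, standing in for whatever tools a given server exposes.

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, as used by MCP clients."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical tool exposed by an email-integration MCP server:
msg = make_tool_call(1, "send_email",
                     {"to": "team@example.com", "subject": "Q3 plan"})
print(msg)
```

Because every tool is described and invoked through the same message shape, one AI system can just as easily act as the "tool" behind another, which is what enables the agent-to-agent setups the report describes.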


Hardware Keeps Pace

The hardware landscape is evolving to meet AI’s demands. Selling raw GPU hours is proving more profitable than bundled software solutions, and buying additional GPU time often delivers more value than investing in software optimization. NVIDIA remains the dominant player, reporting a 10× increase in inference token generation in Q1 2025. A new wave of startups is developing transformer-specific chips; the report argues that rewriting AI software for such hardware is worthwhile because compute costs now far exceed engineering salaries.


Venture Capital Floods AI

AI’s investment landscape is booming: the share of venture capital flowing to AI rose from 10% in 2024 to over 50% in 2025. Companies like Anthropic are generating $2 billion in annual revenue with 2× growth, yet valuations at 30× revenue raise concerns about a potential bubble. Some startups secure funding at the idea stage without minimum viable products, amplifying risks of market overheating.


A Cautionary Note on Trends

The report warns that not all AI trends translate into sustainable businesses. For example, 75% of AI-powered photo applications lost most of their revenue within six months of their peak, underscoring the volatility of trendy sectors. The rapid obsolescence of models further complicates long-term business viability.



Conclusion

The State of Foundation Models 2025 report paints a picture of a dynamic, fast-evolving AI ecosystem that is both transformative and fraught with challenges. As generative AI becomes ubiquitous, technical advancements accelerate, and economic and workforce dynamics shift, the industry faces a delicate balance between innovation and sustainability. The full report and accompanying recap talk provide deeper insights into these trends and their implications for the future.

