The Agentic Surge – A Castle Built On Sand

By: Stephen Toback

I recently read an article by Yann LeCun, Chief AI Scientist at Meta, where he described the current push toward LLM-based agentic AI as a “recipe for disaster.” At a time when every tech giant is pivoting toward “AI Agents”—systems that can browse the web, execute code, and manage your calendar—LeCun’s skepticism feels like a direct contradiction to the industry’s momentum.

To understand why we might need to pause and evaluate this surge, we have to look at the fundamental disagreement between those building these agents and those, like LeCun, who believe the foundation is flawed.

The Recipe for Disaster

LeCun’s primary argument is that Large Language Models (LLMs) do not possess a “world model.” They are trained on text, not reality. When we give these models “agency”—the power to take actions in the real world—we are essentially giving a highly sophisticated calculator the keys to our digital lives without giving it any common sense.

LeCun argues that because LLMs are auto-regressive (predicting the next word based on the previous ones), they are prone to compounding errors. In a conversation, a small error is a typo; in an agentic workflow, a small error could be a deleted database or a security breach.
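The compounding-error argument can be illustrated with a back-of-the-envelope calculation. Assuming (as a simplification of my own, not LeCun's formal claim) that each generated step is correct independently with probability p, the chance of a flawless n-step run is p to the power n:

```python
# Back-of-the-envelope illustration of the compounding-error argument.
# Simplifying assumption (mine, for illustration only): each step in an
# agentic workflow succeeds independently with probability p_per_step,
# so an n-step plan succeeds end to end with probability p ** n.

def chance_of_flawless_run(p_per_step: float, n_steps: int) -> float:
    """Probability that every one of n independent steps is correct."""
    return p_per_step ** n_steps

# A 99%-reliable step looks safe in a single reply...
print(f"{chance_of_flawless_run(0.99, 1):.2%}")    # 99.00%
# ...but an agent chaining 100 such steps fails more often than not.
print(f"{chance_of_flawless_run(0.99, 100):.2%}")  # 36.60%
```

Real errors are not independent, but the toy model captures why a tolerable per-token error rate can become an intolerable per-workflow one.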

The Counter-Perspective: Who Disagrees?

While LeCun is a pioneer, many other leaders in the field believe that “reasoning” is an emergent property of scale and that we don’t need a separate “world model” architecture to achieve safe agency.

1. Sam Altman (CEO of OpenAI)

Altman and the team at OpenAI have leaned heavily into the idea that scale—more data and more compute—eventually leads to the reasoning capabilities LeCun says are missing. With the release of models like o1, which use “Chain of Thought” to think before they speak, OpenAI is betting that LLMs can indeed learn to plan.

The Perspective:

“I think one of the most surprising things about LLMs is that they do seem to learn these internal models of the world… it’s not just a statistical parrot.” — Sam Altman, interview with Lex Fridman, 2024.

2. Jensen Huang (CEO of NVIDIA)

As the provider of the hardware that powers these models, Huang views the move toward agentic AI as the natural and inevitable next step of the industrial revolution. He argues that “AI workers” or agents will simply be a new type of software.

The Perspective:

“In the future, every company will have a large collection of agents… these agents will be able to understand the context, understand the mission, and then break down the mission into a set of tasks.” — Jensen Huang, HP Discover Keynote, 2024.

3. Andrew Ng (Founder of DeepLearning.AI and Landing AI)

Andrew Ng is one of the most prominent voices encouraging the industry to move toward agentic workflows right now. He argues that even if the models aren’t perfect, the “workflow” around them (iterative loops where the AI checks its own work) makes them significantly more capable than a single prompt.
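The kind of iterative loop Ng describes can be sketched in a few lines. This is a generic "draft, critique, revise" pattern, not any specific vendor API; the `generate` and `critique` callables are hypothetical stand-ins for model calls:

```python
# Minimal sketch of an agentic reflection loop in the spirit Ng describes:
# draft, critique, revise until the checker is satisfied or a round limit
# is hit. Both callables are hypothetical stand-ins for LLM calls.

from typing import Callable

def reflective_workflow(
    task: str,
    generate: Callable[[str], str],       # produces a draft for a prompt
    critique: Callable[[str, str], str],  # returns "" when satisfied
    max_rounds: int = 3,
) -> str:
    draft = generate(task)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if not feedback:  # the checker found nothing left to fix
            break
        # Fold the feedback back into the prompt and try again.
        draft = generate(f"{task}\nRevise this draft: {draft}\nFeedback: {feedback}")
    return draft
```

Ng's claim, in effect, is that this outer loop buys reliability the single forward pass lacks, even with today's imperfect models.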

The Perspective:

“I think AI agentic workflows will drive a huge trend this year—maybe even more than the next generation of foundation models. This is an important trend for all AI developers to follow.” — Andrew Ng, Sequoia Capital’s AI Ascent, 2024.

Why the “Pause” Matters

The disagreement isn’t about whether AI is useful; it’s about reliability.

If LeCun is right, we are currently trying to build skyscrapers on a foundation of sand. If he is wrong, we are witnessing the birth of a new type of digital workforce. For those of us managing technical media environments and complex production workflows, the “brittleness” of these agents remains the primary concern.

Until an AI can “reason” why a specific action might be destructive—rather than just calculating that the action is statistically likely—the move toward full agency requires a cautious, human-in-the-loop approach.


References & Fact Check:

  • Yann LeCun on “Recipe for Disaster”: Originally stated during various 2024-2025 press briefings and technical talks regarding Meta’s “V-JEPA” and World Model research.

  • Sam Altman on World Models: Referenced from the Lex Fridman Podcast #419.

  • Jensen Huang on AI Agents: From his HP Discover 2024 keynote and subsequent investor calls.

  • Andrew Ng on Agentic Workflows: Detailed in his “Letters to the Community” via DeepLearning.AI and his 2024 technical presentations.
