
Artificial intelligence is advancing at an unprecedented speed, and this is primarily driven by the dominant paradigm of “scaling”—more computing, more data, and more parameters, i.e., larger models. The allure of scaling lies in its simplicity: if we keep making models bigger, perhaps we’ll eventually achieve human-like AI, or Artificial General Intelligence (AGI)—machines capable of human-level intelligence, creativity, adaptability, and generalization. Yet, as impressive as today’s large language models are, important theoretical questions remain unanswered. Is scaling alone truly sufficient for achieving genuine understanding, human-like creativity, or, more profoundly, consciousness? In this post, I argue that scaling, despite its practical successes, is fundamentally limited in its ability to produce true AGI. Instead, AI researchers will need insights from contemporary neuroscience. Those insights reveal critical blind spots in our current AI trajectory, challenging the simplistic notion that bigger always means better and offering a richer, more integrated path toward genuine intelligence and creativity.
Stuart Russell, a top AI researcher at Berkeley, sharply critiques the scaling approach, highlighting the absence of fundamental guiding principles underlying these massive models, often called “giant black boxes.” Scaling is an empirical, rather than theoretical, strategy: it lacks a solid scientific basis guaranteeing progress toward AGI. Practical limitations loom—like finite amounts of useful data and physical limits on computing capacity. More troubling, Russell points out that even impressive breakthroughs—such as AlphaGo’s acclaimed successes—can mask underlying misunderstandings, creating illusions of intelligence without genuine comprehension. This raises serious doubts about whether scaling alone could lead to true AGI. If scaling fails to deliver on its promises, we risk not only stagnation but a potentially devastating “AI winter,” leaving the field economically and scientifically stranded.
Emergence
Recent research highlights emergent abilities, skills that arise spontaneously only after models surpass certain size thresholds, as a point of resemblance to the human brain. Wei et al. (2022) report that tasks like arithmetic and multi-step reasoning appear abruptly at specific scales, defying simple predictions. These properties initially seem to support scaling strategies: perhaps the path to AGI is simply discovering larger emergent transitions. However, the unpredictability exposes a critical vulnerability: emergent abilities are fundamentally uncertain, appearing without warning and lacking theoretical explanations. Without a deeper understanding of why these transitions happen, we cannot reliably predict future breakthroughs. Yann LeCun, Chief AI Scientist at Meta and a Turing Award winner, makes this point repeatedly. Instead, we depend on trial and error, an inherently risky strategy from an AI-safety standpoint for developing something as significant as AGI. Such unpredictability underscores the urgency of grounding AI in robust scientific principles.
Karl Friston’s Free Energy Principle (FEP), a well-supported theoretical framework, describes the brain as an adaptive, nonlinear dynamical system that minimizes uncertainty through active inference. Unlike AI’s passive pattern recognition, the embodied brain engages in an action-perception cycle, predicting and controlling its sensory inputs. Our brains constantly generate predictions and adjust when reality doesn’t match. Faced with uncertainty, we update beliefs or act to shape outcomes. This offers what scaling “laws” lack: a guiding principle for adaptive intelligence. Similarly, Scott Kelso’s concept of metastability explains the brain’s flexibility in transitioning between order and disorder. Both theories emphasize embodied cognition, highlighting the limits of disembodied AI. Cognition arises from dynamic coupling between brain, body, and environment, something today’s AI lacks, restricting real-world understanding. True intelligence emerges through real-time sensorimotor interaction, not static pattern-matching alone.
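To make the action-perception cycle concrete, here is a minimal toy sketch (in Python) of that loop, not Friston’s formal mathematics: an agent compares its belief with noisy observations and reduces the mismatch in two ways, by updating the belief (perception) and by acting on the world (action). Every name and number in it is an illustrative assumption.

```python
import random

# Toy action-perception loop in the spirit of active inference.
# All variable names and constants are illustrative assumptions,
# not Friston's formal model.

world_state = 5.0     # hidden cause the agent cannot observe directly
belief = 0.0          # the agent's current guess about that hidden cause
learning_rate = 0.2   # how strongly prediction errors update the belief
action_gain = 0.1     # how strongly the agent acts to make the world match its prediction

for step in range(20):
    observation = world_state + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = observation - belief           # mismatch ("surprise")

    # Perception: revise the belief to reduce prediction error
    belief += learning_rate * prediction_error

    # Action: nudge the world toward the prediction (the other route to reducing error)
    world_state -= action_gain * prediction_error

    print(f"step {step:2d}  belief={belief:5.2f}  error={prediction_error:+.2f}")
```

The point is the two-way loop: prediction error can be reduced either by changing one’s mind or by changing the world, and that second route is exactly what a passive text predictor does not have.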
Prediction
At first glance, AI models like ChatGPT and the human brain share a core similarity: both function as prediction engines. Both rely on patterns to make predictions—LLMs use text-based patterns, while the brain integrates sensory and lived experience patterns. However, the way they generate predictions reveals a profound difference in intelligence itself. Foundation models operate by predicting the next token in a sequence, drawing from massive datasets to determine the most probable response. Currently, they function as sophisticated pattern-matching machines that approximate coherence without meaning—only syntax, not semantics.
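To see what “predicting the next token” amounts to at its simplest, here is a deliberately tiny sketch: a bigram model over a toy corpus. Real foundation models are transformer networks trained on vast datasets, so treat everything below as a simplified assumption for exposition, not how any production LLM actually works.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "next-token predictor": a bigram model over a toy corpus.
# Real foundation models use transformers trained on vast datasets; this corpus
# and these probabilities are illustrative assumptions only.

corpus = "the brain predicts the world and the model predicts the next token".split()

# Count how often each token follows each other token
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token):
    """Sample the next token in proportion to how often it followed `token`."""
    counts = following[token]
    if not counts:
        return random.choice(corpus)  # unseen context: fall back to any token
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Generate a short continuation, one token at a time
token = "the"
sequence = [token]
for _ in range(6):
    token = predict_next(token)
    sequence.append(token)
print(" ".join(sequence))
```

The generated sentence can look locally fluent, yet the model only tracks which tokens tend to follow which, a small-scale illustration of coherence without comprehension.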
In contrast, the human brain’s predictive engine works dynamically, constantly generating hypotheses about the world, updating beliefs through sensory experience, and adjusting behavior accordingly. The brain doesn’t just predict passively; it acts to shape its environment in order to reduce uncertainty. This is true agency, a self-correcting action-perception loop that is absent in today’s AI systems, which remain passive processors of text. This is the distinction between agentic AI and genuine agency. Without embodiment and an intrinsic drive to minimize uncertainty through real-world interaction, AI remains fundamentally limited: it can predict, but it cannot understand.
The philosopher Luciano Floridi’s recent paper, AI as Agency Without Intelligence, reinforces the distinction between LLM-based AI and human cognition. While LLMs display remarkable linguistic fluency and may exhibit a primitive form of agency, they operate as statistical pattern processors, not genuinely intelligent systems. Floridi refines the popular “stochastic parrot” critique, noting that LLMs don’t simply regurgitate text; they synthesize and restructure data in novel, emergent ways, much like a student stitching together an essay from multiple sources without deep comprehension.
Moreover, John Searle’s Chinese Room thought experiment argues that AI, like a person following a rulebook to manipulate Chinese symbols without understanding them, only simulates intelligence rather than truly comprehending language. Similarly, AI models process symbols without real meaning or intentionality. This highlights the gap between simulation and genuine understanding, reinforcing that scaling alone won’t create true intelligence without grounding in cognition and real-world experience.
Consciousness vs. Intelligence
AGI doesn’t imply artificial consciousness, a distinction underscored by neuroscientist Anil Seth. Intelligence involves flexible, goal-directed behavior, while consciousness involves subjective experience and sensation. Contrary to assumptions within the AI community, consciousness isn’t merely algorithmic complexity running on the brain’s wetware. Instead, consciousness emerges from being a living, embodied, self-organizing organism motivated by self-preservation. This challenges the assumption that consciousness will spontaneously emerge from “simply” increasing intelligence. Even if AI were to reach human-level intelligence, consciousness might remain elusive unless it is explicitly accounted for. The distinction between intelligence and consciousness further underscores the necessity of neuroscience: understanding consciousness through embodiment may be essential for moving beyond artificial intelligence toward genuine machine consciousness.
Given this, we can still bridge these neuroscience insights with practical agentic AI. Recent neuroscience research from Kotler et al. (2025) highlights how flow states, optimal conditions of peak performance and effortless creativity, integrate System 1 (fast, intuitive) and System 2 (deliberative, controlled) cognition, enabling adaptive decision-making. Today’s AI loosely mimics both: LLMs excel at quick pattern-based recognition (System 1), while inference-time computation enables multi-step reasoning (System 2). However, AI lacks the embodied, dynamic interplay between these processes.
Agentic AI, guided by neuroscience, could partner with humans, enhancing creativity, intuition, and performance. Future agentic AI must align with human cognition, supporting flow states and enabling true human-AI synergy through active inference and embodied intelligence. By grounding these agentic systems in neuroscience-derived principles, we can transform AI from passive computational tools into true creative partners (and keep in mind, even the brain is a black box!). Ultimately, this synthesis promises transformative advances, enabling AI to augment rather than merely replicate human intelligence. Moving forward, embracing neuroscience in AI development is essential for responsibly navigating the path toward genuinely intelligent, and perhaps consciously aware, machines.