Formalization in Mathematics and the Path to Neurosymbolic Intelligence
Mathematics is the discipline that transforms intuitive notions into precise, verifiable structures. To formalize an idea is to take an intuitive, vague, or verbally expressed concept and articulate it in a precise, unambiguous, rule-based language, enabling rigorous analysis, proof, and computation. In essence, formalization expresses an abstract idea within a strict symbolic framework governed by explicit rules and axioms.
The Limits of the Formalist Dream
David Hilbert’s vision in the early 20th century exemplified this pursuit of formalization. Through his famous 23 problems, posed in 1900, and later through the Entscheidungsproblem (decision problem), formulated with Wilhelm Ackermann in 1928, Hilbert sought a mechanical procedure, an algorithm, that could determine the provability of any mathematical statement within a formal system. This ambition aimed to secure mathematics by ensuring that reasoning itself could be completely mechanized.
Alan Turing settled a closely related question in 1936. He introduced an abstract model of computation (now known as the Turing machine), along with a universal machine capable of simulating any other, and used it to prove the undecidability of the halting problem: no general algorithm exists that can determine, for an arbitrary program and input, whether the program will eventually halt or run forever. Since a decision procedure for provability would yield a decision procedure for halting, Turing’s result showed that Hilbert’s Entscheidungsproblem admits no solution. This established fundamental limits on formalization and computation, revealing that mathematics cannot fully mechanize all truths or decisions.
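Turing’s diagonal argument can be sketched directly in code. Suppose, for contradiction, that a total function `halts(program, program_input)` existed; the self-referential `paradox` below defeats any candidate implementation. The names here are illustrative stand-ins, not a real API:

```python
def halts(program, program_input):
    """Hypothetical oracle: return True iff program(program_input) halts.
    Turing's argument shows no total implementation can exist, so this
    placeholder only raises."""
    raise NotImplementedError("no general halting decider exists")

def paradox(program):
    """Diagonal construction: do the opposite of whatever halts()
    predicts about running `program` on itself."""
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# Feeding paradox to itself yields the contradiction: paradox(paradox)
# halts exactly when halts(paradox, paradox) says it does not, so no
# implementation of halts() can be both total and correct.
```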
Mathematics does not invent ideas; rather, it locks them into forms that cannot deceive. Precise symbolic representation enforces consistency, eliminates ambiguity, and permits verification through logical deduction.
This tension—between the desire for complete formalization and its inherent limits—would eventually shape not only mathematics but the entire pursuit of artificial intelligence. Mathematicians themselves don’t think in purely formal terms. As Henri Poincaré observed, mathematical discovery often arrives as a sudden intuitive insight, with formal proof following as verification. The creative act precedes the formal lock.
Intelligence as Adaptive Resource Allocation
This brings us to a fundamental question: if intelligence, whether human or artificial, isn’t about pure formalism or pure intuition, what is it about?
Consider this mental model: Intelligence is the adaptive allocation of cognitive resources between fast, intuitive processing and slow, formal reasoning. Humans excel not because they’re always rigorous or always intuitive, but because they’ve learned—through evolution, development, and experience—when to be which.
A chess grandmaster doesn’t calculate every possible move (computationally infeasible on any meaningful timescale). Instead, they rely on pattern recognition (System 1 intuition) for most positions, reserving deep calculation (System 2 reasoning) for critical junctures where intuition suggests something might be amiss or particularly promising. This “selective formalization”—knowing what to subject to rigorous analysis and what to handle automatically—is the hallmark of expertise across domains.
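This selective policy can be caricatured in a few lines. Here `fast_eval`, `deep_search`, and the `threshold` are hypothetical stand-ins for pattern recognition, explicit calculation, and the trigger for switching between them; this is a sketch of the control structure, not an actual chess engine:

```python
def choose_move(position, moves, fast_eval, deep_search, threshold=0.2):
    """Selective formalization sketch: cheap pattern-based evaluation
    (System 1) decides most moves; expensive search (System 2) is
    invoked only when the fast scores are too close to call."""
    scores = {m: fast_eval(position, m) for m in moves}
    best = max(scores, key=scores.get)
    runner_up = max((s for m, s in scores.items() if m != best), default=None)
    if runner_up is not None and scores[best] - runner_up < threshold:
        return deep_search(position, moves)  # critical juncture: calculate
    return best                              # routine position: trust intuition
```

The design choice worth noticing is that the expensive path is gated by the cheap one: formal analysis is spent only where intuition reports uncertainty.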
The AI Crossroads: Competing Models of Mind
Modern artificial intelligence stands at a crossroads between competing models of what intelligence actually is.
The Predictive Processing Model suggests intelligence is fundamentally about minimizing “surprise.” The brain is a hierarchical prediction machine, constantly generating models of the world and updating them based on sensory input. In this view, the intuitive/formal split might correspond to different levels of this hierarchy—fast predictions at lower levels versus slower model updates at higher levels.
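A minimal caricature of this idea is a prediction-error update: the model nudges its estimate toward each observation, shrinking future surprise. The scalar form below is an illustrative simplification, not a claim about how cortical hierarchies actually implement it:

```python
def update_prediction(prediction, observation, learning_rate=0.1):
    """Move the internal estimate toward the observation in proportion
    to the prediction error -- a scalar stand-in for surprise
    minimization."""
    error = observation - prediction           # the "surprise" signal
    return prediction + learning_rate * error  # partial correction
```

Iterating this update against a stable input drives the error toward zero, which is the sense in which the system comes to “expect” its world.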
The Reinforcement Learning Model frames intelligence as reward maximization. An intelligent agent takes actions to maximize cumulative reward, with the intuitive/formal tradeoff mapping onto the exploration-exploitation dilemma: when to exploit known strategies (intuition) versus when to explore new possibilities (requiring more formal analysis).
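The exploit/explore split has a standard minimal form, the epsilon-greedy rule from the bandit literature, used here purely as an illustration of the tradeoff:

```python
import random

def epsilon_greedy(value_estimates, epsilon, rng=random):
    """With probability epsilon, explore a random arm (open-ended
    analysis of new possibilities); otherwise exploit the arm with the
    best current estimate (the fast, habitual choice)."""
    if rng.random() < epsilon:
        return rng.randrange(len(value_estimates))
    return max(range(len(value_estimates)), key=lambda i: value_estimates[i])
```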
The Embodied Cognition Model argues that intelligence cannot be separated from physical interaction. Much of what we call “intuition” is actually embodied know-how: skills like catching a ball or navigating a room that reside in our sensorimotor systems rather than our symbolic reasoning. Formal thought emerges from, and remains grounded in, these bodily interactions. In The Spell of the Sensuous, David Abram argues that meaning arises from embodied, sensory engagement with the world; modern AI, working almost entirely through abstraction, remains detached from precisely this kind of grounding.
Neuro-Symbolic AI: One Path Through the Maze
This brings us to neuro-symbolic AI—the contemporary attempt to bridge these divides. By combining neural networks (excellent at pattern recognition and handling uncertainty) with symbolic systems (enforcing logical consistency and explicit reasoning), neuro-symbolic approaches aim to capture something like human selective formalization.
In practice, this means:
- Combinatorial optimization problems like the Traveling Salesman Problem, where neural networks learn efficient heuristics while symbolic components enforce hard constraints
- Autonomous systems like self-driving cars, where neural perception identifies objects while symbolic reasoning enforces traffic rules and safety protocols
- Scientific discovery, where neural models identify patterns in data that are then formalized into testable hypotheses
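The common pattern across these examples can be sketched as a pipeline in which a learned scorer proposes and a symbolic rule set disposes. Here `score` and `satisfies_rules` are hypothetical stand-ins for a trained neural network and a logic engine:

```python
def select_action(candidates, score, satisfies_rules):
    """Neuro-symbolic selection sketch: the symbolic layer vetoes any
    candidate that violates a hard constraint; the neural layer ranks
    whatever remains."""
    legal = [c for c in candidates if satisfies_rules(c)]
    if not legal:
        raise ValueError("no candidate satisfies the hard constraints")
    return max(legal, key=score)
```

The design choice is that constraints act as a hard filter rather than a soft penalty, so a safety rule can never be traded away for a higher learned score.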
Yet for all its promise, neuro-symbolic AI faces profound challenges:
- The Engineering Bottleneck: Integrating two fundamentally different computational paradigms—the continuous, statistical nature of neural networks with the discrete, logical nature of symbolic systems—remains complex.
- The Scale Paradox: Much of AI’s recent progress has come from scaling neural models with vast data and compute. Neuro-symbolic systems, with their hybrid architectures, may not scale as elegantly or efficiently.
- The Unsolved Grounding Problem: How do symbols in an AI system acquire real meaning? Simply connecting neural perception to symbolic logic doesn’t automatically solve this classic philosophical challenge—it may just relocate it.
Beyond Hybrids: Alternative Futures
It’s crucial to recognize that neuro-symbolic AI represents one path among many. Alternative approaches continue to show remarkable results:
- End-to-end deep learning has produced systems that master complex games (AlphaGo) and generate human-like text (GPT-4) without explicit symbolic components
- World models attempt to give AI systems internal simulations of how the world works, potentially capturing formal relationships implicitly rather than explicitly
- Embodied AI grounds intelligence in physical interaction, suggesting that formal reasoning might emerge naturally from sensorimotor experience rather than needing to be separately engineered
The Fundamental Tradeoff: When Does Formalism Help?
Perhaps the deepest insight from this exploration is that formalization itself carries costs. Every time we “lock down” an idea into precise rules, we gain verifiability but lose flexibility. Every abstraction makes certain assumptions, and when those assumptions don’t hold, the formal system breaks.
Human intelligence seems to have evolved not to maximize formal precision, but to optimize a tradeoff: enough structure to enable planning, communication, and tool use, but enough flexibility to handle novelty, ambiguity, and changing contexts. Our cognitive architecture—with its blend of intuitive and deliberative processing—appears finely tuned to this balancing act.
Conclusion: Toward a Science of Cognitive Tradeoffs
As we advance toward more general forms of intelligence, whether artificial or in our understanding of natural minds, the critical challenge may indeed be understanding selective formalization—not as a design principle to be imposed, but as a dynamic to be discovered and nurtured.
Mathematics taught us how to formalize. Neuroscience is teaching us how brains balance efficiency with accuracy.
This isn’t merely a technical challenge. It’s perhaps the defining intellectual quest of our age: to understand the architecture of understanding itself.