Nov 1, 2024

Neuro-Symbolic AI: A Conversation on Building Smarter, Multi-Step Reasoning Systems

Marius-Constantin Dinu


This week, I had the pleasure of joining the Austrian Artificial Intelligence Podcast to talk about something I’m really excited about: neuro-symbolic AI and the work we’re doing at ExtensityAI. Neuro-symbolic AI, if you haven’t heard of it, is a combination of symbolic logic and neural networks—basically, it’s about giving AI a bit more “brains” behind the numbers. We’re not just relying on massive datasets and machine learning algorithms but blending them with structured, rule-based reasoning. Why? Because scaling models alone isn’t enough to get us to the next level of AI, where systems can reason their way through complex tasks and make sense of more abstract, nuanced problems.

This approach is gaining interest as people realize AI needs more than just raw data to become truly intelligent. We’re talking about models that can follow instructions, think in logical steps, and still draw on the vast knowledge encoded in today’s neural networks. Neuro-symbolic AI combines the best of both worlds, and at ExtensityAI our goal is to push this further: building systems that don’t just handle tasks but can actually reason through them in a way that’s reliable, explainable, and, hopefully, a big step closer to the kind of general intelligence many of us are aiming for.

At ExtensityAI, we’re all about taking this neuro-symbolic approach and turning it into a framework that can handle real-world tasks with precision and depth. Our open-source framework lets AI act like a “reasoning engine.” It can break down tasks into steps, decide which models and tools to use along the way, and adapt as it moves through each stage. Think of it like an AI project manager that knows when to call in different tools or models to handle a complex problem, without just brute-forcing its way to a result.
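To make that concrete, here’s a minimal sketch of the orchestration pattern, under my own illustrative names, not our actual framework API: a plan of typed steps, a registry of tools and models, and a loop that routes each step to the right one. Everything here (`Step`, `Engine`, `register`) is hypothetical.

```python
# Hypothetical sketch of the "reasoning engine" pattern: decompose a task
# into steps, then route each step to the tool or model suited for it.
# None of these names come from our framework; they only illustrate the idea.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str    # what this stage produces, e.g. "layout"
    tool: str    # which registered tool/model handles it
    prompt: str  # the instruction passed to that tool

class Engine:
    def __init__(self) -> None:
        self.tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def run(self, plan: list[Step]) -> dict[str, str]:
        results: dict[str, str] = {}
        for step in plan:
            # Thread earlier results through as context so each stage
            # can build on, and adapt to, what came before it.
            context = "\n".join(f"{k}: {v}" for k, v in results.items())
            results[step.name] = self.tools[step.tool](f"{context}\n{step.prompt}".strip())
        return results
```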

Let’s say a company wants to generate a whole UI based on a high-level description—maybe a website layout, buttons, content, the works. Normally, an AI would need to be heavily programmed for every little step, or it would just try to figure it all out at once, and often fail. With our framework, the AI can map out each necessary step, check itself along the way, and pull in different specialized models to handle specific parts, like the layout or content generation. It’s like the AI is thinking: “First, I need a basic layout, so I’ll use Model X. Then, I’ll refine the content with Model Y.” This multi-step reasoning makes the whole process both efficient and adaptable, bringing in human verification when needed for things like design choices or final content checks.
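Continuing the hypothetical sketch above, the UI example looks roughly like this: two specialized stand-in models, a two-step plan, and a human review gate at the end. The model functions are toy placeholders, not real model calls.

```python
# Toy stand-ins for specialized models; a real system would call out to
# an actual layout model ("Model X") and content model ("Model Y").
def layout_model(prompt: str) -> str:
    return f"<layout draft for: {prompt}>"

def content_model(prompt: str) -> str:
    return f"<copy draft for: {prompt}>"

engine = Engine()
engine.register("layout", layout_model)
engine.register("content", content_model)

plan = [
    Step("layout", "layout", "Draft a landing page: hero, three feature cards, footer."),
    Step("content", "content", "Write headline and body copy for each section."),
]
results = engine.run(plan)

# Human-in-the-loop gate: design choices and final content get a
# human pass before anything is accepted.
for name, output in results.items():
    print(f"[needs review] {name}: {output}")
```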

One of the biggest challenges in building such a system is keeping everything in line as it works through each step. Errors can add up fast when you’re dealing with a multi-step process, especially if each stage relies on the last. So, we’ve focused a lot on ways to measure the quality of each step in a workflow, not just the final output. Inspired by statistical methods, we created a quality measure that allows the system to “check itself” after each phase. This way, if one part of the reasoning process veers off course, we can correct it right there instead of waiting until the whole process is finished and possibly flawed.
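Here’s the shape of that idea in code, a sketch only: score every intermediate result against a threshold and retry the stage on the spot, so a bad step never propagates. The scoring function is left abstract here; the statistical measure we actually use is more involved than a single threshold.

```python
from typing import Callable

# Sketch of per-step self-checking: verify each stage before moving on,
# so errors are corrected where they occur instead of compounding.
def run_with_checks(
    steps: list[Callable[[str], str]],
    score: Callable[[str], float],  # quality measure for one stage's output
    threshold: float = 0.8,
    max_retries: int = 2,
) -> str:
    state = ""
    for step in steps:
        for _ in range(max_retries + 1):
            candidate = step(state)
            if score(candidate) >= threshold:
                state = candidate  # this stage passed its check; continue
                break
        else:
            # Out of retries: surface the failure now rather than letting
            # a flawed intermediate result flow into later stages.
            raise RuntimeError("stage failed its quality check")
    return state
```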

What sets our approach apart from many frameworks out there—think LangChain, for instance—is this built-in verification and explainability. We’re not just generating an answer; we’re building systems that can show why they arrived at a given answer, and they do so by following clear, logical steps. We isolate components and track them closely. In every workflow stage, our AI isn’t just spitting out answers—it’s doing type checks, structure validation, and all kinds of logical cross-checks. This ensures that when an AI system makes a choice, it’s grounded in logic and can be trusted to make the same choice again under similar conditions.
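To illustrate what those checks look like (the `ButtonSpec` type is invented for this example): a stage that claims to produce a button specification must pass explicit type and structure checks at the stage boundary before the pipeline accepts its output.

```python
from dataclasses import dataclass

# Invented example type; the point is the checking, not the schema.
@dataclass
class ButtonSpec:
    label: str
    action: str

def validate_button(raw: dict) -> ButtonSpec:
    # Type and structure checks at the stage boundary: fail loudly here
    # instead of letting a malformed output flow downstream.
    label, action = raw.get("label"), raw.get("action")
    if not isinstance(label, str) or not label.strip():
        raise ValueError(f"invalid label: {label!r}")
    if not isinstance(action, str) or not action.strip():
        raise ValueError(f"invalid action: {action!r}")
    return ButtonSpec(label=label, action=action)
```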

Ultimately, we want to move AI beyond the typical “monolithic” model, where one giant neural net is trying to handle everything. At ExtensityAI, we believe in modular, adaptive systems that know how to leverage their own strengths and call in specialized models when necessary. It’s a step toward AI that can work as a true collaborator, filling in for repetitive tasks but leaving the creative, nuanced decisions to humans.

Our long-term vision? To build a platform where different AI agents can actually work together, tackling everything from initial problem analysis to final solution output. Whether it’s automating scientific research or designing new systems for our business partners, we’re pushing toward an AI that doesn’t just mimic human reasoning but genuinely supports it. Imagine a world where AI isn’t just responding to commands but helping us think better, faster, and more clearly.

Thanks for reading about what we’re up to at ExtensityAI. I hope this gives you a feel for why neuro-symbolic AI is more than just a new tech buzzword. It’s a direction we’re passionate about because it brings AI one step closer to becoming a reliable, adaptable, and intelligent tool for a world that’s becoming more complex every day.
