In this conversation with SOSV, our CEO Cristopher Salvi sets out the case behind Logos Research: that hallucinations in AI are a structural problem, especially for long-horizon reasoning; that formal verification is the missing layer; and that solving it requires a new kind of mathematical infrastructure, built for agents rather than humans.
On why hallucination is structural, not a transient limitation:
Coding agents generate code with a certain level of randomness. Classical compilers catch syntax errors. Unit tests catch some logic errors: the ones you've seen before and thought to write tests for. But the real problems are the edge cases you haven't observed in historical data or haven't thought about, like a crash, a cyber-attack, an earthquake.
— Cris Salvi, CEO
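
To make the contrast concrete, here is a minimal Lean 4 sketch (illustrative only, not Logos's tooling; `absDiff` is an invented example function): a unit test checks the single input its author thought of, while a machine-checked theorem covers every input, including the edge cases nobody wrote a test for.

```lean
-- Hypothetical example function: absolute difference on Nat.
def absDiff (a b : Nat) : Nat := max a b - min a b

-- A unit test exercises one case the author anticipated.
#eval absDiff 3 5  -- 2

-- A formal proof covers all inputs, including unanticipated ones:
-- the file does not compile unless the property actually holds.
theorem absDiff_comm (a b : Nat) : absDiff a b = absDiff b a := by
  unfold absDiff
  rw [Nat.max_comm, Nat.min_comm]
```

The point of the sketch: the `#eval` line is the unit-test regime, true for one input; the theorem is the formal-verification regime, true for all inputs by construction.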
On why existing mathematical infrastructure isn’t enough:
In a world where AI agents need to reason mathematically about the physical world, the underlying mathematical infrastructure has to be designed for them.
— Cris Salvi, CEO
On the nature of the problem we are solving:
The core challenge Logos is solving is the design of a mathematical API for AI: how formalised mathematics should be decomposed, indexed, and structured so that AI systems can reliably retrieve and recombine it. This is fundamentally a data and infrastructure problem, more than a model problem.
— Cris Salvi, CEO
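
As a toy illustration of what "decomposed, indexed, and structured for retrieval" could mean in practice, here is a hypothetical Lean 4 sketch. The `LemmaEntry` structure, the `index`, and `retrieve` are invented for this example and are not part of any Logos API: each formal statement becomes a small, tagged record that an agent can look up by topic and feed into proof search.

```lean
-- Hypothetical sketch: an index entry for one formal statement,
-- so an agent can retrieve by topic rather than by exact name.
structure LemmaEntry where
  name      : String       -- e.g. "Nat.add_comm"
  statement : String       -- pretty-printed statement
  tags      : List String  -- retrieval keys
deriving Repr

-- A two-entry toy index; a real one would be generated from a
-- formal library, not written by hand.
def index : List LemmaEntry :=
  [ { name := "Nat.add_comm",
      statement := "∀ (a b : Nat), a + b = b + a",
      tags := ["arithmetic", "commutativity"] },
    { name := "Nat.mul_comm",
      statement := "∀ (a b : Nat), a * b = b * a",
      tags := ["arithmetic", "commutativity"] } ]

-- Retrieve every entry carrying a tag; the recombination step
-- would feed the matches back into an agent's proof search.
def retrieve (tag : String) : List LemmaEntry :=
  index.filter (fun e => e.tags.contains tag)

#eval (retrieve "commutativity").map (·.name)
-- ["Nat.add_comm", "Nat.mul_comm"]
```

The design choice the sketch gestures at: fine-grained, uniformly structured entries make retrieval a data problem an agent can solve reliably, rather than a search through monolithic, human-oriented proof texts.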