Imandra launches LLM-based AI for FIX connectivity

Imandra announced the launch of FIX Wizard for its FIX Connectivity suite used by banks and trading firms, making it the first AI assistant to be deployed across the company's product line. Built on Imandra’s neuro-symbolic architecture, FIX Wizard automatically reasons about complex financial protocols and APIs, acting as an expert AI assistant for onboarding clients in capital markets.

FIX Wizard combines the statistical artificial intelligence (AI) underlying large language models (LLMs) with Imandra’s automated reasoning to create a powerful generative AI assistant grounded in logic, with independently verifiable audit trails that meet stringent regulatory requirements. This approach addresses the fundamental shortcomings of applying LLMs in regulated markets: hallucinations, an inability to scale reliably to unseen inputs, and a lack of validation of knowledge sources.

“LLMs hold tremendous promise, but ultimately cannot be trusted in regulated environments. By combining their strengths with scalable, rigorous automated reasoning, we obtain a kind of magic: conversational interfaces with correct reasoning and domain-specific skills,” said Grant Passmore, co-CEO of Imandra, in a statement.

Today, capital markets run on a myriad of complex, interconnected trading systems. Historically, navigating the PDF specifications of FIX interfaces has been a highly manual and error-prone process. The industry has been working on initiatives like FIX Orchestra – a digitized, machine-readable representation of FIX specifications. Imandra takes this approach further by creating a full logical model of the API.

FIX Wizard uses Imandra’s “digital twin” of a FIX gateway and automated reasoning to:

  • Answer questions about the FIX specification and rules of engagement
  • Analyze customer-provided FIX traffic, identify the underlying causes of issues and recommend how to remedy them (a minimal sketch follows this list)
  • Diagnose many issues at once, rather than reporting only the first problem encountered
  • Give guidance on certification test cases and run conformance tests against the implementation
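To picture the traffic-analysis capability in the second bullet, the sketch below parses a raw tag=value FIX message and checks a few rules, collecting every violation rather than stopping at the first. The rules shown are hypothetical examples written for illustration only; they are not Imandra's logical model or FIX Wizard's actual checks.

```python
# Illustrative sketch only: parse a raw FIX message and report all rule
# violations at once, rather than failing on the first one encountered.
# The rules below are simplified, hypothetical examples of a venue's
# rules of engagement, not Imandra's actual model.

SOH = "\x01"  # standard FIX field delimiter


def parse_fix(raw: str) -> dict[str, str]:
    """Split a raw tag=value FIX message into a {tag: value} dict."""
    fields = {}
    for part in raw.strip(SOH).split(SOH):
        tag, _, value = part.partition("=")
        fields[tag] = value
    return fields


def check_new_order(fields: dict[str, str]) -> list[str]:
    """Return every violation found in a NewOrderSingle (MsgType=D)."""
    issues = []
    if fields.get("35") != "D":
        issues.append("35 (MsgType): expected 'D' (NewOrderSingle)")
    for tag, name in [("11", "ClOrdID"), ("55", "Symbol"),
                      ("54", "Side"), ("38", "OrderQty"), ("40", "OrdType")]:
        if tag not in fields:
            issues.append(f"{tag} ({name}): required field missing")
    if fields.get("40") == "2" and "44" not in fields:
        issues.append("44 (Price): required when 40 (OrdType) is '2' (Limit)")
    qty = fields.get("38")
    if qty is not None and (not qty.isdigit() or int(qty) <= 0):
        issues.append("38 (OrderQty): must be a positive integer")
    return issues


if __name__ == "__main__":
    raw = SOH.join([
        "8=FIX.4.4", "35=D", "11=ORD-1", "55=VOD.L",
        "54=1", "38=0", "40=2",  # limit order, but 44 (Price) is missing
    ]) + SOH
    for issue in check_new_order(parse_fix(raw)):
        print(issue)
```

Run against the sample message, the sketch reports both the missing limit price and the invalid quantity in a single pass, mirroring the "many issues at once" behaviour described above, albeit with hard-coded rules rather than a verified logical model of the gateway.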

“LLMs are good at translation and bad at reasoning. Automated reasoning is bad at translation (i.e., requires you to be very precise) but incredible at reasoning. When we combine these techniques we get ground-breaking products that can be safely deployed in regulated environments,” said Denis Ignatovich, co-CEO of Imandra, in a statement.
