The AI Revolution in Quant Investing: Insights from Boltzbit’s Leaders

Please give us a little introduction to your current role and what you do 

Yichuan Zhang [YZ] - Co-founder and CEO of Boltzbit, I have been researching generative AI for over 15 years. I focus on the company’s mission and on setting our strategy for innovation at the very frontier of AI research.

Ivan Mihov [IM] - Boltzbit’s CRO, I am responsible for business development and go-to-market (GTM). With over 15 years of experience in financial and trading technology, I work on the adoption of custom LLMs by enterprises, starting within the financial domain.

What are your/your firm’s top 3 priorities for the coming year?

[YZ] Our vision is to democratise access to the analytical power of LLMs - the capability to reason and self-train at scale instantaneously, much like how the human brain adapts to newly acquired knowledge...

As an AI research lab, our top priority remains advancing research and engineering at the core of LLM architectures, specifically towards computationally efficient self-learning capabilities.

Secondly, we anticipate broader adoption of our technology across new industry verticals, driven by the advantages of online learning from proprietary data streams and workflows. While finance professionals have been early adopters, industries such as supply chain, insurance, healthcare, and telecommunications stand to benefit significantly.

Lastly, we continue to work closely with our partners to develop cutting-edge applications of LLMs in Big Data Reasoning, challenging traditional machine learning models and striving to match or surpass their performance.

You started with an initial focus on the fixed income space; can you talk us through how quant and systematic methods can be used here?

[IM] Fixed income as an asset class is characterised by its OTC nature and the vast number of securities in existence, which adds a layer of complexity compared with, say, the equity and FX markets. Data from both primary and secondary market activity is often not optimally normalised, making investment strategies more challenging in terms of data capture and security selection.

Unstructured data capture and modeling remains the most critical use case for fixed income market participants. The ability to model counterparty interactions across various communication channels enables firms to better understand price formation, classify counterparties, refine pricing techniques, and optimize dealer selection, all key areas for true alpha generation. However, since the beginning of this year, we have seen growing demand for algorithm optimization and the distillation of traders’ execution patterns into LLMs.
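To make one of those use cases concrete, here is a minimal, hypothetical sketch of counterparty classification from quoting behaviour. The features, labels and model choice are illustrative stand-ins, not a description of Boltzbit’s models.

```python
# Hedged sketch: classifying counterparties from quoting behaviour.
# All data below is synthetic and the labelling rule is a toy example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-counterparty features: hit ratio, average response time (s),
# average quoted spread (bp); label 1 = "axe-driven", 0 = "flow-driven".
X = np.column_stack([
    rng.uniform(0.1, 0.9, 200),   # hit ratio
    rng.uniform(1, 120, 200),     # response time in seconds
    rng.uniform(0.5, 10, 200),    # quoted spread in basis points
])
y = (X[:, 0] > 0.5).astype(int)   # toy labelling rule, for illustration only

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3]))   # probability each counterparty is axe-driven
```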

While Boltzbit’s core technology was first deployed in fixed income, it remains asset-class and industry-agnostic.

What are the biggest impacts that breakthroughs in GenAI will have on the quant and systematic space?

[YZ] Big Data Reasoning (BDR) across data fusion, forecasting, sentiment analysis, dynamic pricing and AI-assisted decision making. Within quant workflows, the key enablers of BDR include faster learning, normalized data streams, proprietary model training, and the decomposition of computational tasks into highly efficient AI agents. Any advancement that improves learning efficiency and reduces the need for costly fine-tuning will accelerate the deployment of new models while enhancing their accuracy. More accuracy leads to greater automation.

With so much data in the market right now, where do you think the most value can be found?

[IM] The rate of LLM performance improvement is plateauing as we gradually (or rather rapidly) exhaust the publicly available data we can train models on. Even the largest models remain ineffective for narrow, proprietary tasks, since they lack access to the firm’s private data sources; this is where the real value lies.

Models proprietary to the firm deliver the ultimate edge in performance and reliability, drawing on highly specific private datasets and an understanding of market microstructure within the firm’s operational constraints. To unlock this potential, firms must normalise and integrate structured and unstructured data (e.g., time-series and text) into a unified intelligence layer, fully realizing the predictive capabilities of LLMs.
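As a purely illustrative sketch of what such a unified layer could look like, with hypothetical data, a toy vocabulary and hand-rolled features standing in for learned embeddings, consider:

```python
# Hedged sketch: fusing a numeric time series with unstructured text into one
# feature vector per observation. Data, features and vocabulary are hypothetical.
import numpy as np

def timeseries_features(prices: np.ndarray, window: int = 5) -> np.ndarray:
    """Rolling mean return, volatility, and momentum over the trailing window."""
    returns = np.diff(np.log(prices))[-window:]
    return np.array([returns.mean(), returns.std(), prices[-1] / prices[-window] - 1.0])

def text_features(text: str, vocab: list[str]) -> np.ndarray:
    """Crude bag-of-words counts; a real system would use learned embeddings."""
    tokens = [t.strip(",.") for t in text.lower().split()]
    return np.array([tokens.count(word) for word in vocab], dtype=float)

# Hypothetical inputs: a short price history and a snippet of dealer chat.
prices = np.array([101.2, 101.5, 101.1, 100.8, 101.0, 101.4])
chat = "Dealer keen to buy, axe on the 10y, tightening offer"
vocab = ["buy", "sell", "axe", "tightening", "widening"]

# The "unified intelligence layer" is represented here simply as the concatenated
# vector that a downstream model (LLM head, classifier, forecaster) would consume.
unified = np.concatenate([timeseries_features(prices), text_features(chat, vocab)])
print(unified)
```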

What is your advice to funds hoping to get new systematic strategies into production more quickly and more often?

[YZ] Start with the core of your business operations, ensuring seamless access to proprietary data and a clear strategy for extracting value from it. Next, implement real-time feedback loops to train models at inference speed, enabling instant validation and adaptation. Finally, prioritize computational efficiency by distilling AI agents for narrow yet critical tasks. Deploying small, highly efficient models that learn in micro-increments is faster and more effective than batch training large, all-encompassing models.
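A minimal sketch of what learning in micro-increments can mean in practice, using a toy linear model and a simulated data stream (all values hypothetical):

```python
# Hedged sketch: a model updated one observation at a time (online SGD) rather
# than retrained in batches. Data stream, target and learning rate are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
weights = np.zeros(n_features)
learning_rate = 0.01

def predict(x: np.ndarray) -> float:
    return float(weights @ x)

def online_update(x: np.ndarray, y: float) -> None:
    """One gradient step on squared error, applied per new observation."""
    global weights
    error = predict(x) - y
    weights -= learning_rate * error * x

# Simulated real-time feedback loop: predict, observe the outcome, update instantly.
for _ in range(1000):
    x = rng.normal(size=n_features)
    y = 0.5 * x[0] - 0.2 * x[3] + rng.normal(scale=0.1)  # hypothetical target
    _ = predict(x)        # act on the prediction
    online_update(x, y)   # then fold the observed outcome back into the model

print(weights)  # should approach roughly [0.5, 0.0, 0.0, -0.2]
```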

ChatGPT is everywhere and being used everywhere. How do you see quant funds using this new technology and what advice can you give people using it?

[IM] ChatGPT is a general-purpose tool, useful for personal co-pilot applications. Quantitative strategies demand highly tailored analytical reasoning that generic B2C AI chatbots are not designed for. AI can truly bring value when it leverages the firm’s proprietary data: transactions, price ticks, market sentiment and real-time unstructured sources such as chats, news and research. Only models trained on firm-specific data and tailored for specialized tasks can truly align with a firm’s strategic objectives within its private tech stack. While the GPT architecture is powerful, it is likely not the optimal solution for quant tasks.

What are your predictions for generative AI in the coming years?

[YZ] AI is the engine, but the data is the oil - the critical resource that fuels it. In the coming years, competition in AI will center on rapid learning capabilities and efficiency at runtime, particularly with live proprietary data streams.

The future of AI is agentic, fundamentally transforming how human experts delegate tasks to process vast amounts of complex information in real time. To reach optimal efficiency, AI models must learn and adapt instantaneously, as the time lag from batch training and fine-tuning disrupts real-time decision-making and eliminates any temporal alpha capture. High-frequency decision making demands high-frequency model training.

At Quant Strats, we always discuss the challenges and opportunities of blending quant and fundamental strategies, and this is always a popular topic – why do you think this is? What do you think are the most important questions for quants when considering this strategy?

[IM] If we approach the blend of quant and fundamental strategies from an abstract perspective, specifically the integration of time-series modeling with unstructured textual data, the potential for newly emergent strategies is immense.

Traditional machine learning techniques have struggled to dynamically capture the multi-dimensionality of cross-asset and cross-strategy approaches. ML models have to be calibrated continuously to evolving market states, incorporating new data and variables as they emerge.

LLMs offer a new paradigm by being able to distill and integrate existing ML models into their architectures. This raises a fundamental question: could we ever achieve true "psychohistory" prediction techniques, as envisioned in Asimov’s Foundation—reducing both quant and fundamental strategies to a unified statistical probability output? Perhaps. But the challenge remains.
