Investment Intelligence and AI Trends for 2025
By Gaby Diamant, Bridgewise CEO.
Believe it or not, we are already halfway through the current decade. A new year brings new opportunities and new perspectives, and above all a new level of clarity about the trends taking shape for the year ahead. Several AI trends will have a significant impact on the industry and will also be felt in the capital markets more broadly. So in this post, I will focus on three big areas of change that I think will be felt widely across multiple industries, even as they remain most closely centered in the AI field.
2025 and the Rise of the Multi-Agent Model
This is certainly the most important trend that will impact the AI space in 2025 and beyond. So what is the multi-agent model? Let’s start with foundation models, or LLMs, such as GPT-4o or Claude 3.5 Sonnet. While current LLMs are powerful and highly versatile, they face a significant challenge in achieving the depth of understanding required to act as domain experts. These models are not inherently designed to function as bespoke, subject-matter specialists.
Where we see AI going is a world in which foundation models are connected to specialized agents, each answering a different type of question. The foundation model acts as a versatile core, with domain-specific agents layered alongside it. For example, when a client poses a question about investments, the system turns to Bridgewise, a bespoke investment-focused LLM, to deliver an accurate and tailored response.
How does it work?
Let’s take Bridgewise as an example. Our solution initially relied on five distinct language models. Over time, we transitioned to a more advanced architecture based on two proprietary pre-trained models developed by Bridgewise, supported by multiple BERT-based models fine-tuned for specific tasks. Additionally, we incorporate Mistral, an open-source large language model, as the foundation for some of our generative capabilities.
We have extensively fine-tuned Mistral across hundreds of sessions to adapt it specifically to our financial domain. This involves tailoring it to handle domain-specific terminology and context. Our architecture also includes a suite of specialized language models, each designed to perform a specific role. These models collectively create a multi-agent system that powers our solution.
All these proprietary models work seamlessly together, forming a cohesive and robust system tailored to Bridgewise’s mission. This multi-agent approach enables us to deliver precise, reliable, and domain-specific insights to our users.
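The multi-stage flow described above can be sketched in miniature. This is an illustrative sketch only, not Bridgewise’s actual code: all model calls are stubs, and the function names, the “fundamentals” task label, and the keyword check are assumptions made for the example. The shape is what matters: a lightweight classifier tags the query, a task-specific model produces structured output, and a generative model phrases the final answer.

```python
# Illustrative sketch of chaining specialized models, with stubs
# standing in for the real models at each stage.

def classify_task(query: str) -> str:
    """Stand-in for a fine-tuned BERT-style task classifier."""
    return "fundamentals" if "revenue" in query.lower() else "general"

def run_task_model(task: str, query: str) -> dict:
    """Stand-in for the specialized model chosen by the classifier."""
    return {"task": task, "query": query, "facts": ["fact-1", "fact-2"]}

def generate_answer(payload: dict) -> str:
    """Stand-in for the generative model (e.g. a fine-tuned Mistral)."""
    facts = ", ".join(payload["facts"])
    return f"Answer ({payload['task']}): based on {facts}"

def answer(query: str) -> str:
    # Classify, run the matching specialist, then generate prose.
    task = classify_task(query)
    payload = run_task_model(task, query)
    return generate_answer(payload)
```

Calling `answer("What is the revenue trend?")` walks the query through all three stages; swapping any stub for a real model keeps the pipeline’s contract intact.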
Moving forward, companies like Bridgewise will become one layer of multi-agent solutions. Imagine a virtual person you want to speak to: behind it sits a Bridgewise LLM for investments, but also a health-and-medicine LLM and a math LLM, each acting as an agent. In the middle is a routing layer that first determines which agent should handle a question and then forwards the question to that agent. Within each agent, further layers of sub-agents work to understand, parse, and answer the question.
So in our business case, we believe there will be clients, a bank for example, that will want to create an LLM that can answer any question from their customers. If a question is about investments, it will come to us; for customer-service questions, or questions about other products such as loans or mortgages, the system may turn to other agents. That’s where we see the industry heading.
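The routing layer described here can be sketched as a dispatcher that inspects an incoming question, picks the best-matching domain agent, and forwards the question to it. This is a minimal, hypothetical sketch: the agent names and the keyword heuristic are illustrative stand-ins for what would really be classifier-driven routing over production LLMs.

```python
from typing import Callable, Dict

# Each "agent" is a stub; in practice it would wrap a domain LLM.
def investment_agent(question: str) -> str:
    return f"[investment agent] answering: {question}"

def health_agent(question: str) -> str:
    return f"[health agent] answering: {question}"

def general_agent(question: str) -> str:
    return f"[general agent] answering: {question}"

# Crude keyword matching stands in for a real intent classifier.
ROUTES: Dict[str, Callable[[str], str]] = {
    "stock": investment_agent,
    "portfolio": investment_agent,
    "symptom": health_agent,
    "medicine": health_agent,
}

def route(question: str) -> str:
    lowered = question.lower()
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent(question)
    return general_agent(question)  # fall back to the core model
```

A bank deploying this pattern would register its own agents (loans, mortgages, customer service) in the routing table, with an investment query dispatched out to a specialist such as Bridgewise.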
LLMs Go “Better, Faster, Stronger” — and More Efficient
If you’re familiar with Daft Punk’s song “Harder, Better, Faster, Stronger,” it provides a fitting analogy for the advancements anticipated in the AI industry in 2025. However, the focus is shifting from “Harder” to “Efficient,” reflecting the industry’s push for smarter, more resource-conscious models.
One of the most significant challenges with large language models (LLMs) lies in the cost and complexity of training. Developing these models requires enormous computational resources and comes with hefty financial implications. This has driven the industry to explore innovative ways to make LLMs more efficient without sacrificing performance. For example, Mistral recently introduced a model that delivers performance comparable to larger models while using significantly fewer parameters. By optimizing the training process to run these parameters in parallel, they have achieved substantial computational and cost efficiencies. These advances could have transformative downstream impacts across the AI landscape.
These improvements in efficiency will likely enable a future where LLMs can run on personal devices, such as laptops or desktops. While this is not an immediate reality, it represents a logical direction for the industry. To make this feasible, the industry will need to pioneer new training methods that prioritize speed and efficiency while reducing hardware demands. This vision underscores a broader shift from scaling models endlessly to refining their efficiency and accessibility.
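Some back-of-the-envelope arithmetic shows why smaller and quantized models matter for personal devices. The parameter counts and precisions below are illustrative assumptions, not any vendor’s actual specs: raw weight memory is roughly the parameter count times the bytes stored per parameter.

```python
def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return n_params * bytes_per_param / 1e9

# A hypothetical 7B-parameter model at 16-bit precision (2 bytes per
# parameter) needs ~14 GB for weights alone -- beyond most laptops --
# while 4-bit quantization (~0.5 byte per parameter) drops it to ~3.5 GB.
fp16_gb = weight_memory_gb(7e9, 2)      # ~14.0 GB
int4_gb = weight_memory_gb(7e9, 0.5)    # ~3.5 GB
```

This is weights only; activations, the KV cache, and runtime overhead add more, which is why training-time and architectural efficiency gains compound with quantization.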
Another expected outcome is the rise of specialized client-side applications powered by LLMs. These next-generation apps will use existing models as a foundation but will be fine-tuned to excel in specific domains or tasks. Instead of focusing on building larger and more generalized models, the emphasis in 2025 will likely shift toward creating smaller, domain-specific models that require fewer resources yet deliver highly accurate, specialized outputs.
This shift reflects a maturing AI industry that recognizes the limits of brute-force scaling. Efficiency, adaptability, and specialization will define the next wave of innovation, balancing performance with practicality and cost-effectiveness.
More Players Will Follow Bridgewise into the AI Investment Intelligence Space
At Bridgewise, we believe we are paving the way for AI to transform investment intelligence and revolutionize investor decision-making. Over the past five years, we have dedicated ourselves to laying a strong foundation—technologically, in regulatory compliance, and through business innovation—to bring generative AI to the investment community. This vision has driven our focus and positioned us at the forefront of a significant transformation coming to the capital markets.
However, we recognize that in 2025 we won’t be alone in this space. We anticipate many companies will follow our lead, further expanding the use of AI in investment intelligence. For instance, platforms like Investing.com have already begun experimenting with generative AI tools, such as creating analyst reports and other features aimed at reshaping how investment insights are delivered.
That said, new entrants to this space will face substantial regulatory challenges, a critical factor in the adoption of AI within capital markets. Regulation is essential, particularly when it comes to applying AI to investments. We believe that responsible AI implementation must be a shared priority across the industry. Unlike other domains, where AI errors may result in minor inconveniences, inaccuracies or hallucinations in the investment space can directly impact people’s savings and livelihoods. This is why we have committed significant resources to ensure that our AI operates reliably, transparently, and responsibly, guided by stringent compliance standards.
The importance of responsible AI in finance cannot be overstated. Capital markets require tools that not only provide innovative insights but do so with unwavering reliability. We have worked tirelessly to establish these principles in our solutions, and we believe they are essential for any organization entering this space.
These are just a few of the trends shaping the industry in 2025. We are confident that the pace of AI transformation in capital markets will only accelerate throughout this year and into the latter half of this decade. It is an exciting time, and we look forward to seeing how the ecosystem evolves in the coming years.