AI Information and Investor Trust

Investor trust is like a house of cards – hard to build, easy to destroy. Now, with artificial intelligence playing an increasingly central role in finance, trust is becoming an even more pressing issue. For financial institutions, ensuring that clients get the best advice and develop trust in the system is essential if the advantages of AI are to be realized. This begins with knowing how to address an investor’s concerns.
AI Hallucinations – An Obstacle to Investor Trust
Glue on pizza. Non-existent book lists. Bad health advice. These are just some of the outlandish AI hallucinations that have produced obviously bad – and not-so-obviously bad – recommendations. Some AI hallucinations are merely funny; others have led to legal action.
Despite these liabilities, AI seems to be the destiny of the investment world. Deloitte predicts that AI-enabled applications will be the most popular form of retail investment advice for 78% of the market before 2029.
Nowhere are the stakes higher for AI hallucinations than in the investment industry. AI in finance is not about some anonymous technology spitting out answers to random questions. Investment intelligence platforms can directly influence investment decisions, and therefore must meet the highest standards of reliability. This comes on top of the already strict regulatory compliance issues that all professionals in the industry need to follow.
Black Box Meets Big Brains
The tricky thing to keep in mind about AI is its “black box” quality: AI generates information through general-purpose large language models (LLMs) in ways that cannot be easily inspected or understood.
Many AI solutions sample thousands of sources across the internet to produce results, which means irrelevant or low-quality sources can end up feeding an analysis. It can be impossible to determine which sources contributed to a given result, or how reliable they are. For investors who want to verify the information behind a recommendation, or simply understand how a result was reached, this lack of transparency is a serious obstacle.
MIT recognizes that the general-purpose LLMs on which AI currently depends are not optimal. But as industry-specific applications of artificial intelligence begin to emerge, we are seeing the creation of LLMs dedicated to investment information. Customized LLMs mean that AI analysis draws on more relevant and accurate sources, which increases the reliability of results.
The Role of Responsible AI in Investor Trust
One major effort to improve AI-generated results is responsible AI (RAI). RAI is based on a set of policies that aim to reduce or eliminate biased results, including those connected to social issues. Its principles cover fairness, transparency, accountability, and efficacy. An example of RAI in action might be an investment firm’s decision to work only with companies that have a high ESG score.
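As a minimal sketch of the ESG-screening example above – the company names, scores, and threshold are all invented for illustration:

```python
# Hypothetical ESG screen: keep only companies whose ESG score
# meets or exceeds a chosen cutoff. All data here is made up.
companies = [
    {"name": "Acme Corp", "esg_score": 82},
    {"name": "Globex", "esg_score": 45},
    {"name": "Initech", "esg_score": 71},
]

ESG_THRESHOLD = 70  # assumed cutoff for a "high" ESG score


def esg_screen(universe, threshold=ESG_THRESHOLD):
    """Return the names of companies that pass the ESG screen."""
    return [c["name"] for c in universe if c["esg_score"] >= threshold]


print(esg_screen(companies))  # ['Acme Corp', 'Initech']
```

In practice the scores would come from an ESG data provider and the threshold from the firm’s own RAI policy; the point is simply that the policy becomes an explicit, auditable rule.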
At the heart of RAI is explainable AI (XAI), which directly addresses the black box problem. The goal of XAI is to make it possible to understand, in plain terms, how a particular AI system works and which sources it drew on to produce its results. Several major companies are involved in XAI development, including IBM, Google, Microsoft, and Intel.
The World Is Stepping Up
Fortunately, global organizations have taken up these challenges, offering guidance and standardization frameworks for general use. For instance, the World Economic Forum (WEF) recognizes that investors accept different levels of risk and should therefore choose different levels of reliance on AI: none at all, a combination of human and artificial intelligence, or fully agentic AI advisors.
The Institutional Approach
Brokers, advisors, and exchanges can take additional measures to reassure investors who are nervous about AI-driven investing.
For example, let’s take the WEF’s concept and combine it with a familiar idea: diversification. Instead of choosing only one of the three reliance levels, investors can use all three at once and, if they like the results, gradually increase their use of AI-based advice over time.
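One way to picture this “diversified reliance” idea is as a weighted blend of the three advice channels, with the weights shifting toward AI as confidence grows. The weights and return estimates below are entirely hypothetical:

```python
# Hypothetical blend of the WEF's three reliance levels. The weights
# and expected-return figures (in percent) are invented; in practice
# they would reflect the investor's risk tolerance and each channel's
# track record.

def blended_recommendation(human_view, hybrid_view, agentic_view, weights):
    """Weighted average of three expected-return estimates (percent)."""
    w_human, w_hybrid, w_agentic = weights
    assert abs(w_human + w_hybrid + w_agentic - 1.0) < 1e-9  # weights sum to 1
    return w_human * human_view + w_hybrid * hybrid_view + w_agentic * agentic_view


# Start conservative: mostly human advice...
year1 = blended_recommendation(5.0, 6.0, 8.0, (0.6, 0.3, 0.1))
# ...and shift weight toward AI channels if the results justify it.
year3 = blended_recommendation(5.0, 6.0, 8.0, (0.3, 0.4, 0.3))

print(round(year1, 2), round(year3, 2))  # 5.6 6.3
```

This is only a sketch of the allocation logic, not a recommendation engine; the value is that the investor’s degree of reliance on AI becomes an explicit, adjustable number rather than an all-or-nothing choice.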
Advisors must also communicate the potential of investment intelligence in terms investors already know well: risk and return. The goal of investment intelligence is to gain new, unique insights into the market, but as with any source of information, nothing is guaranteed. If investors know ahead of time that the information they are relying on is AI-generated, they can make informed decisions about the risk that comes with it and set their expectations accordingly. After all, AI faces the same fundamental limitations as any other source of investment information – and investments always carry risk, whatever the source.
Your First Step Towards Investment Intelligence
The BridgeWise platform is a natural point of entry for those who want to access the vast opportunities of AI and investment intelligence. Sign up for a quick demo to see how we can help deliver AI insights investors can trust and rely on.