Can AI Replace Investment Analysts?
Financial institutions (and, reportedly, OpenAI) have been experimenting with applying GenAI models to the kinds of tasks carried out by investment analysts. How are they doing?
Among the professions potentially vulnerable to replacement by AI models, those in financial services figure prominently. This is especially so in the advisory field, where high-quality advice is becoming more available to the less wealthy, as we explored in the recent post Can Tech Democratize Wealth Management?
The role of investment analyst, on the other hand, is more demanding, requiring a more complex set of skills. Nevertheless, financial institutions (and, it is rumored, OpenAI) have been experimenting with using GenAI models to carry out some or all of the tasks involved in evaluating investment opportunities and creating useful reports. Some of these experiments are now producing fairly clear answers to the question ‘Can AI replace investment analysts?’.
What do investment analysts do?
The first step in getting to the bottom of the question is understanding what an investment analyst does. An experienced investment analyst:
- locates relevant data from a variety of sources
- analyzes it carefully
- builds models to simulate the performance of the target company under hypothetical future conditions
- uses those forecasts to make investment recommendations
- presents that information through slide decks, charts and text reporting.
Putting the models to the test
Starting from the workflow of a typical investment analyst, researchers at European bank Bernstein Société Générale recently created a series of tests to assess the performance of a dozen premium GenAI models, as reported by the Financial Times.
As well as revealing marked differences between the models (with Google’s Gemini turning in the best performance), the study found no problems with the initial collection of data and its presentation, concluding that “the AI models could do a good job generating graphs of financial data.” Locating scattered information is their strength: “What large language models can do well is find useful stuff in copious amounts of text.”
Breakdown on modelling and outlook
It was on the subsequent tasks of model-building, forecasting and advising that the models all fell well short: “Most of the AI tools couldn’t create a model at all,” the FT reported. “With a lot of coaxing, Gemini offered some Python code to make a financial model, but it still didn’t work due to errors.”
The FT concluded: “In the end, no matter how much data and prompting Garre provided, none of the ten-plus models could properly analyse the outlook for companies. The company initiation reports lacked sufficient depth.”
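To make concrete what the study was asking for, the “financial model” in question is typically a discounted cash flow or similar forecast of company value. Below is a minimal sketch in Python of the kind of model involved; the cash flows, discount rate and growth rate are hypothetical placeholders, not figures from the study.

```python
# Minimal discounted cash flow (DCF) sketch -- illustrative only.
# All inputs are hypothetical placeholders, not data from the study.

def dcf_value(free_cash_flows, discount_rate, terminal_growth):
    """Present value of forecast cash flows plus a terminal value."""
    pv = sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(free_cash_flows, start=1)
    )
    # Gordon-growth terminal value, discounted back from the final year
    terminal = (
        free_cash_flows[-1] * (1 + terminal_growth)
        / (discount_rate - terminal_growth)
    )
    return pv + terminal / (1 + discount_rate) ** len(free_cash_flows)

# Hypothetical five-year forecast ($m), 9% discount rate, 2% terminal growth
print(round(dcf_value([120, 130, 142, 150, 158], 0.09, 0.02), 1))
```

Real analyst models layer revenue drivers, margins and scenario analysis on top of this skeleton, which is precisely where the tested models broke down.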
The qualitative/quantitative divide
The marked difference in performance between qualitative and quantitative tasks was confirmed by a commenter on the article with experience of using the models: “For understanding business models … LLMs will produce written research on par with most sell-side (often better written). However, people don’t seem to understand how bad the quantitative side of these products actually is.”
‘Hallucinations’ are a major problem, with the AI “either finding data that doesn’t exist, copying data from the wrong place or wrong company, referencing data that is 10 years old as current, or just making forward-looking numbers up.”
AI as junior assistant
In another recent article in the Financial Times, Robert Buckland, former chief global equity strategist at Citigroup, noted that the AI models showed competence in the sort of tasks performed by juniors in the research operation, concluding: “Right now, AI is better suited to being a research assistant than a research analyst … to tasks which I performed in my beginner years. Information collection, model updates and presentation formatting can all be automated.”
OpenAI takes aim at entry-level roles
This assessment is echoed by OpenAI’s rumored recent initiative to train AI models to carry out the tasks of entry-level staff in investment banks. According to Fortune magazine, the scheme aims to automate “structured, repeatable tasks ... like cleaning and formatting spreadsheets, building financial models, and assembling pitch decks.”
“I’m not convinced that we get rid of entry-level workers anytime soon, but I could imagine a world where the skill set we need those entry-level workers to have is different,” Shawn DuBravac, CEO of research firm Avrio Institute, told Fortune.
AI as oracle?
An area where Buckland did find useful application was in the distribution of research results through a chatbot that could be interrogated by customers: “AI can allow customers to ask complex questions of a research department’s output, be it numbers or words — it’s almost like speaking to the actual analyst.”
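Under the hood, such an “ask the research” chatbot is usually a retrieval setup: embed the research output, retrieve the passage most relevant to a customer’s question, and have the model answer from it. A minimal sketch follows, assuming OpenAI’s embeddings and chat APIs and an in-memory store; the model names, the sample notes and the single-passage retrieval are illustrative choices on our part, not a description of any bank’s system.

```python
# Minimal retrieval sketch for an "ask the research" chatbot.
# Models, sample notes and in-memory store are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts):
    """Embed a list of strings with OpenAI's embeddings endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Research notes published by a (hypothetical) research department
notes = [
    "We rate ACME a Buy; FY25 EPS forecast raised to $4.10 on margin recovery.",
    "Sector note: rate cuts should lift homebuilder volumes in H2.",
]
note_vecs = embed(notes)

def answer(question: str) -> str:
    """Retrieve the most relevant note, then answer strictly from it."""
    q_vec = embed([question])[0]
    sims = note_vecs @ q_vec / (
        np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = notes[int(np.argmax(sims))]
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Answer using only this research note:\n{context}\n\nQ: {question}",
        }],
    )
    return chat.choices[0].message.content

print(answer("What is your EPS forecast for ACME?"))
```

Grounding answers in the department’s own published research, rather than the model’s general knowledge, is what keeps this use case on the right side of the hallucination problem described above.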
Intrinsic limitations
The stark contrast between these models’ ability to seek and synthesize information and their quantitative/modelling performance is perhaps unsurprising. As we explored in a recent post, Will GenAI reasoning ever be reliable?, the training of generative AI models creates constructs with formidable powers of pattern-matching and information extraction. When a task requires rigorous deduction and calculation, however, the statistical nature of their knowledge seriously undermines their performance.
Excelling in SWOT
While GenAI technologies are not able to perform the full range of an investment analyst’s tasks, certain subsets of the workflow are within the capabilities of a well-prompted model.
An example of a task in which GenAI excels is classical SWOT analysis of a target company (Strengths, Weaknesses, Opportunities and Threats).
Writing for the CFA Institute, Michael Schopf suggests that AI may already be outperforming analysts on specific SWOT operations. In fact, when six large language models (LLMs) went head-to-head with human analysts, they “...uncovered risks and strategic gaps the human experts missed.”
There is a catch. Schopf notes that “Advanced prompting improved AI performance by up to 40%.” It therefore takes a human with prompt-engineering skills and some financial expertise to enable AI to reach its full potential. That said, Schopf does not suggest AI is ready to take over from analysts, only that it may have the “potential not just to support analyst workflows, but to challenge consensus thinking and possibly change the way investment research gets done.”
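To give a flavour of what “advanced prompting” can mean in practice, the sketch below sends a structured SWOT request through the OpenAI Python client. The role framing, the output constraints and the model choice are our own illustrative assumptions, not a reproduction of the prompts used in the study Schopf describes.

```python
# Hedged sketch of a structured SWOT prompt via the OpenAI Python client.
# The role framing and output format are illustrative, not the study's prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are a sell-side equity analyst.
Using only the filing excerpt below, produce a SWOT analysis of the company.
For each of Strengths, Weaknesses, Opportunities and Threats:
- give three bullet points, each tied to specific evidence in the excerpt
- flag any point that rests on an assumption rather than a stated fact

Filing excerpt:
{filing_text}
"""

def swot(filing_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{"role": "user", "content": PROMPT.format(filing_text=filing_text)}],
        temperature=0.2,  # keep the analysis conservative and repeatable
    )
    return response.choices[0].message.content

print(swot("ACME Corp reported FY24 revenue of ... (excerpt)"))
```

The structure is the point: constraining the model to evidence in the supplied text, and forcing it to label assumptions, illustrates the kind of prompting discipline behind the performance gains Schopf reports.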
GenAI in investment analysis – the guidelines
AI may not be ready to run the show when it comes to identifying investment opportunities, but it is certainly proving helpful. As Schopf put it, “Investment professionals who master [prompt engineering] will extract exponentially more value from AI tools. Those who don’t will watch competitors produce superior analysis in a fraction of the time.”
Setting up guidelines and best practices for the use of AI in investment analysis is crucial to ensuring a firm maximizes the benefits of this technology without crossing ethical boundaries or compromising client trust.
The CFA Institute offers a practical set of guidelines for using AI on the front lines of investment:
- “Augmentation, Not Automation” — This sentiment is familiar to anyone who has been tracking AI’s use in almost any industry. Still, it’s essential to use it as a tool to boost productivity, not as a replacement for human insight. CFA has suggestions for its practical applications: “For less-experienced investment professionals, investment firms may deploy AI tools to enhance their productivity, such as automating data collection or generating initial research drafts. More experienced professionals, however, could focus more on leveraging AI for hypothesis testing and scenario analysis.”
- “Enhancing Strategic Decision-Making” — For a number of reasons, not the least of which is AI’s inability to explain its decision-making process, CFA suggests, “...AI should be used to support decision design, not to make the final decision. Its role is best suited to generating ideas or automating components of the process, rather than serving as the final arbiter.”
- “Preserving Human Judgment” — Outsourcing too many tasks, especially high-level tasks, runs the risk of eroding critical thinking skills. To avoid an over-reliance on these tools, CFA says, “Create deliberate workflows where AI outputs are stress-tested through human-led discussions. Encourage analysts to perform periodic ‘AI-free’ exercises, such as manual valuation or market forecasting, to maintain cognitive sharpness.”
- “Ethical and Regulatory Challenges” — Ethical and legal concerns must always be top of mind when using AI, especially in a highly regulated industry. “With AI having a role in decision making, human guidance and oversight has become even more important. The assumption that machines can make better investment decisions by being more rational is unfounded. Current AI models still exhibit biases.”
- “Investor Skill Sets Must Evolve” — As investors and analysts work side by side with AI, they must develop new skills or improve on old ones. Going forward, prioritizing “critical thinking, creativity, and AI literacy over rote learning” may be key to thriving in a changing industry.
The bottom line is that financial analysts who are willing to adapt to the times and learn to master new tools, without letting AI take the reins, will likely thrive in the industry for years to come.
Generative AI models, while not ready to take over the process entirely, will deliver a significant competitive edge to those able to exploit their considerable strengths while carefully avoiding their weaknesses.
