If you explore "AI investing" on the internet, you'll encounter a plethora of offers urging you to entrust your finances to artificial intelligence.
Recently, I dedicated thirty minutes to understanding the capabilities of so-called AI "trading bots" in handling investments.
While many boast of delivering substantial returns, reputable financial institutions offer a consistent warning: your capital is at risk whether a human or a computer is making the stock market decisions.
Despite this, the fervor surrounding AI in recent years has produced a surprising statistic: nearly one in three investors, according to a 2023 US survey, say they would be willing to let a trading bot make all their investment decisions.
John Allan, head of innovation and operations at the UK's Investment Association, emphasizes caution in embracing AI for investments. He highlights the gravity of investment decisions, affecting individuals and their long-term life objectives. Allan suggests waiting for AI to prove its effectiveness over the long term before fully relying on it, emphasizing the enduring role of human investment professionals.
While AI-powered trading may seem poised to replace some highly skilled human investment managers, AI is still new to this field, and it comes with inherent issues and uncertainties.
Firstly, AI is not a crystal ball and cannot predict the future any more than humans can. Unforeseen events, such as 9/11, the 2007-2008 credit crisis, and the COVID-19 pandemic, have historically impacted stock markets.
Secondly, the efficacy of AI systems depends on the quality of initial data and software crafted by human programmers. While early AI systems in the 1980s were based on "weak AI," contemporary "generative AI" is more potent but susceptible to flawed data inputs and biased decision-making.
Elise Gourier, an expert in AI, cites examples of AI going wrong, such as Amazon's biased recruitment tool in 2018. Generative AI can confidently produce incorrect information, a phenomenon termed "hallucination", or exhibit biases inherited from its training data.
Prof Sandra Wachter warns of the potential for data leakage and "model inversion attacks," wherein hackers try to reveal underlying coding and data through specific questions posed to AI.
Despite these risks, a considerable number of investors are drawn to AI decision-making. Business psychologist Stuart Duff suggests that some individuals trust computers more than humans, believing in the objectivity and logical decision-making of AI. However, he cautions that AI tools may reflect the thinking errors and judgments of their developers and lack the intuitive experience needed during unprecedented events.
In summary, the allure of AI in investment decisions comes with a complex landscape of risks and uncertainties, prompting experts to advocate caution and a continued role for human professionals.