- Machine learning (ML) is playing an increasingly important role in the investment industry, but the lack of transparency and explainability of ML algorithms poses legal and regulatory risks.
- Two types of ML solutions are available: interpretable AI, which uses less complex models that can be read and interpreted directly, and explainable AI (XAI), which attempts to explain complex ML algorithms after the fact.
- Interpretable AI is currently the more practical choice, as XAI is still in its early days and may not provide clear explanations.
- ML has the potential to revolutionize investment management by reducing costs, leveraging data, and achieving more targeted results.
- The finance industry should learn from other sectors that have explored the trade-off between interpretability and complexity in AI applications.
- Interpretable AI can provide simpler and more transparent solutions for stock selection and other investment decisions, without sacrificing accuracy.
- In the future, XAI may become more established and powerful, but for now, interpretable AI is the best choice to avoid legal and regulatory risks.
Explaining Machine Learning: A Must or a Bust?
Complex machine learning (ML) algorithms are becoming increasingly prevalent in the investment industry, but their lack of transparency and explainability poses significant risks. ML models can measure risk, execute trades, and drive stock selection, but they often remain black boxes.
According to the International Organization of Securities Commissions, unexplainable ML algorithms expose firms to unacceptable levels of legal and regulatory risk. In other words, investment decisions that cannot be explained can create serious trouble for individuals, firms, and stakeholders.
While there are efforts to develop explainable AI (XAI) solutions that attempt to explain complex ML algorithms, they are still in the early stages of development. Currently, interpretable AI is the more practical approach. Interpretable AI uses less complex ML algorithms that can be directly read and interpreted. This allows for greater transparency and understanding of the decision-making process.
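To make the distinction concrete, here is a minimal sketch of what "directly readable" means in practice. The factors, tickers, and weights below are all hypothetical, invented purely for illustration; the point is that a stakeholder can inspect the weights and see exactly why each stock scores as it does, with no explanation layer required.

```python
# Hypothetical factor exposures for three made-up stocks:
# (value, momentum, quality) -- illustrative numbers only.
stocks = {
    "AAA": (0.8, 0.1, 0.5),
    "BBB": (0.2, 0.9, 0.3),
    "CCC": (0.5, 0.4, 0.9),
}

# Interpretable model: a transparent weighted sum. The weights ARE the
# explanation -- anyone can read them and audit the decision.
weights = {"value": 0.5, "momentum": 0.2, "quality": 0.3}

def score(exposures):
    value, momentum, quality = exposures
    return (weights["value"] * value
            + weights["momentum"] * momentum
            + weights["quality"] * quality)

# Rank stocks by score, highest first.
for ticker, exposures in sorted(stocks.items(), key=lambda kv: -score(kv[1])):
    print(f"{ticker}: score={score(exposures):.2f}")
```

A deep neural network trained on the same inputs might rank the stocks similarly, but its millions of parameters could not be read off and justified to a regulator the way these three weights can.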
ML has the potential to revolutionize investment management by reducing costs, leveraging data, and achieving more targeted results. However, the adoption of ML in the investment industry has been slow compared to other sectors. The recent rise of environmental, social, and governance (ESG) investing and the need to assess vast data pools related to ESG have driven the transition to ML in investment management.
It is crucial for the investment industry to learn from other sectors that have explored the trade-off between interpretability and complexity in AI applications. While complexity may be warranted in certain applications, such as predicting protein folding, it may not be essential in stock selection and other investment decisions.
Research has shown that interpretable AI can provide simpler and more transparent solutions for stock selection. These interpretable models use simple ML approaches to learn interpretable investment rules. They are scalable, auditable, and can be communicated to stakeholders without advanced computer science knowledge. In fact, interpretable AI approaches have demonstrated comparable performance to more complex black-box models.
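As a hedged sketch of what "learning interpretable investment rules" can look like, the toy example below fits a single decision stump: it searches for the factor threshold that best separates (entirely made-up) past winners from losers and emits a rule a human can read aloud. Real research models are richer, but the auditability property is the same.

```python
# Hypothetical training data: (factor_value, outperformed) pairs,
# e.g. a value factor versus next-period outcome. Illustrative only.
history = [
    (0.20, False), (0.35, False), (0.40, False),
    (0.55, True),  (0.70, True),  (0.85, True),
]

def learn_stump(samples):
    """Return the threshold t whose rule 'buy if factor > t' makes the
    fewest classification errors on the sample, plus that error count."""
    best_t, best_errors = None, len(samples) + 1
    for t, _ in samples:  # try each observed value as a candidate cut
        errors = sum((f > t) != won for f, won in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t, best_errors

threshold, errors = learn_stump(history)
# The learned model is a one-line, stakeholder-readable rule.
print(f"Rule: buy if factor > {threshold:.2f} ({errors} errors in sample)")
```

The output of the learning process is itself the explanation, which is what makes such rules scalable to audit and easy to communicate without computer science expertise.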
While XAI may become more established and powerful in the future, interpretable AI is currently the best choice to avoid legal and regulatory risks. As the saying goes, “If you can’t explain it simply, you don’t understand it.” It is important to prioritize interpretability and avoid excessive complexity in machine learning applications in the investment industry.