Financial fraud is a top concern for banks in 2021. The pandemic brought a surge in financial crime by fraudsters and hackers, resulting in heavy costs and damaged reputations for many banks. One potential solution is machine learning, which the finance industry has been steadily adopting in recent years, using the technology to identify fraudulent transactions. So, is machine learning the answer to the pandemic-related surge in financial fraud?
According to Forrester, fraud rates across financial products increased by 33% in April 2020, and the trend is expected to continue into 2021. In addition, a high number of fraudulent COVID-19 loans is expected to come to light. The problem stems from both the increased number of scams and attacks and the cybersecurity vulnerabilities caused by rapid digital transformation.
Beyond the money stolen by fraudsters, an incident imposes further costs on financial services organisations: the time and effort of the investigation, plus potential legal fees and fines.
Machine learning (ML) is a subset of artificial intelligence (AI). The terms are often used interchangeably, but ML is most commonly used to refer to statistical algorithms with the potential to learn from datasets without specific, ongoing programming.
ML provides huge value in financial services because it can flag potential fraud more quickly and produces fewer false positives than existing methods. For example, KPMG created an ML tool for HSBC that eliminated 80% of the false positives generated by the bank’s previous transaction alert system.
The potential of ML to combat financial fraud means that experts in this area are in high demand. According to Bloomberg, financial services job listings requiring skills in data science, AI and ML increased by 60% in 2018. Demand has steadily increased ever since. A more recent report from the Bank of England found that half of surveyed banks expected the importance of machine learning and data science to increase because of the pandemic.
Clearly, demand for AI and ML is increasing. It’s one of the most promising tools to fight financial fraud in years. However, other issues will emerge as usage grows. What are they and how can they be mitigated?
A common concern about the use of AI and ML techniques is bias. There have been several high-profile cases of gender discrimination in AI-enabled finance, such as the husband who received a credit limit twenty times higher than that of his wife, despite her having the better credit score. Building models to spot fraudulent transactions could suffer from similar bias in what constitutes risk and unusual activity. Such issues often result from the subconscious bias of their creators, so seeking employees who prioritise ethics in ML and those from a diverse range of identities will help organisations to implement ML ethically.
Lack of transparency about how algorithms operate can make it difficult to establish whether they meet regulatory standards. To increase public trust and industry compliance, organisations need to explain how data is being used and why. To do this, it will be vital to employ skilled communicators who can break down the technical aspects of machine learning for a wider audience.
According to a Bank of England survey, 35% of banks reported the pandemic has negatively impacted ML model performance. It’s harder for ML models to detect unusual activity because the pandemic has changed our behaviour so drastically.
To combat this risk, the role of the data scientist must change, with more time spent continuously monitoring and validating models so that unexpected data patterns are flagged early. Sharing data between banks could help machine learning models become more sophisticated. There is also a role for external organisations (e.g., the UK’s industry-funded Dedicated Card and Payment Crime Unit with the Police) and professionals with regulatory and/or legal experience to help create a central pool of information from which better risk assessments can be made.
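The continuous monitoring described above is often implemented as drift detection: comparing the distribution of incoming data against the distribution the model was trained on. The sketch below computes the Population Stability Index (PSI), a widely used drift metric; the bin count, simulated score distributions, and the 0.2 alert threshold are illustrative conventions, not figures from the Bank of England survey.

```python
# A hedged sketch of drift monitoring via the Population Stability Index (PSI).
# Higher PSI means the live data has shifted further from the training baseline.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two score distributions; returns 0 when they match exactly."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; clip to avoid log(0) for empty bins.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.30, 0.10, 5000)  # model scores before the pandemic
shifted = rng.normal(0.45, 0.15, 5000)   # scores after behaviour changed

drift = psi(baseline, shifted)
if drift > 0.2:  # a commonly cited alert threshold
    print(f"Drift alert: PSI = {drift:.2f}, revalidate the model")
```

Wiring a check like this into a scheduled job gives data scientists the early warning the article calls for: when PSI crosses the threshold, the model is revalidated or retrained rather than left to degrade silently.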