The Double-Edged Sword: AI-Generated Code in the Banking Industry
Pedro Martinez, CIO & CISO at Zenus Bank
The ongoing rise of Artificial Intelligence (AI) has profoundly impacted the banking and fintech sectors, offering the potential for increased efficiency, improved customer service, and enhanced risk management. However, as we ride this wave of technological transformation, important questions arise concerning security vulnerabilities, code ownership, and application copyrights associated with AI-generated code. This article aims to dissect these issues, place them in historical context, and offer a path forward for banking executives navigating this complex landscape.
Historically, AI adoption by major US banks was slow, but it began to pick up pace around the mid-2010s. Advanced AI technologies were initially employed to streamline customer service through chatbots before gradually evolving to manage more complex tasks, such as fraud detection, credit scoring, and risk assessment. Today, AI platforms and providers such as OpenAI’s GPT models, IBM’s Watson, Microsoft’s Azure AI services, and Google’s DeepMind are increasingly integral to the functioning of many financial institutions.
Yet, as with any transformative technology, AI brings with it a new set of challenges. One such concern is that AI-generated code can introduce security vulnerabilities. Traditional software development involves human oversight at every stage, ensuring each line of code adheres to established security protocols. With AI-generated code, however, this oversight becomes increasingly difficult. If such vulnerabilities go undetected, they could expose sensitive customer data, potentially leading to large-scale security breaches. This raises the need for advanced code vetting systems and increased regulatory scrutiny.
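To make the risk concrete, consider a hypothetical sketch of the kind of flaw a code-generation model can quietly introduce: SQL built by string interpolation, which is vulnerable to injection, alongside the parameterized version a human reviewer should insist on. The table and function names here are illustrative, not drawn from any real banking system.

```python
import sqlite3

# Illustrative only: a plausible AI-suggested lookup that interpolates
# user input directly into SQL -- a classic injection vulnerability.
def get_balance_unsafe(conn: sqlite3.Connection, account_id: str):
    query = f"SELECT balance FROM accounts WHERE id = '{account_id}'"
    # Input like "' OR '1'='1" makes the WHERE clause match every row.
    return conn.execute(query).fetchall()

# The reviewed version: a parameterized query lets the database driver
# handle escaping, closing the injection path entirely.
def get_balance_safe(conn: sqlite3.Connection, account_id: str):
    query = "SELECT balance FROM accounts WHERE id = ?"
    return conn.execute(query, (account_id,)).fetchall()
```

Automated scanners catch many such patterns, but only if vetting AI output is a mandatory step in the delivery pipeline rather than an afterthought.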
In addition to security vulnerabilities, the question of code ownership has become pertinent. If an AI, like OpenAI’s GPT, helps write an application, does it still belong to the original programmer? The US Patent and Trademark Office and the US Copyright Office currently offer no definitive guidance on this question. In traditional scenarios, the human developer who wrote the code owns it. In contrast, AI-generated code blurs these lines, making it a pressing issue that needs addressing by lawmakers and regulatory bodies.
Moreover, the application of copyright laws to AI-generated code is equally nebulous. Under current US law, copyright protection extends only to original works of authorship. Since AI lacks legal personhood, it cannot own a copyright. However, the increasing prevalence of AI in content generation muddies the waters. Are works created with significant AI contribution considered original works of human authorship or a product of AI? This question has yet to be fully resolved.
Unpacking the security threats
As AI permeates the banking industry, it also opens new avenues for malicious actors. The potential security threats range from subtle identity theft to major data breaches.
One major concern is the use of AI by cybercriminals for identity theft, particularly during the onboarding process. By leveraging deepfake technology – a sophisticated application of AI that can manipulate or fabricate visual and audio content – fraudsters can now convincingly fake identities. For instance, in 2021, a man in New Jersey was charged with allegedly using AI to create synthetic identities and defraud banks out of hundreds of thousands of dollars.
AI-generated code vulnerabilities also pose a significant risk. AI systems, particularly machine learning models, are vulnerable to adversarial attacks that manipulate input data to deceive the model, leading to erroneous outputs. An adversary could, for example, exploit this vulnerability to trick a fraud detection system into approving fraudulent transactions.
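The sketch below shows, under heavily simplified assumptions, how such an evasion works: a synthetic logistic-regression "fraud model" stands in for a production system, and an attacker nudges a flagged transaction's features against the model's weight vector until the fraud score falls below the decision threshold. Everything here, from the data to the features to the model, is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy fraud model on two synthetic features (e.g., scaled amount and
# transaction velocity). Class 1 = fraud.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
model = LogisticRegression().fit(X, y)

# A transaction the model correctly flags as fraud.
tx = np.array([[1.5, 1.5]])
print("before:", model.predict_proba(tx)[0, 1])  # high fraud score

# Evasion: step against the model's weight vector until the fraud
# score drops below the 0.5 decision threshold.
step = 0.1 * model.coef_[0] / np.linalg.norm(model.coef_[0])
while model.predict_proba(tx)[0, 1] > 0.5:
    tx = tx - step
print("after:", model.predict_proba(tx)[0, 1], tx)
```

Real attackers face constraints, since the perturbed transaction must still accomplish the fraud, but the underlying weakness is the same: small, targeted input changes that flip a model's decision.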
There is also the threat of data poisoning, where an attacker inserts malicious data into the training set to bias the AI system’s learning process. In a banking scenario, this could compromise a risk assessment model, leading to financial losses.
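Data poisoning can be sketched just as simply. In this hypothetical example, an attacker slips mislabeled records into the training set – clearly high-risk profiles labeled as safe – and the retrained model's risk estimate for a genuinely risky applicant drops accordingly. The single-feature model and data are synthetic stand-ins, not a real scoring system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean synthetic training data: feature > 0 means "high risk".
X = rng.normal(size=(1000, 1))
y = (X[:, 0] > 0).astype(int)
clean = LogisticRegression().fit(X, y)

# Poisoning: inject clearly high-risk records labeled as "safe",
# biasing what the retrained model learns.
X_poison = rng.normal(loc=2.0, size=(200, 1))
y_poison = np.zeros(200, dtype=int)
poisoned = LogisticRegression().fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

probe = np.array([[2.0]])  # a genuinely risky applicant
print("clean model risk:   ", clean.predict_proba(probe)[0, 1])
print("poisoned model risk:", poisoned.predict_proba(probe)[0, 1])
```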
In response, the industry must not only stay abreast of these threats, but also invest in developing AI systems that are resilient to them. Rigorous testing, ongoing monitoring, and state-of-the-art encryption and cybersecurity measures can help banks safeguard their AI systems against these attacks.
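What "ongoing monitoring" can mean in practice is worth a concrete, if simplified, example. The Population Stability Index (PSI) is a metric long used in credit-risk model monitoring to compare a live score or feature distribution against the baseline seen at validation; a sustained jump can surface drift, upstream data problems, or manipulation such as poisoning. The thresholds below are common rules of thumb rather than regulatory standards, and the data is synthetic.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 act."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])  # fold outliers into end bins
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log of zero on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
baseline = rng.normal(size=5000)               # scores at model validation
live_stable = rng.normal(size=5000)            # same population: low PSI
live_shifted = rng.normal(loc=0.5, size=5000)  # drifted or manipulated input
print(f"stable:  {psi(baseline, live_stable):.3f}")
print(f"shifted: {psi(baseline, live_shifted):.3f}")  # near the 0.25 action threshold
```

A check like this runs in seconds on a daily schedule and gives risk teams an early, quantitative signal that a model's inputs no longer look like the world it was trained on.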
Economic and regulatory impact of AI security threats
The security threats posed by AI have direct and indirect economic and regulatory impacts on the banking industry. They can lead to significant financial losses, reputational damage, regulatory sanctions, and increased cost of operations.
In 2022, a sophisticated AI-driven cyberattack on a US bank resulted in unauthorized transfers amounting to $1 million. This attack used a form of AI manipulation known as ‘model evasion’, in which transactional data was modified to slip past the bank’s fraud detection system. The attack led to a significant financial loss and increased scrutiny from regulators.
Another example is a 2023 case where a major credit card company suffered a large-scale data breach due to an undetected vulnerability in its AI-powered customer service chatbot. The breach exposed the personal data of millions of customers, resulting in class-action lawsuits, regulatory penalties, and significant reputational damage.
The costs of these incidents extend beyond immediate financial losses. Banks face increased expenses in improving their cybersecurity measures, hiring skilled personnel, and ensuring regulatory compliance. Following such incidents, banks typically experience higher customer churn rates and decreased shareholder confidence, both of which can impact long-term profitability.
On the regulatory front, such security breaches typically attract the attention of regulatory bodies like the Federal Reserve and the Office of the Comptroller of the Currency (OCC) in the US. These institutions may impose penalties, increase the rigor of regulatory examinations, and mandate more stringent risk management practices.
For instance, in response to the aforementioned credit card company data breach, the OCC increased its scrutiny of AI implementations across the banking sector and issued new guidelines on AI security. This regulatory action has increased the compliance burden on banks and other financial institutions, prompting them to reassess their AI adoption strategies and risk management practices.
Looking at the bright side
Despite these challenges, it is critical to remember that AI is a tool that can be harnessed safely with adequate measures. For instance, to address security vulnerabilities, banking institutions can implement robust code review and testing practices, invest in advanced cybersecurity technologies, and regularly update their security protocols. As for code ownership and copyright issues, clear guidelines and agreements prior to AI utilization can provide some measure of protection until formal legal precedents are set.
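On the code review side, even a lightweight automated gate can force human eyes onto the riskiest AI-suggested patterns before they merge. The following is a minimal sketch, not a substitute for commercial static-analysis tooling; the flagged call list is illustrative and would be far longer in practice.

```python
import ast
import sys

# Calls that should force human review before AI-suggested Python merges.
FLAGGED_CALLS = {"eval", "exec", "compile", "__import__"}

def audit(source: str, filename: str = "<generated>") -> list[str]:
    """Return review findings for risky call sites in a code snippet."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
        if name in FLAGGED_CALLS:
            findings.append(f"{filename}:{node.lineno}: review call to {name}()")
        if name in {"run", "call", "Popen"}:  # subprocess with shell=True
            for kw in node.keywords:
                if kw.arg == "shell" and getattr(kw.value, "value", None) is True:
                    findings.append(f"{filename}:{node.lineno}: shell=True in {name}()")
    return findings

if __name__ == "__main__":
    snippet = "import subprocess\nsubprocess.run(cmd, shell=True)\nx = eval(data)\n"
    findings = audit(snippet)
    print("\n".join(findings) or "clean")
    sys.exit(1 if findings else 0)  # non-zero exit fails the CI step
```

Gates like this are cheap to run on every pull request and make the review obligation explicit, rather than relying on reviewers to spot dangerous constructs by eye.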
Looking at the larger picture, the potential benefits of AI in the banking industry far outweigh the risks, especially when those risks are properly mitigated. AI enables faster and more accurate decision-making, drastically reduces operational costs, and significantly improves the customer experience. Moreover, AI’s capability to learn and adapt can help banks stay ahead of the curve in the face of ever-evolving financial threats.
Taking everything into account, while the integration of AI in the banking and fintech industries brings undeniable benefits, it also ushers in a new set of challenges around security, code ownership, and application copyrights. As we move forward, these issues must be addressed thoughtfully, with a balanced approach that recognizes AI’s potential while maintaining a keen eye on associated risks. Ultimately, the key lies in navigating these challenges strategically to unleash the full potential of AI, fueling innovation and transforming the future of banking.