Security

Guardrails for LLMs in Banking: Essential Measures for Secure AI Use

Maria-Elena Tzanev
June 4, 2024

Generative AI here, large language models there… With the internet convincing financial institutions that they “need machine learning to survive!” it’s tempting to just start using ChatGPT or one of its competitors out of the box.

Yes, the FOMO is real, but deploying any natural language model directly into production without proper testing and limitations can lead to significant problems. Many companies (which we will discuss later) have suffered operational failures and reputational damage because they rushed this process. So, financial service providers must learn from their experiences and put safeguards on AI-powered applications to avoid potentially embarrassing or damaging outcomes for their business.

In this article, we aim to highlight the risks of blindly implementing LLMs in banking and other types of financial services and how appropriate safety controls can help maintain high standards of accuracy, security, compliance, and ethics.

LLM Risks for Financial Enterprises

Businesses in the financial sector operate in a highly regulated environment, handling sensitive customer data and financial transactions where even small mistakes can have significant consequences. Here are the key security issues and reasons why it’s essential to implement appropriate safeguards:

  • Exposure of sensitive information. LLMs could inadvertently expose or misuse sensitive customer data if it is not adequately secured. This includes personal identification information (PII), financial records, and transaction details.
  • Accuracy and reliability. LLMs can generate incorrect or nonsensical outputs (hallucinate) if not properly trained and validated, leading to incorrect financial advice, transaction errors, and potential financial losses.
  • Unintended consequences. Biases in LLM outputs can influence critical decisions on loan approvals, credit scoring, and other financial services, leading to unfair practices. So, it's crucial to train LLMs on diverse data sets.
  • Misuse and abuse. Without adequate safeguards, LLMs can be manipulated into harmful or unethical text generation.

Language models lack judgment and are gullible: they act only on the data and language fed to them, and humans are often up to no good. So, while conversational AI has undeniable advantages, its open-ended nature can make a company the subject of cruel jokes, memes, and viral social media posts if it is not managed properly.

LLM Fails Without Guardrails

Generative AI chatbots should not be talking to customers without significant investment in reliability, security, and safety measures. Companies working in the finance sector absolutely must implement guardrails to detect and manage AI’s hallucinations (which occur 3% to 27% of the time) and inaccuracies. Otherwise, they risk ending up in situations like the ones below.

Air Canada Chatbot Fail

British Columbia resident Jake Moffatt booked a flight through Air Canada's website after its chatbot gave him incorrect information: it told him he could buy a full-fare ticket to attend a relative’s funeral and apply for a partial bereavement refund within 90 days, whereas the airline's actual policy requires requesting the discount before travel. After Air Canada refused the refund, a tribunal ordered the airline to pay about $800 in refunds and fees.

Generous Car Dealership Chatbot

Auto dealerships using ChatGPT-powered chatbots have also found that adequate monitoring is needed to prevent unintended responses. At one point, customers at several US car dealerships tricked chatbots into giving absurd answers and, in one case, into agreeing to sell a roughly $58,000 car for $1.

DPD’s Feisty Online Support Chatbot 

The parcel delivery company DPD, which uses AI in its chat system alongside human operators, experienced unexpected behavior after an update: The chatbot started swearing and criticizing the company.

The incident quickly spread on social media after a customer shared his experience of the chatbot swearing at him, recommending other delivery companies, and expressing exaggerated hatred toward DPD. His post was viewed 800,000 times within 24 hours.

The key takeaway from these cases is that banks wishing to continue (or start) using GenAI need to put guardrails in place to ensure that LLMs function securely and correctly and to prevent intentional or unintentional misuse.

How Guardrails Can Help Banks Minimize LLM Risks

Technical and policy-based guardrails can prevent chatbots from delivering inappropriate or harmful content. Filters like these should block responses on sensitive topics and, essentially, constrain what the chatbot is allowed to say. And if a chatbot doesn’t know the exact answer or is confused, guardrails should prompt it to forward the query to a human agent instead of simply making up an answer.
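
To make this concrete, here is a minimal sketch of such a filter. The blocked-topic list, the confidence threshold, and the helper callables are illustrative assumptions, not any specific product's API:

```python
# A minimal policy filter: block sensitive topics and hand off to a human
# whenever the model cannot answer confidently. The blocked topics, the
# 0.7 threshold, and the helper callables are illustrative assumptions.

BLOCKED_TOPICS = {"investment advice", "legal advice", "account closure"}

def escalate_to_agent(user_message: str) -> str:
    # In a real system this would open a ticket or transfer the chat session.
    return "Let me connect you with a colleague who can help with that."

def guarded_reply(user_message: str, classify_topic, generate, confidence_of) -> str:
    topic = classify_topic(user_message)        # e.g., a small intent classifier
    if topic in BLOCKED_TOPICS:
        return escalate_to_agent(user_message)  # don't let the LLM answer at all

    draft = generate(user_message)              # candidate LLM answer
    if confidence_of(draft) < 0.7:              # too uncertain: don't guess
        return escalate_to_agent(user_message)
    return draft
```

The key design choice is that uncertainty defaults to a human handoff rather than a guess.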

What Are Guardrails?

Guardrails are essentially controls designed to shepherd LLMs, preventing undesired outcomes. Effective guardrails protect data, ensure compliance, reduce operational risks, and promote the ethical use of AI. Fundamentally, these mechanisms and protocols keep LLM deployment and operation safe thanks to:

  • Data encryption
  • Access controls
  • Explainable AI (XAI)
  • Open communication with users
  • Real-time monitoring
  • Regular audits
  • Model retraining
  • Patching and updating
  • Manual review of output
  • A human override option

The last two points on this list are particularly important. The human-in-the-loop (HITL) approach means that there is an operator who blocks or approves outputs, essentially grounding the bot, making decisions instead of the model, and taking responsibility for them. The system should also notify operators when an anomaly is detected: for example, when the chatbot starts hallucinating, having strange conversations, promoting competitors, and so on.
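
A hedged sketch of what that approval-and-alerting loop could look like in code (the anomaly markers and the callables passed in are hypothetical):

```python
# Illustrative human-in-the-loop queue: flagged answers wait for an operator,
# who approves or blocks them, and anomalies trigger a notification.
# The anomaly markers and the callables passed in are hypothetical.
from queue import Queue

review_queue: Queue = Queue()
ANOMALY_MARKERS = ("competitor", "discount", "hate")  # toy triggers

def submit_answer(answer: str, send_to_customer, notify_operator) -> None:
    if any(marker in answer.lower() for marker in ANOMALY_MARKERS):
        review_queue.put(answer)                              # hold instead of sending
        notify_operator(f"Answer held for review: {answer[:80]}")
    else:
        send_to_customer(answer)

def operator_decision(approve: bool, send_to_customer) -> None:
    answer = review_queue.get()
    if approve:
        send_to_customer(answer)   # the human takes responsibility for the output
    # if blocked, the answer is dropped or rewritten by the operator
```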

Types of Guardrails

There are two types of guardrails you can put on a large language model: input and output. 

Input guardrails protect LLMs from inappropriate content that users try to feed into the model. Some of this content can be just plain nonsense, such as off-topic questions about the universe posed to a GenAI chatbot in banking. However, malicious actors may attempt to compromise the whole system through the chatbot by jailbreaking it (tricking the model into ignoring its built-in restrictions) or through prompt injection (smuggling malicious instructions into the model's input).

Output guardrails limit what comes out of the large language model. This primarily means keeping the LLM’s hallucinations under control, moderating the model’s output in accordance with your corporate guidelines, and ensuring answers aren’t biased.
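
As a simplified illustration of the two types, the sketch below applies a few toy checks on the way in and on the way out. The patterns and keywords are assumptions; real deployments would use trained classifiers rather than keyword lists:

```python
# Simplified illustration of input vs. output guardrails.
# The patterns and keywords are toy examples; real deployments use trained
# classifiers and far broader rule sets.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"pretend you are",            # common jailbreak phrasing
]
OFF_TOPIC_KEYWORDS = {"meaning of life", "write me a poem"}

def input_guardrail(user_message: str) -> bool:
    """Return True if the message may be passed to the model."""
    lowered = user_message.lower()
    if any(re.search(p, lowered) for p in INJECTION_PATTERNS):
        return False               # likely jailbreak or prompt injection
    if any(k in lowered for k in OFF_TOPIC_KEYWORDS):
        return False               # off-topic for a banking assistant
    return True

def output_guardrail(model_answer: str, banned_terms=("guaranteed returns",)) -> bool:
    """Return True if the answer may be shown to the customer."""
    return not any(term in model_answer.lower() for term in banned_terms)
```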

When we think of specific LLM guardrails for companies operating in the banking and financial services sector, these are the ones that come to mind:

  • Sensitive financial information. Banks can set up guardrails to detect and block the transmission of sensitive financial data (account numbers, credit card details, etc.), as in the sketch after this list.
  • Toxic language and offensive content. LLMs can be trained to recognize and filter out toxic language, hate speech, or offensive content.
  • Mention of competitors. Guardrails can detect and prevent mentions of competitor banks or proprietary information to protect the institution's intellectual property and brand integrity.
  • Unsolicited financial advice. Custom guardrails can filter out unsolicited financial advice or recommendations to ensure compliance and avoid potential liabilities.
  • Data leaks and privacy breaches. Guardrails can detect and block attempts to share sensitive customer data or violate privacy regulations by ensuring that PII is not transmitted or stored improperly.
  • Jailbreak attempts and system manipulation. LLMs can be equipped with guardrails to detect and respond to attempts to circumvent operational parameters or manipulate the system.
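
For the first item on this list, detection can start as simply as the pattern-matching sketch below. The regexes are deliberately rough and would be combined with named-entity recognition and checksum validation in practice:

```python
# Rough sketch of a sensitive-financial-data filter. The regexes are
# deliberately simplified; production systems combine pattern matching with
# named-entity recognition and checksum (e.g., Luhn) validation.
import re

CARD_NUMBER = re.compile(r"\b(?:\d[ -]?){13,16}\b")          # credit-card-shaped digits
IBAN_LIKE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b")  # IBAN-shaped strings

def contains_sensitive_financial_data(text: str) -> bool:
    return bool(CARD_NUMBER.search(text) or IBAN_LIKE.search(text))

def redact(text: str) -> str:
    text = CARD_NUMBER.sub("[REDACTED CARD]", text)
    return IBAN_LIKE.sub("[REDACTED IBAN]", text)
```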

GenAI regulation is inevitable, so we should expect guardrails to fall into two categories: regulatory compliance and internal policies. Regulatory requirements, set by international, national, and local bodies, will include laws such as the EU Artificial Intelligence Act, which sets standards and protects against data misuse and security breaches. As we know, these things take time, and we have to act fast. So, for now, building custom LLM guardrails internally is the best option for banks.

Building Guardrails for Large Language Models

Internal policies, unique to each financial institution and developed by cross-departmental teams, are an excellent way to safeguard your reputation and retain customer trust. These policies involve implementing a combination of strategies and best practices to ensure safe, reliable, and ethical use. There are several approaches to consider here:

  • Data quality control. Ensure that the data used for training and deployment of LLMs is accurate, clean, and representative of a diverse user base.
  • Data protection measures. Implement strict data protection protocols, including data anonymization, encryption, and secure storage. It’s also important to exclude PII from the data used in LLM training. 
  • Compliance with laws and regulations. Ensure that the LLM's operations comply with all relevant standards, including data protection legislation such as GDPR and industry-specific regulations.
  • Audit trails. Maintain comprehensive logs of the LLM's activities to facilitate regulatory audits and demonstrate compliance (a minimal logging sketch follows this list).
  • Access controls. Implement strict access controls to limit who can interact with the LLM and its data.
  • Threat detection and response. Establish systems to monitor potential security threats and respond to them quickly. This includes regular audits and penetration testing.
  • Real-time monitoring. Continuously monitor the LLM's performance and outputs in real-time to rapidly detect and address anomalies or issues.
  • Redundancy and failover. Design the system with redundancy and failover mechanisms to ensure reliability and availability even during outages or times of high demand.
  • Explainable AI. Develop techniques to make the LLM's decision-making process transparent and understandable to users. This includes providing clear explanations for outputs and decisions.
  • Human in the loop. Integrate human oversight into critical decision-making processes to review and validate the LLM’s outputs.
  • Override mechanisms. Provide opportunities for human operators to override the LLM's decisions when necessary.
  • Feedback loops. Establish feedback procedures to gather input from users and stakeholders and use this information to continuously improve the LLM's performance and functionality.
  • Ongoing training and updates. Regularly retrain and update the LLM with new data to maintain its accuracy and relevance.
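
As an illustration of the audit-trail point above, a minimal interaction log might look like the following. The field names and the JSON-lines file are assumptions; a bank would normally write to an append-only, access-controlled store:

```python
# Minimal audit-trail logging for LLM interactions. Field names and the
# JSON-lines file are assumptions; a bank would normally write to an
# append-only, access-controlled store instead of a local file.
import hashlib
import json
from datetime import datetime, timezone

def log_interaction(user_id: str, prompt: str, response: str,
                    path: str = "llm_audit.jsonl") -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": hashlib.sha256(user_id.encode()).hexdigest(),  # avoid raw IDs in logs
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```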

Implementing effective guardrails for LLMs in banking requires a strategic approach. Start with clear documentation of all policies and procedures related to the use of LLMs. Conduct regular training for stakeholders to ensure that everyone understands and can apply these guardrails effectively. It’s also a good idea to form interdisciplinary teams of data scientists, ethicists, legal experts, and subject matter experts to oversee the deployment and cover all critical aspects.

Next, meet with stakeholders, including customers, to align the guardrails with their needs. Conduct stress tests to see how the LLM performs under extreme conditions and to identify vulnerabilities. You can also use simulation exercises to test the LLM in different scenarios, tricky conversations, and other tasks to ensure it can handle various situations safely.

Following these security strategies may seem like a lot of work, but banks interested in GenAI must put in the effort to ensure responsible LLM implementation.

Dynamiq's Guardrails for LLMs in Banking and Financial Services

Dynamiq provides essential guardrails and risk management tools to ensure the safe and effective use of LLMs in the banking sector.

Key Features

  • Precision and Reliability: Advanced validation tools verify the accuracy and correctness of LLM outputs, aligning them with expected outcomes.
  • Pre-Built Validators: A comprehensive library of ready-to-use tools tailored for various financial scenarios, enabling easy integration and effective validation.
  • Content and Risk Management: Tools to protect sensitive data, prevent toxic content, avoid competitor mentions, control financial advice, and detect jailbreak attempts.
  • Structured Output Assurance: Ensures outputs are in a structured JSON format with thorough risk assessment and proactive measures to maintain integrity.
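
Dynamiq's own API is not shown here; purely as a generic illustration of structured-output assurance, a JSON-schema check might look like this (the schema fields are hypothetical):

```python
# Generic illustration of structured-output validation; this is NOT Dynamiq's
# API, and the schema fields are hypothetical examples.
import json
from jsonschema import ValidationError, validate  # pip install jsonschema

ANSWER_SCHEMA = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "sources": {"type": "array", "items": {"type": "string"}},
        "risk_level": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["answer", "risk_level"],
}

def parse_and_validate(raw_model_output: str):
    try:
        payload = json.loads(raw_model_output)
        validate(instance=payload, schema=ANSWER_SCHEMA)
        return payload
    except (json.JSONDecodeError, ValidationError):
        return None  # reject malformed output and fall back to a safe response
```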

Dynamiq ensures LLMs and data used in financial services are secure, accurate, and compliant with industry standards.

Conclusion

Deploying LLMs in the banking and financial services sector without adequate guardrails is a recipe for disaster. As illustrated by the various examples of chatbot failures, it’s clear that financial service institutions must invest in comprehensive strategies to mitigate risks and ensure the responsible use of AI. Guardrails serve as essential controls to prevent inappropriate content, protect sensitive data, and ensure compliance with industry regulations.

Implementing effective guardrails requires a strategic approach, including clear documentation, regular training, interdisciplinary oversight, and real-time monitoring. These measures, along with Dynamiq’s advanced validation tools and risk management features, help ensure that LLMs operate safely, accurately, and ethically. 

By balancing accuracy, latency, and cost, banks can benefit from GenAI while protecting their operations and reputation. If you’re keen to learn more about how Dynamiq can help you achieve this, contact us to book a demo.
