AI in Fintech: The Legal Basis for Data Processing

Navigating the data protection landscape can be challenging, especially when you are considering integrating artificial intelligence (AI) into your operations.

This blog offers guidance on managing the complexities arising from the intersection of AI applications and the UK's General Data Protection Regulation (UK GDPR). It aims to shed light on crucial aspects such as the legal basis for data processing, the importance of transparency obligations, and the necessity of human reviews in the AI-driven decision-making process. In our next blogs, we'll dive into topics like data processor vs. controller, sharing data with third parties, the UK's White Paper on AI, and the AI Act in the EU.

1. An Example to Bring Things into Context

First, let's set out an example, together with a potential unwanted-bias risk, to keep in mind as we work through the data protection requirements.

Scenario: A bank uses an AI-driven system to assess loan applications. The AI model has been trained on historical data, which includes past loan approvals and rejections.

Unwanted Bias: The AI system begins to demonstrate a pattern of rejecting loan applications from female applicants more frequently. This pattern could have emerged because the historical data used to train the model unintentionally incorporated gender biases, reflecting a period when fewer loans were approved for women.

Practical Mitigations:

  • Statistical Tests for Bias: Use regression models to check whether gender is a significant predictor of loan approval or rejection, controlling for other variables such as income and credit score. This helps determine whether gender is unduly influencing the decision-making process (see the sketch after this list).
  • Model Transparency and Explainability: Use tools that provide insights into the AI model’s decision-making process. Understanding which features are most influential in the model’s decisions can indicate if gender is being given undue weight.
  • Verification: Consider adding a layer of human verification to the AI's decision-making process to ensure fairness and address potential biases.
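
To make the first mitigation concrete, here is a minimal Python sketch of such a regression check. It assumes a pandas DataFrame of historical (or model-generated) decisions; the file name and the approved, gender, income and credit_score columns are illustrative assumptions:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical decision data; the file and column names are illustrative.
df = pd.read_csv("loan_decisions.csv")  # columns: approved, gender, income, credit_score

# Encode gender as a binary indicator for this simple check.
df["is_female"] = (df["gender"] == "F").astype(int)

# Regress approval on gender while controlling for legitimate factors.
# A significant is_female coefficient suggests gender may be unduly
# influencing outcomes and warrants further investigation.
X = sm.add_constant(df[["is_female", "income", "credit_score"]])
result = sm.Logit(df["approved"], X).fit(disp=False)

print(result.summary())
print("p-value for is_female:", result.pvalues["is_female"])
```

A statistically significant coefficient is evidence worth investigating, not proof of discrimination. For the explainability point, tools such as SHAP or scikit-learn's permutation_importance can help show which features drive individual predictions.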

2. What are the Data Protection Laws?

In the UK, data protection laws are primarily governed by the Data Protection Act 2018 (DPA 2018), which complements and supplements the General Data Protection Regulation (GDPR). After Brexit, the UK retained the GDPR in its domestic law as the UK GDPR. The UK GDPR, together with the DPA 2018, forms the core of data protection legislation in the UK.

3. What is Personal Data?

Personal data is information that relates to an identified or identifiable individual. The UK GDPR applies to personal data processed wholly or partly by automated means. In our example, even if applicants' names were removed, individuals may still be identifiable if data such as addresses and personally identifiable financial information is being processed.
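
As a brief illustration of why removing names alone may not take data out of scope, here is a small sketch with entirely made-up records; dropping the direct identifier still leaves quasi-identifiers that can single out an individual:

```python
import pandas as pd

# Entirely fictional applicant records.
applicants = pd.DataFrame([
    {"name": "A. Smith", "postcode": "SW1A 1AA", "salary": 52000, "employer": "Acme Ltd"},
    {"name": "B. Jones", "postcode": "M1 2AB", "salary": 38500, "employer": "Beta plc"},
])

# Dropping the name is pseudonymisation, not anonymisation: the remaining
# quasi-identifiers (postcode, salary, employer) can still single out an
# individual, so the data remains personal data under the UK GDPR.
pseudonymised = applicants.drop(columns=["name"])
print(pseudonymised)
```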

4. What are the Principles?

Article 5 of the UK GDPR sets out seven key principles that lie at the heart of the general data protection regime:

1. Processed lawfully, fairly and in a transparent manner in relation to individuals ('lawfulness, fairness and transparency'); 

2. Collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes ('purpose limitation');

3. Adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed ('data minimisation');

4. Accurate and, where necessary, kept up to date ('accuracy');

5. Kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed ('storage limitation');

6. Processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures ('integrity and confidentiality').

7. Article 5(2) adds that “the controller shall be responsible for, and be able to demonstrate compliance with, paragraph 1” ('accountability').

In our example, this means that the bank would, at a minimum, need to inform customers in its privacy policy that their data is being used for AI-driven loan assessments. This practice is in line with the 'lawfulness, fairness and transparency' principle of the UK GDPR, ensuring that customers understand how and why their data is processed.

5. Which Lawful Basis Should I Use?

Using personal data requires satisfying at least one lawful basis for processing, as outlined in Article 6 of the UK GDPR:

(a) Consent: The individual has given clear consent to process their personal data for a specific purpose.

(b) Contract: Processing is necessary for a contract with the individual, or because they have requested specific steps before entering into a contract.

(c) Legal obligation: Processing is necessary to comply with legal requirements (excluding contractual obligations).

(d) Vital interests: Processing is necessary to protect someone’s life.

(e) Public task: Processing is necessary to perform a task in the public interest or for official functions, with a clear legal basis.

(f) Legitimate interests: Processing is necessary for your or a third party's legitimate interests unless the need to protect the individual's personal data overrides these interests.

While legitimate interest might initially seem like a viable basis (assuming a balanced Legitimate Interest Assessment), Article 22 of the UK GDPR sets additional protections for individuals in cases of solely automated decision-making with significant effects. Such decision-making is permissible only if:

  • It's necessary for contract performance.
  • It's authorised by domestic law applicable to the controller.
  • It's based on the individual’s explicit consent.

In our loan example, 'necessary for contract performance' and 'explicit consent' emerge as potential legal bases. However, explicit consent is a high bar to meet. While not defined in the legislation, it must be expressly confirmed in a clear, specific statement. The criteria for valid consent are stringent, requiring it to be freely given, specific, fully informed, unambiguous, and revocable.

In addition, in accordance with Article 22, the bank would need to implement suitable measures to safeguard the data subject's rights, freedoms, and legitimate interests, including at least the right to obtain human intervention, express his or her point of view, and contest the decision.

This means, in addition to the earlier discussed transparency requirements, the bank must inform individuals about the nature and operations of the automated decision-making process, provide ways for them to request human intervention or challenge decisions, and regularly ensure that systems are functioning correctly. Regulators understand the complexity of explaining algorithms and don't expect organisations to provide detailed descriptions or disclose the full algorithm (as per the Guidelines on automated individual decision making and profiling). However, a thorough description of the data used in decision-making, including influential factors, data sources, and their relevance, is recommended.
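
As a rough illustration of how these safeguards might surface in a system's design, here is a minimal Python sketch of a decision record with a human-review hook. The class, fields, and function names are illustrative assumptions, not a prescribed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class LoanDecision:
    """An automated decision plus the audit trail needed to evidence
    Article 22 safeguards (illustrative structure only)."""
    application_id: str
    automated_outcome: str            # e.g. "approve" or "reject"
    influential_factors: list[str]    # plain-language factors shown to the applicant
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    human_review_requested: bool = False
    reviewer_id: Optional[str] = None
    final_outcome: Optional[str] = None

def request_human_review(decision: LoanDecision, reviewer_id: str) -> None:
    """Route an automated decision to a named human reviewer, preserving
    the original outcome so the review can be evidenced later."""
    decision.human_review_requested = True
    decision.reviewer_id = reviewer_id
```

Keeping the automated outcome, the factors behind it, and any subsequent human review in a single record makes it easier to demonstrate that the right to human intervention was genuinely available.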

With the added complexities, having an effective compliance and control framework will be essential. A solid framework will allow you to proactively identify and address potential risks, ensure transparency in your operations, and align your strategies with both internal policies and external regulations.

6. Data Protection Impact Assessment 

Implementing artificial intelligence systems will often necessitate a comprehensive Data Protection Impact Assessment (DPIA). A DPIA is a process designed to identify and minimise the data protection risks of a project. It is mandatory for processing activities likely to result in high risks to individuals, including certain specified types of processing. The ICO provides a screening checklist to determine when a DPIA is needed, and it's also advisable to conduct one for any major project that involves processing personal data.

A DPIA should:

  • Describe the nature, scope, context, and purposes of the processing.
  • Assess the necessity, proportionality, and compliance measures.
  • Identify and evaluate risks to individuals.
  • Determine additional measures to mitigate those risks.

When assessing risk levels, consider both the likelihood and severity of any impact on individuals. High risk might arise from a high probability of some harm, or from a lower probability of serious harm. Consult your data protection officer, if you have one, and note that processors may also need to provide assistance. If you identify a high risk that cannot be mitigated, consult the ICO before commencing processing; the ICO will provide written advice within eight weeks, or 14 weeks in more complex cases.
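
As a simple illustration of combining likelihood and severity, here is a sketch of a risk-rating helper. The 1-to-3 scales and thresholds are illustrative assumptions; the ICO does not mandate a specific scoring matrix:

```python
def risk_rating(likelihood: int, severity: int) -> str:
    """Rate a DPIA risk from likelihood and severity, each on an
    illustrative 1 (low) to 3 (high) scale."""
    score = likelihood * severity
    if severity == 3 or score >= 6:
        # Any serious harm is treated as high risk, even if unlikely.
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# An unlikely (1) but serious (3) harm still rates as high risk.
print(risk_rating(1, 3))  # -> high
```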

7. How can Product Pack help?

Product Pack streamlines complex governance processes. Our built-in compliance modules and guides for Privacy and AI are designed to support teams in conducting assessments, ongoing monitoring, and reporting, making compliance and accountability simpler to implement and evidence. Book a demo here.

Beta Program

We're currently running a private beta with a select number of companies. If you're keen to get early access or see a demo, leave us your details below.
