26 May 2023

AI & Data Protection: Is GDPR Ready for Retirement?

I.  Strengths of the GDPR

The General Data Protection Regulation (“GDPR”), applicable since May 2018, has played a vital role in safeguarding personal data in the era of information and communications technologies.  As AI technologies continue to advance rapidly, questions arise about the GDPR’s effectiveness and adaptability in addressing the evolving challenges of data protection.  This article examines whether the GDPR is ready for retirement or whether it needs updating to address AI-related data protection concerns effectively.

Artificial intelligence (“AI”) is commonly defined as the study of how to produce machines that have some of the qualities the human mind has, such as the ability to understand language, recognize pictures, solve problems, and learn.  In practice, modern AI systems acquire these abilities largely through machine learning, i.e., by training statistical models on large volumes of data and selecting the model that performs best.

If AI can learn and solve problems on its own, how does that fit within the main principles of the GDPR?

The GDPR has brought significant improvements to data protection practices.  It has established clear guidelines for organizations that collect, process, and store personal data.  The GDPR ensures transparency by requiring that data processing rest on a lawful basis such as informed consent, and it empowers individuals with the rights to access, rectify, and erase their personal information.  These principles provided a strong foundation for protecting personal data in the digital era, but questions have been raised as to whether they are sufficient in the age of AI.


II.  GDPR and the Uncharted Territory of AI in Data Protection

While the GDPR has laid solid groundwork, the rapid growth of AI presents unique challenges.  Whereas the 1995 Data Protection Directive made no specific reference to the Internet, the GDPR introduces terms tied to online platforms, such as websites, links, and social networks.  It is worth noting, however, that the GDPR never explicitly mentions AI or related concepts such as autonomous systems, intelligent systems, automated reasoning and inference, machine learning, or big data.

The expanded application of data protection rules is driven by the adoption of AI to replace human decision-making.  Human decisions are not directly regulated by the GDPR unless they are based on inaccurate or unlawfully processed data; decisions made by AI systems, by contrast, are subject to the GDPR, notably its rules on automated individual decision-making (Article 22) and its fundamental requirements of fairness and accountability.  Individuals whose data is processed by AI systems therefore have the right to contest decisions made by these machines on grounds of fairness and lawfulness.

The main question remains: is this possible?

Ensuring compliance with the GDPR’s requirements for the lawful processing of personal data becomes more complex in the AI landscape.  AI systems require extensive data for training and for improving their performance, and this reliance on data raises concerns about privacy, data quality, and bias.  The volume and variety of data used in AI can make it challenging to obtain explicit consent from individuals, especially when data is collected from many different sources.  AI algorithms are also susceptible to biases present in their training data: if that data is biased, the system may produce discriminatory or unfair outcomes.  The GDPR’s principles of fairness and accountability need to be strengthened to address these biases effectively.
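To make the bias concern concrete, the following sketch (in Python) shows one simple check an organization might run on an AI system’s outcomes: the “disparate impact” ratio between a protected group and a reference group.  The data, group labels, and the 0.8 threshold (borrowed from the US “four-fifths” rule of thumb, not from the GDPR) are illustrative assumptions only.

    # Minimal sketch: checking automated decisions for disparate impact.
    # All data, labels, and thresholds are illustrative assumptions.

    def disparate_impact(decisions, groups, protected, reference):
        """Ratio of favorable-outcome rates: protected vs. reference group."""
        def favorable_rate(group):
            outcomes = [d for d, g in zip(decisions, groups) if g == group]
            return sum(outcomes) / len(outcomes)
        return favorable_rate(protected) / favorable_rate(reference)

    # Hypothetical decisions (1 = favorable outcome) and applicant groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    ratio = disparate_impact(decisions, groups, protected="B", reference="A")
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:
        print("Below the illustrative 0.8 threshold - outcomes warrant review.")

A ratio well below 1 signals that the protected group receives favorable outcomes markedly less often, which is exactly the kind of pattern the GDPR’s fairness principle would require an organization to investigate.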


III.  Three Examples of How AI Can Find Itself in Breach of GDPR

• Lack of Transparency: AI systems often involve complex algorithms and decision-making processes, making it challenging to explain how personal data is processed.  GDPR requires transparency, meaning individuals should be informed about the logic, significance, and consequences of automated processing.  If AI systems fail to provide such transparency, they may infringe on GDPR provisions.

Example: In a hypothetical scenario, a corporation implements an AI-driven recruitment system to handle job applications.  The AI system autonomously screens resumes and profiles applicants, using complex algorithms to assess qualifications and suitability for specific roles.  However, if that system lacks transparency in how it evaluates candidates, applicants will not be able to understand the criteria and reasoning behind automated decisions.  As a result, individuals are left uninformed about the logic, significance, and consequences of the AI system’s profiling and selection processes.  This lack of transparency violates GDPR provisions, which emphasize the right of individuals to be informed about automated decision-making that affects their employment opportunities.
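One way engineers approach this transparency duty is to have the system emit a human-readable explanation alongside every automated decision.  The following sketch is a deliberately simplified, hypothetical version of the recruitment scenario above; the criteria, weights, and field names are invented for illustration and do not reflect any real system.

    # Hypothetical, simplified screening logic that returns its reasons with
    # the decision, so applicants can be informed of the logic behind it.

    def screen_applicant(applicant: dict) -> dict:
        score, reasons = 0, []
        if applicant["years_experience"] >= 3:
            score += 2
            reasons.append("3+ years of relevant experience (+2)")
        if applicant["has_required_degree"]:
            score += 1
            reasons.append("holds the required degree (+1)")
        return {
            "decision": "shortlist" if score >= 2 else "reject",
            "logic": reasons or ["no screening criteria met"],
            "significance": "determines whether a human recruiter sees the CV",
        }

    print(screen_applicant({"years_experience": 4, "has_required_degree": False}))

Recording the logic and significance of each decision in this way gives the controller something concrete to disclose when an applicant exercises the right to be informed.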

• Automated Profiling: AI systems heavily rely on profiling techniques to make predictions and decisions based on individuals’ data.  Profiling can significantly impact individuals, for example through automated decisions on job applications or creditworthiness.  The GDPR grants individuals certain rights, such as the right to object and the right to be informed that profiling takes place.  AI systems that perform automated profiling without honoring these rights or obtaining appropriate consent may infringe GDPR provisions.

Example: In a hypothetical case, a financial institution utilizes an AI system to assess creditworthiness for loan applications.  The AI system extensively profiles applicants, analyzing various personal data points, including financial records, employment history, and online behavior.  However, if the system fails to inform individuals about profiling or provide them with the right to object, individuals will be unaware of how their data is being used, and automated decisions are made without their knowledge or consent.  This scenario violates GDPR provisions, which aim to protect individuals’ rights by ensuring transparency and control over automated profiling processes.
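In practice, one safeguard is to gate any profiling behind a check of the data subject’s recorded status: were they informed that profiling exists, and have they objected?  The sketch below illustrates that pattern; the registry structure, names, and fallback are assumptions for this example.

    # Illustrative gate in front of automated credit profiling: it proceeds
    # only if the applicant was informed and has not objected.

    class ProfilingNotPermitted(Exception):
        pass

    def may_profile(subject_id: str, registry: dict) -> bool:
        record = registry.get(subject_id, {})
        informed = record.get("informed_of_profiling", False)
        objected = record.get("objected", False)
        return informed and not objected

    def assess_creditworthiness(subject_id: str, registry: dict) -> str:
        if not may_profile(subject_id, registry):
            # Escalate to human review rather than profiling silently.
            raise ProfilingNotPermitted(f"profiling blocked for {subject_id}")
        return "run scoring model here"  # placeholder for the real model

    registry = {"alice": {"informed_of_profiling": True, "objected": False},
                "bob":   {"informed_of_profiling": True, "objected": True}}

    print(assess_creditworthiness("alice", registry))       # proceeds
    try:
        assess_creditworthiness("bob", registry)
    except ProfilingNotPermitted as exc:
        print("Escalate to human review:", exc)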

• Inadequate Security Measures: AI systems often process and store large amounts of personal data.  GDPR mandates that organizations implementing AI systems must take appropriate security measures to protect personal data from unauthorized access, loss, or destruction.  If AI systems fail to implement adequate security measures, leading to data breaches or unauthorized access, they may infringe on GDPR provisions.

Example: In a hypothetical case, a healthcare organization implements an AI system to analyze patient data for diagnosing diseases.  However, inadequate security measures leave the AI system vulnerable to cyberattacks.  A malicious actor successfully breaches the system, gaining unauthorized access to sensitive patient records, including medical histories and test results.  This data breach violates GDPR provisions and compromises patient privacy, potentially leading to the misuse of personal information and undermining trust in the healthcare institution.
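A baseline technical measure in such a system is encrypting personal data at rest.  The sketch below uses the Python cryptography library’s Fernet symmetric encryption purely as an illustration; key handling is deliberately simplified, and a real deployment would keep keys in a dedicated key-management service.

    # Minimal illustration of encrypting a patient record at rest
    # (pip install cryptography). Key handling is simplified for the sketch.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()       # in practice: fetched from a key vault
    cipher = Fernet(key)

    record = b'{"patient_id": "12345", "diagnosis": "..."}'
    token = cipher.encrypt(record)    # this ciphertext is what gets stored
    print("stored ciphertext:", token[:40])

    restored = cipher.decrypt(token)  # authorized read path only
    assert restored == record

Encryption at rest does not make a system GDPR-compliant on its own, but it sharply limits what an attacker obtains if storage is breached.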


IV.  AI under Regulators’ Watch

As previously mentioned, AI may rely on potentially sensitive personal information, which can lead to privacy violations if that information is not adequately secured.  Our previous article, “Hello, ChatGPT!”, drew attention to this.

Italy became the first Western country to ban ChatGPT, the AI chatbot developed by OpenAI that simulates human-like conversations.  On March 30, 2023, the Italian data protection authority (Garante per la Protezione dei Dati Personali) imposed a temporary limitation on OpenAI’s processing of Italian users’ data, citing the lack of information provided to users and the absence of a legal basis for the extensive collection and processing of personal data used to train the platform’s algorithms.  The regulator also expressed concern about the lack of age restrictions and the chatbot’s potential to provide factually incorrect information.  OpenAI could face a fine of up to EUR 20 million or 4% of its global annual revenue.

The two sides subsequently held a video conference, with OpenAI expressing its commitment to cooperation and transparency in addressing the regulator’s concerns.  The Italian watchdog stressed its support for AI development while insisting on compliance with personal data protection legislation, and OpenAI provided a document outlining measures to address the regulator’s evaluation and compliance assessment requests.

Italy isn’t the only country grappling with the rapid pace of AI progress and its societal implications.  Governments are creating rules for AI that will likely impact generative AI, even if they don’t explicitly mention it.  Recently, the European Data Protection Board (“EDPB”), the body that brings together Europe’s national privacy watchdogs, set up a task force on ChatGPT, a potentially important first step toward a common policy on privacy rules for artificial intelligence.


V.  The Intersection of GDPR and the AI Act in the EU

Recognizing the need to prevent potential misuse of AI and to protect individuals’ rights, EU regulators are actively working on legislation and guidelines to govern the use of AI.  This legislation, commonly referred to as the AI Act, will complement the existing GDPR and help ensure that AI placed on the EU market respects fundamental rights and regulatory requirements.

Organizations will have to comply with the AI Act alongside the GDPR, potentially facing consequences for non-compliance under both regimes.  While the GDPR can impose fines of up to EUR 20 million or 4% of a company’s global annual revenue, whichever is higher, the draft AI Act proposes fines of up to 6% of global annual revenue, making non-compliance even more costly for businesses.  The AI Act is expected to require organizations to establish and maintain risk management systems and processes, to assess whether their AI systems fall into the “high-risk” category, and to undergo regular evaluations.  Where personal data is used to develop high-risk AI systems, providers will be obliged to comply with both the data handling requirements of the AI Act and the personal data processing requirements of the GDPR.  This raises the question of whether non-compliance could result in double penalties, given the EU’s role as a unified authority overseeing both personal data management and AI systems and services.
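To illustrate the difference in exposure, the short sketch below compares the two fine ceilings for a hypothetical company; the revenue figure is invented, and the caps reflect the GDPR and the draft AI Act as described above.

    # Illustrative comparison of maximum fine exposure; the revenue figure
    # is a hypothetical assumption.
    revenue_eur = 2_000_000_000  # EUR 2 billion global annual revenue

    gdpr_cap   = max(20_000_000, 0.04 * revenue_eur)  # EUR 20M or 4%, whichever is higher
    ai_act_cap = 0.06 * revenue_eur                   # 6% under the draft AI Act

    print(f"GDPR cap:   EUR {gdpr_cap:,.0f}")   # EUR 80,000,000
    print(f"AI Act cap: EUR {ai_act_cap:,.0f}") # EUR 120,000,000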

Although the lessons learned from the GDPR, such as data hygiene, standards, and processes, will be valuable for complying with the AI Act, there are additional considerations.  The AI Act focuses on regulating the decision-making methodology based on data, including the algorithms used.  It goes beyond safeguarding raw data and addresses the potential threats of data manipulation and integrity compromise, which can lead to altered decisions with significant consequences.  Therefore, organizations must be aware of these risks and take appropriate measures to protect against them.

Authors: Branko Gabrić, Nikola Ivković, Žarko Popović