The AI Act and the GDPR — Personal Data Protection in Artificial Intelligence Systems

Artificial intelligence is entering organisations at a pace that outstrips many companies’ ability to assess the associated legal risks. Customer service chatbots, product recommendation engines, algorithms supporting recruitment, user behaviour analytics tools, facial recognition in access control — each of these systems processes personal data and is subject to both the GDPR and the new Artificial Intelligence Act (AI Act).

The AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council) is the world’s first comprehensive law regulating artificial intelligence. It entered into force on 1 August 2024, with its various provisions becoming applicable in stages — from February 2025 to August 2027.

This article explains how the AI Act and the GDPR interact, what obligations they impose on companies using AI, and how to prepare your organisation for their simultaneous application.

The AI Act in Brief — A Risk-Based System

The AI Act introduces a risk-based approach, classifying AI systems into four categories:

Unacceptable risk (prohibited AI practices) — AI systems whose use is entirely prohibited in the EU. The ban has applied since 2 February 2025. It covers, among others: social scoring systems (by public and private actors alike), subliminal manipulation causing harm, exploitation of vulnerabilities of individuals (such as age or disability) to influence their decisions, and real-time remote biometric identification in public spaces by law enforcement (with limited exceptions).

High risk — AI systems that can have a significant impact on the fundamental rights of individuals; these are subject to the strictest requirements. They include: AI systems in recruitment and employee management, credit scoring systems, AI systems in education (student assessment, examinations), AI systems in the administration of justice, biometric systems (identification, categorisation), AI systems in critical infrastructure, and AI systems in access to public services.

Limited risk — AI systems subject to transparency obligations. This covers: chatbots (must inform the user that they are interacting with AI), systems generating deepfakes (must label content as AI-generated), and emotion recognition systems (must inform about their operation).

Minimal risk — all other AI systems (e.g., spam filters, game recommendation systems). No additional obligations, but the application of codes of conduct is encouraged.

Why the AI Act and the GDPR Must Be Applied Together

The AI Act does not replace the GDPR — Article 2(7) of the AI Act expressly confirms that the regulation is without prejudice to the GDPR and other data protection legislation. Both legal acts must be applied in parallel, which means:

Every AI system processing personal data is subject to the GDPR — regardless of its classification under the AI Act. Even a minimal-risk AI system under the AI Act must comply with GDPR principles if it processes personal data.

Requirements overlap, they do not exclude each other — the organisation must meet both the AI Act requirements (for the relevant risk category) and the GDPR requirements (for personal data processing).

Supervisory authorities may audit compliance with both acts — the UODO (the Polish data protection authority) as the GDPR supervisory authority and the national AI Act authority (a separate body will be designated in Poland) may examine the same AI system from different perspectives.

DPIA for AI Systems — A Dual Requirement

One of the most important points of intersection between the AI Act and the GDPR is impact assessment:

The GDPR (Article 35) requires a DPIA when data processing using new technologies is likely to result in a high risk to individuals’ rights. AI systems — due to profiling, automated decision-making, and the frequent opacity of algorithms — almost always meet the criteria triggering a DPIA.

The AI Act (Article 27) requires certain deployers of high-risk AI systems (in particular bodies governed by public law, private entities providing public services, and deployers of credit scoring and insurance risk assessment systems) to conduct a Fundamental Rights Impact Assessment (FRIA). The FRIA covers, among other things: a description of the processes in which the AI system will be used, the period and frequency of use, the categories of individuals potentially affected, specific risks to fundamental rights, and human oversight measures.

In practice: If your AI system is both a high-risk system under the AI Act and processes personal data, you must conduct both a DPIA (GDPR) and an FRIA (AI Act). The EDPB and supervisory authorities recommend an integrated approach — conducting a single comprehensive assessment that meets the requirements of both regulations.

Profiling and Automated Decision-Making — Article 22 GDPR and the AI Act

Article 22 of the GDPR grants individuals the right not to be subject to a decision based solely on automated processing (including profiling) that produces legal effects or similarly significant effects. This right takes on particular significance in the context of AI:

Automated candidate screening in recruitment — if an AI system automatically rejects candidates without human involvement, the individual has the right to request human intervention, express their point of view, and contest the decision (Article 22(3) GDPR).

Credit scoring — automated creditworthiness assessment by an AI algorithm is subject to Article 22 GDPR. The CJEU in the SCHUFA case (C-634/21) confirmed that the automated calculation of a credit score constitutes automated decision-making within the meaning of Article 22 GDPR where the score plays a determining role in the lending decision.

Recommendation systems — algorithms personalising content or offers may constitute profiling under the GDPR, even if they do not make “decisions” in the traditional sense.

The AI Act adds further requirements — for high-risk AI systems it requires, among other things: technical documentation describing the system’s logic, provision of human oversight, transparency towards system users, and system accuracy and robustness.

The combination of Article 22 GDPR with the AI Act requirements means that organisations using AI for decision-making about individuals must simultaneously ensure: algorithm transparency (GDPR + AI Act), the right to human intervention (GDPR), human oversight (AI Act), technical documentation (AI Act), and information to individuals (GDPR + AI Act).

Information Obligation — Algorithm Transparency

The GDPR requires transparency of data processing (Article 5(1)(a)). In the AI context, this means informing individuals about the use of automated decision-making, including profiling, and providing meaningful information about the logic involved, the significance, and the envisaged consequences of such processing (Articles 13(2)(f) and 14(2)(g) GDPR).

In practice, this is one of the most challenging requirements to fulfil — machine learning algorithms can be “black boxes” whose logic is difficult to explain even to their creators.

The AI Act requires additional transparency:

High-risk AI systems — must have technical documentation describing the system’s logic, limitations, and intended use (Article 11 AI Act).

Limited-risk AI systems — must inform users that they are interacting with AI (Article 50 AI Act). This applies to chatbots, deepfakes, and emotion recognition systems.

Practical recommendation: A GDPR privacy notice for AI systems should include: information about the use of AI/automated decision-making, a general description of the algorithm’s logic (not source code, but an intelligible explanation), the significance of the processing for the individual (what decisions are made or supported), the envisaged consequences, and information about the right to human intervention and to contest the decision.

AI Training Data and the GDPR

AI systems learn from data — frequently including personal data. Using personal data to train AI models raises a number of GDPR questions:

Legal basis for training — on which Article 6 GDPR legal basis is personal data used for model training processed? The most commonly invoked basis is the controller’s legitimate interest (Article 6(1)(f)), but this requires a balancing test. Consent (Article 6(1)(a)) is problematic because it is difficult to obtain consent that is sufficiently specific and informed.

Purpose limitation — if data was collected for one purpose (e.g., fulfilling an order), using it to train an AI model constitutes a new processing purpose that requires a separate legal basis or a compatibility assessment (Article 6(4) GDPR).

Data minimisation — is personal data necessary for training the AI model, or can anonymised or synthetic data be used? The minimisation principle (Article 5(1)(c) GDPR) requires processing to be limited to what is necessary.

Right to erasure and the AI model — if an individual requests the deletion of their data (Article 17 GDPR), must the controller “unlearn” the AI model? This is one of the most difficult questions — technically removing the influence of specific data on a trained model may be impossible or disproportionately costly.

Decisions of the UODO and other European authorities — this area is at an early stage of regulatory development. The Italian Garante imposed a temporary ban on ChatGPT in 2023 precisely because of concerns about the legal basis for training, the information obligation, and data subject rights. The matter was resolved after OpenAI implemented additional safeguards.

AI Act Application Timeline — Key Dates

The AI Act becomes applicable in stages:

2 February 2025 — prohibition of unacceptable AI practices (Article 5) + AI literacy obligations (Article 4).

2 August 2025 — provisions on general-purpose AI models (GPAI), including obligations for providers of large language models (LLMs).

2 August 2026 — full application of provisions on high-risk AI systems listed in Annex III (recruitment, scoring, education, justice, etc.) + transparency obligations for limited-risk systems.

2 August 2027 — application of provisions on high-risk AI systems embedded in regulated products (medical devices, vehicles, machinery, etc.).

Organisations should begin preparations now — particularly companies using AI systems in HR, customer service, marketing, and risk assessment.

AI Literacy — A New Obligation Since February 2025

Article 4 of the AI Act introduces an obligation for organisations to ensure an appropriate level of AI literacy among staff who operate and use AI systems. This obligation has applied since 2 February 2025.

In practice, this means: identifying employees who use AI systems or make decisions based on them, providing training tailored to their role and the risk level of the AI system, and documenting the measures taken.

AI literacy training should cover both technical aspects (how the system works, its limitations) and legal aspects (obligations under the AI Act and GDPR, data subject rights, escalation procedures).

Practical Checklist — AI Act and the GDPR

  1. Inventory your AI systems — what AI systems do you use, and what personal data do they process?
  2. Classify your systems under the AI Act — prohibited / high risk / limited risk / minimal risk.
  3. Check that you are not using prohibited AI practices — the ban has applied since February 2025.
  4. Conduct a DPIA for AI systems processing personal data — integrate with an FRIA if the system is high-risk.
  5. Verify the legal bases for data processing — including training data.
  6. Update privacy notices — include information about the use of AI and automated decision-making.
  7. Ensure human oversight — for AI systems making or supporting decisions about individuals.
  8. Implement Article 22 GDPR procedures — the right to human intervention, to express a point of view, and to contest the decision.
  9. Ensure AI literacy — train employees who use AI systems.
  10. Prepare technical documentation — for high-risk AI systems.
  11. Designate responsible persons — who in the organisation is responsible for AI Act compliance?
  12. Monitor regulatory developments — the AI Act will be supplemented by delegated acts, standards, and guidelines.

Need Support With the AI Act and the GDPR?

Simultaneously applying the AI Act and the GDPR requires legal expertise in both areas and an understanding of the technical aspects of AI systems. At the Law Office of Dr Joanna Maniszewska-Ejsmont, we advise companies on AI system classification, conducting integrated DPIAs/FRIAs, implementing transparency and human oversight obligations, and training teams on AI literacy.


Contact us — we will help you prepare your organisation for the new regulatory requirements.

  +48 692 004 515

  kancelaria@maniszewska.pl