Unacceptable Risk
AI systems that pose an unacceptable risk are prohibited outright (e.g. social scoring systems and manipulative AI).
Social scoring is the practice of governments using AI to assign individuals a ‘social score’ based on their behaviour or compliance with social norms. Such scores can be used to incentivise ‘good’ behaviour or punish ‘bad’ behaviour by limiting opportunities, lengthening wait times for services, or withholding access to employment.
Under the EU AI Act, AI-based social scoring and similar systems are classified as unacceptable risk and are therefore prohibited.
| Prohibited AI System | Description | Reason for Prohibition |
|---|---|---|
| Social Scoring by Governments | Systems that score individual behaviour or compliance with laws to determine access to services. | Violates fundamental rights, risks discrimination, and undermines personal freedoms. |
| Real-Time Biometric Identification in Public | AI systems used for real-time facial recognition in public spaces for law enforcement. | Invasion of privacy and potential misuse for mass surveillance. |
| Manipulative AI | Systems that exploit vulnerabilities of individuals (e.g., children, the elderly) to manipulate behaviour. | Undermines individual autonomy and ethical principles. |
| Exploiting Vulnerabilities | AI systems that exploit specific vulnerabilities due to age, disability, or other factors. | Risk of harm or coercion to vulnerable groups. |
| Mass Surveillance AI | AI used for indiscriminate or disproportionate surveillance of individuals. | Breach of privacy and human rights. |
| Predictive Policing AI | Systems predicting criminal behaviour based on personal data (e.g., ethnicity, location). | Risk of reinforcing biases and unfair treatment. |
| AI for Subconscious Manipulation | Systems designed to subtly manipulate individuals’ decision-making processes. | Undermines free will and informed decision-making. |
Most obligations fall on providers (developers) of high-risk AI systems: those that develop high-risk AI systems and intend to place them on the market or put them into service in the EU, regardless of whether they are based in the EU or a third country. The same applies to third-country providers where the high-risk AI system’s output is used in the EU.
Users (deployers) are natural or legal persons that deploy an AI system in a professional capacity, not affected end-users. Deployers of high-risk AI systems have obligations of their own, though fewer than providers. These obligations apply to deployers located in the EU, and to third-country deployers where the AI system’s output is used in the EU.
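As a rough illustration of this territorial scope, here is a minimal sketch; the `Operator` class and `is_in_scope` function are illustrative names assuming a simplified two-factor test (where the operator is based, and where the output is used), not terms defined by the Act:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """Toy model of an operator (provider or deployer) of a high-risk AI system."""
    role: str                # "provider" or "deployer"
    based_in_eu: bool        # established in the EU?
    output_used_in_eu: bool  # is the system's output used in the EU?

def is_in_scope(op: Operator) -> bool:
    # Covered if the operator is located in the EU, or if the AI
    # system's output is used in the EU (third-country operators).
    return op.based_in_eu or op.output_used_in_eu

# A third-country provider whose system's output is used in the EU is covered:
assert is_in_scope(Operator("provider", based_in_eu=False, output_used_in_eu=True))
```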
General-purpose AI (GPAI)
All GPAI model providers must:
- provide technical documentation,
- provide instructions for use,
- comply with the Copyright Directive, and
- publish a summary of the content used for training.
Providers of GPAI models released under a free and open licence need only comply with the Copyright Directive and publish the training-content summary, unless the model presents a systemic risk.
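To summarise the tiering, here is a minimal sketch; the `gpai_obligations` helper and the obligation labels are illustrative, not terms from the Act:

```python
BASE_OBLIGATIONS = [
    "technical documentation",
    "instructions for use",
    "Copyright Directive compliance",
    "training-content summary",
]

def gpai_obligations(open_licence: bool, systemic_risk: bool) -> list[str]:
    # Free and open-licence providers get a reduced set, unless the
    # model presents a systemic risk, in which case the exemption falls away.
    if open_licence and not systemic_risk:
        return ["Copyright Directive compliance", "training-content summary"]
    return BASE_OBLIGATIONS

print(gpai_obligations(open_licence=True, systemic_risk=False))
# ['Copyright Directive compliance', 'training-content summary']
```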