European Parliament approves landmark AI Act
- עו"ד נועה גבע
- Mar 18, 2024
- 4 min read

Following a marathon of negotiations, on March 13, 2024, the European Parliament took a significant step toward regulating Artificial Intelligence (AI) systems by approving the AI Act.
The EU AI Act is now set to enter into force in the coming weeks, following the completion of final procedural and linguistic checks. While the AI Act will enter into force 20 days after its official publication, the Act provides for a phased approach to implementation and enforcement.
This landmark legislation represents one of the first comprehensive attempts globally to regulate the development and application of AI. It is designed to manage how AI technologies are developed, deployed, and used across the EU, aiming to ensure these innovative technologies benefit society while safeguarding individual rights and safety.
Organizations developing and deploying AI will need to understand the Act's new obligations and put them into practice.
Under the AI Act, AI systems are divided into four main categories according to the potential risk they pose to society. Systems considered high-risk will be subject to stringent rules that apply before they enter the EU market.
Here's a summary of the AI Act's key aspects:
Risk-Based Approach: The Act categorizes AI systems based on the level of risk they pose to society and their potential impact on our lives, ranging from minimal risk to unacceptable risk. Regulations are more stringent for systems that pose higher risks.
Unacceptable Practices: Certain uses of AI are considered unacceptable and are prohibited due to their potential for harm. These include AI systems that manipulate human behavior to circumvent users' free will (such as toys using voice assistance to encourage dangerous behavior in minors), systems that allow 'social scoring' by governments or companies, certain applications of predictive policing, systems that unfairly categorize individuals based on biometric data, and real-time remote biometric identification systems used for law enforcement purposes in public places (subject to certain exceptions).
High-Risk AI Systems: The Act defines high-risk AI systems as those with significant potential to impact the safety, rights, or freedoms of individuals. These include AI applications in critical infrastructure (water, gas, electricity), medical devices, systems that determine access to educational institutions or are used for recruiting, and certain systems used in the fields of law enforcement, migration and border control, and the administration of justice and democratic processes. Biometric identification and emotion categorization systems are also considered high-risk. High-risk systems are subject to strict compliance requirements, such as risk-mitigation systems, detailed documentation and human oversight, transparency obligations, high-quality data sets, and a high level of robustness, accuracy, and cybersecurity.
Limited-Risk AI Systems: AI systems that interact directly with users, such as chatbots, fall into this category. They must disclose that they are AI-driven, so users are aware they are not interacting with humans.
Minimal Risk AI Systems: The vast majority of AI systems fall into this category, for instance AI-enabled recommender systems and spam filters, which present only minimal or no risk to citizens' rights or safety.
Transparency and Information Requirements: For certain AI systems, including those interacting with people (chatbots) or generating or manipulating content (deepfakes), the Act mandates clear disclosure so users are aware they are interacting with a machine. AI-generated content will be labelled as such, and users will be informed when biometric categorization or emotion recognition systems are being used. Providers will have to design systems so that synthetic audio, video, text, and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated (see the illustrative sketch after this summary).
Data Governance and Management: The AI Act emphasizes the importance of high-quality data sets in training, developing, and testing AI systems, in order to minimize risks and biases.
Market Surveillance and Enforcement: Member states are required to appoint one or more national competent authorities to oversee the implementation and enforcement of the Act. These authorities will have powers to conduct market surveillance and enforce compliance.
European Artificial Intelligence Office: The Act provides for a European AI Office to facilitate the consistent application of the AI Act across the EU.
Compliance and Penalties: The Act sets out significant penalties for non-compliance, of up to €35 million or 7% of a company's annual global turnover, whichever is higher, depending on the severity of the infringement.
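By way of illustration only, and not as a compliance implementation, the following Python sketch shows one way a provider might attach a machine-readable "AI-generated" marker to a generated image. It assumes the Pillow imaging library, and the metadata key names (ai_generated, generator) are hypothetical, not defined by the AI Act or any standard.

```python
# Illustrative sketch only, not a compliance implementation.
# Assumes the Pillow library; the metadata keys used here are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Save a copy of a PNG with machine-readable provenance metadata."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")   # machine-readable flag
    metadata.add_text("generator", generator)   # system that produced it
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Check a PNG for the marker, e.g. before republishing content."""
    with Image.open(path) as image:
        return image.text.get("ai_generated") == "true"
```

In practice, providers may rely on more robust mechanisms, such as cryptographically signed content credentials (e.g. C2PA) or watermarking, since plain metadata is easily stripped.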
When it comes to operationalizing Responsible AI, consider the following key areas:
Establish AI governance: define a set of principles that will govern your AI systems, based on the obligations that the EU AI Act imposes.
Assess risk: map where AI is being developed and deployed within your company, evaluate the potential risk of those various use cases, and mitigate systemic risk.
Enable systematic Responsible AI testing: establish capabilities for testing AI systems for fairness, explainability, transparency, accuracy, and safety (a simple fairness check is sketched after this list).
Monitor ongoing compliance: secure rights to monitoring, documentation, and oversight of your AI systems, your service providers, and your Responsible AI initiatives.
Address the impact across your company: take steps to govern AI, from dedicated staffing to long-term strategic planning, in order to establish collective responsibility and address factors such as workforce impact, legal compliance, sustainability, and privacy/security programs.
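As a purely illustrative sketch of the kind of Responsible AI testing mentioned above, and not a definitive method, the following Python snippet computes one simple fairness metric, the demographic parity difference, on hypothetical screening decisions. The data, group labels, and 0.2 threshold are all invented for illustration.

```python
# Illustrative sketch only: one simple fairness metric, the demographic
# parity difference. The decisions, group labels, and threshold below
# are hypothetical examples, not values prescribed by the AI Act.

def demographic_parity_difference(outcomes, groups):
    """Absolute gap in favorable-outcome rates between two groups.

    outcomes: 0/1 model decisions (1 = favorable, e.g. shortlisted)
    groups:   group label per decision (exactly two distinct labels)
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch compares exactly two groups"
    rates = []
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Hypothetical screening decisions for applicants from two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(outcomes, groups)
print(f"demographic parity difference: {gap:.2f}")
if gap > 0.2:  # example threshold; real thresholds are context-specific
    print("flag for review: selection rates diverge across groups")
```

A real testing program would of course cover multiple metrics and protected attributes, alongside explainability, accuracy, and robustness checks, typically with dedicated tooling.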
Feel free to contact our office if you have any questions regarding the AI Act and its practical implications.
This update is presented as a summary only and should not be regarded as advice regarding any specific situation. For specific advice please contact our office.