
Understanding the Impact of the New EU AI Act on Businesses and Consumers

Updated: May 16, 2024

Introduction to AI


Artificial Intelligence (AI) is a ground-breaking technology that is transforming industries across the globe. It offers capabilities such as data analysis, trend prediction and process optimization, promising to change how humans work and live. However, these advancements come with significant risks, including bias, data privacy concerns, safety and security issues, and human rights and ethical concerns.


Unregulated AI can lead to human rights violations, biased decisions, privacy breaches, and a lack of accountability. For instance, AI systems used in hiring might favour certain demographics, perpetuating inequality. Similarly, AI in law enforcement could unfairly target minority groups or use 'real-time' remote facial recognition in public places. These issues highlight the need for stringent regulations to prevent harm and misuse.


The consequences of unregulated AI may be far-reaching. Discriminatory practices can deepen social inequalities, while data breaches or infringements of copyright law can erode public trust. The lack of transparency in generative AI models can hinder adoption of, and trust in, this emerging technology. Recognizing these risks, the European Commission proposed the AI Act (the 'Act') in April 2021 to create a balanced framework that promotes innovation while protecting fundamental rights and the single market. Understanding the impact of the new EU AI Act and its requirements is therefore critical.


The EU AI Act


On 13 March 2024, the European Parliament adopted the AI Act, the first comprehensive horizontal law of its kind, aimed at creating a common legal framework for the development, marketing and use of AI systems across the EU. The Act will apply within the EU and will also affect AI developers based outside it, since they will have to appoint a representative in the EU and their AI will have to comply with the requirements of the Act.


The Act was adopted under the ordinary legislative procedure; to come into force, it must still be approved by the Council and published in the Official Journal. Given the significance of the single market and the value of its consumers, the Act, much like the GDPR, is expected to set a global standard for the regulation and adoption of AI systems.


The AI Act has sparked debate on balancing innovation and risk management. Critics argue that stringent regulations might stifle innovation and disadvantage European companies against their US counterparts, where most AI innovation is created. For example, some AI models, such as Google's Gemini or Meta's Llama, are not yet available in the EU. Additionally, the clarity and scope of the high-risk categories and the enforcement capabilities of regulators are hotly debated. Ethical concerns, such as bias and accountability, dominate discussions as policymakers strive to create a framework that promotes trust in AI technologies. The AI Act will no doubt be costly to implement. However, AI is too pivotal a technology to be left unregulated. This is particularly important since most tech companies that own and develop AI models are US-based and have considerable market power over EU consumers and their data.


The Risk-Based Approach in the EU AI Act


The AI Act introduces a definition of an AI system and categorizes AI systems, with different requirements based on the level of risk they pose. This risk-based approach ensures that regulatory measures are proportional to the potential harm AI systems could cause. Here's an overview of the different risk categories and their corresponding requirements:


Prohibited AI


AI systems that pose unacceptable risks are prohibited. These include AI systems that:

  • Manipulate human behaviour through subliminal techniques.

  • Exploit vulnerabilities of specific groups (e.g., children, disabled persons).

  • Categorize individuals based on race, trade union membership, political views, religious beliefs, sex life or sexual orientation.

  • Implement social scoring by governments, leading to disproportionate consequences.

  • Identify individuals in public spaces using 'real-time' remote biometric identification for law enforcement (except for searching for missing persons, preventing threats to safety or locating suspects of serious crimes).

  • Profile individuals based on personality traits to assess the risk of their committing criminal offences.

  • Create facial recognition databases through untargeted scraping of the internet or CCTV footage.

  • Infer emotions in workplaces or educational institutions, except for medical or safety reasons.


High-Risk AI


High-risk AI systems have a significant impact on people's health, safety or fundamental rights and must comply with strict regulations. In evaluating whether an AI system is high-risk, what matters is its purpose and function. Examples include AI used in:


  • Critical infrastructure (e.g., energy, transportation).

  • Education and vocational training (e.g., determining access to education).

  • Employment (e.g., AI used in the workplace).

  • Law enforcement (e.g., predictive policing systems).

  • Essential private and public services (e.g., credit scoring).


High-risk AI systems will be subject to conformity assessment, a procedure that a provider will have to complete before an AI system can be placed on the EU market. It can either be a self-assessment or involve a notified body. Such AI systems will have to comply with a number of requirements covering risk management, testing, training data governance, transparency, human oversight, and cybersecurity. In some cases, a fundamental rights impact assessment will have to be conducted to ensure that the AI system complies with EU law.


Limited-Risk AI


Limited-risk AI systems that interact with humans or generate content must comply with specific information and transparency obligations, such as watermarking. Users should be informed when they are interacting with an AI system, such as chatbots or systems that generate images, audio or video (e.g., deepfakes). Notably, employers will be required to disclose to employees and their representatives if they use AI systems in the workplace.


Low- or Minimal-Risk AI


Minimal-risk AI systems, such as spam filters, will be governed by existing legislation only (e.g., the GDPR), and no further obligations will apply.


General Purpose AI Models and Their Obligations


The regulation also introduces the concept of a General Purpose AI (GPAI) model. GPAI models perform a wide range of tasks and are subject to specific obligations under the EU AI Act. These include comprehensive risk management, transparency in operations, stringent data governance, human oversight, and ensuring robustness and security against cyber threats. These measures aim to ensure the ethical and responsible use of GPAI models. The Act also imposes more stringent requirements on GPAI models with 'high-impact capabilities', i.e., those trained using a cumulative amount of compute above 10^25 floating-point operations (FLOPs), as such models can pose greater risks to the single market. Such models will be required to continuously assess and mitigate risks, ensure cybersecurity, report serious incidents such as violations of fundamental rights, and take corrective action.
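For a concrete sense of how this threshold works, here is a minimal sketch, for illustration only: the 10^25 figure comes from the Act, while the function name and the example compute values are hypothetical. A provider estimating its model's cumulative training compute could compare it against the presumption threshold as follows:

    # Hypothetical illustration of the Act's 'high-impact capabilities' presumption.
    # Only the 10^25 FLOP threshold comes from the Act; the rest is assumed.
    THRESHOLD_FLOPS = 1e25  # cumulative training compute, in floating-point operations

    def presumed_high_impact(training_flops: float) -> bool:
        # A GPAI model trained with more compute than the threshold is
        # presumed to have 'high-impact capabilities' under the Act.
        return training_flops > THRESHOLD_FLOPS

    print(presumed_high_impact(2e25))  # True: stricter GPAI obligations would apply
    print(presumed_high_impact(5e24))  # False: standard GPAI obligations would apply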


Codes of Practice and Enforcement of the AI Act


To implement the Act, the Commission is expected to develop codes of practice and rules for the implementation of AI system providers' obligations under the Act. If an AI system complies with European harmonised standards, which are yet to be developed, it will be presumed compliant.


A number of EU-level bodies, together with at least one national market surveillance authority and one notifying authority in each member state, will be responsible for enforcing the AI Act. The European Artificial Intelligence Board (EAIB) will harmonize enforcement across the EU, providing guidance and support, while the EU AI Office will assist with the application of the Act and the development of codes of practice. They will be assisted by an advisory forum, a scientific panel and independent experts. Infringements of the Act will attract substantial fines, up to EUR 35 million or 7% of global annual turnover for the most serious violations. Companies must conduct regular conformity assessments and maintain technical documentation to demonstrate compliance. One notable limitation of the AI Act is that private enforcement will be limited: natural persons will not be granted a right to sue an AI system's provider or operator for loss or damages.


Sandboxing and Entry into Force


The AI Act includes regulatory sandboxes for testing AI systems under regulatory supervision, fostering innovation while ensuring compliance. The Act will apply gradually: prohibited AI systems must be discontinued within 6 months of the Act coming into force, the provisions concerning GPAI and penalties will apply after 12 months, and the provisions regarding high-risk AI systems after 24 months, with a 36-month transition period for AI systems that are regulated by existing EU product legislation.



To comply with the EU AI Act, the actions that businesses should take depend on whether they are providers or users of AI systems. Users of AI systems should:


  • Strengthen Data Protection: ensure the protection of your data and adhere to the GDPR.

  • Conduct Risk Assessments: review the AI systems you use to understand their risk level.

  • Inform Workers: notify employees and their representatives if and when deploying AI systems in the workplace, and assess what changes to policies are required.

  • Evaluate AI Policies: consider whether an internal AI policy is necessary.

  • Ensure Transparency: implement measures to make AI operations understandable to users.

  • Establish Human Oversight: implement human oversight to monitor and intervene in AI systems when necessary.

  • Stay Informed: continuously monitor regulatory updates to ensure ongoing compliance.




Conclusion


The EU AI Act aims to balance innovation with ethical governance. By categorizing AI systems under a risk-based approach and imposing proportionate obligations, the regulation mitigates potential harms while fostering trust in AI technologies. Companies must adapt proactively to ensure compliance and leverage AI responsibly. At Zabulis Legal, we are committed to guiding businesses through this new and pivotal area of law.


