The EU AI Act, the European Union's landmark regulation on artificial intelligence, affects a wide range of stakeholders, including businesses, developers, public authorities, and consumers. Here’s a breakdown of who is impacted:

1. AI Developers and Providers

  • Companies Developing AI Systems: Any business or entity that creates AI systems intended for use in the EU is affected. This includes large tech companies, startups, and research institutions that develop AI software, algorithms, or applications.
  • Open-Source Developers: Even open-source projects that create AI tools could be subject to certain obligations, particularly if their AI is used in high-risk applications.
  • AI Suppliers: Companies that supply AI systems to other businesses are also within the scope of the Act.

2. Businesses Using AI

  • Industries Utilizing AI: Sectors such as healthcare, finance, transportation, and manufacturing that rely on AI systems to improve efficiency, decision-making, or service delivery are significantly impacted. These businesses must ensure that their AI systems comply with the new regulations.
  • High-Risk AI Users: Businesses deploying AI in high-risk areas (e.g., biometric identification, critical infrastructure, or recruitment) face stricter requirements, including transparency, risk management, and human oversight obligations (a rough illustration of the Act's risk tiers follows this list).
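
To make the tiering concrete, here is a purely illustrative sketch, not legal guidance, of how the Act's four risk tiers might be modelled in code. The example use cases and their assigned tiers are simplified assumptions for illustration; the actual classification of any given system depends on the Act's annexes and the context of use.

```python
# Illustrative only, not legal advice: a rough model of the EU AI Act's
# risk tiers. The use-case-to-tier mapping below is a simplified
# assumption, not an authoritative classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (risk management, oversight, documentation)"
    LIMITED = "transparency obligations (e.g., disclose AI interaction)"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical example mapping, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "biometric identification": RiskTier.HIGH,
    "critical infrastructure management": RiskTier.HIGH,
    "recruitment and candidate screening": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```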

3. Public Authorities

  • Government Agencies: Public sector bodies using AI for law enforcement, social services, or public administration must adhere to stringent rules, especially if their AI systems are classified as high-risk.
  • Regulatory Bodies: National regulators within EU member states are tasked with overseeing compliance, enforcing rules, and imposing penalties for violations.

4. Consumers and the General Public

  • End-Users of AI Products: Consumers interacting with AI-powered products or services will benefit from greater transparency, fairness, and safety measures. They have the right to know when they are interacting with AI systems and can expect better protection of their personal data and privacy.
  • Vulnerable Groups: The Act includes specific provisions to protect children, elderly individuals, and other vulnerable groups from harmful AI practices.

5. Non-EU Companies

  • Global Companies Operating in the EU: Non-EU companies that offer AI products or services within the EU market must comply with the AI Act. This means that global tech firms with a European presence are also subject to the regulation.
  • Importers and Distributors: Companies that import or distribute AI systems within the EU are responsible for ensuring that these products comply with the Act’s standards.

6. Researchers and Academic Institutions

  • AI Research and Innovation: Researchers involved in AI development need to consider ethical and legal implications, particularly if their work might lead to applications covered by the regulation.

7. Civil Society and Advocacy Groups

  • NGOs and Advocacy Groups: These organizations may play a role in monitoring and reporting on AI systems’ compliance with the Act, advocating for the rights and interests of affected individuals and communities.

Key Obligations:

  • Transparency Requirements: Certain AI systems, especially those that interact with humans, must disclose their AI nature (a minimal sketch of such a disclosure follows this list).
  • Risk Management: High-risk AI systems need to undergo rigorous risk assessments and comply with specific standards.
  • Data Governance: Strict rules govern the data used by AI systems, particularly to prevent bias and ensure fairness.
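
As a minimal sketch of the transparency obligation, the snippet below shows one simple way a provider of a human-facing AI system might surface an AI disclosure to users. The function names `generate_reply` and `respond` are hypothetical, and the model call is stubbed out; this is an assumption-laden illustration, not a compliance implementation.

```python
# Illustrative only, not a compliance implementation: one way a provider
# might disclose an AI interaction, as the Act's transparency rules
# require for systems that interact with humans. All names here are
# hypothetical and the model call is a stub.

AI_DISCLOSURE = "Please note: you are interacting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    # Stub standing in for a real model call.
    return f"Echo: {user_message}"

def respond(user_message: str, first_turn: bool = False) -> str:
    """Return the assistant's reply, prefixed with the AI disclosure on first contact."""
    reply = generate_reply(user_message)
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

if __name__ == "__main__":
    print(respond("Hello!", first_turn=True))
```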

The EU AI Act aims to balance innovation with the safeguarding of fundamental rights, making it one of the most comprehensive regulatory frameworks for AI worldwide. Compliance is not limited to AI software providers: the entire ecosystem, and especially businesses that use AI, must comply with the regulation.