Key Things for Normal Companies to Know about the AI Act

by Meaghan McCluskey

Well, it finally happened; Europe has enacted the AI Act. This occurred while I was on a family trip to Disney World for the kids’ spring break, and all I could think was “What would the evil Queen from Snow White need to worry about?” She was using her magic mirror to mass surveil the kingdom and assess who is the “fairest in the land.” She would have needed some kind of parameters on which fairness is based: complexion? Facial symmetry? Is it merely based on physical appearance, or is there profiling related to other factors: kindness, disposition, quality of singing voice? All in all, some high-risk AI usage, and not free from bias in the slightest, if plotting murder to skew results is any indication…

That said, many companies that are using AI tools, thinking about using them, or developing some in-house AI capabilities may be wondering what the implications are for them. Normal companies are those that don’t have unlimited budgets; they operate with constraints on both time and resources, and they need to make trade-offs and risk-based compliance decisions. Note that I am not focusing on “providers” of AI systems under the AI Act; I am looking at “deployers,” companies using or “putting into service” an AI system for their own purposes.

Similar to the GDPR, the AI Act applies extraterritorially. It applies not only to deployers established or located within the EU, but also to deployers of AI systems located in a third country where the output produced by the AI system is used in the EU or where the affected persons are located in the EU. Under the Regulation, an ‘AI system’ is a machine-based system that infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions. The AI does not need to operate completely autonomously, and does not need to include ongoing machine learning, to be covered by the Regulation. However, the Regulation does not focus much on AI systems unless they are considered “high risk”, i.e., they pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by materially influencing the outcome of decision-making.

First, a list of AI practices that are prohibited outright:

  • Subliminal, purposefully manipulative, or deceptive techniques that distort or impair decision-making;
  • Exploitation of vulnerable persons or groups due to age, disability, social or economic condition;
  • Biometric categorisation systems that deduce or infer special categories of data under Article 9 of the GDPR;
  • Creation of social scores leading to detrimental or unfavourable treatment;
  • Predicting an individual’s likelihood to commit a crime based on their personality (relevant for retailers wanting to prevent shoplifting);
  • Facial recognition databases created through scraping images from the Internet or CCTV footage; and
  • Use of emotion AI in the workplace or educational institutions, except for medical or safety reasons.

Second, a list of things that are high risk:

  • Profiling (pretty much any kind of profiling is considered high risk under Article 6(2a));
  • AI systems used in critical infrastructure;
  • AI systems used in education (admissions, detecting cheating, etc.);
  • AI used in employment, both recruitment (including placing job ads, selection, hiring), and to make decisions about job performance, relationships, promotions, terminations, and allocation of tasks;
  • Determining creditworthiness; and
  • Pricing insurance premiums.

So merely using ChatGPT to create some marketing copy isn’t really subject to obligations under the AI Act. Certainly you need to worry about what is being input from a trade secrets and data protection perspective (avoid using personal data), but in general these lower-risk systems may be used without restriction.

If you are deploying a high-risk system created by someone else, here’s what you need to do (Art. 29):

  • Carry out a DPIA under the GDPR based on the information and instructions provided by the AI provider;
  • Put in place technical and organizational measures to ensure the system is used in accordance with its instructions for use;
  • Assign competent individuals to have oversight of the system and give them training, support, and, importantly, sufficient authority (authority to do what is unclear, but assume it could include pulling the plug or putting the brakes on the AI system usage);
  • Ensure input data is relevant and sufficiently representative in view of the purpose;
  • Monitor system operation and, where the product poses a health and safety risk, report it to the provider and the market surveillance authority. Inform the provider of any serious incident (e.g., death, serious damage to health, disruption of critical infrastructure);
  • Keep system logs for an appropriate period, at least 6 months, where possible;
  • Be transparent:
    • Inform employees and their representatives (unions, Works Council) where the AI system will be in place at the workplace and employees will be subject to the system;
    • Inform natural persons where they are subject to a high-risk AI system that makes or assists in decisions related to that person;
    • Inform natural persons exposed to emotion recognition systems or biometric categorisation systems of its operation and its processing of personal data. 
  • Recognize the right to an explanation. If the AI system is being used to support decisions that produce a legal effect or similarly significantly affect the individual in a way that adversely impacts their health, safety or fundamental rights, then they have the right to request clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken. This right is additional to the right not to be subject to solely automated decision-making under Article 22 of the GDPR.

However, if you’re creating in-house AI that falls into a high-risk category above, there are some additional things you need to do around risk management, testing system performance, managing data sets, maintaining technical documentation and system logs, ensuring effective human oversight, and building resiliency and robustness (See Articles 9-15).

Additionally, if you are a company that provides services on behalf of a public body, you have two extra obligations: conducting a fundamental rights impact assessment and registering your use of the high-risk AI system in the EU database.

Also, be warned that you may need to comply with the provider obligations for high-risk AI systems in a couple of situations: if you white-label the AI system by putting your name or trademark on it, or if you make a substantial modification to a high-risk AI system or a low-risk AI system such that it becomes high-risk as a result of the modification.

With respect to deploying a low-risk AI system, there are only a few obligations around transparency that apply. You need to inform people when they are directly interacting with an AI system, for example in a customer service chatbot, unless this is obvious. And you need to be transparent when disseminating content from systems that generate deepfakes or news about matters of public interest (unless this is subject to review and editorial control). Really low-risk AI systems are not subject to much in the way of regulation.

It will be interesting to see what is produced by the newly established European Artificial Intelligence Board. Like the EDPB, the EAIB includes issuing guidance among its tasks. My hope is that the EAIB can produce something akin to the EDPB’s working papers, which provide good clarity around expectations and in many cases offer practical and specific guidance. The AI space desperately needs this level of guidance, going beyond mere principles and objectives to provide examples and use cases that shed light on, for example, what metrics to use to measure robustness or accuracy. Much of this will depend on the competencies of the board members and its advisory forum members.

As a final note, since everyone loves to talk about penalties and as an FYI, penalties under the AI Act are intended to be proportionate and dissuasive, and for deployers, the maximum penalty is the higher of an administrative fine of up to €15,000,000 or up to 3% of total worldwide annual turnover for the preceding financial year. So best tread carefully and aim to be the fairest in the land.
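
For those who like to see the arithmetic behind the “higher of” cap, here is a minimal, purely illustrative Python sketch (my own, not from the Act, and certainly not legal advice); the function name and the example turnover figure are hypothetical:

    # Illustrative only: the deployer fine ceiling quoted above is the higher of
    # EUR 15,000,000 or 3% of total worldwide annual turnover for the preceding
    # financial year. Actual fines are set case by case and must be
    # proportionate and dissuasive; this only shows how the ceiling is picked.

    def deployer_fine_ceiling_eur(annual_worldwide_turnover_eur: float) -> float:
        """Return the maximum possible fine for a deployer, in euros."""
        return max(15_000_000, 0.03 * annual_worldwide_turnover_eur)

    # Hypothetical example: EUR 1 billion in worldwide turnover.
    # 3% of 1,000,000,000 = 30,000,000, which exceeds 15,000,000,
    # so the 3% prong sets the ceiling.
    print(deployer_fine_ceiling_eur(1_000_000_000))  # 30000000.0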

Disclosure Statement: this blog post was not written by AI. Take it for what it is, but it certainly is not legal advice.

