EU AI Act - A Guide from an Agency Perspective

"AI recruiting tools leverage artificial intelligence to automate […] various aspects of the hiring process, making it faster, more efficient, and potentially more effective."

This is what Google's AI Gemini says about the search query "ai recruiting tools". What it doesn't say is that such tools not only speed up decisions, but can also reinforce existing prejudices from the past - often without anyone realizing it. An AI system that learns from old recruitment data can also reproduce old patterns: preferred educational paths, certain names, gaps in the CV - all of which can lead to exclusion without a human ever having seen the application. What sounds like discrimination often is exactly that.

The EU AI Act, which entered into force on 1 August 2024, aims to minimize these and other risks associated with the use of AI without blocking innovation. What does the new law mean in concrete terms for companies and, in particular, for agencies that integrate AI into their day-to-day work? Our AI team has looked into the law and explains what matters now and what our own first steps at Wächter look like.


What is the EU AI Act - and how does it affect companies?

The AI Act classifies AI systems according to their risk potential - the greater the risk to society or individuals, the more comprehensive the regulatory requirements. The EU wants to ensure that the fundamental rights of EU citizens are not jeopardized by biased data sets, user manipulation or surveillance.

The risk levels at a glance

The law distinguishes between four risk categories, illustrated here with examples that could occur in the day-to-day work of many companies.

| Risk Level | Definition | Example | Regulatory Implications |
|---|---|---|---|
| Unacceptable Risk | AI applications that violate fundamental rights | Emotion recognition in the workplace | Banned; must be withdrawn from the EU internal market by February 2, 2025 |
| High Risk | AI systems that are used in sensitive fields and can have serious consequences in the event of malfunction or misuse | HR software and other AI systems that influence hiring or career advancement | Strict regulation and conformity assessment requirements |
| Limited Risk | AI systems that interact with people and pose only a low risk to users | Chatbots and AI-generated content | Transparency and labeling obligations |
| Minimal Risk | AI systems posing no significant risk | AI-supported spell checkers, spam filters | No obligations; voluntary implementation of an AI code of conduct |
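For everyday work, it can help to capture this taxonomy in a small lookup so that every tool in an inventory can be annotated with its obligations. The following Python snippet is our own illustrative encoding of the table above, not an official classification tool.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Obligations per risk level, condensed from the table above
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "banned - withdraw from the EU internal market",
    RiskLevel.HIGH: "strict regulation and conformity assessment",
    RiskLevel.LIMITED: "transparency and labeling obligations",
    RiskLevel.MINIMAL: "no obligations - voluntary AI code of conduct",
}

print(OBLIGATIONS[RiskLevel.LIMITED])  # -> transparency and labeling obligations
```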

Which requirements apply depends on how and for what purpose AI is actually used. We have begun to answer these questions systematically and have taken our first steps toward meeting the legal requirements.

How should companies proceed now? 
Our first steps at Wächter

Step 1: Inventory

To understand the implications of the risk classes, it is important to distinguish between AI providers and AI operators (the AI Act calls the latter "deployers"). An AI provider develops its own AI system and places it on the EU market under its own name or trademark. Such a company must classify its product according to the risk classes and comply with the corresponding regulations.
AI operators - including us at Wächter - are companies that use third-party AI systems commercially (e.g. ChatGPT to support text work for articles such as this one). In this case, the risk class must be determined individually for each AI tool used and for each type of use.
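For the inventory itself, a simple structured register is often enough: which tool, in which role, for which purpose. The following Python sketch shows one possible shape for such a register; the tools, purposes and field names are illustrative assumptions, not requirements from the AI Act.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"  # develops the AI system and markets it itself
    OPERATOR = "operator"  # uses a third-party AI system commercially ("deployer")

@dataclass
class AIToolEntry:
    tool: str        # name of the AI system
    role: Role       # our role with respect to this system
    purpose: str     # what we actually use it for
    risk_class: str  # filled in after the risk analysis in step 2

# Illustrative entries for an agency like ours
inventory = [
    AIToolEntry("ChatGPT", Role.OPERATOR, "text support for articles", "minimal"),
    AIToolEntry("Midjourney", Role.OPERATOR, "image generation", "limited"),
]

for entry in inventory:
    print(f"{entry.tool}: {entry.role.value}, {entry.purpose}, risk: {entry.risk_class}")
```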

Step 2: Risk analysis

Classifying a system into one of the four risk categories is often not that easy. An online test therefore allows you to check which obligations apply to your own organization or systems:
To the AI Act compliance test
Important: The test must be carried out individually for each type of AI use. A prior inventory is therefore particularly helpful.
We did the test for Wächter - and this is the result for our agency:

  1. Transparency obligation for synthetic content (Art. 50 para. 2)
    Content generated or heavily modified by AI must be clearly recognizable as such for humans and machines (e.g. metadata, watermarks, notes in the text); a minimal sketch of such machine-readable marking follows after this list.
  2. Obligation to promote AI literacy (Art. 4)
    We must ensure that our employees understand the AI systems we use and can handle them responsibly.
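What machine-readable marking can look like in practice depends on the format and the toolchain. As a minimal sketch, assuming a PNG workflow and the Pillow library, an AI-generated image could carry text metadata with the IPTC digital source type for "trained algorithmic media". The key names below are our own illustrative choice, not something the AI Act prescribes.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# IPTC "digital source type" URI for content created by a trained AI model
TRAINED_ALGORITHMIC_MEDIA = (
    "https://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable AI-disclosure note into a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # Illustrative key names; stored as PNG text chunks
    metadata.add_text("AIGenerated", "true")
    metadata.add_text("DigitalSourceType", TRAINED_ALGORITHMIC_MEDIA)
    image.save(dst_path, pnginfo=metadata)

mark_as_ai_generated("render.png", "render_labeled.png")
```

For human readers, a visible note in the caption or post is still needed; the embedded metadata only covers the machine-readable side. Production setups would more likely rely on XMP or C2PA tooling so that common platforms can read the marking.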

Step 3: Implementation of the requirements

As with most agencies, our use of AI, such as spell checking with ChatGPT or image generation with Midjourney, falls under the categories of Minimal and Limited Risk. For us at Wächter, this means in concrete terms:

  1. We consistently label all artificially generated or edited content (e.g. in our presentations, on our website, on LinkedIn, etc.) and thus disclose our use of AI.
  2. We promote the AI competencies of our employees and take advantage of qualification and training opportunities.

Meeting the AI literacy requirements of Article 4 is not easy given the rapid pace of technological progress. A clear distribution of responsibilities and tasks is particularly important. We have therefore set up our own AI team at Wächter - an interdisciplinary group of colleagues from PR, creative, digital and media - with the aim of sharing knowledge and expertise, reducing uncertainty and enabling an open, continuous exchange on the use of artificial intelligence in day-to-day agency work. Responsibility for establishing and communicating guidelines on the use of AI is thus clearly assigned: our AI team ensures that all employees receive clear guidelines and policies, that these are adapted and updated, and that AI skills are deepened and shared internally. This creates clarity and trust within the company.

Seeking technology expertise when we need it

We use artificial intelligence to make processes more efficient and to add value for our customers. This requires us to constantly rethink and develop our approach. We are supported in this by Sest-Digital, who, as AI thinkers and developers, bridge the gap between communication and technology. Together, we can use AI in a targeted and meaningful way for companies.

Many of the requirements of the EU AI Act affect agencies like ours only in part. Nevertheless, we believe we have a responsibility to ensure that our use of AI is conscious, transparent and fair, and that our approach to AI is regularly questioned from a technical, legal and ethical perspective.

It is important to us to make our process transparent and to let others take part in our deliberations - in the spirit of open exchange, and because we believe that collective learning is central to this development.

What's next

The AI team at Wächter is currently working on the first test runs of so-called AI agents - systems that are able to plan and execute tasks independently. The aim is to better understand the potential of such models in the agency context. We will report on our experiences soon.
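To make the idea concrete: in its simplest form, such an agent is a loop in which a model plans the next step, a tool executes it, and the result feeds back into the next plan. The sketch below is a deliberately minimal illustration with stubbed functions; it is not our actual setup, and all names in it are hypothetical.

```python
# Minimal agent loop: plan -> act -> observe, repeated until done.
# The "planner" is a stub; a real agent would ask an LLM for the next step.

def plan_next_step(goal: str, history: list[str]) -> str:
    """Stub planner: a real agent would let a model choose the next action."""
    return "research" if not history else "summarize"

def execute(step: str, goal: str) -> str:
    """Stub tool call: a real agent would invoke search, APIs, files, etc."""
    return f"result of '{step}' for goal '{goal}'"

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)  # plan
        observation = execute(step, goal)     # act
        history.append(observation)           # observe and remember
        if step == "summarize":               # simple stop condition
            break
    return history

print(run_agent("draft a briefing on the EU AI Act"))
```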

Author

Louisa Terbrack

Louisa is a Digital Consultant at Wächter and part of the agency's AI team. She develops digital solutions at the interface of user-friendliness, sustainability and technology - and is currently focusing in particular on the responsible use of artificial intelligence in everyday agency work.