Are you currently developing a high-risk AI system?


When lift companies and component manufacturers deploy AI, the associated new obligations are usually easy to handle. But it's a different story once AI is deployed as a safety component: the product then constitutes a high-risk AI system, with far-reaching consequences.

By Jacob Feder & Christian Schultz, LL.M. (King's College London) 


A new EU regulation governs the handling of artificial intelligence and is coming into full effect in phases. So far, however, the actual burden on many companies under the new EU AI Act (Regulation (EU) 2024/1689) has been manageable.

But as soon as AI is used as a safety component, numerous additional requirements apply to the conformity assessment procedure and must be observed. These obligations will apply at the latest from 2 August 2027, and for some areas of use as early as 2 August 2026.

To ensure that the product can later be certified, a number of requirements must be observed as early as the development, sourcing and integration of AI. Manufacturers of lifts and components in particular should clarify at an early stage whether their development pipelines are subject to the strict rules for high-risk AI.

Not just ChatGPT: what is AI according to the AI Act? 


AI is a collective term. References in the press usually mean generative AI systems such as ChatGPT. Shortly before its finalisation, provisions on so-called “general-purpose AI models” were also added to the AI Act. As important as the proficient use of these tools is in a company setting (e.g. when they are used in connection with customer data or business secrets), they have hardly any specific implications for the lift sector.

But the AI Act is much wider. Under it, an AI system is any machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. The delineation is often fluid, but as a rule of thumb: can every foreseeable combination of input and output be mapped out before the technology is used, e.g. in a simple tree diagram? If so, it is probably not AI.
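To make this rule of thumb concrete, the following minimal sketch (a purely hypothetical illustration in Python, not taken from the AI Act) contrasts a controller whose complete input-output mapping can be written down in advance with a system that infers its output from data:

```python
# Illustrative sketch only (hypothetical example, not from the AI Act itself).

# Case 1: every input/output combination is fixed in advance -> probably NOT
# an AI system under the AI Act. The whole behaviour fits in a lookup table.
DOOR_RULES = {
    ("closed", "call_button"): "open_door",
    ("open", "timeout"): "close_door",
    ("open", "obstacle"): "hold_door",
}

def rule_based_controller(door_state: str, event: str) -> str:
    # Deterministic: the mapping can be drawn as a simple tree diagram.
    return DOOR_RULES.get((door_state, event), "no_action")

# Case 2: the output is inferred from measurements rather than enumerated ->
# this is the kind of inference the AI Act's definition targets.
def fit_wear_model(hours: list[float], wear: list[float]) -> tuple[float, float]:
    """Least-squares line wear ≈ a * hours + b, learned from data."""
    n = len(hours)
    mean_h = sum(hours) / n
    mean_w = sum(wear) / n
    a = sum((h - mean_h) * (w - mean_w) for h, w in zip(hours, wear)) / \
        sum((h - mean_h) ** 2 for h in hours)
    b = mean_w - a * mean_h
    return a, b

a, b = fit_wear_model([100, 500, 1000, 2000], [0.1, 0.4, 0.9, 1.7])
print(rule_based_controller("closed", "call_button"))      # open_door
print(f"predicted wear after 3000 h: {a * 3000 + b:.2f}")
```

The first controller can be fully mapped before deployment and would probably fall outside the definition; the second derives its behaviour from data, which is precisely the kind of inference the definition describes.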

The AI Act takes a risk-based approach: AI systems are classified into categories according to their risk potential and regulated accordingly.

The decisive question: what risk class does my AI system belong to?


In the coming months, the most important question for companies and operators in the lift sector will be: am I dealing with a high-risk AI system within the meaning of the AI Act? The answer determines the roadmap for a company: whether its duties under the AI Act essentially “just” involve transparency obligations and the like, or whether the entire conformity assessment of its products has to be revisited.

There are four risk categories, listed here in descending order of the weight of the associated obligations under the AI Act (the sketch after this list illustrates the classification):

• Unacceptable risk – prohibited AI systems: AI systems used, for example, for discriminatory social scoring are prohibited outright.

• High-risk AI systems: These AI systems have a high risk potential for safety, health or fundamental rights. Such high-risk AI systems may only be offered and operated if strict requirements – including risk and quality management, technical documentation and human oversight – are met and a conformity assessment (CE marking) has been carried out successfully.

• An example would be a safety component of a lift or a component subject to an EU conformity assessment (cf. Art. 6(1) and Annex I no. 4 of the AI Act). Companies should therefore ensure that they have updated conformity declarations and certificates by 2 August 2027 at the latest.

• The use of AI to evaluate and classify emergency calls is also explicitly mentioned in the AI Act (cf. Art. 6(2) and Annex III no. 5 lit. d) of the AI Act). Where applicable, companies must already be set up in conformity with the new rules by 2 August 2026.

• Limited risk – AI systems with transparency obligations: The AI Act prescribes specific transparency obligations for several AI applications. This applies especially where AI systems interact directly with users (e.g. voice-controlled assistants) or where users are shown AI-generated content. Users must be able to recognise that AI is involved.

• Low risk – other AI systems: The majority of all AI systems belong to this category. No special requirements of the AI Act apply to normal or low-risk AI applications (such as recommendation algorithms without a sensitive area of application). However, they may be covered by other national or EU legislation.
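As a purely illustrative sketch of this risk-based logic – the helper function and its parameters are hypothetical and no substitute for a case-by-case legal analysis – the classification of typical lift use cases could be modelled roughly as follows:

```python
# Hypothetical helper mirroring the article's classification of lift use cases.
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, risk/quality management, documentation"
    LIMITED = "transparency obligations"
    LOW = "no special AI Act requirements"

def classify_lift_use_case(is_safety_component: bool,
                           evaluates_emergency_calls: bool,
                           interacts_with_users: bool) -> RiskCategory:
    # Safety components of lifts fall under Art. 6(1) / Annex I no. 4;
    # emergency-call evaluation under Art. 6(2) / Annex III no. 5 lit. d.
    if is_safety_component or evaluates_emergency_calls:
        return RiskCategory.HIGH
    if interacts_with_users:  # e.g. a voice assistant in the cabin
        return RiskCategory.LIMITED
    return RiskCategory.LOW

print(classify_lift_use_case(False, False, True))  # RiskCategory.LIMITED
```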

Roles according to the AI Act


Also important is the role in which a company is subject to the obligations of the AI Act. The most important of these roles are “provider” and “deployer”.

• Anyone who places an AI system on the market under their own name or puts it into service is deemed a “provider” – normally this will be the producer or developer.

• The “deployer”, in contrast, is the entity that uses an AI system under its own responsibility, i.e. the user in business operations (such as a company that deploys AI software).

If the AI in question poses a limited or even high risk, determining the company's role is the decisive next step, since the obligations of providers and deployers then differ considerably.

Most of the obligations under the AI Act are borne by the provider (producer). What is decisive is not whether the provider developed the AI system itself; it suffices that the system is distributed under the company's own trading name or as part of one of its own products. This aspect is especially relevant if the AI system qualifies as high-risk AI. In that case, the provider is obliged to carry out the prescribed conformity assessment procedure before distribution. If the AI system is added to an already certified product, the provider should not forget to update the declaration of conformity.

The lift operator will most likely also be considered the deployer of the respective AI tool. Here too, the obligations are critical in the case of high-risk systems in particular. In addition, transparency obligations may have to be observed.

AI in lift systems: relevance and use cases


Recent years have shown that AI can drive innovation across economic sectors at a breathtaking pace. Several use cases are already evident in the lift sector:

• Predictive maintenance and fault forecasting: AI systems analyse the sensor data of lifts to detect maintenance needs early on and predict breakdowns. For example, a maintenance company can anticipate potential faults through machine learning and schedule service visits before a breakdown occurs (a minimal code sketch follows this list).

• Passenger counting and usage analysis: AI can count the people in a lift using cameras or scanners, analyse traffic flows in the building and optimise the lift control. For example, the capacity utilisation of individual lifts can be forecast so that busy floors are served directly and waiting times are minimised.

• Voice-based interaction: Lifts with integrated voice assistants permit voice-controlled operation. If these systems become more complex in the future, they could constitute AI systems – particularly relevant when integrated into an emergency call system.
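As an illustration of what such fault forecasting might look like in code, here is a minimal sketch; the sensor features, threshold and 30-day failure label are hypothetical assumptions, and the toy data stand in for a manufacturer's real telemetry:

```python
# Minimal predictive-maintenance sketch (hypothetical data and feature names).
# Requires scikit-learn: pip install scikit-learn
from sklearn.ensemble import RandomForestClassifier

# Toy training data: [door_cycles_per_day, motor_temp_C, vibration_mm_s]
X_train = [
    [200, 45, 0.8], [220, 47, 0.9], [210, 46, 1.0],   # healthy lifts
    [480, 68, 3.1], [450, 71, 2.8], [500, 74, 3.5],   # lifts that later failed
]
y_train = [0, 0, 0, 1, 1, 1]  # 1 = breakdown within the next 30 days

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a lift currently in service; a high probability triggers a service
# visit before a breakdown occurs.
current_reading = [[430, 66, 2.9]]
risk = model.predict_proba(current_reading)[0][1]
if risk > 0.5:
    print(f"breakdown risk {risk:.0%}: schedule maintenance")
```

A real system would be trained on historical service records rather than toy data; and if such a forecaster were wired into a safety function, the high-risk rules discussed above could be triggered.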

From an AI Act perspective, these use cases are predominantly classified as low or limited risk. Nevertheless, transparency obligations must be borne in mind here (together with an adequate justification for the use of usage data).

Conclusion


Particularly in the lift sector, the field of high-risk AI must always be kept in mind. The legal starting point here is, above all, the use of AI as a safety component. The AI Act's own definition is broad, and its relationship to the terms of the EU Lifts Directive is unclear.

For most companies, the AI Act thus entails manageable obligations. However, with their own development pipelines in mind and given the considerable lead time a conformity assessment requires, companies must be clear on the question: are we currently developing a high-risk AI system?

The authors are specialised attorneys in IT law at the international business law firm Fieldfisher.


More information: Read more about this at lift-journal.com/data-act

fieldfisher.com
