The Consileon Group supports the campaign: #Zusammenland – Vielfalt macht uns stark!
The Consileon Group takes a clear stance against all forms of discrimination and hatred.
In March 2024, the EU Parliament adopted a regulation governing the public and commercial use of such systems. The regulation, known as the AI Act or EU AI Act, categorises AI systems into four risk classes: the higher the risk, the stricter the requirements. The regulation is expected to enter into force in the middle of the year, with the first individual provisions taking effect six months later.
The EU AI Act is intended to protect people, institutions and the environment from the risks of state or commercial AI use and ensure that AI systems operate in a safe, transparent, egalitarian and environmentally friendly manner.
Class 1: Unacceptable risk
The regulation prohibits AI applications that pose an unacceptable risk. This includes systems that inherently violate fundamental rights, for example by exploiting human weaknesses to manipulate behaviour, classifying people according to biometric or social characteristics (social scoring), or placing people under surveillance or suspicion without cause. It also covers the unauthorised, indiscriminate collection of facial images from the internet or surveillance cameras, as well as the machine interpretation of human emotions in the workplace or in educational institutions. Only official law enforcement agencies enjoy strictly regulated exemptions here.
Class 2: High-risk systems
High-risk AI systems are those whose use does not automatically harm health, safety, fundamental rights, the environment, democracy or the rule of law, but which are susceptible to deliberate or negligent misuse. They are typically used to manage critical infrastructure, material or non-material resources or personnel.
Such systems will be subject to strict conditions in future. Take lending, for example: under the AI Act, banks may not let the machine alone decide on a customer’s creditworthiness. A human must review the score calculated by the machine and take responsibility for approving or rejecting the loan.
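The human-in-the-loop requirement described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the function and field names (`decide_loan`, `LoanDecision`, `reviewer`) are assumptions chosen for the example, and the score itself would come from whatever credit model the bank operates.

```python
# Minimal human-in-the-loop sketch for AI-assisted lending.
# Hypothetical names for illustration only: the machine's score is
# advisory, and the final decision is recorded against a named human
# reviewer, as the AI Act requires for high-risk systems.
from dataclasses import dataclass

@dataclass
class LoanDecision:
    score: float      # machine-calculated creditworthiness (0.0 .. 1.0)
    approved: bool    # final outcome, set by a human
    reviewer: str     # person accountable for the decision

def decide_loan(score: float, reviewer: str, human_approves: bool) -> LoanDecision:
    """The model proposes; the human disposes. Approval requires explicit sign-off."""
    return LoanDecision(score=score, approved=human_approves, reviewer=reviewer)

# Usage: even a high score is only an approval once a reviewer confirms it.
decision = decide_loan(score=0.87, reviewer="j.doe", human_approves=True)
```

The point of the structure is that there is no code path from `score` to `approved` without a human input, which is exactly the separation the regulation demands.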
Manufacturers of such high-risk systems must test them thoroughly before placing them on the market, importers and downstream distributors must ensure that the systems comply with the law, and users must monitor their use. Under the law, final decision-making and supervisory authority remains with the human being. The regulation also grants those affected by the decisions of such systems rights of objection, information and appeal.
Class 3: Transparency risk
The EU legislator categorises AI as moderately risky, or at least opaque, if it does not conflict with fundamental rights but leaves users in the dark about the nature and sources of the service. This applies to chatbots, but above all to so-called generative AI, i.e. programmes that generate artificial texts, images or videos (e.g. deepfakes). Under the law, such applications must identify themselves as machines, label their output as artificial, document their training data and its sources, respect the copyright of those sources, and prevent the generation of illegal content.
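Two of the transparency duties above, identifying the system as a machine and labelling its output as artificial, amount to attaching a disclosure to everything the system produces. A minimal sketch, with assumed function and field names (`wrap_generated_text`, `ai_generated`) that are not any prescribed API:

```python
# Illustrative sketch of the transparency duties: the system discloses
# that it is a machine and tags its output as AI-generated.
# All names here are assumptions for demonstration purposes.
def wrap_generated_text(text: str, model_name: str) -> dict:
    """Attach a machine disclosure and an AI-generated flag to output text."""
    return {
        "content": text,
        "disclosure": f"This text was generated by the AI system '{model_name}'.",
        "ai_generated": True,
    }

# Usage: downstream consumers (a website, an email template) can render
# the disclosure alongside the content.
labelled = wrap_generated_text("Your loan application summary ...", "demo-model")
```

In practice the label would travel with the content through every channel (metadata, watermarks, visible captions), but the principle is the same: the flag is set at generation time, not added later.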
Class 4: Low risk
No restrictions apply to simple AI systems such as spam filters or recommendation services.
Quality assurance for AI applications?
The use of systems in risk class 1 must end just six months after the regulation comes into force, i.e. probably by the end of the year. Further provisions will apply after 12 and 24 months respectively, and after 36 months the AI Act will apply in full. The regulation addresses not only users but also manufacturers, importers and downstream distributors. Violations can result in fines of between one and seven per cent of annual global turnover, depending on their severity. Companies should therefore check immediately what obligations the law imposes on them and prepare in good time.
Provider
Natural or legal person (public authority or company) that develops an AI system, or has one developed, in order to sell or operate it under its own name or brand, whether for a fee or free of charge.
Importer
A natural or legal person resident or established in the European Union who sells or operates in the Union an AI system bearing the name or trade mark of a manufacturer established outside the EU.
Distributor
Natural or legal person who distributes within the EU an AI system manufactured or imported by a third party, without changing its characteristics.
User
Natural or legal person who uses an AI system on their own authority for professional, commercial or official purposes.
With AI, the accent is on the A, not the I: the machine doesn’t really know what it’s doing. What has so far been sometimes curious, often annoying, embarrassing or occasionally harmful will, by the will of the EU legislator, become one thing above all: illegal. Systems that toy with human dignity must be retired quickly. AI applications that are classified as legitimate but risky must be monitored continuously.
The US start-up Lighthouz has set itself the task of uncovering malfunctions in such systems. Together with its American partner, Consileon supports you with the legally required tests, using Lighthouz AI to put AI systems through systematic checks.
Essentially, when testing generative AI, Lighthouz AI measures how closely the machine-generated answer to a complex question matches the user’s expectations. To do this, it evaluates the result using syntactic and semantic metrics.
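To make the idea of syntactic versus semantic scoring concrete, here is a toy sketch. It is emphatically not Lighthouz AI’s actual method: the metric choices (character-level `SequenceMatcher` similarity as a syntactic proxy, Jaccard word overlap as a crude semantic proxy) and all function names are assumptions; a production evaluator would typically use embedding-based semantic similarity.

```python
# Toy illustration of syntactic vs. semantic answer scoring.
# NOT Lighthouz AI's implementation; metrics chosen for demonstration only.
from difflib import SequenceMatcher

def syntactic_score(generated: str, expected: str) -> float:
    """Character-level similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, generated, expected).ratio()

def semantic_score(generated: str, expected: str) -> float:
    """Crude semantic proxy: Jaccard overlap of lower-cased word sets.
    Real systems would compare sentence embeddings instead."""
    a = set(generated.lower().split())
    b = set(expected.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 1.0

def evaluate(generated: str, expected: str) -> dict:
    """Combine both views of similarity into one report."""
    return {
        "syntactic": round(syntactic_score(generated, expected), 2),
        "semantic": round(semantic_score(generated, expected), 2),
    }
```

The useful property of reporting both numbers is that they disagree in instructive ways: a paraphrased but correct answer scores low syntactically and high semantically, while a near-verbatim but subtly wrong answer does the opposite.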
If you would like to know exactly how Lighthouz AI’s testing process works, what you need and how it can help you comply with the AI Regulation, our AI experts look forward to hearing from you.
Consileon helps you develop, market or use AI applications in a legally compliant manner and adapt legacy systems to the new legal situation.
Further information on the AI Regulation can be found at the European Parliament. A video by MDR provides a brief introduction.