We review your AI application for regulatory and ethical standards
Artificial intelligence (AI) is already a core element of business-process automation and of the development of new services and business models. In the financial industry in particular, AI systems increasingly make far-reaching decisions, from lending to fraud prevention. The scope and frequency of such use cases are growing rapidly, while regulators such as BaFin stress that companies are just as responsible for judgments delegated to machines as for those made by their employees. As BaFin President Felix Hufeld put it in June 2018: “From BaFin’s point of view, it is elementarily important that machines do not bear responsibility, even in automated processes. In any case, management remains responsible.”
Like other IT systems, AI applications are subject to the regulations of the European Banking Authority (EBA) and the Federal Financial Supervisory Authority (BaFin), including MaRisk and BAIT. The rejection of a loan application, for example, requires justification. If a bank leaves that decision to an algorithm, the algorithm must be explainable and its decisions comprehensible.
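One way such a requirement can be met is with an inherently interpretable scoring model, where every decision decomposes into per-feature contributions that can be cited as justification. The following minimal Python sketch illustrates the idea; the feature names, weights, and approval threshold are invented for illustration and do not reflect any real bank's model.

```python
# Illustrative sketch of a transparent credit-scoring model whose
# per-feature contributions serve as the justification for a decision.
# All names, weights, and the threshold are hypothetical.

WEIGHTS = {
    "income_eur": 0.004,     # higher income raises the score
    "open_defaults": -25.0,  # existing defaults lower it sharply
    "years_employed": 2.0,   # stable employment raises it
}
THRESHOLD = 100.0

def score_application(applicant: dict) -> tuple[bool, dict]:
    """Return (approved, per-feature contributions to the score)."""
    contributions = {
        name: WEIGHTS[name] * applicant[name] for name in WEIGHTS
    }
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, reasons = score_application(
    {"income_eur": 30_000, "open_defaults": 2, "years_employed": 1}
)
# "reasons" shows how each feature pushed the decision, e.g. the two
# open defaults contributed -50.0 points to this rejection.
```

Because the score is a simple weighted sum, each contribution directly answers the regulator's question of why an application was rejected, in contrast to opaque models where such explanations must be approximated after the fact.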
Companies in the financial sector are therefore well advised to align their AI systems proactively, not only with current IT regulation but also with ethical criteria. AI systems are complex, yet core business is hard to imagine without them; if such a system proves problematic from a regulatory point of view, converting it afterwards is often costly and time-consuming. Our experts regularly exchange ideas with AI researchers and analyze how regulatory trends and new procedures play out in practice. Our AI audit helps financial service providers exploit the opportunities offered by intelligent algorithms without losing the trust of their customers and employees or risking regulatory sanctions.