Supervisory Benchmarks and Artificial Intelligence: A View from Germany

July 1, 2019

Courtesy of Julia von Buttlar*

Artificial intelligence (AI) technologies are increasingly being used in a variety of fields, none more so than the financial industry. AI offers great opportunities: it can enable companies to automate manual processes and meet their regulatory requirements faster, with a lower error rate and less effort. In the area of voting rights notifications, for example, algorithm-based decisions can trigger alerts in the system and prepare the notifications as soon as predefined thresholds are crossed. Robo-advisors enable automated investment advice, investment brokerage, and asset management based on algorithms that require minimal human involvement. In the compliance area, AI helps with fraud detection and know-your-customer checks.
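The threshold-triggered notification mentioned above can be reduced to a very simple check. The following sketch is purely illustrative (the function and its structure are invented for this post); the threshold values follow the notification thresholds of the German Securities Trading Act (WpHG):

```python
# Illustrative sketch only: a minimal threshold check of the kind such a
# notification system might automate. The names and structure are invented
# for illustration; the percentages follow the WpHG notification thresholds.
THRESHOLDS = [3, 5, 10, 15, 20, 25, 30, 50, 75]  # percent of voting rights

def crossed_thresholds(old_pct: float, new_pct: float) -> list[int]:
    """Return every notification threshold crossed (upward or downward)
    by a change in a holder's share of voting rights."""
    lo, hi = sorted((old_pct, new_pct))
    return [t for t in THRESHOLDS if lo < t <= hi]

# Increasing a holding from 4.9% to 16% crosses the 5%, 10%, and 15%
# thresholds, so a voting rights notification would be triggered.
print(crossed_thresholds(4.9, 16.0))  # [5, 10, 15]
```

In practice such a check would sit inside a larger workflow that also assembles and files the notification itself; the point here is only that the trigger condition is mechanical and therefore well suited to automation.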

As financial companies make greater use of AI, many processes will become faster, more efficient, and more automated. To pick a further example: even today, insurers can carry out procedures such as risk assessments and claims processing without involving a single human being. These are just a few of the numerous applications, and their range is constantly expanding.

While AI promises to bring greater efficiency to the provision of financial services, the risks must not be overlooked. How should we deal with the market changes triggered by digitalization? How do supervisory benchmarks and requirements need to be designed in order to carry out a proper legal and technical examination of the models? Regardless of which innovative solutions prevail in the market, AI applications must be embedded in a proper business organization in order to adequately counter risks. Those who use AI must therefore ask themselves which tasks it may take on and how it should be monitored.

Three principles that come up in this context are responsibility, explainability, and transparency.[1] The obligation to comply with rules, and responsibility for doing so, remains with the management of the company. The management board may therefore not simply shift responsibility to machines and algorithms; ultimate responsibility for AI-generated outcomes must remain with people.

Transparency means that the behaviour of an entire system can be fully understood. Many algorithms, however, are too complex for that, so it is necessary to ensure at least explainability: the ability to identify the key factors that influence the decision a machine makes. Algorithm-based decision processes must in principle be comprehensible and documentable, both for the company and for the supervisory authority. Even the most complex models should provide insight into how they work. For this reason, the German Federal Financial Supervisory Authority (BaFin), for example, will not accept models that are presented as a black box.
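For the simplest class of models, "identifying the key factors" is direct. The sketch below, with invented feature names and weights, shows the idea for a linear scoring model, where each feature's contribution to the score is just its weight times its value and the decisive factors can be ranked immediately; for complex models, dedicated attribution techniques pursue the same goal:

```python
# Minimal, self-contained sketch of explainability for a linear scoring
# model: each feature contributes weight * value to the score, so the key
# factors behind a decision can be ranked by the size of that contribution.
# Feature names and weights are invented for illustration.
def explain(weights: dict[str, float], features: dict[str, float]) -> list[tuple[str, float]]:
    """Rank features by the absolute size of their contribution to the score."""
    contributions = {name: weights[name] * features[name] for name in weights}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

weights = {"income": 0.4, "debt_ratio": -1.2, "account_age": 0.1}
applicant = {"income": 2.0, "debt_ratio": 0.9, "account_age": 5.0}
for name, contribution in explain(weights, applicant):
    print(f"{name}: {contribution:+.2f}")
```

A report of this kind, showing which factors drove a decision and in which direction, is the sort of documentation a supervisor could ask for even when the underlying model is far more complex than this one.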

If sanction risks are to be avoided, appropriate internal control measures should be put in place when innovative technical solutions are used. Their use must be embedded in an effective, appropriate, and orderly compliance management system: one that sets up a clearly structured process with a clear and appropriate distribution of responsibilities, instructs and trains employees, performs controls, and provides internal sanction mechanisms for any deficiencies.

Some argue that the choice of intelligent systems should not be left to companies. Instead, an algorithm Technical Inspection Association (TÜV)[2] would be introduced to check the security of algorithms.[3] According to this proposal, intelligent systems could be used only after prior approval. BaFin received comparable feedback on its report “Big Data meets artificial intelligence (BDAI) – challenges and implications for the supervision and regulation of financial services”.[4] Stakeholders as well as individual institutions, national and international authorities, and academics participated in the consultation. Respondents proposed extended requirements for business-critical process areas, such as the use of code review procedures, simulation and penetration tests, and the peer review of model profiles. BaFin was also asked to examine BDAI models and to formulate concrete requirements for the documentation and explainability of BDAI applications. A first evaluation suggested, however, that rather than rushing in head first, it would be preferable for companies to develop best practices.

In this context, numerous expert committees and think tanks are currently working on formulating best practice requirements. For example, the Guidelines for Trustworthy Artificial Intelligence (AI), published in April 2019, were prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG).[5] This independent expert group was set up by the European Commission in June 2018 as part of the AI strategy announced earlier that year. Based on fundamental rights and ethical principles, the Guidelines list seven key requirements that AI systems should meet in order to be trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. Aiming to operationalise these requirements, the Guidelines present an assessment list that offers guidance on each requirement’s practical implementation.

Conclusion

The rapid adoption of AI in the financial industry has challenged the industry, supervisors, and politicians to develop appropriate safeguards to ensure responsible use of these new systems. While regulation always lags technological development, the gap with AI is even wider than usual. Expect to see many additional supervisory and political developments with respect to AI in the coming years.

 

*Dr. Julia von Buttlar, LL.M. (Duke), is Deputy Head of the Division dealing with Administrative Offence Proceedings at BaFin’s Directorate for Securities Supervision. She holds a doctorate from the University of Darmstadt (DE) and a law degree from the Johannes Gutenberg University Mainz (DE). She received her LL.M. from Duke Law in 2001. The views expressed in this post are those of the author and do not reflect the views of BaFin.

 

[1] Key Regulatory Questions on Big Data Analytics and Machine Learning in the Financial Sector, Keynote speech by Felix Hufeld, President of BaFin on 19 June 2019 at the IIF Roundtable on Machine Learning in the Financial Industry in Frankfurt am Main, https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/Reden/re_190619_iif_roundtable_p_en.html.

[2] TÜVs (short for German: Technischer Überwachungsverein, English: Technical Inspection Association) are German businesses that provide inspection and product certification services.

[3] Scherer, “Regulating artificial intelligence systems: risks, challenges, competencies, and strategies”, Harvard Journal of Law and Technology 2016, 353-440.

[4] BaFin, “Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services”, www.bafin.de/dok/10985478, retrieved on 26 April 2019. The study was prepared in collaboration with PD – Berater der öffentlichen Hand GmbH, Boston Consulting Group GmbH and the Fraunhofer Institute for Intelligent Analysis and Information Systems.

[5] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Another project is the “Ethics of Algorithms” project at the Bertelsmann Stiftung, which initiated the process of developing the Algo.Rules. One of the overriding objectives of the Stiftung’s work is to ensure that digital transformation serves the needs of society, https://algorules.org/en/home/.