Liability for algorithmic systems


In the latest issue of the BvD-NEWS, Christiane Wendehorst addresses the need for regulation in the field of artificial intelligence (AI) in light of the EU Commission's White Paper published in February 2020.

In issue 2/2020 of the Magazine for Data Protection of the Professional Association of Data Protection Officers in Germany (BvD) e.V., Christiane Wendehorst published an article on possible approaches to regulating algorithmic systems. Several working groups at the national and European levels, in which Christiane Wendehorst is involved, are currently developing ethical and legal guidelines for the use of AI (among others, the Data Ethics Commission and the Global Partnership on AI).

In order to address "physical" risks such as bodily injury, property damage and environmental damage when using AI, a "digital fitness check" of existing liability regimes is required. Rules should be adapted not only in light of AI but of digital ecosystems at large (e.g. including the IoT). A more challenging task is the question of how to address "social" risks associated with the use of AI, such as discrimination, total surveillance, or manipulation. The legislator has to master a difficult balancing act: ensuring a high level of protection while not putting up too much red tape and impeding innovation and growth.

Christiane Wendehorst argues for a combined regulatory approach. For "high-risk" applications, an approach similar to that of the GDPR (principles, mandatory requirements, rights, procedures) should be adopted. In addition, the European legislator could define a list of unfair algorithmic practices and link these to liability rules. The complete article will be available at