Christiane Wendehorst was a plenary speaker at the 2nd European AI Alliance Assembly, which took place online on 9 October 2020. She used the opportunity to present insights from her research, notably the distinction between the 'physical dimension' and the 'social dimension' of AI and the implications of that divide for AI regulation.

On the central panel dealing with the "ecosystem of trust" proposed in the Commission's White Paper on AI, Wendehorst set out criteria that could be used for defining 'high-risk' applications, a concept used both in the White Paper and in the newly published JURI Report on Liability for Artificial Intelligence. She voiced scepticism about the suggested sectoral approach, arguing that whether AI poses a high risk to fundamental rights does not primarily depend on the sector (such as health, energy or mobility) in which it is used. Instead, she argued for looking at the concrete AI application (such as human resources, credit scoring or personalised pricing) and the purposes for which it is deployed. She also stressed that the "ecosystem of trust" the Commission seeks to achieve has a dual dimension: primarily, public trust and each individual's trust that their safety and fundamental rights are not put at risk, but also trust on the part of those developing and deploying AI that Europe is the place for them to engage in R&D for the benefit of innovation.
For Wendehorst's response to the White Paper, co-authored with Jens-Peter Schneider, see https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/News_page/2020/ELI_Response_AI_White_Paper.pdf. For the position of the German Data Ethics Commission, co-chaired by Wendehorst, see https://datenethikkommission.de/wp-content/uploads/DEK_Gutachten_engl_bf_200121.pdf.