The White Paper on Artificial Intelligence (COM(2020) 65 final), published by the European Commission in February 2020, presents policy options for the use of AI and addresses certain risks associated with it. The response discusses the regulation of AI applications in both public and private use. It points out some of the challenges, such as lack of knowledge and the public-private technology gap, as well as options for a legal framework on the public use of AI. Impact assessments, in particular, could serve as a basis for trustworthy public use of AI. Model rules on such impact assessments are currently being developed in the framework of the new ELI project on Artificial Intelligence (AI) and Public Administration – Developing Impact Assessments and Public Participation for Digital Democracy.
With regard to the private use of AI, the response points out that the risks associated with AI applications fall into two different dimensions: the ‘physical’ and the ‘social’. The response advocates a targeted regulatory approach to both dimensions. It underlines that the ‘physical’ risks (such as death, personal injury or damage to property caused by unsafe products and services) could best be addressed by fully adjusting existing regulatory frameworks to the challenges of digital ecosystems, including AI. The ‘social’ risks (such as discrimination, exploitation, manipulation or loss of control through inappropriate decisions or the exercise of power with the help of AI) are much more AI-specific and challenging to regulate. The response recommends a combination of horizontal principles, a list of blacklisted AI practices and a more comprehensive regulatory framework for defined high-risk applications.
The full response is available here.