GuardAI

The GuardAI project aims to strengthen the security of edge AI systems by addressing their vulnerabilities, especially in high-risk areas outlined in the EU’s AI Act, including drones, autonomous vehicles, and network edge infrastructure. These applications are becoming widespread and depend heavily on real-time decision-making and the processing of sensitive data, which makes them prone to a range of security threats and adversarial attacks. GuardAI’s main goal is to create resilient AI algorithms designed specifically for edge AI applications. By integrating the latest technologies, the project focuses on ensuring system integrity, security, and resilience in order to build trust and promote the safe use of AI.

GuardAI seeks to revolutionize AI security through a multidisciplinary, multifaceted approach. It brings together researchers, industry experts, government agencies, and AI practitioners, combining their expertise with advanced threat analysis and secure-by-design AI algorithms. This collaborative effort is intended to drive significant advances in the field and set new standards for AI system security. Standardized evaluation criteria, together with insights from real-world applications, are meant to guide future certification frameworks and ensure rigorous security standards. Ethical considerations are also central to the project, which aims to foster ethically responsible AI technologies.

The Department of Innovation and Digitalisation in Law ensures GuardAI’s compliance with legal, ethical, and data protection requirements, including the Charter of Fundamental Rights of the European Union. The Department also provides recommendations on ethical concerns arising from the project’s research and identifies potential regulatory barriers to GuardAI solutions.

More information on the project can be found on the project website and in u:cris.

The following experts from the Institute are working on this project: