In this FFG-funded project, Gradient Zero collaborates with Professor Mark Coeckelbergh from the University of Vienna and the Austrian Research Institute for Artificial Intelligence to put ethical issues front and center in AI development. The project aims to enhance our machine learning platform dq0.io into a full-fledged environment for ethical AI development.
This project provides a unique opportunity to bring together ethicists and technical researchers to develop concrete ethics-by-design principles that address the practical considerations arising from compliance with relevant ethical foundations and legal frameworks, as well as the constraints imposed by technical implementation.
Starting with philosophical reflections on questions of privacy, transparency, fairness, and impact, the team will go on to develop tools that embed these principles directly in the AI development process for truly explainable and accountable AI.
02 / 2021 - 03 / 2021
Advancing DQ0 into a trusted machine learning development platform starts with protecting data: the data of users working with the platform and, equally important, the data used for AI development. Advanced statistics and machine learning models put privacy at risk. This first phase of the project therefore focuses on implementing robust data protection and privacy-preserving mechanisms at the heart of DQ0.
04 / 2021 - 06 / 2021
Differential Privacy provides mathematical guarantees for data protection. In contrast to traditional anonymization techniques, Differential Privacy enables the data owner to control the amount of information she is willing to release from a given data set. We implemented Differential Privacy as one of the core building blocks of DQ0's privacy-preserving technology, for both machine learning training (and prediction) and SQL analytics.
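To illustrate the core idea, here is a minimal sketch of the classic Laplace mechanism, the textbook building block of Differential Privacy. This is a simplified illustration, not DQ0's actual implementation; the function name and parameters are our own for this example.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    sensitivity: the maximum change one individual's record can cause
    in the true value. epsilon: the privacy budget -- smaller values
    mean stronger protection but a noisier result.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a differentially private count over a sensitive data set.
# The sensitivity of a counting query is 1 (one person changes the
# count by at most 1).
ages = [34, 45, 29, 61, 52, 38]
noisy_count = laplace_mechanism(len(ages), sensitivity=1, epsilon=0.5)
```

The data owner tunes `epsilon` to trade accuracy against privacy: repeated queries consume the budget, which is exactly what a platform like DQ0 has to account for.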
07 / 2021 - 08 / 2021
Differential Privacy is implemented into DQ0 for both model training and SQL analytics. Additionally, in this phase we develop our own set of attacks and checks to verify the actual privacy guarantees of all analytic results to be published through DQ0. The Privacy Checker module acts as a data protection safeguard for analytic workflows developed with DQ0: only those models or SQL results that actually respect data privacy can be released by the DQ0 data owner.
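One ingredient of such a release gate is privacy-budget accounting: every published result spends part of a per-dataset epsilon budget, and releases that would exceed it are refused. The sketch below shows this idea only; the class and method names are hypothetical and DQ0's actual Privacy Checker performs far more involved checks.

```python
class PrivacyBudget:
    """Track the cumulative epsilon spent on one data set.

    A result may only be released while the total stays within the
    budget set by the data owner. (Illustrative sketch -- simple
    sequential composition; real accounting can be tighter.)
    """

    def __init__(self, max_epsilon):
        self.max_epsilon = max_epsilon
        self.spent = 0.0

    def can_release(self, epsilon):
        return self.spent + epsilon <= self.max_epsilon

    def record_release(self, epsilon):
        if not self.can_release(epsilon):
            raise PermissionError("privacy budget exhausted")
        self.spent += epsilon

budget = PrivacyBudget(max_epsilon=1.0)
budget.record_release(0.4)      # first query allowed
budget.record_release(0.5)      # still within budget (total 0.9)
print(budget.can_release(0.2))  # False: 1.1 would exceed the budget
```

The key design point is that the gate sits between the analyst and the data owner: the analyst never decides unilaterally whether a result leaves the platform.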
09 / 2021 - 11 / 2021
Start of cooperation with Prof. Mark Coeckelbergh, Dr. Rebecca Raper, and the team from the University of Vienna's Philosophy of Media and Technology research group.
The goal is to jointly develop means to embed ethical principles into the AI development process
at the crossroads of philosophy, psychology, and computer science.
12 / 2021 - 03 / 2022
The ethics of AI has received a lot of attention in recent years. However, it is still unclear what the proposed principles mean in practice. There is a gap between high-level theoretical principles and the specific technologies and practices of AI development and use. This is a significant methodological gap; without closing it, the ethics of AI cannot be implemented or further developed.
Applied AI Ethics
04 / 2022 - 12 / 2022
In this phase we will develop the foundations for ethical AI development and at the same time introduce their concrete implications and implementations into the AI development process. Only if the principles of AI ethics are applied to real-world machine learning problems and embedded in the process itself (rather than bolted on after the fact) can one develop truly trustworthy applications.
06 / 2022 - 10 / 2022
Developing accountable AI by integrating ethics-by-design mechanisms into the development process also requires cracking the mysterious “AI black box”. This is the domain of Explainable AI. Together with the Austrian Research Institute for Artificial Intelligence, we will research and develop the latest techniques of explainable AI. Building on the Privacy Checker module from phase 1, an important topic will be data bias: bias in and from data, bias from machine learning models, and bias arising from ethical decisions.
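A first, deliberately crude signal of bias in data is simply comparing positive-label rates across groups. The sketch below illustrates this baseline check; function and field names are ours for the example and do not reflect DQ0's API.

```python
def group_rates(records, group_key, label_key):
    """Positive-label rate per group -- a first, crude signal of
    possible bias in a data set (illustrative sketch only)."""
    totals, positives = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label_key] else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical hiring data with a protected group attribute.
data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
rates = group_rates(data, "group", "hired")
# Group A: 2/3 positive rate, group B: 1/3 -- a gap worth investigating
```

Such a gap is not proof of unfairness (base rates may legitimately differ), which is precisely why statistical checks must be paired with the ethical analysis this phase develops.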
Accountability & Fairness
11 / 2022 - 03 / 2023
Explainability is not the whole story, though; it is really a prerequisite for trustworthy development. This project phase therefore aims to introduce tools such as an impact, transparency, and fairness checker to assess whether the models under development, or rather their developers, can indeed be held accountable, and whether the use of the AI models is indeed "good" in every sense of the word.
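As one concrete example of what a fairness checker could compute, the sketch below measures the demographic parity gap of a model's predictions, i.e. the largest difference in positive-prediction rate between groups, and flags models whose gap exceeds a configured threshold. This is an illustrative sketch of one well-known fairness metric, not the project's actual checker.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any
    two groups. A fairness checker could refuse release when the
    gap exceeds a configured threshold (illustrative sketch)."""
    by_group = {}
    for pred, g in zip(predictions, groups):
        by_group.setdefault(g, []).append(pred)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs for two groups of applicants.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
print(gap > 0.2)  # True: flag the model for review
```

Demographic parity is only one of several competing fairness definitions (equalized odds and predictive parity are others, and they can be mutually incompatible), so choosing the threshold and the metric is itself an ethical decision.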
04 / 2023 - 02 / 2024
In this final phase it is time to scale the platform into a development environment capable of handling large, real-world projects with huge data sets. We plan to collaborate with the Scientific Computing research group at the Faculty of Computer Science of the University of Vienna to develop strategies for integrating high-performance, machine-learning-optimized processes into the development platform. The results from the previous phases will also be revisited and refined.