Trust is indispensable to the prosperity and well-being of societies. For millennia, we have developed trust-building mechanisms to facilitate interactions. But as interactions become increasingly digital, many traditional mechanisms no longer function well, and trust breaks down. The resulting low levels of trust discourage us from engaging in new forms of interaction and constrain business opportunities.
We must therefore invent trust mechanisms that will contribute to prosperous and peaceful societies in the digital age, and Artificial Intelligence can clearly contribute to establishing trust in the digital world.
AI & Privacy
AI often relies on Machine Learning, which requires massive training datasets. The current status quo in AI is cloud-based services that process the raw data of all users and learn from all or a subset of it. With the rise of the IoT and the deployment of sensors (cameras, microphones, etc.) in our homes, this raw data is becoming ever more sensitive. The development of privacy legislation, together with progress in private machine learning and edge computing, opens the way to interesting alternatives. Key academic and industrial efforts are now being made to reconcile AI and Privacy.
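One of the private-ML alternatives alluded to above is federated learning, where raw data never leaves the device. The toy clients, data, and learning rate below are illustrative assumptions, not any specific system; this is a minimal sketch of federated averaging (FedAvg) on a linear regression task.

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])   # ground-truth weights the clients will recover

# Each client's raw data is generated and kept locally; it is never pooled.
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)                      # global model held by the server
for _ in range(100):                 # communication rounds
    updates = []
    for X, y in clients:
        local_w = w.copy()
        for _ in range(5):           # a few local gradient steps on local data
            grad = 2 * X.T @ (X @ local_w - y) / len(y)
            local_w -= 0.05 * grad
        updates.append(local_w)      # only model parameters are sent back
    w = np.mean(updates, axis=0)     # the server averages the client models

print(w)   # close to true_w, learned without centralising any raw data
```

The key design point is visible in the loop: the server only ever sees parameter vectors, never the `(X, y)` pairs, which is what makes this family of techniques attractive for sensitive sensor data.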
AI for Good
In the midst of increasing concern over the widespread misuse of data and the implications of AI, it is important to highlight those applying AI for social good. Whether it's helping African farmers detect diseased crops, advancing sustainability, or assisting the visually impaired, there are some great stories out there that deserve to be shared. These stories combine impressive technical achievements with outcomes that are positive for humanity, and a few of them will be highlighted during the "AI for Good" sub-track.
AI & Security
More and more AI and Machine Learning solutions are deployed across industries, affecting society at scale. The well-defined attack and defence models of classical computer security do not transfer perfectly to ML systems and do not cope well with the variety of possible ML attacks. The attack surfaces are not yet clearly defined: minor alterations to the input data can be enough to manipulate or poison a system. Moreover, as the AI industry is still emerging, there are no established standards or formal definitions for testing and securing ML systems. This session brings together experts to highlight advances in the field of secure ML.
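The "minor alterations to the input data" mentioned above can be made concrete with a toy evasion attack. The linear classifier, its random weights, and the perturbation budget are illustrative assumptions, not any deployed system; this is a minimal sketch in the spirit of gradient-sign attacks, where the smallest per-feature change that crosses the decision boundary flips the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=8)           # weights of a toy linear classifier
b = 0.0

def predict(x):
    """Return the class (+1 or -1) assigned by the toy linear classifier."""
    return 1 if x @ w + b > 0 else -1

x = rng.normal(size=8)           # a clean input
score = x @ w + b

# For a linear model, the most damaging per-feature step is against sign(w).
# The smallest budget that can cross the boundary is |score| / sum(|w|);
# stepping just past it flips the prediction while barely changing the input.
epsilon = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * np.sign(score) * np.sign(w)

print(predict(x), predict(x_adv))        # the prediction flips
print(np.max(np.abs(x_adv - x)))         # yet no feature moved more than epsilon
```

The same idea, computed via gradients instead of the closed form used here, underlies practical attacks on deep models, which is why input perturbations are such a prominent part of the ML attack surface.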
Trusting AI
Trust is the social glue that enables humankind to progress through interaction with each other and the environment, including technology. In the AI context, there are various reasons why trust has become a very popular topic in research and practice. There is a lack of clear definition of the processes, performance and, especially, purpose of an AI system with respect to the intentions of its provider. Furthermore, open questions regarding ethical standards, the notion of dual-use research, the lack of regulation, questionable data privacy, and the uncontested supremacy of the tech giants leave a feeling of uncertainty behind. The sub-track "Trusting AI" encompasses questions and issues from research and practice that look at the heterogeneous concept and associations of trust from an individual, relationship, situation and/or process perspective: from initial trust formation and its antecedents to repairing a trust relationship once it is broken. Trust is one of the most critical influencing factors in the AI context.