INVITATION TO A LECTURE BY
Prof. Andreas Holzinger
Holzinger Group, Institute for Medical Informatics/Statistics at the Medical University Graz, Assoc. Prof. at the Institute of Interactive Systems and Data Science at Graz University of Technology
Towards Explainable AI: opening the black-box
Monday, 26th March, 2018 at 11:00
Rectorate building, Room A7.12
Organized by: Faculty of Economic Informatics, Department of Applied Informatics
Summary of lecture:
Artificial intelligence (AI) and machine learning (ML) have demonstrated enormous success in many different application areas. Deep learning models trained on big data can even exceed human performance, and there are remarkable results even in the medical domain. However, the central problem of such approaches is that they are regarded as “black-box” models: even though we understand the underlying mathematical principles of such models, they lack explicit declarative knowledge. This calls for systems that can make decisions transparent, understandable and explainable. This is not new: Explainable AI is as old as AI itself, and possibly a result of it. However, it has recently regained interest within the international scientific community. A major motivation for Explainable AI is the rise of legal and privacy concerns. For example, the new European General Data Protection Regulation (GDPR and ISO/IEC 27001), entering into force on May 25th, 2018, will make black-box approaches difficult to use in business. This does not imply a ban on automatic learning approaches or an obligation to explain everything all the time; however, there must be a possibility to make the results re-traceable on request. Moreover, Explainable AI offers many further advantages, e.g. for general understanding, for teaching, for learning, and for research, and it can be helpful in court. In this talk I outline some fundamentals of Explainable AI and show how the interactive Machine Learning (iML) approach with the human-in-the-loop can be beneficial for making machine decisions transparent and explainable by opening the black-box into a glass-box.
Andreas HOLZINGER is head of the Holzinger Group, Institute for Medical Informatics/Statistics at the Medical University Graz, and Assoc. Prof. at the Institute of Interactive Systems and Data Science at Graz University of Technology. Currently, Andreas is Visiting Professor for Machine Learning in Health Informatics at the Faculty of Informatics at Vienna University of Technology. He serves as a consultant for the Canadian, US, UK, Swiss, French, Italian and Dutch governments, for the German Excellence Initiative, and as a national expert for the European Commission. Andreas obtained a PhD in Cognitive Science from Graz University in 1998 and his Habilitation (second PhD) in Computer Science from Graz University of Technology in 2003. He was Visiting Professor in Berlin, Innsbruck, London (twice), Aachen, and Verona. Andreas fosters a synergistic combination of the methodologies of two areas that offer ideal conditions for understanding intelligence: Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of augmenting human intelligence with artificial intelligence. To stimulate crazy ideas at an international level without boundaries, Andreas founded the international Expert Network HCI–KDD. Andreas is Editor-in-Chief of Machine Learning & Knowledge Extraction (MAKE), Associate Editor of Knowledge and Information Systems (KAIS), Associate Editor of Brain Informatics (BRIN), and Section Editor for Machine Learning of Medical Informatics and Decision Making (MIDM). He is a member of IFIP TC 12 “Artificial Intelligence” and WG 12.9 “Computational Intelligence”, the ACM, IEEE, GI and the Austrian Computer Society.