We are overwhelmed with systems that use Artificial Intelligence (AI), whether we are users or developers of these systems. Such systems range from those that follow a predefined set of rules to those that rely on machine learning. So-called eXplainable Artificial Intelligence (XAI), that is, AI able to explain and present its decisions in terms understandable to a human, becomes a necessity when these systems are too complex to allow for human oversight or are inherently opaque. The latest machine learning techniques, such as ensembles and Deep Neural Networks (DNNs), which have attracted great attention in the field, make this need pressing: they are inherently opaque, and their decisions are difficult for a human to understand. In the practical deployment of AI in real-world applications ranging from business to medicine, explainability and transparency are crucial features, especially when decisions affect people's lives. The General Data Protection Regulation (GDPR), for instance, empowers individuals with the right to demand an explanation of how an AI system made a decision that affects them. Explainable AI is thus receiving significant and growing interest from stakeholders and communities across this multidisciplinary field. This workshop aims to bring together researchers to share insights on the scope and research topics of explainable AI, exchange the state of the art in explainable AI research and applications, and discuss future directions and further work in this area.
Qingdao, China | October 15, 2021
◇ TBD
Topics of interest include, but are not limited to:
◇ Explainable/Interpretable Machine Learning
◇ Strategies to explain black-box decision systems
◇ Cognitive Theories and Philosophical Foundations of Explanation
◇ Metrics for evaluating the quality of explanations
◇ XAI Domains and Benchmarks
◇ Dialogue systems for enhanced human-AI interaction
◇ Causal reasoning and learning
◇ Combination of human intelligence (HI) and machine intelligence (MI)
◇ Explanatory user interfaces and Human-Computer Interaction (HCI) for explainable AI
◇ Adaptable explanation interfaces
◇ Interaction Design for XAI
◇ Industrial applications of XAI, challenges, and solutions, e.g., in medicine, autonomous driving, production, finance, ambient assisted living, etc.
◇ Explanation agents and recommender systems
◇ Fairness, Accountability, and Transparency in AI
◇ Ethical aspects, legal issues, and social responsibility
◇ Abstraction of human explanations
◇ Interpretable dataset shift detection
◇ Gradient-based interpretability
◇ Counterfactual explanations
All submissions must be written in English. Accepted submissions will be presented at the workshop orally or as a poster and published in a volume of the Springer LNAI series. Submissions are limited to a total of 12 (twelve) pages, including all content and references, and must be in PDF format. The template for submissions is the same as that for NLPCC 2021 main-track English submissions (http://tcci.ccf.org.cn/conference/2021/cfp.php). The website for submissions is https://www.softconf.com/nlpcc/xai-2021. Submissions must conform to the specifications of the NLPCC 2021 call for papers regarding multiple submissions and preparation of papers for the double-blind review process. Papers that violate these specifications will be desk rejected.
◇ Submission Deadline: July 15, 2021
◇ Notification of Acceptance: July 31, 2021
◇ Camera-ready: August 14, 2021
◇ Feiyu Xu, SAP
◇ Deyi Xiong, Tianjin University
◇ Mohsen Pourvali, AI Lab, Lenovo Research
◇ Please contact Mohsen Pourvali (mpourvali@lenovo.com) if you have questions.