Deep learning has driven much of the recent progress in artificial intelligence. Compared with traditional machine learning methods such as decision trees and support vector machines, deep learning has achieved significant improvements in prediction accuracy. However, deep neural networks offer little interpretability or explainability of their reasoning processes and decision results; in effect, a DNN is a black box for both developers and users. Some even regard deep learning in its current state as alchemy rather than science. In many real-world applications, such as business decision support and optimization or medical diagnosis support, the explainability, interpretability, and transparency of AI systems are essential for users, for people affected by the systems' decisions, and in particular for researchers and developers. In recent years, explainable AI has received considerable attention in both research and industry. This workshop will provide a forum for sharing insights on the scope and research topics of explainable AI, exchanging state-of-the-art research and applications, and discussing future directions in this area. The scope of the workshop includes, but is not limited to, algorithms, technologies, evaluation methods, and applications in the field of explainable AI.
Zhengzhou, China | October 16, 2020
Schedule

◇ 1:30 PM – 1:40 PM Welcome
◇ 1:40 PM – 3:00 PM Keynote Speech
◇ 3:00 PM – 3:15 PM Coffee Break
◇ 3:15 PM – 4:30 PM Oral Session
◇ 4:30 PM – 5:30 PM Poster Session
Topics of Interest

◇ Algorithms, tools, and frameworks for explainable AI
◇ Modified deep learning models with interpretable features
◇ Structured, interpretable, causal models
◇ Inferring an explainable model from any black-box model
◇ Evaluation of explainability: metrics and measurement
◇ Combination of human intelligence (HI) and machine intelligence (MI)
◇ Explanation interfaces
◇ Applications and solutions in vertical domains, e.g., business intelligence, medical diagnosis, financial decision making, autonomous driving
◇ Fairness, Accountability, and Transparency in AI
◇ Ethical aspects, legal issues, and social responsibility
Submission Guidelines

All submissions must be written in English. Accepted submissions will be presented at the workshop, either orally or as posters, and will be published in a volume of the Springer LNAI series. Submissions are limited to a total of 12 (twelve) pages, including all content and references, and must be in PDF format. The submission website is http://www.softconf.com/nlpcc/xai-2020/. Submissions must conform to the specifications of the NLPCC 2020 call for papers regarding multiple submissions and preparation of papers for the double-blind review process (http://tcci.ccf.org.cn/conference/2020/cfp.php). Papers that violate these specifications will be desk-rejected.
Important Dates

◇ Submission Deadline: July 15, 2020
◇ Notification of Acceptance: July 31, 2020
◇ Camera-ready: August 14, 2020
Organizers

◇ Feiyu Xu, SAP
◇ Dongyan Zhao, Peking University
◇ Jun Zhu, Tsinghua University
◇ Yangzhou Du, AI Lab, Lenovo Research
Contact

◇ Please contact Yangzhou Du < duyz1 AT lenovo DOT com > if you have any questions.