Deep learning has driven much of the recent progress in artificial intelligence. Compared with traditional machine learning methods such as decision trees and support vector machines, deep learning has achieved significant improvements in prediction accuracy. However, deep neural networks are weak in the interpretability and explainability of their reasoning processes and decision results: a DNN is a black box both for its developers and for its users, and some even regard deep learning in its current state as alchemy rather than science. In many real-world applications, such as business decision support and optimization or medical diagnostics support, the explainability, interpretability, and transparency of AI systems are essential for users, for the people affected by these systems, and in particular for researchers and developers. In recent years, explainability and explainable AI have accordingly received growing attention in both research and industry.

This workshop will provide a forum for sharing insights on the scope and research topics of explainable AI, exchanging the state of the art in explainable AI research and applications, and discussing future directions and further work in this area. The scope of the workshop includes, but is not limited to, algorithms, technology, evaluation, and applications in the field of explainable AI.
Dunhuang, China | October 12, 2019
◇ 1:30 PM – 1:40 PM Welcome
◇ 1:40 PM – 3:00 PM Keynote Speech
◇ 3:00 PM – 3:15 PM Break
◇ 3:15 PM – 4:30 PM Oral Session
◇ 4:30 PM – 5:30 PM Poster Session
◇ Algorithms, tools, frameworks for Explainable AI
◇ Modified deep learning models with interpretable features
◇ Structured, interpretable, causal models
◇ Inferring an explainable model from any model treated as a black box (a brief illustrative sketch follows this list)
◇ Evaluation of explainability: metrics and measurement
◇ Combination of human intelligence (HI) and machine intelligence (MI)
◇ Explanation interfaces
◇ Business applications of explainable AI
◇ Fairness, Accountability, and Transparency in AI
◇ Ethical aspects, legal issues, and social responsibility
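For readers less familiar with this line of work, here is a minimal sketch of one topic above, inferring an explainable model from a black box: a shallow decision tree (the surrogate) is fitted to the predictions of an opaque model and then inspected in its place. The dataset and model choices (scikit-learn's breast-cancer data, a random forest) are illustrative assumptions, not part of this call.

# Surrogate-model explanation: approximate an opaque model
# with an interpretable one and inspect the interpretable copy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The black box: accurate, but its internal logic is hard to read.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate: a shallow tree trained on the black box's own
# predictions, so its splits approximate the black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(data.feature_names)))

The printed fidelity score indicates how faithfully the interpretable surrogate mimics the black box, which is exactly the kind of evaluation question (metrics and measurement for explainability) in scope for this workshop.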
Manuscripts should be formatted according to the Springer style sheets [LaTeX] [Microsoft Word], saved in PDF format, and submitted electronically through the website (https://www.softconf.com/j/xai2019/). For the first submission, a full paper of 8-12 pages is preferred; an abstract of 1-3 pages is also welcome. After acceptance, an abstract must be extended to a full paper for the final version. All submissions will be peer reviewed by two members of our international program committee. Accepted submissions will be presented at the workshop orally or as posters and published in the NLPCC 2019 proceedings.
◇ Paper/Abstract submission deadline: Extended to July 27, 2019
◇ Notification: Aug 5, 2019
◇ Camera-ready submission: Aug 15, 2019
◇ Feiyu Xu, Lenovo Research
◇ Dongyan Zhao, Peking University
◇ Jun Zhu, Tsinghua University
◇ Roberto Navigli, Sapienza University of Rome, Italy
◇ Haojin Yang, Hasso Plattner Institute, Germany
◇ Jörn Hees, German Research Center for Artificial Intelligence (DFKI), Germany
◇ Sven Schmeier, German Research Center for Artificial Intelligence (DFKI), Germany
◇ Freddy Lecue, CortAIx, Thales, Canada & INRIA, France
◇ Shixia Liu, Tsinghua University, China
◇ Hang Su, Tsinghua University, China
◇ Mengchen Liu, Microsoft Research Asia, China
◇ Chengqing Zong, Chinese Academy of Sciences, China
◇ Jin Zhang, Nankai University, China
◇ Yangzhou Du, Lenovo Research, China
◇ Please contact Yangzhou Du < duyz1 AT lenovo DOT com > if you have any questions.