◇ Dan Roth
Dan Roth is a Professor in the Department of Computer Science and the Beckman Institute at the University of Illinois at Urbana-Champaign and a University of Illinois Scholar.
Roth is a Fellow of the American Association for the Advancement of Science (AAAS), the Association for Computing Machinery (ACM), the Association for the Advancement of Artificial Intelligence (AAAI), and the Association for Computational Linguistics (ACL), for his contributions to Machine Learning and to Natural Language Processing.
He has published broadly in machine learning, natural language processing, knowledge representation and reasoning, and learning theory, and has developed advanced machine-learning-based tools for natural language applications that are widely used both in the research community and commercially.
Roth is the Associate Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR) and will serve as Editor-in-Chief for a two-year term beginning in 2015. He was the program chair of AAAI’11, ACL’03, and CoNLL’02.
Prof. Roth received his B.A. summa cum laude in Mathematics from the Technion, Israel, and his Ph.D. in Computer Science from Harvard University in 1995.
◇ Keynote topic: Learning and Inference for Natural Language Understanding
Machine learning and inference methods have become ubiquitous, with a broad impact on a range of scientific advances and technologies and on our ability to make sense of large amounts of data. Research in Natural Language Processing has both benefited from and contributed to advances in these methods, and it provides an excellent example of some of the challenges we face moving forward.
I will describe some of our research in developing learning and inference methods in pursuit of natural language understanding. In particular, I will address what I view as some of the key challenges, including (i) learning models from natural interactions, without direct supervision, (ii) knowledge acquisition and the development of inference models capable of incorporating knowledge and reasoning, and (iii) scalability and adaptation: learning to accelerate inference over the lifetime of a learning system.
A lot of this work is done within the unified computational framework of Constrained Conditional Models (CCMs), an Integer Linear Programming formulation that augments statistically learned models with declarative constraints as a way to support learning and reasoning. Within this framework, I will discuss old and new results pertaining to learning and inference and how they are used to push forward our ability to understand natural language.
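To make the CCM formulation concrete, the following is a minimal Python sketch of CCM inference cast as an integer linear program, using the open-source PuLP solver. The token scores are invented stand-ins for the output of a statistically learned model, and the declarative constraint (at most two DATE tokens per sentence) is a hypothetical example chosen for illustration, not a constraint from Roth's actual systems.

    # A hypothetical CCM inference problem: label each token O or DATE so that
    # the total model score is maximized, subject to a declarative constraint.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    tokens = ["flight", "on", "May", "5", "at", "noon"]
    labels = ["O", "DATE"]

    # Invented per-token label scores standing in for a learned model's output.
    score = [
        {"O": 0.9, "DATE": 0.1},  # flight
        {"O": 0.8, "DATE": 0.2},  # on
        {"O": 0.3, "DATE": 0.7},  # May
        {"O": 0.4, "DATE": 0.6},  # 5
        {"O": 0.9, "DATE": 0.1},  # at
        {"O": 0.2, "DATE": 0.8},  # noon
    ]

    prob = LpProblem("ccm_inference", LpMaximize)

    # One binary indicator variable per (token, label) pair.
    x = {(i, l): LpVariable("x_%d_%s" % (i, l), cat=LpBinary)
         for i in range(len(tokens)) for l in labels}

    # Objective: total score of the chosen labeling.
    prob += lpSum(score[i][l] * x[i, l]
                  for i in range(len(tokens)) for l in labels)

    # Structural constraint: every token receives exactly one label.
    for i in range(len(tokens)):
        prob += lpSum(x[i, l] for l in labels) == 1

    # Declarative (hypothetical) constraint: at most two DATE tokens.
    prob += lpSum(x[i, "DATE"] for i in range(len(tokens))) <= 2

    prob.solve()
    print([l for i in range(len(tokens)) for l in labels
           if x[i, l].value() == 1])

Unconstrained, the model above would mark May, 5, and noon as dates; the constraint forces the solver to drop the weakest candidate (5), illustrating how declarative knowledge can override a locally confident learned model.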
◇ Jian-Yun Nie
Jian-Yun Nie is a professor at the University of Montreal. He obtained his PhD from the University of Grenoble (France) for work on information retrieval. Since then, his research has focused on information retrieval and natural language processing. Among other topics, he has worked on IR models, cross-language IR, query expansion, and query understanding. Jian-Yun Nie has published a number of widely cited papers on these topics. He published a monograph on cross-language information retrieval (Morgan & Claypool, 2010). He serves on the editorial boards of several international journals and is a regular PC member of the major conferences in these areas (SIGIR, CIKM, ACL, etc.). He was also the general chair of the SIGIR 2011 conference held in Beijing.
◇ Keynote topic: The role of NLP in IR
As information retrieval (IR) deals with textual documents in most cases, one would expect NLP to play an important role in IR.
However, current IR systems do not rely extensively on NLP methods, and over several decades IR methods that use NLP techniques have not shown the expected performance gains. Is this because IR does not require NLP, or because many NLP methods are not suitable for IR applications? In this talk, I will discuss several successful and unsuccessful uses of NLP techniques in IR and argue that the NLP techniques IR requires should be robust and flexible, in order to cope with noisy and less constrained language.
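As an illustration of the kind of robust, lightweight NLP that has historically helped IR, here is a toy Python sketch of stemming applied at both indexing and query time. The documents, query, and simple term-overlap scoring are all invented for illustration; real systems use proper ranking functions, but the point is just that light normalization lets "retrieving" match "retrieval" where raw token matching fails.

    # A toy example: term-overlap retrieval with and without stemming.
    from collections import Counter
    from nltk.stem import PorterStemmer  # classic, data-free stemmer

    stemmer = PorterStemmer()

    def tokenize(text, stem):
        words = text.lower().split()
        return [stemmer.stem(w) if stem else w for w in words]

    # Invented mini-collection and query.
    docs = [
        "methods for retrieving relevant documents",
        "parsing sentences with a grammar",
    ]
    query = "document retrieval methods"

    for stem in (False, True):
        q = Counter(tokenize(query, stem))
        # Score each document by its number of terms overlapping the query.
        scores = [sum((Counter(tokenize(d, stem)) & q).values()) for d in docs]
        print("with stemming:" if stem else "raw tokens:  ", scores)

The stemmed run matches all three query terms against the first document, while the raw run matches only one; by contrast, deeper analyses such as full parsing tend to be far more fragile on the short, noisy text typical of queries.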
◇ Hang Li
Hang Li is director of the Noah’s Ark Lab of Huawei Technologies. His research areas include information retrieval, natural language processing, statistical machine learning, and data mining. He graduated from Kyoto University in 1988 and earned his PhD from the University of Tokyo in 1998. He worked at the NEC lab in Japan from 1991 to 2001, and at Microsoft Research Asia as Senior Researcher and Research Director from 2001 to 2012. He joined Huawei Technologies in 2012.
◇ Keynote topic: Toward Building a Natural Language Dialogue System Using Big Data and Deep Learning
Natural language dialogue is regarded as one of the most challenging problems in artificial intelligence. We argue that now is the right time to conduct more research on the problem, given that more and more dialogue data has accumulated on social media and increasingly advanced machine learning technologies, such as deep learning, have been developed. At the Noah’s Ark Lab of Huawei Technologies, we dare to take on the challenge and try to build a natural language dialogue system using big data and deep learning.
As a first step, we investigate how to conduct a single round of dialogue, referred to as short text conversation (STC), in which, given a message from a human, the computer returns a reasonable response. We consider two approaches to STC: a retrieval-based approach and a generation-based approach. Several models based on these approaches have been developed using a large amount of STC data and deep learning, and they enable the system to return a reasonable response to human input in more than 70% of cases. The interesting question, then, is: what performance can the system achieve with more advanced machine learning techniques and much more data? We wish to work with the community to address this interesting yet challenging problem.
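To make the retrieval-based approach concrete, below is a minimal Python sketch of retrieval-based STC: index a corpus of (message, response) pairs, find the stored message most similar to the user's input, and return its paired response. The toy corpus and the TF-IDF cosine matching are illustrative assumptions; the systems described in the talk are built from far larger social-media data and use learned (deep) matching models rather than plain TF-IDF.

    # A minimal retrieval-based short text conversation (STC) sketch.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented (message, response) pairs standing in for a large STC corpus.
    pairs = [
        ("how is the weather today", "it is sunny and warm here"),
        ("any good movies recently", "you should try the new sci-fi film"),
        ("i want to learn deep learning", "start with a simple neural network"),
    ]

    messages = [m for m, _ in pairs]
    vectorizer = TfidfVectorizer()
    index = vectorizer.fit_transform(messages)  # index the message side only

    def respond(user_message):
        # Match the input against every indexed message and return the
        # response paired with the most similar one.
        sims = cosine_similarity(vectorizer.transform([user_message]), index)[0]
        return pairs[sims.argmax()][1]

    print(respond("what is the weather like"))  # -> "it is sunny and warm here"

A generation-based system would instead produce the response word by word, for example with an encoder-decoder neural network, rather than selecting it from the indexed corpus.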