AI, Language, and Society — New Frontiers in Communication and Public Opinion
Speaker: Xun Pang, Professor, Peking University
Title: Large Language Models in Public Opinion Research: Addressing Survey Limitations and Model Bias
Abstract: Traditional public opinion surveys are increasingly constrained by rising costs, declining response rates, and political sensitivity, leading to issues of missing data and representativeness. This paper investigates the use of large language models (LLMs) as tools to augment and improve public opinion research by addressing these limitations. We conduct two empirical studies to examine LLMs' capacities and biases. The first study evaluates how LLMs, including GPT and Chinese generative models, can predict and impute trust-related survey responses using data from the China Family Panel Studies (CFPS). We compare LLMs with conventional statistical and machine learning methods under various missing data mechanisms (MCAR, MAR, MNAR), and find that LLMs—especially in zero-shot settings—achieve competitive or superior performance, even without access to training data. The second study assesses LLM bias using the World Values Survey, uncovering systematic individual- and country-level patterns: LLMs tend to better replicate responses from high-socioeconomic-status individuals and favor responses aligned with developed, democratic contexts. While LLMs offer promising avenues for data augmentation and imputation in survey research, their inherent biases—shaped by training data and model origin—pose critical risks. We conclude by outlining principles for responsible integration of LLMs into public opinion research, emphasizing the need for transparency, contextual awareness, and bias mitigation.
Speaker: Xun Pang is a professor at Peking University's School of International Relations and director of the PKU Analytics Lab for Global Risk Politics. She serves on the editorial board of Political Analysis and as associate editor of the Japanese Journal of Political Science. Prof. Pang's research covers global risk politics, international political economy, international security, and computational methods, including Bayesian statistics, quantitative causal inference, and AI applications in social science. She has published extensively in leading journals such as Political Analysis, International Organization, Political Science Research & Methods, Chinese Social Sciences, and World Economy and Politics. She is the author of the monographs BRICS Foreign Aid Cooperation in Global Governance (2016) and Global Risk Politics in the Age of Anxiety and Anger (2025). She has served on multiple committees within the Society for Political Methodology (the methodology section of the American Political Science Association) and is a founding organizer of the Asian Political Methodology Annual Meeting.
Speaker: Liang Yang, Associate Professor, Dalian University of Technology
Title: When Large Language Models Meet Computational Humor
Abstract: Significant progress has been made in the field of humor computation, including both recognition and generation. Despite these promising developments, several challenges remain in the era of LLMs. In this talk, I will present our recent efforts to address some of them. (1) For understanding, I will discuss how large models understand humor. (2) For generation, I will introduce how large models generate specific types of humor by integrating humor theory. (3) Finally, I will share our future exploration directions, such as evaluating models' sense of humor.
Speaker: Liang Yang is an associate professor at the School of Computer Science and Technology, Dalian University of Technology (DUT), and a member of the DUT Information Retrieval Research Lab (DUTIR). His research areas include natural language processing, large language models, sentiment analysis, and information retrieval. In the past five years, he has published over 30 papers at venues such as ACL, EMNLP, COLING, AAAI, WWW, TKDE, and TASLP.
Speaker: Yi R. (May) Fung, Assistant Professor, Hong Kong University of Science and Technology
Title: Foundation Model Sociocultural Intelligence: From Norm Awareness to Multi-Role Advanced Reasoning
Abstract: Foundation models require deep sociocultural intelligence to operate meaningfully across global contexts. Yet current models still struggle with nuanced norms, multimodal cultural cues, and perspective-sensitive reasoning. In this talk, we set forth to tackle these crucial gaps through three key dimensions. First, we confront the foundational challenge of models lacking parametric knowledge of culturally situated norms, by introducing a novel approach for instilling norm awareness through automated extraction and self-verification of culture-specific behavioral rules from multilingual conversations, grounding models in contextual social expectations. Second, as real-world context is inherently multimodal, we present the CultureCLIP framework for extending this awareness to visual domains. It overcomes cultural ambiguities via synthetic "twin" image-caption datasets and contrastive fine-tuning, significantly boosting fine-grained cultural recognition while preserving generalization. Finally, we propose MultiRole-R1, which tackles culturally situated advanced reasoning through role-diverse chain-of-thought and group relative policy optimization (GRPO), demonstrating that scaling perspectives enhances both accuracy and adaptability in subjective tasks. Collectively, these advances establish an integrated pipeline for socioculturally intelligent AI — transforming how models perceive norms, interpret multimodal context, and reason across perspectives, paving the way for more socioculturally situated, context-aware foundation models.
Speaker: Yi R. (May) Fung is an Assistant Professor at the Department of Computer Science and Engineering (CSE), Hong Kong University of Science and Technology (HKUST). She received her Ph.D. from the University of Illinois, after which she spent time at MIT as a visiting postdoctoral researcher. May drives cutting-edge research in human-centric, trustworthy AI/NLP model reasoning, with cognitively grounded scalable alignment principles and a focus on advancing multimodal knowledge robustness mechanisms. She has published nearly 40 papers at top-tier machine learning venues on topics including MLLM agentic frameworks, retrieval-augmented generation, and multilingual, cross-cultural situation understanding for diverse real-world applications (e.g., software, healthcare, business, education, media communication). Her research has received international recognition, including the ACL'24 Outstanding Paper Award, NAACL'24 Outstanding Paper Award, and NAACL'21 Best Demo Paper Award. In addition, she serves on the Organizing Committee for IJCAI, as Area Chair for NeurIPS/ACL/EMNLP/ACL-RR, and as Program Chair for ACM Multimedia Systems (MMSys). She leads a young, energetic, and growing research lab, with a number of students awarded or nominated for highly selective HKPFS/RedBird merit fellowships. Her work has been reported by various mainstream news outlets, including TVB News and The Paper (澎湃新聞).
Speaker: Huiyi Lyu, Graduate Student, Tsinghua University
Title: Beyond Volume: LLM-Driven Detection of Emotional Influence Networks in Global Social Media Discourse
Abstract: This study presents a novel computational framework that uses large language models to analyze global narrative competition through massive social media datasets. We collected 251,225 original English tweets from 63,986 accounts (September 2013–December 2023) regarding China’s Belt and Road Initiative, and used large language models to infer sentiment orientation, thematic categories, account types, and user geographic origins for each tweet. Unlike traditional volume-based approaches measuring “who speaks most,” our method computationally detects “who triggers global emotional responses,” a higher-dimensional measurement of discourse influence that maps sentiment cascades across national boundaries. Analysis reveals four distinct behavioral clusters: narrative leaders (China-aligned), counter-narrative leaders (US-aligned), conflict zones responding to both narratives simultaneously, and narrative blockers with minimal engagement. Results demonstrate that emotional contagion patterns serve as explicit mechanisms for strategic positioning rather than mere communication outcomes. This research contributes a scalable computational framework for real-time monitoring of global narrative influence, with applications extending to brand competition, crisis communication, and cross-cultural information dynamics.
Speaker: Huiyi Lyu is a PhD candidate at Tsinghua University and former visiting student researcher at Stanford’s Asia-Pacific Research Center. Specializing in computational social science, Huiyi applies Large Language Models to analyze large-scale social media data, focusing on cross-national sentiment analysis and strategic narrative dynamics. Her current research involves developing LLM-based methodologies for mining large-scale Twitter data to decode global information competition patterns.
Simulating Social and Strategic Decision-Making with Large Language Models
Speaker: Zhongyu Wei, Associate Professor, Fudan University & Shanghai Innovation Institute
Title: SocioVerse: A World Model for Social Simulation Based on a 10-Million-User Pool
Abstract: Social simulation is transforming traditional social science research by modeling human behavior through interactions between virtual individuals and their environments. With recent advances in large language models (LLMs), this approach has shown growing potential in capturing individual differences and predicting group behaviors. However, existing methods face alignment challenges related to the environment, target users, interaction mechanisms, and behavioral patterns. In this talk, the speaker will introduce SocioVerse, an LLM-agent-driven world model for social simulation. The framework features four powerful alignment components and a user pool of 10 million real individuals. To validate its effectiveness, large-scale simulation experiments were performed across three distinct domains: politics, news, and economics. Results demonstrate that SocioVerse can reflect large-scale population dynamics while ensuring diversity, credibility, and representativeness through standardized procedures and minimal manual adjustments.
Speaker: Zhongyu Wei is an Associate Professor at the School of Data Science, Fudan University, director of the Data Intelligence and Social Computing Lab (Fudan DISC), and a jointly appointed mentor at the Shanghai Innovation Institute (SII). He received his Ph.D. from The Chinese University of Hong Kong and worked as a postdoctoral researcher at the University of Texas at Dallas. His primary research areas are multimodal large models and social computing, with over 100 published papers. He has served as a Senior Area Chair (SAC) for ACL 2023, EMNLP 2024, and NAACL 2025. His representative achievements include the multimodal multi-step reasoning model Volcano and the social simulation framework SocioVerse. He has received the CIPS Social Media Processing Committee Rising Star Award, the Shanghai Rising Star Program, and the CCF Natural Language Processing Committee Rising Scholar Award.
Speaker: Jingtian Hu, Graduate Student, Tsinghua University
Title: Modeling Collective Foreign Policy Decisions: Randomized Controlled Experiments with AI Multi-Agent Architectures
Abstract: This study uses randomized controlled experiments to test large language model (LLM) agents in three classic "hawkish-bias" scenarios originally developed for human subjects: a prospect-theory rescue mission, a North Korean intentionality judgment, and a U.S.–China reactive devaluation task. The LLMs were structured as individuals, as five-member horizontal groups, and as hierarchical leader-plus-adviser groups, with a LangGraph-based simulation managing deliberation and consensus. The findings indicate that LLMs replicate the human loss-frame effect, choosing riskier policies when outcomes are framed as losses, especially in hierarchical groups. However, a complete reversal was observed on the intentionality task: unlike humans, LLMs perceived less hostile intent when fatalities were present, across all decision structures. Furthermore, reactive devaluation was amplified in LLMs, which, unlike humans, showed a pronounced drop in support for a policy authored by China, a trend sharpest in hierarchical groups. Finally, diversity nudges that create mild disagreement in human groups acted as a volatile amplifier of dissensus in LLM groups. The study concludes that cognitive asymmetries originate at the individual agent level and are then reshaped by institutional design, providing a reusable framework for analyzing LLM deliberation with insights for computational linguistics and foreign policy.
Speaker: Jingtian Hu is a master's student at the School of Social Sciences, Tsinghua University, and a core research team member of the PKU Analytics Lab for Global Risk Politics. His major is international relations, with primary research interests in political violence and computational social science methods, particularly causal inference and complex network analysis in the social sciences. A recipient of the Tsinghua Top Prize Finalist Scholarship, he has been honored as a Beijing Outstanding Graduate and a Tsinghua University Outstanding Graduate. His research has been funded by the "Future Scholar" Program and the Computational Social Science Award Program, and has been published in academic journals such as the Quarterly Journal of International Politics. He has also been invited to present research papers at top international academic conferences, including the Annual Meeting of the Society for Political Methodology (PolMeth), the Asian Political Methodology Meeting (Asian PolMeth), the International Studies Association (ISA) Annual Conference, and the Midwest Political Science Association (MPSA) Annual Conference.
Speaker: Wei Xiong, Professor, China Foreign Affairs University
Title: AI in International Peace Mediation: Building Interpersonal and Human–Machine Trust
Abstract: Recent developments in artificial intelligence (AI) have demonstrated its potential as a transformative force in many sectors, and international peace mediation is no exception. As the United Nations Department of Political and Peacebuilding Affairs (UNDPPA) suggests, AI “will not only wage future wars but also future peace.” This statement highlights the dual potential of AI: while it has applications in military advancement, it equally holds promise for fostering peace. In fact, international conflict resolution is a pioneering area of AI application, with potential uses spanning the entire lifecycle of mediation and peace-building processes. This paper explores how AI can be integrated into international peace mediation, focusing on how AI both facilitates and complicates trust building, the central and most challenging issue in the endeavor.
Speaker: Wei Xiong is Professor of Diplomacy at China Foreign Affairs University (CFAU), where he serves as Chair of the Department of Diplomacy and Foreign Affairs Management. He is the Founding Director of the CFAU Lab for Diplomacy Experiments and Data Analysis and Vice President of the National Association for Diplomatic Studies. Previously, he served as Political Advisor at the Chinese Embassy in Berlin.
Prof. Xiong’s research focuses on international negotiation, mediation and conflict resolution, and German foreign policy. He has authored numerous scholarly articles and books, including German Foreign Policy after Unification: From 1990 to 2004, Providing Global Public Goods Through International Cooperation, and Diplomatic Negotiation: Interest, Institution, and Process.
Future Directions for AI and Social Science Collaboration