Invited Speakers: Short Bios and Talks
Roberto Navigli

Professor of Natural Language Processing at the Sapienza University of Rome, where he leads the Sapienza NLP Group. He has received two ERC grants on multilingual semantics, highlighted among the 15 projects through which the ERC has transformed science. He has received several prizes, including two Artificial Intelligence Journal Prominent Paper Awards and several outstanding/best paper awards from ACL.

He leads the Italian Minerva LLM Project - the first LLM pre-trained in Italian - and is the Scientific Director and co-founder of Babelscape, a successful deep-tech company focused on next-generation multilingual NLU and NLG.

He is a Fellow of ACL, AAAI, ELLIS, and EurAI, and has served as General Chair of ACL 2025.

Exploring Semantics in the Age of Large Language Models

Large Language Models (LLMs) have redefined the distributional paradigm in semantics, demonstrating that large-scale statistical learning can yield emergent representations of meaning. Yet, while these models exhibit impressive linguistic fluency and versatility, their internal representations of meaning remain largely opaque, data-driven, and detached from explicit conceptual structure. This talk revisits the problem of meaning representation from a complementary, knowledge-based perspective, presenting an integrated view of several large-scale semantic resources - including BabelNet, NounAtlas, and Concept-pedia - that aim to provide interpretable, multilingual, and multimodal conceptually-grounded frameworks for modeling lexical and conceptual knowledge.

We will also discuss the potential of explicit semantics to interface with LLMs for enhanced interpretability and semantic alignment. In doing so, the talk argues for a renewed synthesis between symbolic and subsymbolic approaches to meaning, illustrating how curated, multilingual knowledge graphs and data-driven models can jointly contribute to a more comprehensive and transparent account of semantics in the era of large-scale neural language modeling.
Regina Zhang

Postdoctoral fellow at Nanyang Technological University. She received her Ph.D. in Computer Science from The University of Hong Kong. Her research lies at the intersection of AI for Science, Graph Representation Learning, and Spatial-Temporal Forecasting, with applications spanning urban computing, biology, and physics. She has published over 18 peer-reviewed papers in premier venues such as AAAI, NeurIPS, ICML, ICDE, WWW, and TKDE.

AI-Powered Graph Representation Learning for Robust and Efficient Urban, Social and Biological Science

The increasing availability of human trajectory and social data, fueled by GPS and social networks, presents a unique opportunity for scientific discovery. However, existing data analysis methods struggle to provide robust, efficient, and generalizable graph representations, limiting their applicability in urban and biological sciences. This research addresses that gap by developing novel machine learning algorithms tailored to graph-structured data in these domains, tackling three key challenges:

(1) Sparse data and heterogeneous data distributions: current methods often struggle with sparse data and varying data distributions, limiting their ability to capture diverse patterns and to scale. This research proposes novel approaches for flexible, adaptive, and generalizable representations in urban planning and the social sciences.

(2) Non-general representations and difficulty adapting to new data: existing methods often fail to generalize across datasets and adapt poorly to new data, hindering their effectiveness in real-world applications. This research develops representations that are robust, efficient, and transferable across datasets and to new data.

(3) The trade-off between efficiency and effectiveness: balancing processing speed, accuracy, and reliability is crucial in urban and social science data analysis. This research develops innovative algorithms that optimize for both.

The work leverages contrastive learning and information bottleneck techniques to build robust and efficient graph representation learning methods for spatial-temporal data and recommender systems, with demonstrated improvements in downstream tasks such as traffic prediction, crime prediction, and anomaly detection.

Furthermore, the research explores the application of AI in biological science, focusing on knowledge representation and data analysis in areas such as cell biology and neuroscience. This work lays a strong foundation for future research on graph-structured data analysis across urban science, social science, biological science, and scientific discovery more broadly.
Liu Kang

Full professor at the Institute of Automation, Chinese Academy of Sciences. He is also a youth scientist of the Beijing Academy of Artificial Intelligence and a professor at the University of Chinese Academy of Sciences. His research interests include Knowledge Graphs, Natural Language Processing, and Large Language Models. He has published over 80 research papers in AI conferences and journals such as ACL, EMNLP, NAACL, COLING, and TKDE, and his work has over 30,000 citations on Google Scholar. He received the Best Paper Award at COLING 2014, the Best Poster & Demo Paper Award at ISWC 2023, and the Best Paper Award at the NeusymBridge Workshop at COLING 2025.

Shuttle between Symbolic Knowledge and Neural Parameters

Recently, achieving mutual enhancement between traditional symbolic knowledge bases and large language models has become a hot research problem. Key questions include: How can existing symbolic knowledge be efficiently embedded into large language models? How can symbolic knowledge be induced from model parameters? And how can we shuttle between symbolic knowledge and parametric knowledge? This talk will introduce our recent research on these questions.
Minghui Dong

Professor and Chief Scientist at the Longgang Institute of Zhejiang Sci-Tech University, with 20+ years in speech and language processing. He previously served as Principal Scientist at A*STAR I²R in Singapore, leading major research projects in speech and NLP. He has held leadership roles in academic organizations, conferences, and editorial boards, and secured competitive grants with industry and government partners. His recent work focuses on neural–symbolic NLP, emphasizing verifiable reasoning, explainability, and reliability in high-risk domains such as regulations and healthcare, aiming to integrate neural models with symbolic knowledge for accountable, trustworthy AI systems.

Beyond Fluency: Accountability and Declined Answers in Safety-Critical LLMs

Large Language Models (LLMs) can produce fluent answers, yet fluency alone does not ensure trustworthy reasoning. In high-risk domains—such as clinical decision support, regulations, and public administration—models must not only be able to answer questions, but also recognize when they are not entitled to provide a conclusion. This talk presents a neural–symbolic approach to safe reasoning in which neural models perform evidence extraction and semantic grounding, while symbolic rules determine whether a conclusion is permitted, traceable, and auditable. The approach reframes “I don’t know” as a necessary safety behavior, not a failure: when essential information is missing, contradictory, or insufficient to justify an inference, the system abstains or escalates rather than generating a fluent but unsupported answer. A fail-closed reasoning layer is demonstrated on outpatient clinical notes, showing how explicit constraints reduce unwarranted inferences and produce evidence-bound outputs. The central claim is that the key question has shifted from “Can an LLM answer?” to “When must an LLM decline to answer?” Neural–symbolic integration offers a principled path toward verifiable, accountable, and deployment-ready NLP systems in safety-critical settings.
Ruihong Huang

Associate professor in the Department of Computer Science & Engineering at Texas A&M University (TAMU), College Station. She is also an adjunct associate professor in the McWilliams School of Biomedical Informatics at UTHealth Houston. Huang received her PhD in computer science from the University of Utah and completed a postdoc at Stanford University. She joined TAMU in Fall 2015 as an assistant professor and was promoted to associate professor (with tenure) in 2021. Her research focuses on event-centric NLP, discourse analysis, dialogue and pragmatics, LLM evaluation, and the safety and moral reasoning of LLMs. She is a recipient of the US National Science Foundation CAREER award (2020).

Discourse Structure Guided NLP Models for Fine-grained Media Bias Analysis

Given increasingly powerful pretrained models and the ever-larger context windows of recent LLMs, do we still need explicit discourse structures to guide semantic reasoning? In this talk, I will present our research on fine-grained, sentence-level media bias analysis, which shows that incorporating shallow discourse structures or event relation graphs enables NLP models to better understand broader context and recognize subtle sentence-level ideological bias. News media play a major role in shaping public opinion, not just by supplying information but by selecting, packaging, and framing that information to persuade. Sentence-level media bias analysis is challenging: it aims to identify the sentences within an article that illuminate and explain the overall bias of the entire article. This talk will first show that understanding the discourse role of a sentence in telling a news story, as well as its discourse relations with nearby sentences, can help reveal the ideological leanings of the author even when the sentence itself appears merely neutral or factual. It will further show that analyzing events with respect to other events in the same document or across documents is critical for identifying biased sentences.
Zheng Wang

Zheng Wang is currently Assistant Chief Expert at Huawei under the TopMinds program. He received his PhD from Nanyang Technological University, Singapore. His current research interests focus on AI agents, including retrieval-augmented generation, agent planning, and multimodality. He has published over 40 papers in top-tier AI and data science conferences and journals. His research has been recognized by the ACM SIGSPATIAL Rising Star Award, Outstanding PhD Thesis Award, WAIC Yunfan Award, Google PhD Fellowship, and AISG PhD Fellowship. He has also served as PC member or area chair for NeurIPS, ICML, ICLR, SIGMOD, KDD, ICDM, WWW, ACM MM, etc.

openJiuwen for AI Agent Practice: Advanced Knowledge Retrieval and Workflow Planning

AI agents are increasingly important for assisting humans in complex tasks. In this talk, I will introduce openJiuwen, an open-source platform for building production-ready AI agents. It enables developers to create intelligent, interactive agents with high reliability, workflow orchestration, and automated prompt optimization, supporting both research and practical applications. I will then present our recent research based on openJiuwen, including knowledge retrieval, which improves agent performance via a multi-partition paradigm or an editable memory graph, and workflow planning, which enhances agent transferability by leveraging past experiences from an external instruction database. Finally, I will share several perspectives on how openJiuwen can support future AI agent research and applications.