Invited Speakers: Short Bios and Talks
Roberto Navigli

Professor of Natural Language Processing at the Sapienza University of Rome, where he leads the Sapienza NLP Group. He has been awarded two ERC grants on multilingual semantics, highlighted among the 15 projects through which the ERC has transformed science. His honors include two Artificial Intelligence Journal Prominent Paper Awards and several outstanding/best paper awards from ACL.

He leads the Italian Minerva LLM Project - the first LLM pre-trained in Italian - and is the Scientific Director and co-founder of Babelscape, a successful deep-tech company focused on next-generation multilingual NLU and NLG.

He is a Fellow of ACL, AAAI, ELLIS, and EurAI, and has served as General Chair of ACL 2025.

Exploring Semantics in the Age of Large Language Models

Large Language Models (LLMs) have redefined the distributional paradigm in semantics, demonstrating that large-scale statistical learning can yield emergent representations of meaning. Yet, while these models exhibit impressive linguistic fluency and versatility, their internal representations of meaning remain largely opaque, data-driven, and detached from explicit conceptual structure. This talk revisits the problem of meaning representation from a complementary, knowledge-based perspective, presenting an integrated view of several large-scale semantic resources - including BabelNet, NounAtlas, and Concept-pedia - that aim to provide interpretable, multilingual, multimodal, and conceptually grounded frameworks for modeling lexical and conceptual knowledge.

We will also discuss the potential of explicit semantics to interface with LLMs for enhanced interpretability and semantic alignment. In doing so, the talk argues for a renewed synthesis between symbolic and subsymbolic approaches to meaning, illustrating how curated, multilingual knowledge graphs and data-driven models can jointly contribute to a more comprehensive and transparent account of semantics in the era of large-scale neural language modeling.
Ruihong Huang

Associate professor in the Department of Computer Science & Engineering at Texas A&M University (TAMU), College Station, and an adjunct associate professor in the McWilliams School of Biomedical Informatics at UTHealth Houston. Huang received her PhD in computer science from the University of Utah and completed a postdoc at Stanford University. She joined TAMU in Fall 2015 as an assistant professor and was promoted to associate professor with tenure in 2021. Her research focuses on event-centric NLP, discourse analysis, dialogue and pragmatics, LLM evaluation, and the safety and moral reasoning of LLMs. She is a recipient of the US National Science Foundation CAREER award (2020).

Discourse Structure Guided NLP Models for Fine-grained Media Bias Analysis

Given ever more powerful pretrained models and the growing context windows supported by recent LLMs, do we still need explicit discourse structures to guide semantic reasoning? In this talk, I will present our research on fine-grained, sentence-level media bias analysis, which shows that incorporating shallow discourse structures or event relation graphs enables NLP models to better understand broader context and recognize subtle sentence-level ideological bias. News media play a major role in shaping public opinion, not only by supplying information but also by selecting, packaging, and shaping that information to persuade. Sentence-level media bias analysis is a challenging task that aims to identify the sentences within an article that illuminate and explain the article's overall bias. The talk will first show that understanding the discourse role a sentence plays in telling a news story, as well as its discourse relations with nearby sentences, can help reveal the ideological leanings of the author even when the sentence itself appears merely neutral or factual. It will then show that analyzing events with respect to other events in the same document, or across documents, is critical for identifying biased sentences.
Regina Zhang

Postdoctoral fellow at Nanyang Technological University. She received her Ph.D. in Computer Science from The University of Hong Kong. Her research lies at the intersection of AI for Science, Graph Representation Learning, and Spatial-Temporal Forecasting, with applications spanning urban computing, biology, and physics. She has published more than 18 peer-reviewed papers in premier venues such as AAAI, NeurIPS, ICML, ICDE, WWW, and TKDE.

AI-Powered Graph Representation Learning for Robust and Efficient Urban, Social and Biological Science

The increasing availability of human trajectory and social data, fueled by GPS and social networks, presents a unique opportunity for scientific discovery. However, existing data analysis methods struggle to provide robust, efficient, and generalizable graph representations, hindering their applicability in the urban and biological sciences. This research addresses that gap by developing machine learning algorithms tailored to graph-structured data in these domains, tackling three key challenges:

(1) Sparse data and heterogeneous data distributions: current methods often struggle with sparse data and varying data distributions, limiting their ability to capture diverse patterns and hindering scalability. This research proposes flexible, adaptive, and generalizable representations for urban planning and the social sciences.

(2) Non-general representations and difficulty adapting to new data: existing methods often fail to generalize across datasets and to adapt to new data, limiting their effectiveness in real-world applications. This research develops robust and efficient representations that generalize across datasets and adapt to new data.

(3) Trade-off between efficiency and effectiveness: balancing processing speed, accuracy, and reliability is crucial in urban and social science data analysis. This research develops algorithms that optimize for both.

The work leverages contrastive learning and information bottleneck techniques to build robust and efficient graph representation learning methods for spatial-temporal data and recommender systems. The resulting methods have demonstrated significant improvements in downstream tasks such as traffic prediction, crime prediction, and anomaly detection.
Furthermore, the research explores the application of AI in biological science, focusing on developing methods for knowledge representation and data analysis in areas such as cell biology and neuroscience. This research lays a strong foundation for future work in graph-structured data analysis across various domains, including urban science, social science, biological science, and scientific discovery.