I lead the YODA Lab, where we use artificial-intelligence-based techniques to develop intelligent agent-based systems. Our recent research focuses on the exciting area of human-AI teaming and collaboration!
Prior to joining WashU, I was an assistant professor in the Department of Computer Science at New Mexico State University; a research scientist in the Living Analytics Research Center at Singapore Management University; and a post-doctoral research associate with Shlomo Zilberstein in the Department of Computer Science at the University of Massachusetts at Amherst.
I received my Ph.D. and M.S. in Computer Science from the University of Southern California, supervised by Sven Koenig, and my M.S. and B.S.E. in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania, supervised by Vijay Kumar.
Explanation generation frameworks aim to make AI systems’ decisions transparent and understandable to human users. However, generating explanations in uncertain environments characterized by incomplete information and probabilistic models remains a significant challenge. In this paper, we propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations. Monolithic explanations provide self-contained reasons for an explanandum without considering the agent receiving the explanation, while model reconciling explanations account for the knowledge of the agent receiving the explanation. For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum. For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models, where the goal is to find explanations that increase the probability of the explanandum while minimizing conflicts between the explanation and the probabilistic human model. We introduce explanatory gain and explanatory power as quantitative metrics to assess the quality of these explanations. Further, we present algorithms that exploit the duality between minimal correction sets and minimal unsatisfiable sets to efficiently compute both types of explanations in probabilistic contexts. Extensive experimental evaluations on various benchmarks demonstrate the effectiveness and scalability of our approach in generating explanations under uncertainty.
@article{journals/jair/VasileiouYPS25,
  author  = {Vasileiou, Stylianos Loukas and Yeoh, William and Previti, Alessandro and Son, Tran Cao},
  title   = {On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios},
  journal = {Journal of Artificial Intelligence Research},
  volume  = {84},
  number  = {5},
  pages   = {5:1--5:40},
  year    = {2025},
}
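To make the explanatory-gain metric from the abstract above concrete, here is a minimal sketch, not the paper's implementation: it assumes a toy knowledge base of independent probabilistic facts plus definite rules (all invented for illustration) and measures how much a candidate explanation raises the probability of the explanandum by brute-force enumeration of possible worlds.

```python
from itertools import product

def closure(facts, rules):
    """Forward-chain definite rules (body, head) over a set of true atoms."""
    atoms = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in atoms and all(b in atoms for b in body):
                atoms.add(head)
                changed = True
    return atoms

def prob_of(explanandum, prob_facts, rules):
    """P(explanandum) by summing over all possible worlds of the facts."""
    names = list(prob_facts)
    total = 0.0
    for world in product([True, False], repeat=len(names)):
        p = 1.0
        true_atoms = []
        for name, holds in zip(names, world):
            p *= prob_facts[name] if holds else 1.0 - prob_facts[name]
            if holds:
                true_atoms.append(name)
        if explanandum in closure(true_atoms, rules):
            total += p
    return total

def explanatory_gain(explanation, explanandum, prob_facts, rules):
    """How much adding the explanation's rules raises P(explanandum)."""
    return (prob_of(explanandum, prob_facts, rules + explanation)
            - prob_of(explanandum, prob_facts, rules))

# Toy example: adding the rule (wet <- rain) raises P(wet) from 0.30 to 0.72.
prob_facts = {"rain": 0.6, "sprinkler": 0.3}
rules = [(["sprinkler"], "wet")]
explanation = [(["rain"], "wet")]
print(explanatory_gain(explanation, "wet", prob_facts, rules))  # ~0.42
```

Enumeration is exponential in the number of probabilistic facts; per the abstract, the paper instead computes explanations efficiently by exploiting the duality between minimal correction sets and minimal unsatisfiable sets.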
AAMAS
Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us
Stylianos Loukas Vasileiou, Antonio Rago, Francesca Toni, and William Yeoh
In International Conference on Autonomous Agents and Multiagent Systems, 2026
Computational argumentation offers formal frameworks for transparent, verifiable reasoning but has traditionally been limited by its reliance on domain-specific information and extensive feature engineering. In contrast, LLMs excel at processing unstructured text, yet their opaque nature makes their reasoning difficult to evaluate and trust. We argue that the convergence of these fields will lay the foundation for a new paradigm: Argumentative Human-AI Decision-Making. We analyze how the synergy of argumentation framework mining, argumentation framework synthesis, and argumentative reasoning enables agents that do not just justify decisions, but engage in dialectical processes where decisions are contestable and revisable – reasoning with humans rather than for them. This convergence of computational argumentation and LLMs is essential for human-aware, trustworthy AI in high-stakes domains.
@inproceedings{conf/aamas/VasileiouRTY26,
  author    = {Vasileiou, Stylianos Loukas and Rago, Antonio and Toni, Francesca and Yeoh, William},
  title     = {Argumentative Human-AI Decision-Making: Toward AI Agents That Reason With Us, Not For Us},
  booktitle = {International Conference on Autonomous Agents and Multiagent Systems},
  pages     = {to appear},
  year      = {2026},
}
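As a concrete taste of the computational argumentation side of this position paper, the sketch below computes the grounded extension of an abstract (Dung-style) argumentation framework by fixpoint iteration; the three-argument framework is a made-up example, not one from the paper.

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of the characteristic function of a Dung framework.

    attacks: set of (attacker, target) pairs.
    """
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable iff each of its attackers is itself
        # attacked by some argument already in the extension.
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

args = {"a", "b", "c"}
atk = {("c", "b"), ("b", "a")}        # c attacks b, b attacks a
print(grounded_extension(args, atk))  # c is unattacked and defends a: {'a', 'c'}
```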
AAMAS
Agentic LLMs and Distributed Constraint Reasoning: A Symbiotic Perspective for Neurosymbolic Multi-Agent Systems
Gauthier Picard, William Yeoh, and Roie Zivan
In International Conference on Autonomous Agents and Multiagent Systems, 2026
Distributed Constraint Reasoning (DCR) has long provided a principled framework for modeling and solving multi-agent coordination and optimization problems. However, its practical adoption in real-world, human-centric domains has been hindered by the challenge of translating human intentions, preferences, and constraints into formal symbolic models. At the same time, recent advances in LLMs have enabled powerful agentic capabilities, including natural language understanding, flexible reasoning, and interactive problem solving, but these systems lack the formal rigor and guarantees needed for scalable multi-agent coordination. In this paper, we argue that the convergence of these two paradigms offers a timely and transformative opportunity. We articulate several synergistic research directions: leveraging LLMs for translating natural language into DCR specifications, eliciting and refining user preferences, and enhancing inter-agent communication; and conversely, applying DCR models and algorithms to improve coordination, structured reasoning, resource allocation, and communication sensitivity in Agentic LLM systems. Together, these threads point toward hybrid neurosymbolic systems that combine the adaptability of LLMs with the mathematical rigor of DCR.
@inproceedings{conf/aamas/PicardYZ26,
  author    = {Picard, Gauthier and Yeoh, William and Zivan, Roie},
  title     = {Agentic LLMs and Distributed Constraint Reasoning: A Symbiotic Perspective for Neurosymbolic Multi-Agent Systems},
  booktitle = {International Conference on Autonomous Agents and Multiagent Systems},
  pages     = {to appear},
  year      = {2026},
}
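To make the translation target tangible, here is a hypothetical example of the kind of constraint-optimization specification an LLM might produce from a natural-language request ("schedule a meeting for Alice and Bob; Alice dislikes 9am"). The scenario, variables, and costs are invented, and it is solved by exhaustive search for clarity rather than by a distributed DCR algorithm.

```python
from itertools import product

# Hypothetical toy specification: one variable per agent, shared domain.
variables = {"alice": [9, 10, 11], "bob": [9, 10, 11]}  # candidate meeting hours

def cost(assignment):
    c = 0 if assignment["alice"] == assignment["bob"] else 10  # must meet together
    c += 1 if assignment["alice"] == 9 else 0                  # alice dislikes 9am
    return c

names = list(variables)
best = min(product(*variables.values()),
           key=lambda vals: cost(dict(zip(names, vals))))
print(dict(zip(names, best)))  # -> {'alice': 10, 'bob': 10}
```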
NeurIPS
Model Reconciliation via Cost-Optimal Explanations in Probabilistic Logic Programming
Yinxu Tang, Stylianos Loukas Vasileiou, Vincent Derkinderen, and William Yeoh
In Annual Conference on Neural Information Processing Systems, 2025
In human-AI interaction, effective communication relies on aligning the AI agent’s model with the human user’s mental model, a process known as model reconciliation. However, existing model reconciliation approaches predominantly assume deterministic models, overlooking the fact that human knowledge is often uncertain or probabilistic. To bridge this gap, we present a probabilistic model reconciliation framework that resolves inconsistencies in most probable explanation (MPE) outcome probabilities between an agent’s and a user’s models. Our approach is built on probabilistic logic programming (PLP) using ProbLog, where explanations are generated as cost-optimal model updates that reconcile these probabilistic differences. We develop two search algorithms – a generic baseline and an optimized version. The latter is guided by theoretical insights and further extended with greedy and weighted variants to enhance scalability and efficiency. Our approach is validated through a user study on explanation types and computational experiments showing that the optimized version consistently outperforms the generic baseline.
@inproceedings{conf/nips/TangVDY25,
  author    = {Tang, Yinxu and Vasileiou, Stylianos Loukas and Derkinderen, Vincent and Yeoh, William},
  title     = {Model Reconciliation via Cost-Optimal Explanations in Probabilistic Logic Programming},
  booktitle = {Annual Conference on Neural Information Processing Systems},
  pages     = {to appear},
  year      = {2025},
}
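A minimal sketch of the reconciliation loop, with assumptions flagged: it relies on ProbLog 2's Python API (PrologString and get_evaluatable), uses a toy weather domain invented for illustration, compares marginal query probabilities rather than the paper's MPE outcome probabilities, and greedily tries the cheapest candidate update first, standing in for the paper's optimized search.

```python
from problog.program import PrologString
from problog import get_evaluatable

def query_prob(program, atom):
    """Probability of `atom` under a ProbLog program given as a string."""
    model = PrologString(program + f"\nquery({atom}).")
    return next(iter(get_evaluatable().create_from(model).evaluate().values()))

# Toy models: the human is missing the sprinkler as a cause of wetness.
agent_model = "0.6::rain. 0.3::sprinkler. wet :- rain. wet :- sprinkler."
human_model = "0.6::rain. wet :- rain."

# Candidate updates to the human model, with hypothetical edit costs.
candidates = [("0.9::wind.", 1.0),
              ("0.3::sprinkler. wet :- sprinkler.", 2.0)]

target = query_prob(agent_model, "wet")  # 0.72 under the agent's model
for update, cost in sorted(candidates, key=lambda c: c[1]):  # cheapest first
    if abs(query_prob(human_model + " " + update, "wet") - target) < 1e-9:
        print(f"explanation: {update!r} (cost {cost})")
        break
```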
AAAI
Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces
Yinxu Tang, Stylianos Loukas Vasileiou, and William Yeoh
In AAAI Conference on Artificial Intelligence, 2025
Explainable AI is increasingly employing argumentation methods to facilitate interactive explanations between AI agents and human users. While existing approaches typically rely on predetermined human user models, there remains a critical gap in dynamically learning and updating these models during interactions. In this paper, we present a framework that enables AI agents to adapt their understanding of human users through argumentation-based dialogues. Our approach, called Persona, draws on prospect theory and integrates a probability weighting function with a Bayesian belief update mechanism that refines a probability distribution over possible human models based on exchanged arguments. Through empirical evaluations with human users in an applied argumentation setting, we demonstrate that Persona effectively captures evolving human beliefs, facilitates personalized interactions, and outperforms state-of-the-art methods.
@inproceedings{conf/aaai/TangV025,
  author    = {Tang, Yinxu and Vasileiou, Stylianos Loukas and Yeoh, William},
  title     = {Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces},
  booktitle = {AAAI Conference on Artificial Intelligence},
  pages     = {14405--14413},
  year      = {2025},
}
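The abstract's pairing of a probability weighting function with Bayesian belief updating can be sketched roughly as follows. The Tversky-Kahneman weighting form, the choice to apply it to likelihoods, and the two candidate models are illustrative assumptions, not the paper's exact formulation.

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting; 0.61 is their estimate for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def update(prior, likelihoods):
    """Posterior over candidate human models after observing one argument.

    prior: dict model -> probability; likelihoods: dict model -> P(obs | model).
    The weighting function distorts likelihoods before the Bayesian update.
    """
    unnorm = {m: prior[m] * weight(likelihoods[m]) for m in prior}
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}

# Two candidate mental models; the human accepted an argument that model A
# predicts with probability 0.9 and model B with probability 0.2.
prior = {"A": 0.5, "B": 0.5}
posterior = update(prior, {"A": 0.9, "B": 0.2})
print(posterior)  # mass shifts toward model A
```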
AAMAS
Algorithmic Filtering, Out-Group Stereotype, and Polarization on Social Media
Jean Springsteen, William Yeoh, and Dino Christenson
In International Conference on Autonomous Agents and Multiagent Systems, 2024
The introduction of social media websites touted the idea of global communication — exposing users to a worldwide audience and a diverse range of experiences, opinions, and debates. Unfortunately, studies have shown that social networks have instead contributed to growing levels of polarization in society across a wide variety of issues. Social media websites employ algorithmic filtering strategies to drive engagement, which can lead to the formation of filter bubbles and increased levels of polarization. In this paper, we introduce features of affective polarization — feelings towards one’s in-group and out-group — into an opinion dynamics model. Specifically, we show that incorporating a negative out-group stereotype into the opinion dynamics model (1) affects the level of polarization present among agents in the network; (2) changes the effectiveness of algorithmic filtering strategies; and (3) is exacerbated by the presence of extremists in the network. Hence, the inclusion of an affective group mechanism in opinion dynamics modeling provides novel insights into the effects of algorithmic filtering strategies on the extremity of opinions in social networks.
@inproceedings{conf/aamas/Springsteen0C24,
  author    = {Springsteen, Jean and Yeoh, William and Christenson, Dino},
  title     = {Algorithmic Filtering, Out-Group Stereotype, and Polarization on Social Media},
  booktitle = {International Conference on Autonomous Agents and Multiagent Systems},
  pages     = {1782--1790},
  year      = {2024},
}
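One rough, hypothetical way a negative out-group stereotype can enter an opinion dynamics update (not the paper's exact model): agents move toward in-group neighbors' opinions and away from out-group neighbors' opinions in proportion to a stereotype strength, which already tends to produce group-aligned polarization on a toy network.

```python
import random

def step(opinions, groups, neighbors, stereotype=0.5, rate=0.1):
    """One synchronous update; opinions live in [-1, 1]."""
    new = {}
    for i, x in opinions.items():
        pull = 0.0
        for j in neighbors[i]:
            if groups[j] == groups[i]:
                pull += opinions[j] - x                  # attraction to in-group
            else:
                pull -= stereotype * (opinions[j] - x)   # repulsion from out-group
        new[i] = max(-1.0, min(1.0, x + rate * pull / max(len(neighbors[i]), 1)))
    return new

random.seed(0)
n = 10
opinions = {i: random.uniform(-1, 1) for i in range(n)}
groups = {i: i % 2 for i in range(n)}                                # two groups
neighbors = {i: [j for j in range(n) if j != i] for i in range(n)}   # complete graph
for _ in range(100):
    opinions = step(opinions, groups, neighbors)
for g in (0, 1):
    mean = sum(opinions[i] for i in range(n) if groups[i] == g) / (n // 2)
    print(f"group {g} mean opinion: {mean:+.2f}")  # groups typically drift apart
```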