I lead the YODA Lab, where we use artificial intelligence-based techniques to develop intelligent agent-based systems. Our recent research focus is on the exciting area of human-AI teaming and collaboration!
Prior to joining WashU, I was an assistant professor in the Department of Computer Science at New Mexico State University; a research scientist in the Living Analytics Research Center at Singapore Management University; and a post-doctoral research associate with Shlomo Zilberstein in the Department of Computer Science at the University of Massachusetts at Amherst.
I received my Ph.D. and M.S. in Computer Science from the University of Southern California, supervised by Sven Koenig, and my M.S. and B.S.E. in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania, supervised by Vijay Kumar.
Explanation generation frameworks aim to make AI systems’ decisions transparent and understandable to human users. However, generating explanations in uncertain environments characterized by incomplete information and probabilistic models remains a significant challenge. In this paper, we propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations. Monolithic explanations provide self-contained reasons for an explanandum without considering the agent receiving the explanation, while model reconciling explanations account for the knowledge of the agent receiving the explanation. For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum. For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models, where the goal is to find explanations that increase the probability of the explanandum while minimizing conflicts between the explanation and the probabilistic human model. We introduce explanatory gain and explanatory power as quantitative metrics to assess the quality of these explanations. Further, we present algorithms that exploit the duality between minimal correction sets and minimal unsatisfiable sets to efficiently compute both types of explanations in probabilistic contexts. Extensive experimental evaluations on various benchmarks demonstrate the effectiveness and scalability of our approach in generating explanations under uncertainty.
@article{jair-VasileiouYPS25,author={Vasileiou, Stylianos Loukas and Yeoh, William and Previti, Alessandro and Son, Tran Cao},title={On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios},journal={Journal of Artificial Intelligence Research},volume={84},number={5},pages={5:1--5:40},year={2025},}
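As a rough illustration of the explanatory-gain idea from the abstract above, the Python sketch below measures how much a candidate explanation raises the probability of an explanandum in a toy knowledge base of independent probabilistic facts. The facts, the derive rule, and the function names are hypothetical, and the brute-force enumeration merely stands in for the paper's MUS/MCS-based algorithms.

```python
from itertools import product

# Toy probabilistic knowledge base: independent probabilistic facts.
# Each fact holds with the given probability; worlds are truth assignments.
# (Illustrative only; the paper's algorithms exploit MUS/MCS duality,
#  not this brute-force enumeration.)
facts = {"battery_low": 0.3, "sensor_fault": 0.2, "rain": 0.6}

def derive(world):
    """Hard rules mapping a world (dict of fact -> bool) to derived atoms."""
    atoms = dict(world)
    atoms["robot_stops"] = world["battery_low"] or world["sensor_fault"]
    return atoms

def probability(explanandum, explanation=()):
    """P(explanandum | explanation) by enumerating weighted worlds."""
    num = den = 0.0
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        weight = 1.0
        for f, p in facts.items():
            weight *= p if world[f] else (1.0 - p)
        atoms = derive(world)
        if all(atoms[a] for a in explanation):   # worlds consistent with the explanation
            den += weight
            if atoms[explanandum]:
                num += weight
    return num / den if den > 0 else 0.0

def explanatory_gain(explanandum, explanation):
    """Increase in the explanandum's probability due to the explanation."""
    return probability(explanandum, explanation) - probability(explanandum)

print(explanatory_gain("robot_stops", ("battery_low",)))  # 1.0 - 0.44 = 0.56 here
```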
NeurIPS
Model Reconciliation via Cost-Optimal Explanations in Probabilistic Logic Programming
Yinxu Tang, Stylianos Loukas Vasileiou, Vincent Derkinderen, and William Yeoh
In Annual Conference on Neural Information Processing Systems, 2025
In human-AI interaction, effective communication relies on aligning the AI agent’s model with the human user’s mental model, a process known as model reconciliation. However, existing model reconciliation approaches predominantly assume deterministic models, overlooking the fact that human knowledge is often uncertain or probabilistic. To bridge this gap, we present a probabilistic model reconciliation framework that resolves inconsistencies in most probable explanation (MPE) outcome probabilities between an agent’s and a user’s models. Our approach is built on probabilistic logic programming (PLP) using ProbLog, where explanations are generated as cost-optimal model updates that reconcile these probabilistic differences. We develop two search algorithms: a generic baseline and an optimized version. The latter is guided by theoretical insights and further extended with greedy and weighted variants to enhance scalability and efficiency. Our approach is validated through a user study on explanation types and computational experiments showing that the optimized version consistently outperforms the generic baseline.
@inproceedings{conf/nips/tangVDY25,author={Tang, Yinxu and Vasileiou, Stylianos Loukas and Derkinderen, Vincent and Yeoh, William},title={Model Reconciliation via Cost-Optimal Explanations in Probabilistic Logic Programming},booktitle={Annual Conference on Neural Information Processing Systems},pages={to appear},year={2025},}
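The sketch below illustrates the cost-optimal reconciliation idea on a deliberately simple case: both models consist only of independent probabilistic facts, the MPE is the most probable truth assignment, and an explanation is the cheapest set of fact updates (copying the agent's probabilities into the human's model) that aligns the two MPEs. The facts, the unit costs, and the brute-force subset search are hypothetical placeholders for the paper's ProbLog programs and optimized search algorithms.

```python
from itertools import product, combinations

# Toy probabilistic "programs": independent probabilistic facts only.
agent = {"traffic_jam": 0.8, "road_closed": 0.1, "late": 0.7}
human = {"traffic_jam": 0.2, "road_closed": 0.1, "late": 0.7}

def mpe(model):
    """Most probable world (truth assignment over the facts)."""
    best, best_p = None, -1.0
    for values in product([True, False], repeat=len(model)):
        world = dict(zip(model, values))
        p = 1.0
        for f, q in model.items():
            p *= q if world[f] else (1.0 - q)
        if p > best_p:
            best, best_p = world, p
    return best

def reconcile(agent, human, cost=lambda fact: 1):
    """Cheapest set of fact updates (human <- agent) aligning the two MPEs."""
    target = mpe(agent)
    facts = list(agent)
    for k in range(len(facts) + 1):                 # smallest update sets first
        for subset in combinations(facts, k):
            updated = dict(human, **{f: agent[f] for f in subset})
            if mpe(updated) == target:
                return subset, sum(cost(f) for f in subset)
    return None

print(reconcile(agent, human))   # (('traffic_jam',), 1) for this toy example
```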
AAAI
Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces
Yinxu Tang, Stylianos Loukas Vasileiou, and William Yeoh
In AAAI Conference on Artificial Intelligence, 2025
Explainable AI is increasingly employing argumentation methods to facilitate interactive explanations between AI agents and human users. While existing approaches typically rely on predetermined human user models, there remains a critical gap in dynamically learning and updating these models during interactions. In this paper, we present a framework that enables AI agents to adapt their understanding of human users through argumentation-based dialogues. Our approach, called Persona, draws on prospect theory and integrates a probability weighting function with a Bayesian belief update mechanism that refines a probability distribution over possible human models based on exchanged arguments. Through empirical evaluations with human users in an applied argumentation setting, we demonstrate that Persona effectively captures evolving human beliefs, facilitates personalized interactions, and outperforms state-of-the-art methods.
@inproceedings{conf/aaai/TangV025,author={Tang, Yinxu and Vasileiou, Stylianos Loukas and Yeoh, William},title={Does Your AI Agent Get You? A Personalizable Framework for Approximating Human Models from Argumentation-based Dialogue Traces},booktitle={{AAAI} Conference on Artificial Intelligence},pages={14405--14413},year={2025},}
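Below is a minimal sketch of the kind of belief update the abstract describes, assuming three hypothetical candidate human models and made-up acceptance likelihoods: the prior over models is updated in a Bayesian fashion after each argument exchange, with the likelihoods passed through the standard Tversky-Kahneman probability weighting function from prospect theory. Where exactly Persona applies the weighting is a modeling choice in the paper; this toy only shows the general mechanism.

```python
# Toy sketch of a Persona-style belief update (names and details are
# illustrative, not the paper's actual formulation).

def weight(p, gamma=0.61):
    """Tversky-Kahneman probability weighting function from prospect theory."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Prior belief over three hypothetical human models.
belief = {"novice": 1/3, "expert": 1/3, "skeptic": 1/3}

# Hypothetical likelihoods: P(human accepts argument a | model m).
likelihood = {
    ("a1", "novice"): 0.9, ("a1", "expert"): 0.4, ("a1", "skeptic"): 0.1,
    ("a2", "novice"): 0.3, ("a2", "expert"): 0.8, ("a2", "skeptic"): 0.2,
}

def update(belief, argument, accepted):
    """Bayesian update after observing whether the argument was accepted,
    with the acceptance likelihood passed through the weighting function."""
    posterior = {}
    for model, prior in belief.items():
        p_accept = weight(likelihood[(argument, model)])
        posterior[model] = prior * (p_accept if accepted else 1 - p_accept)
    z = sum(posterior.values())
    return {m: p / z for m, p in posterior.items()}

belief = update(belief, "a1", accepted=True)
belief = update(belief, "a2", accepted=False)
print(belief)   # belief shifts toward models consistent with the observations
```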
AAMAS
Algorithmic Filtering, Out-Group Stereotype, and Polarization on Social Media
Jean Springsteen, William Yeoh, and Dino Christenson
In International Conference on Autonomous Agents and Multiagent Systems, 2024
The introduction of social media websites touted the idea of global communication — exposing users to a worldwide audience and a diverse range of experiences, opinions, and debates. Unfortunately, studies have shown that social networks have instead contributed to growing levels of polarization in society across a wide variety of issues. Social media websites employ algorithmic filtering strategies to drive engagement, which can lead to the formation of filter bubbles and increased levels of polarization. In this paper, we introduce features of affective polarization — feelings towards one’s in-group and out-group — into an opinion dynamics model. Specifically, we show that incorporating a negative out-group stereotype into the opinion dynamics model (1) affects the level of polarization present among agents in the network; (2) changes the effectiveness of algorithmic filtering strategies; and (3) is exacerbated by the presence of extremists in the network. Hence, the inclusion of an affective group mechanism in opinion dynamics modeling provides novel insights into the effects of algorithmic filtering strategies on the extremity of opinions in social networks.
@inproceedings{conf/aamas/Springsteen0C24,author={Springsteen, Jean and Yeoh, William and Christenson, Dino},title={Algorithmic Filtering, Out-Group Stereotype, and Polarization on Social Media},booktitle={International Conference on Autonomous Agents and Multiagent Systems},pages={1782--1790},year={2024},}
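The toy update rule below gives one possible reading of a "negative out-group stereotype" in an opinion dynamics model: agents move toward in-group neighbors and away from out-group neighbors in proportion to a stereotype parameter. The rule, the parameters, and the all-to-all network are hypothetical and only gesture at the mechanism; the paper's model and its algorithmic filtering strategies are richer.

```python
import random

# Toy opinion-dynamics step with an affective out-group term
# (hypothetical illustration, not the paper's actual model).

def step(opinions, groups, mu=0.1, stereotype=0.5):
    """One synchronous update: move toward in-group neighbors and
    away from out-group neighbors in proportion to `stereotype`."""
    new = []
    for i, (x_i, g_i) in enumerate(zip(opinions, groups)):
        pull = 0.0
        for j, (x_j, g_j) in enumerate(zip(opinions, groups)):
            if i == j:
                continue
            sign = 1.0 if g_i == g_j else -stereotype   # negative out-group weight
            pull += sign * (x_j - x_i)
        x_new = x_i + mu * pull / (len(opinions) - 1)
        new.append(max(-1.0, min(1.0, x_new)))          # opinions clipped to [-1, 1]
    return new

random.seed(0)
opinions = [random.uniform(-1, 1) for _ in range(10)]
groups = [0] * 5 + [1] * 5
for _ in range(50):
    opinions = step(opinions, groups)
print(opinions)   # with stereotype > 0, the two groups tend to drift apart
```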
ICAPS
Using Simple Incentives to Improve Two-Sided Fairness in Ridesharing Systems
Ashwin Kumar, Yevgeniy Vorobeychik, and William Yeoh
In International Conference on Automated Planning and Scheduling, 2023
State-of-the-art order dispatching algorithms for ridesharing batch passenger requests and allocate them to a fleet of vehicles in a centralized manner, optimizing over the estimated values of each passenger-vehicle matching using integer linear programming (ILP). Using good estimates of future values, such ILP-based approaches are able to significantly increase the service rates (percentage of requests served) for a fixed fleet of vehicles. However, such approaches that focus solely on maximizing efficiency can lead to disparities for both drivers (e.g., income inequality) and passengers (e.g., inequality of service for different groups). Existing approaches that consider fairness only do it for naive assignment policies, require extensive training, or look at only single-sided fairness. We propose a simple incentive-based fairness scheme that can be implemented online as a part of this ILP formulation that allows us to improve fairness over a variety of fairness metrics. Deriving from a lens of variance minimization, we describe how these fairness incentives can be formulated for two distinct use cases for passenger groups and driver fairness. We show that under mild conditions, our approach can guarantee an improvement in the chosen metric for the worst-off individual. We also show empirically that our Simple Incentives approach significantly outperforms prior art, despite requiring no retraining; indeed, it often leads to a large improvement over the state-of-the-art fairness-aware approach in both overall service rate and fairness.
@inproceedings{conf/icaps/KumarV023,author={Kumar, Ashwin and Vorobeychik, Yevgeniy and Yeoh, William},title={Using Simple Incentives to Improve Two-Sided Fairness in Ridesharing Systems},booktitle={International Conference on Automated Planning and Scheduling},pages={227--235},year={2023},}
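The sketch below shows one way a simple income-based incentive can be folded into a batch matching step, in the spirit of the variance-minimization lens described above: each driver's matching values are shifted by how far their accumulated income deviates from the fleet average, so worse-off drivers are more likely to be served. The data, the lam parameter, and the use of scipy's linear_sum_assignment in place of the paper's ILP formulation are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy sketch of an incentive-adjusted batch assignment (illustrative only;
# the paper embeds such fairness incentives in a full ILP dispatch
# formulation with estimated future values).

rng = np.random.default_rng(0)
n_drivers, n_requests = 5, 3                                  # more drivers than requests
value = rng.uniform(5.0, 15.0, size=(n_drivers, n_requests))  # estimated matching values
income = np.array([120.0, 80.0, 100.0, 60.0, 90.0])           # accumulated driver income

def assign(value, income, lam=0.5):
    """Subtract lam * (income_i - mean income) from driver i's matching values,
    so worse-off drivers are more likely to be matched, then solve the matching."""
    incentive = -lam * (income - income.mean())
    adjusted = value + incentive[:, None]
    drivers, requests = linear_sum_assignment(adjusted, maximize=True)
    return list(zip(drivers.tolist(), requests.tolist()))

print(assign(value, income, lam=0.0))   # efficiency only
print(assign(value, income, lam=2.0))   # larger lam favors lower-income drivers
```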