Stephanie Milani

I am a Ph.D. candidate in the Machine Learning Department at Carnegie Mellon University, where I am advised by Fei Fang. Previously, I interned at Microsoft Research with Geoff Gordon and with Katja Hofmann in the Deep Reinforcement Learning for Games group. I aim to create intelligent agents that can learn quickly, explain their decisions, and work harmoniously with people and other artificially intelligent agents. I am particularly interested in reinforcement learning.

I completed my B.S. in Computer Science and B.A. in Psychology at the University of Maryland, Baltimore County, where I worked with Marie desJardins and Cynthia Matuszek.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  Twitter

Selected Publications

MABL: Bi-Level Latent-Variable World Model for Sample-Efficient Multi-Agent Reinforcement Learning
Aravind Venugopal, Stephanie Milani, Fei Fang, Balaraman Ravindran
AAMAS, 2024

BEDD: The MineRL BASALT Evaluation and Demonstrations Dataset for Training and Benchmarking Agents that Solve Fuzzy Tasks
Stephanie Milani, Anssi Kanervisto, Karolis Ramanauskas, Sander Schulhoff, Brandon Houghton, Rohin Shah
NeurIPS Datasets & Benchmarks Track, 2023 (Oral)

Explainable Reinforcement Learning: A Survey and Comparative Review
Stephanie Milani, Nicholay Topin, Manuela Veloso, Fei Fang
ACM CSUR Special Issue on Trustworthy AI, 2023

Navigates Like Me: Understanding How People Evaluate Human-Like AI in Video Games
Stephanie Milani, Arthur Juliani, Ida Momennejad, Raluca Georgescu, Jaroslaw Rzepecki, Alison Shaw, Gavin Costello, Fei Fang, Sam Devlin, Katja Hofmann
CHI, 2023
Previous versions in NeurIPS-21 Workshop on Human-Centered AI and CHI-22 Late-Breaking Work

Uni[MASK]: Unified Inference in Sequential Decision Problems
Micah Carroll, Jessy Lin, Orr Paradise, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, Sam Devlin
NeurIPS, 2022 (Oral)
Previous version in ICLR-22 Workshop on Generalizable Policy Learning in the Physical World

MAVIPER: Learning Decision Tree Policies for Interpretable Multi-Agent Reinforcement Learning
Stephanie Milani*, Zhicheng Zhang*, Nicholay Topin, Zheyuan Ryan Shi, Charles Kamhoua, Evangelos E. Papalexakis, Fei Fang
ECML-PKDD, 2022
Previous version in AAAI-22 Explainable Agency in AI Workshop

Learning to Play an Adaptive Cyber Deception Game
Yinuo Du, Zimeng Song, Stephanie Milani, Coty Gonzalez, Fei Fang
AAMAS OptLearnMAS Workshop, 2022

Iterative Bounding MDPs: Learning Interpretable Policies via Non-Interpretable Methods
Nicholay Topin, Stephanie Milani, Fei Fang, Manuela Veloso
AAAI, 2021

Harnessing the Power of Deception in Attack Graph-Based Security Games
Stephanie Milani, Weiran Shen, Kevin S. Chan, Sridhar Venkatesan, Nandi O. Leslie, Charles Kamhoua, Fei Fang
GameSec, 2020

Planning with Abstract, Learned Models While Learning Transferable Subtasks
John Winder, Stephanie Milani, Matthew Landen, Erebus Oh, Shane Parr, Shawn Squire, Marie desJardins, Cynthia Matuszek
AAAI, 2020
Previous versions in ICAPS-17 IntEx Workshop, RLDM-17, and Do Good Robotics Symposium 2019

Perceptions of Domestic Robots' Normative Behavior Across Cultures
Huao Li, Stephanie Milani, Vigneshram Krishnamoorthy, Michael Lewis, Katia Sycara
AIES, 2019


I got this great website here.