Programme

Download the AAMAS 2023 Booklet

Overview

Time | Monday May 29 | Tuesday May 30 | Wednesday May 31 | Thursday June 1 | Friday June 2
8:00-8:30 | Registration opens (CentrEd at ExCeL - Level 0) | Registration opens (CentrEd at ExCeL - Level 0) | Registration opens (South Halls Entrance S1) | Registration opens (South Halls Entrance S1) | Registration opens (South Halls Entrance S1)
8:30-8:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Opening Session | |
8:45-9:00 | | | Agents and the Industry Panel (Panellists: Kate Larson, Peter Stone, Milind Tambe, and Manuela Veloso) | |
9:00-10:00 | | | | Keynote: Karl Tuyls | Keynote: Edith Elkind
10:00-10:45 | Coffee Break | Coffee Break | Coffee Break + Poster Session 1 + Demo 1 | Coffee Break + Poster Session 3 + Demo 3 + Card Games Competition | Coffee Break + Poster Session 5
10:45-12:30 | DC + Workshops + Tutorials | Workshops + Tutorials | Technical Sessions 1 | Technical Sessions 3 + Card Games Competition | Technical Sessions 5
12:30-14:00 | Lunch Break | Lunch Break | Lunch Break + Diversity Event (Platinum Suite 5-7) | Lunch Break | Lunch Break
14:00-15:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Technical Sessions 2 | Technical Sessions 4 + Victor Lesser Dissertation Award Talk: Jiaoyang Li + Negotiating Agents Competition | Technical Sessions 6
15:45-16:30 | Coffee Break | Coffee Break | Coffee Break + Poster Session 2 + Demo 2 | Coffee Break + Poster Session 4 + Demo 4 + Negotiating Agents Competition | Coffee Break + Poster Session 6
16:30-16:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Keynote: Yejin Choi | Award Session | Community Meeting + Closing Session
16:45-17:30 | | | | Keynote: Iain Couzin |
18:30- | | | Opening Reception (South Halls Entrance S2) | Banquet (The Brewery) |

See also the workshop and tutorial pages.
You can find the Doctoral Consortium (DC) programme details here.


Detailed Schedule

Technical Sessions Overview

Room | TS 1: Day 1 (Wed) 10:45 - 12:30 | TS 2: Day 1 (Wed) 14:00 - 15:45 | TS 3: Day 2 (Thu) 10:45 - 12:30 | TS 4: Day 2 (Thu) 14:00 - 15:45 | TS 5: Day 3 (Fri) 10:45 - 12:30 | TS 6: Day 3 (Fri) 14:00 - 15:45
Platinum Suite 1 | Multiagent Reinforcement Learning I | Multiagent Reinforcement Learning II | Reinforcement Learning | Reinforcement and Imitation Learning | Multiagent Reinforcement Learning III | Deep Learning
Platinum Suite 2 | Planning | Planning + Task/Resource Allocation | Multiagent Path Finding | Multi-Armed Bandits + Monte Carlo Tree Search | Graph Neural Networks + Transformers | Multi-objective Planning and Learning
Platinum Suite 3 | Fair Allocations | Fair Allocations + Public Goods Games | Matching | Auctions + Voting | Voting I | Voting II
Platinum Suite 4 | Equilibria and Complexities of Games | Behavioral and Algorithmic Game Theory | Learning in Games | Best Dissertation Talk | Blue Sky | Mechanism Design
Platinum Suite 5-7 | Human-Agent Teams | Humans and AI Agents | Learning with Humans and Robots | Robotics | Adversarial Learning + Social Networks + Causal Graphs | Social Networks
South Gallery 7-9 | Knowledge Representation and Reasoning I | Knowledge Representation and Reasoning II | Engineering Multiagent Systems | Innovative Applications | Simulations | Norms

Technical Sessions


Blue Sky

Chair: Michael Winikoff

Day 3 (Fri), 10:45 - 12:30
Models of Anxiety for Agent Deliberation: The Benefits of Anxiety-Sensitive Agents (Arvid Horned and Loïs Vanhée)
Social Choice Around Decentralized Autonomous Organizations: On the Computational Social Choice of Digital Communities (Nimrod Talmon)
Value Inference in Sociotechnical Systems (Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar and Pradeep K. Murukannaiah)
Presenting Multiagent Challenges in Team Sports Analytics (David Radke and Alexi Orchard)
Communication Meaning: Foundations and Directions for Systems Research (Amit Chopra and Samuel Christie)
The Rule–Tool–User Nexus in Digital Collective Decisions (Zoi Terzopoulou, Marijn A. Keijzer, Gogulapati Sreedurga and Jobst Heitzig)
Epistemic Side Effects: An AI Safety Problem (Toryn Q. Klassen, Parand Alizadeh Alamdari and Sheila A. McIlraith)
Citizen-Centric Multiagent Systems (Sebastian Stein and Vahid Yazdanpanah)

Engineering Multiagent Systems

Chair: Louise Dennis

Day 2 (Thu), 10:45 - 12:30
Kiko: Programming Agents to Enact Interaction Models (Samuel Christie, Munindar P. Singh and Amit Chopra)
CraftEnv: A Flexible Collective Robotic Construction Environment for Multi-Agent Reinforcement Learning (Rui Zhao, Xu Liu, Yizheng Zhang, Minghao Li, Cheng Zhou, Shuai Li and Lei Han)
Feedback-Guided Intention Scheduling for BDI Agents (Michael Dann, John Thangarajah and Minyi Li)
A Behaviour-Driven Approach for Testing Requirements via User and System Stories in Agent Systems (Sebastian Rodriguez, John Thangarajah and Michael Winikoff)
ML-MAS: a Hybrid AI Framework for Self-Driving Vehicles (Hilal Al Shukairi and Rafael C. Cardoso)
Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems (Danai Vachtsevanou, Andrei Ciortea, Simon Mayer and Jérémy Lemée)
MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue Systems (Débora Cristina Engelmann, Alison R. Panisson, Renata Vieira, Jomi Fred Hübner, Viviana Mascardi and Rafael H. Bordini)
Mandrake: Multiagent Systems as a Basis for Programming Fault-Tolerant Decentralized Applications (Samuel Christie, Amit Chopra and Munindar P. Singh)

Multiagent Path Finding

Chair: Jiaoyang Li

Day 2 (Thu), 10:45 - 12:30
Anonymous Multi-Agent Path Finding with Individual Deadlines (Gilad Fine, Dor Atzmon and Noa Agmon)
Learn to solve the min-max multiple traveling salesmen problem with reinforcement learning (Junyoung Park, Changhyun Kwon and Jinkyoo Park)
Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation (Hikaru Asano, Ryo Yonetani, Mai Nishimura and Tadashi Kozuno)
Improved Complexity Results and an Efficient Solution for Connected Multi-Agent Path Finding (Isseïnie Calviac, Ocan Sankur and Francois Schwarzentruber)
Optimally Solving the Multiple Watchman Route Problem with Heuristic Search (Yaakov Livne, Dor Atzmon, Shawn Skyler, Eli Boyarski, Amir Shapiro and Ariel Felner)
Distributed Planning with Asynchronous Execution with Local Navigation for Multi-agent Pickup and Delivery Problem (Yuki Miyashita, Tomoki Yamauchi and Toshiharu Sugawara)
Energy-aware UAV Path Planning with Adaptive Speed (Jonathan Diller and Qi Han)
Coordination of Multiple Robots along Given Paths with Bounded Junction Complexity (Mikkel Abrahamsen, Tzvika Geft, Dan Halperin and Barak Ugav)

Innovative Applications

Chair: Shih-Fen Cheng

Day 2 (Thu), 14:00 - 15:45
Efficient Interactive Recommendation with Huffman Tree-based Policy Learning (Longxiang Shi, Zilin Zhang, Shoujin Wang, Binbin Zhou, Minghui Wu, Cheng Yang and Shijian Li)
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane (Shivendra Agrawal, Suresh Nayak, Ashutosh Naik and Bradley Hayes)
Preference-Aware Delivery Planning for Last-Mile Logistics (Qian Shao and Shih-Fen Cheng)
Multi-Agent Reinforcement Learning with Safety Layer for Active Voltage Control (Yufeng Shi, Mingxiao Feng, Minrui Wang, Wengang Zhou and Houqiang Li)
Multi-agent Signalless Intersection Management with Dynamic Platoon Formation (Phuriwat Worrawichaipat, Enrico Gerding, Ioannis Kaparias and Sarvapali Ramchurn)
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control (Harsh Goel, Yifeng Zhang, Mehul Damani and Guillaume Sartoretti)
Model-Based Reinforcement Learning for Auto-Bidding in Display Advertising (Shuang Chen, Qisen Xu, Liang Zhang, Yongbo Jin, Wenhao Li and Linjian Mo)

Human-Agent Teams

Chair: Birgit Lugrin

Day 1 (Wed), 10:45 - 12:30
Establishing Shared Query Understanding in an Open Multi-Agent System (Nikolaos Kondylidis, Ilaria Tiddi and Annette ten Teije)
Communicating Agent Intentions for Human-Agent Decision Making under Uncertainty (Julie Porteous, Alan Lindsay and Fred Charles)
Trusting artificial agents: communication trumps performance (Marin Le Guillou, Laurent Prévot and Bruno Berberian)
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their Behavior (Kate Candon, Jesse Chen, Yoony Kim, Zoe Hsu, Nathan Tsoi and Marynel Vázquez)
On Subset Selection of Multiple Humans To Improve Human-AI Team Accuracy (Sagalpreet Singh, Shweta Jain and Shashi Shekhar Jha)
Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations (Lujain Ibrahim, Mohammad M. Ghassemi and Tuka Alhanai)
Automated Task-Time Interventions to Improve Teamwork using Imitation Learning (Sangwon Seo, Bing Han and Vaibhav V Unhelkar)
Should my agent lie for me? A study on humans' attitudes towards deceptive AI (Stefan Sarkadi, Peidong Mei and Edmond Awad)

Knowledge Representation and Reasoning I

Chair: Alessio Lomuscio

Day 1 (Wed), 10:45 - 12:30
A Logic of Only-Believing over Arbitrary Probability Distributions (Qihui Feng, Daxin Liu, Vaishak Belle and Gerhard Lakemeyer)
A Deontic Logic of Knowingly Complying (Carlos Areces, Valentin Cassano, Pablo Castro, Raul Fervari and Andrés R. Saravia)
Learning Logic Specifications for Soft Policy Guidance in POMCP (Giulio Mazzi, Daniele Meli, Alberto Castellini and Alessandro Farinelli)
Strategic (Timed) Computation Tree Logic (Jaime Arias, Wojciech Jamroga, Wojciech Penczek, Laure Petrucci and Teofil Sidoruk)
Attention! Dynamic Epistemic Logic Models of (In)attentive Agents (Gaia Belardinelli and Thomas Bolander)
(Arbitrary) Partial Communication (Rustam Galimullin and Fernando R. Velazquez-Quesada)
Epistemic Abstract Argumentation Framework: Formal Foundations, Computation and Complexity (Gianvincenzo Alfano, Sergio Greco, Francesco Parisi and Irina Trubitsyna)
Actions, Continuous Distributions and Meta-Beliefs (Vaishak Belle)

Knowledge Representation and Reasoning II

Chair: Brian Logan

Day 1 (Wed), 14:00 - 15:45
Provable Optimization of Quantal Response Leader-Follower Games with Exponentially Large Action Spaces (Jinzhao Li, Daniel Fink, Christopher Wood, Carla P. Gomes and Yexiang Xue)
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory (Masoud Tabatabaei and Wojciech Jamroga)
Synthesis of Resource-Aware Controllers Against Rational Agents (Rodica Condurache, Catalin Dima, Youssouf Oualhadj and Nicolas Troquard)
Computationally Feasible Strategies (Catalin Dima and Wojtek Jamroga)
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information (Angelo Ferrando and Vadim Malvone)

Mechanism Design

Chair: Minming Li

Day 3 (Fri), 14:00 - 15:45
Non-Obvious Manipulability for Single-Parameter Agents and Bilateral Trade (Thomas Archbold, Bart de Keijzer and Carmine Ventre)
Mechanism Design for Improving Accessibility to Public Facilities (Hau Chan and Chenhao Wang)
Explicit Payments for Obviously Strategyproof Mechanisms (Diodato Ferraioli and Carmine Ventre)
Bilevel Entropy based Mechanism Design for Balancing Meta in Video Games (Sumedh Pendurkar, Chris Chow, Luo Jie and Guni Sharon)
IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas (Bengisu Guresti, Abdullah Vanlioglu and Nazim Kemal Ure)
Settling the Distortion of Distributed Facility Location (Aris Filos-Ratsikas, Panagiotis Kanellopoulos, Alexandros Voudouris and Rongsen Zhang)
Cost Sharing under Private Valuation and Connection Control (Tianyi Zhang, Junyu Zhang, Sizhe Gu and Dengji Zhao)
Facility Location Games with Thresholds (Houyu Zhou, Guochuan Zhang, Lili Mei and Minming Li)

Planning

Chair: Filippo Bistaffa

Day 1 (Wed), 10:45 - 12:30
Ask and You Shall be Served: Representing and Solving Multi-agent Optimization Problems with Service Requesters and Providers (Maya Lavie, Tehila Caspi, Omer Lev and Roie Zivan)
Fairness Driven Efficient Algorithms for Sequenced Group Trip Planning Query Problem (Napendra Solanki, Shweta Jain, Suman Banerjee and Yayathi Pavan Kumar S)
Domain-Independent Deceptive Planning (Adrian Price, Ramon Fraga Pereira, Peta Masters and Mor Vered)
CAMS: Collision Avoiding Max-Sum for Mobile Sensor Teams (Arseni Pertzovskiy, Roie Zivan and Noa Agmon)
Risk-Constrained Planning for Multi-Agent Systems with Shared Resources (Anna Gautier, Marc Rigter, Bruno Lacerda, Nick Hawes and Michael Wooldridge)
Quantitative Planning with Action Deception in Concurrent Stochastic Games (Chongyang Shi, Shuo Han and Jie Fu)
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs (Stelios Triantafyllou and Goran Radanovic)
On-line Estimators for Ad-hoc Task Execution: Learning types and parameters of teammates for effective teamwork (Matheus Aparecido Do Carmo Alves, Elnaz Shafipour Yourdshahi, Amokh Varma, Leandro Soriano Marcolino, Jó Ueyama and Plamen Angelov)

Reinforcement Learning

Chair: Diederik M. Roijers

Day 2 (Thu), 10:45 - 12:30
Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning (Durgesh Kalwar, Omkar Shelke, Somjit Nath, Hardik Meisheri and Harshad Khadilkar)
FedFormer: Contextual Federation with Attention in Reinforcement Learning (Liam Hebert, Lukasz Golab, Pascal Poupart and Robin Cohen)
Diverse Policy Optimization for Structured Action Space (Wenhao Li, Baoxiang Wang, Shanchao Yang and Hongyuan Zha)
Enhancing Reinforcement Learning Agents with Local Guides (Paul Daoudi, Bogdan Robu, Christophe Prieur, Ludovic Dos Santos and Merwan Barlier)
Scalar reward is not enough (Peter Vamplew, Ben Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik Roijers, Conor Hayes, Fredrik Heintz, Patrick Mannion, Pieter Libin, Richard Dazeley and Cameron Foale)
Targeted Search Control in AlphaZero for Effective Policy Improvement (Alexandre Trudeau and Michael Bowling)
Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics Models (Tom Haider, Karsten Roscher, Felippe Schmoeller da Roza and Stephan Günnemann)
Knowledge Compilation for Constrained Combinatorial Action Spaces in Reinforcement Learning (Jiajing Ling, Moritz Lukas Schuler, Akshat Kumar and Pradeep Varakantham)

Robotics

Chair: Francesco Amigoni

Day 2 (Thu), 14:00 - 15:45
Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation (Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding and Xiaowei Huang)
Byzantine Resilience at Swarm Scale: A Decentralized Blocklist from Inter-robot Accusations (Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao Li)
Stigmergy-based, Dual-Layer Coverage of Unknown Regions (Ori Rappel, Michael Amir and Alfred Bruckstein)
Mitigating Imminent Collision for Multi-robot Navigation: A TTC-force Reward Shaping Approach (Jinlin Chen, Jiannong Cao, Zhiqin Cheng and Wei Li)
Safe Deep Reinforcement Learning by Verifying Task-Level Properties (Enrico Marchesini, Luca Marzari, Alessandro Farinelli and Christopher Amato)
Decentralized Safe Navigation for Multi-agent Systems via Risk-aware Weighted Buffered Voronoi Cells (Yiwei Lyu, John Dolan and Wenhao Luo)
Heterogeneous Multi-Robot Reinforcement Learning (Matteo Bettini, Ajay Shankar and Amanda Prorok)
Gathering of Anonymous Agents (John Augustine, Arnhav Datar and Nischith Shadagopan M N)

Matching

Chair: Swaprava Nath

Day 2 (Thu), 10:45 - 12:30
Best of Both Worlds Fairness under Entitlements (Haris Aziz, Aditya Ganguly and Evi Micha)
Probabilistic Rationing with Categorized Priorities: Processing Reserves Fairly and Efficiently (Haris Aziz)
Semi-Popular Matchings and Copeland Winners (Telikepalli Kavitha and Rohit Vaish)
Host Community Respecting Refugee Housing (Dušan Knop and Šimon Schierreich)
Online matching with delays and stochastic arrival times (Mathieu Mari, Michał Pawłowski, Runtian Ren and Piotr Sankowski)
Adapting Stable Matchings to Forced and Forbidden Pairs (Niclas Boehmer and Klaus Heeger)
Stable Marriage in Euclidean Space (Yinghui Wen, Zhongyi Zhang and Jiong Guo)
A Map of Diverse Synthetic Stable Roommates Instances (Niclas Boehmer, Klaus Heeger and Stanisław Szufa)

Social Networks

Chair: Tomasz Michalak

Day 3 (Fri), 14:00 - 15:45
Random Majority Opinion Diffusion: Stabilization Time, Absorbing States, and Influential Nodes (Ahad N. Zehmakan)
Axiomatic Analysis of Medial Centrality Measures (Wiktoria Kosny and Oskar Skibski)
Online Influence Maximization under Decreasing Cascade Model (Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao and Shuai Li)
Node Conversion Optimization in Multi-hop Influence Networks (Jie Zhang, Yuezhou Lv and Zihe Wang)
Decentralized core-periphery structure in social networks accelerates cultural innovation in agent-based modeling (Jesse Milzman and Cody Moser)
Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source (Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith and George Skretas)
Enabling Imitation-Based Cooperation in Dynamic Social Networks (Jacques Bara, Paolo Turrini and Giulia Andrighetto)
The Grapevine Web: Analysing the Spread of False Information in Social Networks with Corrupted Sources (Jacques Bara, Charlie Pilgrim, Paolo Turrini and Stanislav Zhydkov)

Simulations

Chair: Samarth Swarup

Day 3 (Fri), 10:45 - 12:30
Differentiable Agent-based Epidemiology (Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Arnau Quera-Bofarull, Balaji Krishnamurthy, B. Aditya Prakash and Ramesh Raskar)
Social Distancing via Social Scheduling (Deepesh Kumar Lall, Garima Shakya and Swaprava Nath)
Don't Simulate Twice: one-shot sensitivity analyses via automatic differentiation (Arnau Quera-Bofarull, Ayush Chopra, Joseph Aylett-Bullock, Carolina Cuesta-Lazaro, Ani Calinescu, Ramesh Raskar and Mike Wooldridge)
Markov Aggregation for Speeding Up Agent-Based Movement Simulations (Bernhard Geiger, Alireza Jahani, Hussain Hussain and Derek Groen)
Agent-Based Modeling of Human Decision-makers Under Uncertain Information During Supply Chain Shortages (Nutchanon Yongsatianchot, Noah Chicoine, Jacqueline Griffin, Ozlem Ergun and Stacy Marsella)
Simulating panic amplification in crowds via a density-emotion interaction (Erik van Haeringen and Charlotte Gerritsen)
Modelling Agent Decision Making in Agent-based Simulation - Analysis Using an Economic Technology Uptake Model (Franziska Klügl and Hildegunn Kyvik Nordås)
Emotion contagion in agent-based simulations of crowds: a systematic review (Erik van Haeringen, Charlotte Gerritsen and Koen Hindriks)

Multiagent Reinforcement Learning I

Chair: Frans Oliehoek

Day 1 (Wed), 10:45 - 12:30
Trust Region Bounds for Decentralized PPO Under Non-stationarity (Mingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann and Shimon Whiteson)
Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement (Jiachen Yang, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden Petersen, Daniel Faissol and Robert Anderson)
Adaptive Learning Rates for Multi-Agent Reinforcement Learning (Jiechuan Jiang and Zongqing Lu)
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning (Shanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang and Yong Liu)
A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement Learning (Woojun Kim, Whiyoung Jung, Myungsik Cho and Youngchul Sung)
Mediated Multi-Agent Reinforcement Learning (Dmitry Ivanov, Ilya Zisman and Kirill Chernyshev)
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement Learning (Yucong Zhang and Chao Yu)
TiZero: Mastering Multi-Agent Football with Curriculum Learning and Self-Play (Fanqi Lin, Shiyu Huang, Tim Pearce, Wenze Chen and Wei-Wei Tu)

Multiagent Reinforcement Learning II

Chair: Maria Gini

Day 1 (Wed), 14:00 - 15:45
AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning (Xuefeng Wang, Xinran Li, Jiawei Shao and Jun Zhang)
Learning Structured Communication for Multi-Agent Reinforcement Learning (Junjie Sheng, Xiangfeng Wang, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung-Hui Chang and Hongyuan Zha)
Model-based Sparse Communication in Multi-agent Reinforcement Learning (Shuai Han, Mehdi Dastani and Shihan Wang)
Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL (Phillip J.K. Christoffersen, Andreas Haupt and Dylan Hadfield-Menell)
The Benefits of Power Regularization in Cooperative Reinforcement Learning (Michelle Li and Michael Dennis)
MAC-PO: Multi-Agent Experience Replay via Collective Priority Optimization (Yongsheng Mei, Hanhan Zhou, Tian Lan, Guru Venkataramani and Peng Wei)
Self-Motivated Multi-Agent Exploration (Shaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu and De-Chuan Zhan)
Sequential Cooperative Multi-Agent Reinforcement Learning (Yifan Zang, Jinmin He, Kai Li, Haobo Fu, Qiang Fu and Junliang Xing)

Multiagent Reinforcement Learning III

Chair: Chris Amato

Day 3 (Fri), 10:45 - 12:30
Learning Inter-Agent Synergies in Asymmetric Multiagent Systems (Gaurav Dixit and Kagan Tumer)
Asymptotic Convergence and Performance of Multi-Agent Q-learning Dynamics (Aamal Hussain, Francesco Belardinelli and Georgios Piliouras)
Model-based Dynamic Shielding for Safe and Efficient Multi-agent Reinforcement Learning (Wenli Xiao, Yiwei Lyu and John Dolan)
Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning (Jihwan Oh, Joonkee Kim, Minchan Jeong and Se-Young Yun)
Counter-Example Guided Policy Refinement in Multi-agent Reinforcement Learning (Briti Gangopadhyay, Pallab Dasgupta and Soumyajit Dey)
Prioritized Tasks Mining for Multi-Task Cooperative Multi-Agent Reinforcement Learning (Yang Yu, Qiyue Yin, Junge Zhang and Kaiqi Huang)
M3: Modularization for Multi-task and Multi-agent Offline Pre-training (Linghui Meng, Jingqing Ruan, Xuantang Xiong, Xiyun Li, Xi Zhang, Dengpeng Xing and Bo Xu)

Equilibria and Complexities of Games

Chair: The Anh Han

Day 1 (Wed), 10:45 - 12:30
Equilibria and Convergence in Fire Sale Games (Nils Bertschinger, Martin Hoefer, Simon Krogmann, Pascal Lenzner, Steffen Schuldenzucker and Lisa Wilhelmi)
Bridging the Gap Between Single and Multi Objective Games (Willem Röpke, Carla Groenland, Roxana Radulescu, Ann Nowe and Diederik M. Roijers)
Is Nash Equilibrium Approximator Learnable? (Zhijian Duan, Wenhan Huang, Dinghuai Zhang, Yali Du, Jun Wang, Yaodong Yang and Xiaotie Deng)
Learning the Stackelberg Equilibrium in a Newsvendor Game (Nicolò Cesa-Bianchi, Tommaso Cesari, Takayuki Osogami, Marco Scarsini and Segev Wasserkrug)
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity (Jiehua Chen, Gergely Csáji, Sanjukta Roy and Sofia Simola)
Debt Transfers in Financial Networks: Complexity and Equilibria (Panagiotis Kanellopoulos, Maria Kyropoulou and Hao Zhou)
A Study of Nash Equilibria in Multi-Objective Normal-Form Games (Willem Röpke, Diederik M. Roijers, Ann Nowe and Roxana Radulescu)
Learning Properties in Simulation-Based Games (Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueira and Amy Greenwald)

Humans and AI Agents

Chair: Reyhan Aydogan

Day 1 (Wed), 14:00 - 15:45
PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination (Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang and Yali Du)
Semi-Autonomous Systems with Contextual Competence Awareness (Saaduddin Mahmud, Connor Basich and Shlomo Zilberstein)
Joint Engagement Classification using Video Augmentation Techniques for Multi-person HRI in the wild (Yubin Kim, Huili Chen, Sharifa Alghowinem, Cynthia Breazeal and Hae Won Park)
Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning (Haochen Wu, Pedro Sequeira and David Pynadath)
Persuading to Prepare for Quitting Smoking with a Virtual Coach: Using States and User Characteristics to Predict Behavior (Nele Albers, Mark A. Neerincx and Willem-Paul Brinkman)
Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation (Yushan Qian, Bo Wang, Shangzhao Ma, Wu Bin, Shuo Zhang, Dongming Zhao, Kun Huang and Yuexian Hou)
Generating Stylistic and Personalized Dialogues for Virtual Agents in Narratives (Weilai Xu, Fred Charles and Charlie Hargood)
Reducing Racial Bias by Interacting with Virtual Agents: An Intervention in Virtual Reality (David Obremski, Ohenewa Bediako Akuffo, Leonie Lücke, Miriam Semineth, Sarah Tomiczek, Hanna-Finja Weichert and Birgit Lugrin)

Planning + Task/Resource Allocation

Chair: Roie Zivan

Day 1 (Wed), 14:00 - 15:45
Online Coalitional Skill Formation (Saar Cohen and Noa Agmon)
Multi-Agent Consensus-based Bundle Allocation for Multi-mode Composite Tasks (Gauthier Picard)
Allocation Problem in Remote Teleoperation: Online Matching with Offline Reusable Resources and Delayed Assignments (Osnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov and Sarit Kraus)
Optimal Coalition Structures for Probabilistically Monotone Partition Function Games (Shaheen Fatima and Michael Wooldridge)
A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task Density (Grace Cai, Noble Harasha and Nancy Lynch)
Abstracting Noisy Robot Programs (Till Hofmann and Vaishak Belle)
Structural Credit Assignment-Guided Coordinated MCTS: An Efficient and Scalable Method for Online Multiagent Planning (Qian Che, Wanyuan Wang, Fengchen Wang, Tianchi Qiao, Xiang Liu, Jiuchuan Jiang, Bo An and Yichuan Jiang)
Strategic Planning for Flexible Agent Availability in Large Taxi Fleets (Rajiv Ranjan Kumar, Pradeep Varakantham and Shih-Fen Cheng)

Learning in Games

Chair: Makoto Yokoo

Day 2 (Thu), 10:45 - 12:30
Empirical Game-Theoretic Analysis for Mean Field Games (Yongzhao Wang and Michael Wellman)
Differentiable Arbitrating in Zero-sum Markov Games (Jing Wang, Meichen Song, Feng Gao, Boyi Liu, Zhaoran Wang and Yi Wu)
Learning Parameterized Families of Games (Madelyn Gatchel and Bryce Wiedenbeck)
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games (Zelai Xu, Yancheng Liang, Chao Yu, Yu Wang and Yi Wu)
Multiplicative Weights Updates for Extensive Form Games (Chirag Chhablani, Michael Sullins and Ian Kash)
A Hybrid Framework of Reinforcement Learning and Physics-Informed Deep Learning for Spatiotemporal Mean Field Games (Xu Chen, Shuo Liu and Xuan Di)
Adversarial Inverse Reinforcement Learning for Mean Field Games (Yang Chen, Libo Zhang, Jiamou Liu and Michael Witbrock)
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories (Jingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire Tomlin and David Fridovich-Keil)

Fair Allocations

Chair: Ulle Endriss

Day 1 (Wed), 10:45 - 12:30
Fair Allocation of Two Types of Chores (Haris Aziz, Jeremy Lindsay, Angus Ritossa and Mashbat Suzuki)
Fairly Dividing Mixtures of Goods and Chores under Lexicographic Preferences (Hadi Hosseini, Sujoy Sikdar, Rohit Vaish and Lirong Xia)
Graphical House Allocation (Hadi Hosseini, Justin Payan, Rik Sengupta, Rohit Vaish and Vignesh Viswanathan)
Approximation Algorithm for Computing Budget-Feasible EF1 Allocations (Jiarui Gan, Bo Li and Xiaowei Wu)
Yankee Swap: a Fast and Simple Fair Allocation Mechanism for Matroid Rank Valuations (Vignesh Viswanathan and Yair Zick)
Fairness in the Assignment Problem with Uncertain Priorities (Zeyu Shen, Zhiyi Wang, Xingyu Zhu, Brandon Fain and Kamesh Munagala)
Possible Fairness for Allocating Indivisible Resources (Haris Aziz, Bo Li, Shiji Xing and Yu Zhou)
Efficient Nearly-Fair Division with Capacity Constraints (Hila Shoshan, Noam Hazon and Erel Segal-Halevi)

Fair Allocations + Public Goods Games

Chair: Hadi Hosseini

Day 1 (Wed), 14:00 - 15:45
Equitability and Welfare Maximization for Allocating Indivisible Items (Ankang Sun, Bo Chen and Xuan Vinh Doan)
Best of Both Worlds: Agents with Entitlements (Martin Hoefer, Marco Schmalhofer and Giovanna Varricchio)
Mitigating Skewed Bidding for Conference Paper Assignment (Inbal Rozenzweig, Reshef Meir, Nicholas Mattei and Ofra Amir)
Price of Anarchy in a Double-Sided Critical Distribution System (David Sychrovský, Jakub Černý, Sylvain Lichau and Martin Loebl)
Improved EFX approximation guarantees under ordinal-based assumptions (Evangelos Markakis and Christodoulos Santorinaios)
Assigning Agents to Increase Network-Based Neighborhood Diversity (Zirou Qiu, Andrew Yuan, Chen Chen, Madhav Marathe, S.S. Ravi, Daniel Rosenkrantz, Richard Stearns and Anil Vullikanti)
Altruism, Collectivism and Egalitarianism: On a Variety of Prosocial Behaviors in Binary Networked Public Goods Games (Jichen Li, Xiaotie Deng, Yukun Cheng, Yuqi Pan, Xuanzhi Xia, Zongjun Yang and Jan Xie)
The Role of Space, Density and Migration in Social Dilemmas (Jacques Bara, Fernando P. Santos and Paolo Turrini)

Multi-Armed Bandits + Monte Carlo Tree Search

Chair: Tom Cesari

Day 2 (Thu), 14:00 - 15:45
Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless Bandits (Abheek Ghosh, Dheeraj Nagaraj, Manish Jain and Milind Tambe)
Avoiding Starvation of Arms in Restless Multi-Armed Bandits (Dexun Li and Pradeep Varakantham)
Restless Multi-Armed Bandits for Maternal and Child Health: Results from Decision-Focused Learning (Shresth Verma, Aditya Mate, Kai Wang, Neha Madhiwalla, Aparna Hegde, Aparna Taneja and Milind Tambe)
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks (Arpita Biswas, Jackson Killian, Paula Rodriguez Diaz, Susobhan Ghosh and Milind Tambe)
On Regret-optimal Cooperative Nonstochastic Multi-armed Bandits (Jialin Yi and Milan Vojnovic)
Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics (Siddharth Chandak, Ilai Bistritz and Nicholas Bambos)
ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search (Dixant Mittal, Siddharth Aravindan and Wee Sun Lee)
Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal Methods (Debraj Chakraborty, Damien Busatto-Gaston, Jean-François Raskin and Guillermo Perez)

Reinforcement and Imitation Learning

Chair: Matt Taylor

Day 2 (Thu), 14:00 - 15:45
Decentralized model-free reinforcement learning in stochastic games with average-reward objective (Romain Cravic, Nicolas Gast and Bruno Gaujal)
Less Is More: Refining Datasets for Offline Reinforcement Learning with Reward Machines (Haoyuan Sun and Feng Wu)
A Self-Organizing Neuro-Fuzzy Q-Network: Systematic Design with Offline Hybrid Learning (John Hostetter, Mark Abdelshiheed, Tiffany Barnes and Min Chi)
Learning to Coordinate from Offline Datasets with Uncoordinated Behavior Policies (Jinming Ma and Feng Wu)
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal-Conditioning (Caroline Wang, Garrett Warnell and Peter Stone)
How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement (Xuhui Liu, Feng Xu, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Ruifeng Chen, Zongzhang Zhang and Yang Yu)
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games (The Viet Bui, Tien Mai and Thanh Nguyen)
Curriculum Offline Reinforcement Learning (Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao and Jiang Bian)

Norms

Chair: Pradeep Murukannaiah

Day 3 (Fri), 14:00 - 15:45
The Importance of Credo in Multiagent Learning (David Radke, Kate Larson and Tim Brecht)
Contextual Integrity for Argumentation-based Privacy Reasoning (Gideon Ogunniye and Nadin Kokciyan)
Predicting privacy preferences for smart devices as norms (Marc Serramia, William Seymour, Natalia Criado and Michael Luck)
Agent-directed runtime norm synthesis (Andreasa Morris Martin, Marina De Vos, Julian Padget and Oliver Ray)
Emergence of Norms in Interactions with Complex Rewards (Dhaminda Abeywickrama, Nathan Griffiths, Zhou Xu and Alex Mouzakitis)

Graph Neural Networks + Transformers

Chair: Ann Nowe

Day 3 (Fri), 10:45 - 12:30
User Device Interaction Prediction via Relational Gated Graph Attention Network and Intent-aware Encoder (Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Wenxin Tang, Runjie Zhou and Yong Jiang)
Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations (Gregory Everett, Ryan Beal, Tim Matthews, Joseph Early, Timothy Norman and Sarvapali Ramchurn)
Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation (Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang and Yu Wang)
Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning (Ryan Kortvelesy, Steven Morad and Amanda Prorok)
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem (Peiwang Tang and Xianchao Zhang)
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems (Matteo Gallici, Mario Martin and Ivan Masmitja)
Intelligent Onboard Routing in Stochastic Dynamic Environments using Transformers (Rohit Chowdhury, Raswanth Murugan and Deepak Subramani)

Voting I

Chair: Alan Tsang

Day 3 (Fri),
10:45 - 12:30
Characterizations of Sequential Valuation RulesChris Dong and Patrick Lederer
Collecting, Classifying, Analyzing, and Using Real-World Ranking DataNiclas Boehmer and Nathan Schaar
Margin of Victory for Weighted Tournament SolutionsMichelle Döring and Jannik Peters
Bribery Can Get Harder in Structured Multiwinner Approval ElectionBartosz Kusek, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk and Dušan Knop
Strategyproof Social Decision Schemes on Super Condorcet DomainsFelix Brandt, Patrick Lederer and Sascha Tausch
Separating and Collapsing Electoral Control TypesBenjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David Narváez, Conor Taliancich and Henry B. Welles
The Distortion of Approval Voting with RunoffSoroush Ebadian, Mohamad Latifian and Nisarg Shah

Voting II

Chair: Reshef Meir

Day 3 (Fri),
14:00 - 15:45
On the Complexity of the Two-Stage Majority RuleYongjie Yang
Fairness in Participatory Budgeting via Equality of ResourcesJan Maly, Simon Rey, Ulle Endriss and Martin Lackner
Free-Riding in Multi-Issue DecisionsMartin Lackner, Jan Maly and Oliviero Nardi
k-prize Weighted Voting GameWei-Chen Lee, David Hyland, Alessandro Abate, Edith Elkind, Jiarui Gan, Julian Gutierrez, Paul Harrenstein and Michael Wooldridge
Computing the Best Policy That Survives a VoteAndrei Constantinescu and Roger Wattenhofer
Voting by AxiomsMarie Christin Schmidtlein and Ulle Endriss
A Hotelling-Downs game for strategic candidacy with binary issuesJavier Maass, Vincent Mousseau and Anaëlle Wilczynski
Voting with Limited Energy: A Study of Plurality and BordaZoi Terzopoulou

Multi-objective Planning and Learning

Chair: Gauthier Picard

Day 3 (Fri),
14:00 - 15:45
Revealed multi-objective utility aggregation in human drivingAtrisha Sarkar, Kate Larson and Krzysztof Czarnecki
A Brief Guide to Multi-Objective Reinforcement Learning and PlanningConor F Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Kallstrom, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowé, Gabriel Ramos, Marcello Restelli, Peter Vamplew and Diederik M. Roijers
Welfare and Fairness in Multi-objective Reinforcement LearningZiming Fan, Nianli Peng, Muhang Tian and Brandon Fain
Preference-Based Multi-Objective Multi-Agent Path FindingFlorence Ho and Shinji Nakadai
Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement PrioritizationLucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé and Bruno C. da Silva
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the UtilityZhaori Guo, Timothy Norman and Enrico Gerding

Auctions + Voting

Chair: Noam Hazon

Day 2 (Thu),
14:00 - 15:45
Price of Anarchy for First Price Auction with Risk-Averse BiddersZhiqiang Zhuang, Kewen Wang and Zhe Wang
A Redistribution Framework for Diffusion AuctionsSizhe Gu, Yao Zhang, Yida Zhao and Dengji Zhao
Sybil-Proof Diffusion Auction in Social NetworksHongyin Chen, Xiaotie Deng, Ying Wang, Yue Wu and Dengji Zhao
Representing and Reasoning about AuctionsMunyque Mittelmann, Sylvain Bouveret and Laurent Perrussel
Revisiting the Distortion of Distributed VotingAris Filos-Ratsikas and Alexandros Voudouris
Bounded Approval Ballots: Balancing Expressiveness and Simplicity for Multiwinner ElectionsDorothea Baumeister, Linus Boes, Christian Laußmann and Simon Rey
On the Distortion of Single Winner Elections with Aligned CandidatesDimitris Fotakis and Laurent Gourves
SAT-based Judgment AggregationAri Conati, Andreas Niskanen and Matti Järvisalo

Learning with Humans and Robots

Chair: Jonathan Gratch

Day 2 (Thu),
10:45 - 12:30
GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual ExplanationsTobias Huber, Maximilian Demmler, Silvan Mertes, Matthew Olson and Elisabeth André
Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative ExplorationChao Yu, Xinyi Yang, Jiaxuan Gao, Jiayu Chen, Yunfei Li, Jijia Liu, Yunfei Xiang, Ruixin Huang, Huazhong Yang, Yi Wu and Yu Wang
Dec-AIRL: Decentralized Adversarial IRL for Human-Robot TeamingPrasanth Sengadu Suresh, Yikang Gui and Prashant Doshi
Structural Attention-based Recurrent Variational Autoencoder for Highway Vehicle Anomaly DetectionNeeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson and Katherine Driggs-Campbell
Controlled Diversity with Preference: Towards Learning a Diverse Set of Desired SkillsMaxence Hussonnois, Thommen Karimpanal George and Santu Rana
Learning from Multiple Independent Advisors in Multi-agent Reinforcement LearningSriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson and Mark Crowley

Behavioral and Algorithmic Game Theory

Chair: Zoi Terzopoulou

Day 1 (Wed),
14:00 - 15:45
Non-strategic Econometrics (for Initial Play)Daniel Chui, Jason Hartline and James Wright
Efficient Stackelberg Strategies for Finitely Repeated GamesNatalie Collina, Eshwar Ram Arunachaleswaran and Michael Kearns
Learning Density-Based Correlated Equilibria for Markov GamesLibo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock and Jiamou Liu
IRS: An Incentive-compatible Reward Scheme for AlgorandMaizi Liao, Wojciech Golab and Seyed Majid Zahedi
Data Structures for Deviation PayoffsBryce Wiedenbeck and Erik Brinkman

Deep Learning

Chair: Joydeep Biswas

Day 3 (Fri),
14:00 - 15:45
Worst-Case Adaptive Submodular CoverJing Yuan and Shaojie Tang
Minimax Strikes BackQuentin Cohen-Solal and Tristan Cazenave
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement LearningBram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew Taylor, Mykola Pechenizkiy and Decebal Constantin Mocanu
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement LearningWoojun Kim and Youngchul Sung
Learning Rewards to Optimize Global Performance Metrics in Deep Reinforcement LearningJunqi Qian, Paul Weng and Chenmien Tan
A Deep Reinforcement Learning Approach for Online Parcel AssignmentHao Zeng, Qiong Wu, Kunpeng Han, Junying He and Haoyuan Hu
CoRaL: Continual Representation Learning for Overcoming Catastrophic ForgettingMohammad Yasar and Tariq Iqbal
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and HealthcareGe Gao, Song Ju, Markel Sanz Ausin and Min Chi

Adversarial Learning + Social Networks + Causal Graphs

Chair: Paolo Turrini

Day 3 (Fri),
10:45 - 12:30
Adversarial Link Prediction in Spatial NetworksMichał Tomasz Godziszewski, Yevgeniy Vorobeychik and Tomasz Michalak
Distributed Mechanism Design in Social NetworksHaoxin Liu, Yao Zhang and Dengji Zhao
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time AttacksMohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla and Goran Radanovic
How to Turn an MAS into a Graphical Causal ModelH. Van Dyke Parunak
FedMM: A Communication Efficient Solver for Federated Adversarial Domain AdaptationYan Shen, Jian Du, Han Zhao, Zhanghexuan Ji, Chunwei Ma and Mingchen Gao

Best Dissertation Talk

Chair: Paolo Turrini

Day 2 (Thu),
14:00 - 15:45
Efficient and Effective Techniques for Large-Scale Multi-Agent Path FindingJiaoyang Li

Poster Sessions

Time | Title | Authors | Theme | Poster Board ID

Day 1

Establishing Shared Query Understanding in an Open Multi-Agent SystemNikolaos Kondylidis, Ilaria Tiddi and Annette ten TeijeHuman-Agent Teams121
Communicating Agent Intentions for Human-Agent Decision Making under UncertaintyJulie Porteous, Alan Lindsay and Fred CharlesHuman-Agent Teams122
Trusting artificial agents: communication trumps performanceMarin Le Guillou, Laurent Prévot and Bruno BerberianHuman-Agent Teams123
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their BehaviorKate Candon, Jesse Chen, Yoony Kim, Zoe Hsu, Nathan Tsoi and Marynel VázquezHuman-Agent Teams124
On Subset Selection of Multiple Humans To Improve Human-AI Team AccuracySagalpreet Singh, Shweta Jain and Shashi Shekhar JhaHuman-Agent Teams125
Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual ExplanationsLujain Ibrahim, Mohammad M. Ghassemi and Tuka AlhanaiHuman-Agent Teams126
Automated Task-Time Interventions to Improve Teamwork using Imitation LearningSangwon Seo, Bing Han and Vaibhav V UnhelkarHuman-Agent Teams127
Should my agent lie for me? A study on humans' attitudes towards deceptive AIStefan Sarkadi, Peidong Mei and Edmond AwadHuman-Agent Teams128
A Logic of Only-Believing over Arbitrary Probability DistributionsQihui Feng, Daxin Liu, Vaishak Belle and Gerhard LakemeyerKnowledge Representation and Reasoning I49
A Deontic Logic of Knowingly ComplyingCarlos Areces, Valentin Cassano, Pablo Castro, Raul Fervari and Andrés R. SaraviaKnowledge Representation and Reasoning I50
Learning Logic Specifications for Soft Policy Guidance in POMCPGiulio Mazzi, Daniele Meli, Alberto Castellini and Alessandro FarinelliKnowledge Representation and Reasoning I51
Strategic (Timed) Computation Tree LogicJaime Arias, Wojciech Jamroga, Wojciech Penczek, Laure Petrucci and Teofil SidorukKnowledge Representation and Reasoning I52
Attention! Dynamic Epistemic Logic Models of (In)attentive AgentsGaia Belardinelli and Thomas BolanderKnowledge Representation and Reasoning I53
(Arbitrary) Partial CommunicationRustam Galimullin and Fernando R. Velazquez-QuesadaKnowledge Representation and Reasoning I65
Epistemic Abstract Argumentation Framework: Formal Foundations, Computation and ComplexityGianvincenzo Alfano, Sergio Greco, Francesco Parisi and Irina TrubitsynaKnowledge Representation and Reasoning I66
Actions, Continuous Distributions and Meta-BeliefsVaishak BelleKnowledge Representation and Reasoning I67
Provable Optimization of Quantal Response Leader-Follower Games with Exponentially Large Action SpacesJinzhao Li, Daniel Fink, Christopher Wood, Carla P. Gomes and Yexiang XueKnowledge Representation and Reasoning II68
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information TheoryMasoud Tabatabaei and Wojciech JamrogaKnowledge Representation and Reasoning II69
Synthesis of Resource-Aware Controllers Against Rational AgentsRodica Condurache, Catalin Dima, Youssouf Oualhadj and Nicolas TroquardKnowledge Representation and Reasoning II81
Computationally Feasible StrategiesCatalin Dima and Wojtek JamrogaKnowledge Representation and Reasoning II82
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect InformationAngelo Ferrando and Vadim MalvoneKnowledge Representation and Reasoning II83
Ask and You Shall be Served: Representing and Solving Multi-agent Optimization Problems with Service Requesters and ProvidersMaya Lavie, Tehila Caspi, Omer Lev and Roie ZivanPlanning84
Fairness Driven Efficient Algorithms for Sequenced Group Trip Planning Query ProblemNapendra Solanki, Shweta Jain, Suman Banerjee and Yayathi Pavan Kumar SPlanning85
Domain-Independent Deceptive PlanningAdrian Price, Ramon Fraga Pereira, Peta Masters and Mor VeredPlanning86
CAMS: Collision Avoiding Max-Sum for Mobile Sensor TeamsArseni Pertzovskiy, Roie Zivan and Noa AgmonPlanning87
Risk-Constrained Planning for Multi-Agent Systems with Shared ResourcesAnna Gautier, Marc Rigter, Bruno Lacerda, Nick Hawes and Michael WooldridgePlanning88
Quantitative Planning with Action Deception in Concurrent Stochastic GamesChongyang Shi, Shuo Han and Jie FuPlanning89
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPsStelios Triantafyllou and Goran RadanovicPlanning90
On-line Estimators for Ad-hoc Task Execution: Learning types and parameters of teammates for effective teamworkMatheus Aparecido Do Carmo Alves, Elnaz Shafipour Yourdshahi, Amokh Varma, Leandro Soriano Marcolino, Jó Ueyama and Plamen AngelovPlanning91
Trust Region Bounds for Decentralized PPO Under Non-stationarityMingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann and Shimon WhitesonMultiagent Reinforcement Learning I1
Multi-Agent Reinforcement Learning for Adaptive Mesh RefinementJiachen Yang, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden Petersen, Daniel Faissol and Robert AndersonMultiagent Reinforcement Learning I2
Adaptive Learning Rates for Multi-Agent Reinforcement LearningJiechuan Jiang and Zongqing LuMultiagent Reinforcement Learning I3
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement LearningShanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang and Yong LiuMultiagent Reinforcement Learning I4
A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement LearningWoojun Kim, Whiyoung Jung, Myungsik Cho and Youngchul SungMultiagent Reinforcement Learning I5
Mediated Multi-Agent Reinforcement LearningDmitry Ivanov, Ilya Zisman and Kirill ChernyshevMultiagent Reinforcement Learning I6
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement LearningYucong Zhang and Chao YuMultiagent Reinforcement Learning I7
TiZero: Mastering Multi-Agent Football with Curriculum Learning and Self-PlayFanqi Lin, Shiyu Huang, Tim Pearce, Wenze Chen and Wei-Wei TuMultiagent Reinforcement Learning I8
AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement LearningXuefeng Wang, Xinran Li, Jiawei Shao and Jun ZhangMultiagent Reinforcement Learning II9
Learning Structured Communication for Multi-Agent Reinforcement LearningJunjie Sheng, Xiangfeng Wang, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung-Hui Chang and Hongyuan ZhaMultiagent Reinforcement Learning II10
Model-based Sparse Communication in Multi-agent Reinforcement LearningShuai Han, Mehdi Dastani and Shihan WangMultiagent Reinforcement Learning II17
Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RLPhillip J.K. Christoffersen, Andreas Haupt and Dylan Hadfield-MenellMultiagent Reinforcement Learning II18
The Benefits of Power Regularization in Cooperative Reinforcement LearningMichelle Li and Michael DennisMultiagent Reinforcement Learning II19
MAC-PO: Multi-Agent Experience Replay via Collective Priority OptimizationYongsheng Mei, Hanhan Zhou, Tian Lan, Guru Venkataramani and Peng WeiMultiagent Reinforcement Learning II20
Self-Motivated Multi-Agent ExplorationShaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu and De-Chuan ZhanMultiagent Reinforcement Learning II21
Sequential Cooperative Multi-Agent Reinforcement LearningYifan Zang, Jinmin He, Kai Li, Haobo Fu, Qiang Fu and Junliang XingMultiagent Reinforcement Learning II22
Equilibria and Convergence in Fire Sale GamesNils Bertschinger, Martin Hoefer, Simon Krogmann, Pascal Lenzner, Steffen Schuldenzucker and Lisa WilhelmiEquilibria and Complexities of Games11
Bridging the Gap Between Single and Multi Objective GamesWillem Röpke, Carla Groenland, Roxana Radulescu, Ann Nowé and Diederik M. RoijersEquilibria and Complexities of Games12
Is Nash Equilibrium Approximator Learnable?Zhijian Duan, Wenhan Huang, Dinghuai Zhang, Yali Du, Jun Wang, Yaodong Yang and Xiaotie DengEquilibria and Complexities of Games13
Learning the Stackelberg Equilibrium in a Newsvendor GameNicolò Cesa-Bianchi, Tommaso Cesari, Takayuki Osogami, Marco Scarsini and Segev WasserkrugEquilibria and Complexities of Games14
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained ComplexityJiehua Chen, Gergely Csáji, Sanjukta Roy and Sofia SimolaEquilibria and Complexities of Games15
Debt Transfers in Financial Networks: Complexity and EquilibriaPanagiotis Kanellopoulos, Maria Kyropoulou and Hao ZhouEquilibria and Complexities of Games16
A Study of Nash Equilibria in Multi-Objective Normal-Form GamesWillem Röpke, Diederik M. Roijers, Ann Nowé and Roxana RadulescuEquilibria and Complexities of Games27
Learning Properties in Simulation-Based GamesCyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueira and Amy GreenwaldEquilibria and Complexities of Games28
PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI CoordinationXingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang and Yali DuHumans and AI Agents129
Semi-Autonomous Systems with Contextual Competence AwarenessSaaduddin Mahmud, Connor Basich and Shlomo ZilbersteinHumans and AI Agents130
Joint Engagement Classification using Video Augmentation Techniques for Multi-person HRI in the wildYubin Kim, Huili Chen, Sharifa Algohwinem, Cynthia Breazeal and Hae Won ParkHumans and AI Agents131
Multiagent Inverse Reinforcement Learning via Theory of Mind ReasoningHaochen Wu, Pedro Sequeira and David PynadathHumans and AI Agents132
Persuading to Prepare for Quitting Smoking with a Virtual Coach: Using States and User Characteristics to Predict BehaviorNele Albers, Mark A. Neerincx and Willem-Paul BrinkmanHumans and AI Agents133
Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response GenerationYushan Qian, Bo Wang, Shangzhao Ma, Wu Bin, Shuo Zhang, Dongming Zhao, Kun Huang and Yuexian HouHumans and AI Agents134
Generating Stylistic and Personalized Dialogues for Virtual Agents in NarrativesWeilai Xu, Fred Charles and Charlie HargoodHumans and AI Agents135
Reducing Racial Bias by Interacting with Virtual Agents: An Intervention in Virtual RealityDavid Obremski, Ohenewa Bediako Akuffo, Leonie Lücke, Miriam Semineth, Sarah Tomiczek, Hanna-Finja Weichert and Birgit LugrinHumans and AI Agents136
Online Coalitional Skill FormationSaar Cohen and Noa AgmonPlanning + Task/Resource Allocation92
Multi-Agent Consensus-based Bundle Allocation for Multi-mode Composite TasksGauthier PicardPlanning + Task/Resource Allocation93
Allocation Problem in Remote Teleoperation: Online Matching with Offline Reusable Resources and Delayed AssignmentsOsnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov and Sarit KrausPlanning + Task/Resource Allocation94
Optimal Coalition Structures for Probabilistically Monotone Partition Function GamesShaheen Fatima and Michael WooldridgePlanning + Task/Resource Allocation95
A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task DensityGrace Cai, Noble Harasha and Nancy LynchPlanning + Task/Resource Allocation96
Abstracting Noisy Robot ProgramsTill Hofmann and Vaishak BellePlanning + Task/Resource Allocation97
Structural Credit Assignment-Guided Coordinated MCTS: An Efficient and Scalable Method for Online Multiagent PlanningQian Che, Wanyuan Wang, Fengchen Wang, Tianchi Qiao, Xiang Liu, Jiuchuan Jiang, Bo An and Yichuan JiangPlanning + Task/Resource Allocation98
Strategic Planning for Flexible Agent Availability in Large Taxi FleetsRajiv Ranjan Kumar, Pradeep Varakantham and Shih-Fen ChengPlanning + Task/Resource Allocation99
Fair Allocation of Two Types of ChoresHaris Aziz, Jeremy Lindsay, Angus Ritossa and Mashbat SuzukiFair Allocations43
Fairly Dividing Mixtures of Goods and Chores under Lexicographic PreferencesHadi Hosseini, Sujoy Sikdar, Rohit Vaish and Lirong XiaFair Allocations44
Graphical House AllocationHadi Hosseini, Justin Payan, Rik Sengupta, Rohit Vaish and Vignesh ViswanathanFair Allocations45
Approximation Algorithm for Computing Budget-Feasible EF1 AllocationsJiarui Gan, Bo Li and Xiaowei WuFair Allocations46
Yankee Swap: a Fast and Simple Fair Allocation Mechanism for Matroid Rank ValuationsVignesh Viswanathan and Yair ZickFair Allocations47
Fairness in the Assignment Problem with Uncertain PrioritiesZeyu Shen, Zhiyi Wang, Xingyu Zhu, Brandon Fain and Kamesh MunagalaFair Allocations48
Possible Fairness for Allocating Indivisible ResourcesHaris Aziz, Bo Li, Shiji Xing and Yu ZhouFair Allocations58
Efficient Nearly-Fair Division with Capacity ConstraintsHila Shoshan, Noam Hazon and Erel Segal-HaleviFair Allocations59
Equitability and Welfare Maximization for Allocating Indivisible ItemsAnkang Sun, Bo Chen and Xuan Vinh DoanFair Allocations + Public Goods Games60
Best of Both Worlds: Agents with EntitlementsMartin Hoefer, Marco Schmalhofer and Giovanna VarricchioFair Allocations + Public Goods Games61
Mitigating Skewed Bidding for Conference Paper AssignmentInbal Rozenzweig, Reshef Meir, Nicholas Mattei and Ofra AmirFair Allocations + Public Goods Games62
Price of Anarchy in a Double-Sided Critical Distribution SystemDavid Sychrovský, Jakub Černý, Sylvain Lichau and Martin LoeblFair Allocations + Public Goods Games63
Improved EFX approximation guarantees under ordinal-based assumptionsEvangelos Markakis and Christodoulos SantorinaiosFair Allocations + Public Goods Games64
Assigning Agents to Increase Network-Based Neighborhood DiversityZirou Qiu, Andrew Yuan, Chen Chen, Madhav Marathe, S.S. Ravi, Daniel Rosenkrantz, Richard Stearns and Anil VullikantiFair Allocations + Public Goods Games74
Altruism, Collectivism and Egalitarianism: On a Variety of Prosocial Behaviors in Binary Networked Public Goods GamesJichen Li, Xiaotie Deng, Yukun Cheng, Yuqi Pan, Xuanzhi Xia, Zongjun Yang and Jan XieFair Allocations + Public Goods Games75
The Role of Space, Density and Migration in Social DilemmasJacques Bara, Fernando P. Santos and Paolo TurriniFair Allocations + Public Goods Games76
Non-strategic Econometrics (for Initial Play)Daniel Chui, Jason Hartline and James WrightBehavioral and Algorithmic Game Theory29
Efficient Stackelberg Strategies for Finitely Repeated GamesNatalie Collina, Eshwar Ram Arunachaleswaran and Michael KearnsBehavioral and Algorithmic Game Theory30
Learning Density-Based Correlated Equilibria for Markov GamesLibo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock and Jiamou LiuBehavioral and Algorithmic Game Theory31
IRS: An Incentive-compatible Reward Scheme for AlgorandMaizi Liao, Wojciech Golab and Seyed Majid ZahediBehavioral and Algorithmic Game Theory32
Data Structures for Deviation PayoffsBryce Wiedenbeck and Erik BrinkmanBehavioral and Algorithmic Game Theory42
Evaluating a mechanism for explaining BDI agent behaviourMichael Winikoff and Galina SidorenkoHumans and AI / Human-Agent Interaction137
Learning Manner of Execution from Partial CorrectionsMattias Appelgren and Alex LascaridesHumans and AI / Human-Agent Interaction138
What Do You Care About: Inferring Values from EmotionsJieting Luo, Mehdi Dastani, Thomas Studer and Beishui LiaoHumans and AI / Human-Agent Interaction139
'Why didn't you allocate this task to them?' Negotiation-Aware Explicable Task Allocation and Contrastive Explanation GenerationZahra Zahedi, Sailik Sengupta and Subbarao KambhampatiHumans and AI / Human-Agent Interaction140
Explaining agent preferences and behavior: integrating reward decomposition and contrastive highlightsYael Septon, Yotam Amitai and Ofra AmirHumans and AI / Human-Agent Interaction141
Explanation Styles for Trustworthy Autonomous SystemsDavid Robb, Xingkun Liu and Helen HastieHumans and AI / Human-Agent Interaction142
Modeling the Interpretation of Animations to Help Improve Emotional ExpressionTaíssa Ribeiro, Ricardo Rodrigues and Carlos MartinhoHumans and AI / Human-Agent Interaction143
Artificial prediction markets present a novel opportunity for human-AI collaborationTatiana Chakravorti, Vaibhav Singh, Michael McLaughlin, Robert Fraleigh, Christopher Griffin, Anthony Kwasnica, David Pennock, C. Lee Giles and Sarah RajtmajerHumans and AI / Human-Agent Interaction144
Causal Explanations for Sequential Decision Making Under UncertaintySamer Nashed, Saaduddin Mahmud, Claudia Goldman and Shlomo ZilbersteinHumans and AI / Human-Agent Interaction145
Hierarchical Reinforcement Learning with Human-AI Collaborative Sub-Goals OptimizationHaozhe Ma, Thanh Vinh Vo and Tze Yun LeongHumans and AI / Human-Agent Interaction146
Context-aware agents based on Psychological Archetypes for TeamworkAnupama Arukgoda, Erandi Lakshika, Michael Barlow and Kasun GunawardanaHumans and AI / Human-Agent Interaction147
Personalized Agent Explanations for Human-Agent Teamwork: Adapting Explanations to User Trust, Workload, and PerformanceRuben Verhagen, Mark Neerincx, Can Parlar, Marin Vogel and Myrthe TielmanHumans and AI / Human-Agent Interaction148
A Teachable Agent to Enhance Elderly's IkigaiPing Chen, Xinjia Yu, Su Fang Lim and Zhiqi ShenHumans and AI / Human-Agent Interaction149
Improving Human-Robot Team Performance with Proactivity and Shared Mental ModelsGwen Edgar, Matthias Scheutz and Matthew McwilliamsHumans and AI / Human-Agent Interaction150
Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning ModelsKhaing Phyo Wai, Minghong Geng, Budhitama Subagdja, Shubham Pateria and Ah-Hwee TanHumans and AI / Human-Agent Interaction151
Learning Constraints From Human Stop-feedback in Reinforcement LearningSilvia Poletti, Alberto Testolin and Sebastian TschiatschekHumans and AI / Human-Agent Interaction152
Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AIMalek Mechergui and Sarath SreedharanHumans and AI / Human-Agent Interaction153
Effectiveness of Teamwork-Level Interventions through Decision-Theoretic Reasoning in a Minecraft Search-and-Rescue TaskDavid Pynadath, Nik Gurney, Sarah Kenny, Rajay Kumar, Stacy Marsella, Haley Matuszak, Hala Mostafa, Pedro Sequeira, Volkan Ustun and Peggy WuHumans and AI / Human-Agent Interaction154
Leveraging Hierarchical Reinforcement Learning for Ad-hoc TeamingStéphane Aroca-Ouellette, Miguel Aroca-Ouellette, Upasana Biswas, Katharina Kann and Alessandro RonconeHumans and AI / Human-Agent Interaction155
Asynchronous Communication Aware Multi-Agent Task AllocationBen Rachmut, Sofia Amador Nelke and Roie ZivanKnowledge Representation, Reasoning, and Planning100
Towards Robust Contrastive Explanations for Human-Neural Multi-agent SystemsFrancesco Leofante and Alessio LomuscioKnowledge Representation, Reasoning, and Planning101
Visual Explanations for Defence in Abstract ArgumentationSylvie Doutre, Théo Duchatelle and Marie-Christine Lagasquie-SchiexKnowledge Representation, Reasoning, and Planning102
Minimising Task Tardiness for Multi-Agent Pickup and DeliverySaravanan Ramanathan, Yihao Liu, Xueyan Tang, Wentong Cai and Jingning LiKnowledge Representation, Reasoning, and Planning103
Probabilistic Deduction as a Probabilistic Extension of Assumption-based ArgumentationXiuyi FanKnowledge Representation, Reasoning, and Planning104
Bayes-Adaptive Monte-Carlo Planning for Type-Based Reasoning in Large Partially Observable, Multi-Agent EnvironmentsJonathon Schwartz and Hanna KurniawatiKnowledge Representation, Reasoning, and Planning105
Blame Attribution for Multi-Agent Pathfinding Execution FailuresAvraham Natan, Roni Stern and Meir KalechKnowledge Representation, Reasoning, and Planning106
A Semantic Approach to Decidability in Epistemic PlanningAlessandro Burigana, Paolo Felli, Marco Montali and Nicolas TroquardKnowledge Representation, Reasoning, and Planning107
Forward-PECVaR Algorithm: Exact Evaluation for CVaR SSPsWilly Reis, Denis Pais, Valdinei Freire and Karina DelgadoKnowledge Representation, Reasoning, and Planning108
Explainable Ensemble Classification Model based on ArgumentationNadia Abchiche-Mimouni, Leila Amgoud and Farida ZehraouiKnowledge Representation, Reasoning, and Planning109
Updating Action Descriptions and Plans for Cognitive AgentsPeter Stringer, Rafael C. Cardoso, Clare Dixon, Michael Fisher and Louise DennisKnowledge Representation, Reasoning, and Planning110
Argument-based Explanation FunctionsLeila Amgoud, Philippe Muller and Henri TrenquierKnowledge Representation, Reasoning, and Planning111
A Formal Framework for Deceptive Topic Planning in Information-Seeking DialoguesAndreas Brännström, Virginia Dignum and Juan Carlos NievesKnowledge Representation, Reasoning, and Planning112
Memoryless Adversaries in Imperfect Information GamesDhananjay Raju, Georgios Bakirtzis and Ufuk TopcuKnowledge Representation, Reasoning, and Planning113
Bounded and Unbounded Verification of RNN-Based Agents in Non-deterministic EnvironmentsMehran Hosseini and Alessio LomuscioKnowledge Representation, Reasoning, and Planning114
Methods and Mechanisms for Interactive Novelty Handling in Adversarial EnvironmentsTung Thai, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Subbarao Kambhampati, Neeraj Varshney, Chitta Baral, Jivko Sinapov and Matthias ScheutzKnowledge Representation, Reasoning, and Planning115
One-Shot Learning from a Demonstration with Hierarchical Latent LanguageNathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm Van Seijen and Benjamin Van DurmeKnowledge Representation, Reasoning, and Planning116
Emergent Compositional Concept Communication through Mutual Information in Multi-Agent TeamsSeth Karten, Siva Kailas and Katia SycaraKnowledge Representation, Reasoning, and Planning117
Reasoning about Uncertainty in AgentSpeak using Dynamic Epistemic LogicMichael Vezina, Babak Esfandiari, François Schwarzentruber and Sandra MorleyKnowledge Representation, Reasoning, and Planning118
Towards Optimal and Scalable Evacuation Planning Using Data-driven Agent Based ModelsKazi Ashik Islam, Da Qi Chen, Madhav Marathe, Henning Mortveit, Samarth Swarup and Anil VullikantiKnowledge Representation, Reasoning, and Planning119
Intention Progression with Maintenance GoalsDi Wu, Yuan Yao, Natasha Alechina, Brian Logan and John ThangarajahKnowledge Representation, Reasoning, and Planning120
Safety Guarantees in Multi-agent Learning via Trapping RegionsAleksander Czechowski and Frans OliehoekLearning and Adaptation23
Multi-Team Fitness Critics For Robust TeamingJoshua Cook, Tristan Scheiner and Kagan TumerLearning and Adaptation24
Multi-Agent Deep Reinforcement Learning for High-Frequency Multi-Market MakingPankaj KumarLearning and Adaptation25
TA-Explore: Teacher-Assisted Exploration for Facilitating Fast Reinforcement LearningAli Beikmohammadi and Sindri MagnússonLearning and Adaptation26
Which way is 'right'?: Uncovering limitations of Vision-and-Language Navigation modelsMeera Hahn, James M. Rehg and Amit RajLearning and Adaptation33
Learning Individual Difference Rewards in Multi-Agent Reinforcement LearningChen Yang, Guangkai Yang and Junge ZhangLearning and Adaptation34
TiLD: Third-person Imitation Learning by Estimating Domain Cognitive Differences of Visual DemonstrationsZixuan Chen, Wenbin Li, Yang Gao and Yiyu ChenLearning and Adaptation35
Off-Beat Multi-Agent Reinforcement LearningWei Qiu, Weixun Wang, Rundong Wang, Bo An, Yujing Hu, Svetlana Obraztsova, Zinovi Rabinovich, Jianye Hao, Yingfeng Chen and Changjie FanLearning and Adaptation36
AJAR: An Argumentation-based Judging Agents Framework for Ethical Reinforcement LearningBenoît Alcaraz, Olivier Boissier, Rémy Chaput and Christopher LeturcLearning and Adaptation37
Never Worse, Mostly Better: Stable Policy Improvement in Deep Reinforcement LearningPranav Khanna, Guy Tennenholtz, Nadav Merlis, Shie Mannor and Chen TesslerLearning and Adaptation38
Selectively Sharing Experiences Improves Multi-Agent Reinforcement LearningMatthias Gerstgrasser, Tom Danino and Sarah KerenLearning and Adaptation39
The challenge of redundancy on multi-agent value factorisationSiddarth Singh and Benjamin RosmanLearning and Adaptation40
Robust Ordinal Regression for Collaborative Preference Learning with Opinion SynergiesMohamed Ouaguenouni, Hugo Gilbert, Meltem Ozturk and Olivier SpanjaardLearning and Adaptation41
Off-the-Grid MARL: Datasets and Baselines for Offline Multi-Agent Reinforcement LearningJuan Claude Formanek, Asad Jeewa, Arnu Pretorius and Jonathan ShockLearning and Adaptation54
Search-Improved Game-Theoretic Multiagent Reinforcement Learning in General and Negotiation GamesZun Li, Marc Lanctot, Kevin McKee, Luke Marris, Ian Gemp, Daniel Hennes, Kate Larson, Yoram Bachrach, Michael Wellman and Paul MullerLearning and Adaptation55
Grey-box Adversarial Attack on Communication in Multi-agent Reinforcement LearningXiao Ma and Wu-Jun LiLearning and Adaptation56
Reward-Machine-Guided, Self-Paced Reinforcement LearningCevahir Koprulu and Ufuk TopcuLearning and Adaptation57
Centralized Cooperative Exploration Policy for Continuous Control TasksChao Li, Chen Gong, Xinwen Hou, Yu Liu and Qiang HeLearning and Adaptation70
Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement LearningChaitanya Kharyal, Tanmay Sinha, Sai Krishna Gottipati, Fatemeh Abdollahi, Srijita Das and Matthew TaylorLearning and Adaptation71
PORTAL: Automatic Curricula Generation for Multiagent Reinforcement LearningJizhou Wu, Tianpei Yang, Xiaotian Hao, Jianye Hao, Yan Zheng, Weixun Wang and Matthew E. TaylorLearning and Adaptation72
AI-driven Prices for Externalities and Sustainability in Production MarketsPanayiotis Danassis, Aris Filos-Ratsikas, Haipeng Chen, Milind Tambe and Boi FaltingsLearning and Adaptation73
For One and All: Individual and Group Fairness in the Allocation of Indivisible GoodsJonathan Scarlett, Nicholas Teh and Yair ZickSocial Choice and Cooperative Game Theory78
Matching Algorithms under Diversity-Based ReservationsHaris Aziz, Sean Morota Chu and Zhaohong SunSocial Choice and Cooperative Game Theory79
Social Mechanism Design: A Low-Level IntroductionBenjamin Abramowitz and Nicholas MatteiSocial Choice and Cooperative Game Theory80
Online 2-stage Stable MatchingEvripidis Bampis, Bruno Escoffier and Paul YoussefSocial Choice and Cooperative Game Theory161
Strategic Play By Resource-Bounded Agents in Security GamesXinming Liu and Joseph HalpernMarkets, Auctions, and Non-Cooperative Game Theory157
Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid MethodologyZijian Shi and John CartlidgeMarkets, Auctions, and Non-Cooperative Game Theory158
Regularization for Strategy Exploration in Empirical Game-Theoretic AnalysisYongzhao Wang and Michael WellmanMarkets, Auctions, and Non-Cooperative Game Theory159
A Scalable Opponent Model Using Bayesian Learning for Automated Bilateral Multi-Issue NegotiationShengbo Chang and Katsuhide FujitaMarkets, Auctions, and Non-Cooperative Game Theory160
Modeling Robustness in Decision-Focused Learning as a Stackelberg GameSonja Johnson-Yu, Kai Wang, Jessie Finocchiaro, Aparna Taneja and Milind TambeMarkets, Auctions, and Non-Cooperative Game Theory156
Fair Facility Location for Socially Equitable RepresentationHelen Sternbach and Sara CohenCoordination, Organisations, Institutions, and Norms77

Day 2

Kiko: Programming Agents to Enact Interaction ModelsSamuel Christie, Munindar P. Singh and Amit ChopraEngineering Multiagent Systems149
CraftEnv: A Flexible Collective Robotic Construction Environment for Multi-Agent Reinforcement LearningRui Zhao, Xu Liu, Yizheng Zhang, Minghao Li, Cheng Zhou, Shuai Li and Lei HanEngineering Multiagent Systems150
Feedback-Guided Intention Scheduling for BDI AgentsMichael Dann, John Thangarajah and Minyi LiEngineering Multiagent Systems151
A Behaviour-Driven Approach for Testing Requirements via User and System Stories in Agent SystemsSebastian Rodriguez, John Thangarajah and Michael WinikoffEngineering Multiagent Systems152
ML-MAS: a Hybrid AI Framework for Self-Driving VehiclesHilal Al Shukairi and Rafael C. CardosoEngineering Multiagent Systems153
Signifiers as a First-class Abstraction in Hypermedia Multi-Agent SystemsDanai Vachtsevanou, Andrei Ciortea, Simon Mayer and Jérémy LeméeEngineering Multiagent Systems154
MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue SystemsDébora Cristina Engelmann, Alison R. Panisson, Renata Vieira, Jomi Fred Hübner, Viviana Mascardi and Rafael H. BordiniEngineering Multiagent Systems155
Mandrake: Multiagent Systems as a Basis for Programming Fault-Tolerant Decentralized ApplicationsSamuel Christie, Amit Chopra and Munindar P. SinghEngineering Multiagent Systems156
Anonymous Multi-Agent Path Finding with Individual DeadlinesGilad Fine, Dor Atzmon and Noa AgmonMultiagent Path Finding101
Learn to solve the min-max multiple traveling salesmen problem with reinforcement learningJunyoung Park, Changhyun Kwon and Jinkyoo ParkMultiagent Path Finding102
Counterfactual Fairness Filter for Fair-Delay Multi-Robot NavigationHikaru Asano, Ryo Yonetani, Mai Nishimura and Tadashi KozunoMultiagent Path Finding106
Improved Complexity Results and an Efficient Solution for Connected Multi-Agent Path FindingIsseïnie Calviac, Ocan Sankur and Francois SchwarzentruberMultiagent Path Finding107
Optimally Solving the Multiple Watchman Route Problem with Heuristic SearchYaakov Livne, Dor Atzmon, Shawn Skyler, Eli Boyarski, Amir Shapiro and Ariel FelnerMultiagent Path Finding108
Distributed Planning with Asynchronous Execution with Local Navigation for Multi-agent Pickup and Delivery ProblemYuki Miyashita, Tomoki Yamauchi and Toshiharu SugawaraMultiagent Path Finding109
Energy-aware UAV Path Planning with Adaptive SpeedJonathan Diller and Qi HanMultiagent Path Finding110
Coordination of Multiple Robots along Given Paths with Bounded Junction ComplexityMikkel Abrahamsen, Tzvika Geft, Dan Halperin and Barak UgavMultiagent Path Finding111
Efficient Interactive Recommendation with Huffman Tree-based Policy LearningLongxiang Shi, Zilin Zhang, Shoujin Wang, Binbin Zhou, Minghui Wu, Cheng Yang and Shijian LiInnovative Applications131
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and HealthcareGe Gao, Song Ju, Markel Sanz Ausin and Min ChiInnovative Applications52
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic CaneShivendra Agrawal, Suresh Nayak, Ashutosh Naik and Bradley HayesInnovative Applications132
Preference-Aware Delivery Planning for Last-Mile LogisticsQian Shao and Shih-Fen ChengInnovative Applications133
Multi-Agent Reinforcement Learning with Safety Layer for Active Voltage ControlYufeng Shi, Mingxiao Feng, Minrui Wang, Wengang Zhou and Houqiang LiInnovative Applications134
Multi-agent Signalless Intersection Management with Dynamic Platoon FormationPhuriwat Worrawichaipat, Enrico Gerding, Ioannis Kaparias and Sarvapali RamchurnInnovative Applications135
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal ControlHarsh Goel, Yifeng Zhang, Mehul Damani and Guillaume SartorettiInnovative Applications136
Model-Based Reinforcement Learning for Auto-Bidding in Display AdvertisingShuang Chen, Qisen Xu, Liang Zhang, Yongbo Jin, Wenhao Li and Linjian MoInnovative Applications137
Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement LearningDurgesh Kalwar, Omkar Shelke, Somjit Nath, Hardik Meisheri and Harshad KhadilkarReinforcement Learning1
FedFormer: Contextual Federation with Attention in Reinforcement LearningLiam Hebert, Lukasz Golab, Pascal Poupart and Robin CohenReinforcement Learning2
Diverse Policy Optimization for Structured Action SpaceWenhao Li, Baoxiang Wang, Shanchao Yang and Hongyuan ZhaReinforcement Learning3
Enhancing Reinforcement Learning Agents with Local GuidesPaul Daoudi, Bogdan Robu, Christophe Prieur, Ludovic Dos Santos and Merwan BarlierReinforcement Learning4
Scalar reward is not enoughPeter Vamplew, Ben Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik Roijers, Conor Hayes, Fredrik Heintz, Patrick Mannion, Pieter Libin, Richard Dazeley and Cameron FoaleReinforcement Learning5
Targeted Search Control in AlphaZero for Effective Policy ImprovementAlexandre Trudeau and Michael BowlingReinforcement Learning6
Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics ModelsTom Haider, Karsten Roscher, Felippe Schmoeller da Roza and Stephan GünnemannReinforcement Learning7
Knowledge Compilation for Constrained Combinatorial Action Spaces in Reinforcement LearningJiajing Ling, Moritz Lukas Schuler, Akshat Kumar and Pradeep VarakanthamReinforcement Learning17
Best of Both Worlds Fairness under EntitlementsHaris Aziz, Aditya Ganguly and Evi MichaMatching16
Probabilistic Rationing with Categorized Priorities: Processing Reserves Fairly and EfficientlyHaris AzizMatching24
Semi-Popular Matchings and Copeland WinnersTelikepalli Kavitha and Rohit VaishMatching25
Host Community Respecting Refugee HousingDušan Knop and Šimon SchierreichMatching26
Online matching with delays and stochastic arrival timesMathieu Mari, Michał Pawłowski, Runtian Ren and Piotr SankowskiMatching27
Adapting Stable Matchings to Forced and Forbidden PairsNiclas Boehmer and Klaus HeegerMatching28
Stable Marriage in Euclidean SpaceYinghui Wen, Zhongyi Zhang and Jiong GuoMatching29
A Map of Diverse Synthetic Stable Roommates InstancesNiclas Boehmer, Klaus Heeger and Stanisław SzufaMatching30
Empirical Game-Theoretic Analysis for Mean Field GamesYongzhao Wang and Michael WellmanLearning in Games18
Differentiable Arbitrating in Zero-sum Markov GamesJing Wang, Meichen Song, Feng Gao, Boyi Liu, Zhaoran Wang and Yi WuLearning in Games19
Learning Parameterized Families of GamesMadelyn Gatchel and Bryce WiedenbeckLearning in Games20
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive GamesZelai Xu, Yancheng Liang, Chao Yu, Yu Wang and Yi WuLearning in Games21
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete TrajectoriesJingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire Tomlin and David Fridovich-KeilLearning in Games22
Multiplicative Weights Updates for Extensive Form GamesChirag Chhablani, Michael Sullins and Ian KashLearning in Games23
A Hybrid Framework of Reinforcement Learning and Physics-Informed Deep Learning for Spatiotemporal Mean Field GamesXu Chen, Shuo Liu and Xuan DiLearning in Games33
Adversarial Inverse Reinforcement Learning for Mean Field GamesYang Chen, Libo Zhang, Jiamou Liu and Michael WitbrockLearning in Games34
Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless BanditsAbheek Ghosh, Dheeraj Nagaraj, Manish Jain and Milind TambeMulti-Armed Bandits + Monte Carlo Tree Search52
Avoiding Starvation of Arms in Restless Multi-Armed BanditsDexun Li and Pradeep VarakanthamMulti-Armed Bandits + Monte Carlo Tree Search53
Restless Multi-Armed Bandits for Maternal and Child Health: Results from Decision-Focused LearningShresth Verma, Aditya Mate, Kai Wang, Neha Madhiwalla, Aparna Hegde, Aparna Taneja and Milind TambeMulti-Armed Bandits + Monte Carlo Tree Search54
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit TasksArpita Biswas, Jackson Killian, Paula Rodriguez Diaz, Susobhan Ghosh and Milind TambeMulti-Armed Bandits + Monte Carlo Tree Search55
On Regret-optimal Cooperative Nonstochastic Multi-armed BanditsJialin Yi and Milan VojnovicMulti-Armed Bandits + Monte Carlo Tree Search65
Equilibrium Bandits: Learning Optimal Equilibria of Unknown DynamicsSiddharth Chandak, Ilai Bistritz and Nicholas BambosMulti-Armed Bandits + Monte Carlo Tree Search66
ExPoSe: Combining State-Based Exploration with Gradient-Based Online SearchDixant Mittal, Siddharth Aravindan and Wee Sun LeeMulti-Armed Bandits + Monte Carlo Tree Search67
Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal MethodsDebraj Chakraborty, Damien Busatto-Gaston, Jean-François Raskin and Guillermo PerezMulti-Armed Bandits + Monte Carlo Tree Search68
Curriculum Offline Reinforcement LearningYuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao and Jiang BianReinforcement and Imitation Learning35
Decentralized model-free reinforcement learning in stochastic games with average-reward objectiveRomain Cravic, Nicolas Gast and Bruno GaujalReinforcement and Imitation Learning36
Less Is More: Refining Datasets for Offline Reinforcement Learning with Reward MachinesHaoyuan Sun and Feng WuReinforcement and Imitation Learning37
A Self-Organizing Neuro-Fuzzy Q-Network: Systematic Design with Offline Hybrid LearningJohn Hostetter, Mark Abdelshiheed, Tiffany Barnes and Min ChiReinforcement and Imitation Learning38
Learning to Coordinate from Offline Datasets with Uncoordinated Behavior PoliciesJinming Ma and Feng WuReinforcement and Imitation Learning39
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal-ConditioningCaroline Wang, Garrett Warnell and Peter StoneReinforcement and Imitation Learning49
How To Guide Your Learner: Imitation Learning with Active Adaptive Expert InvolvementXuhui Liu, Feng Xu, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Ruifeng Chen, Zongzhang Zhang and Yang YuReinforcement and Imitation Learning50
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive GamesThe Viet Bui, Tien Mai and Thanh NguyenReinforcement and Imitation Learning51
Price of Anarchy for First Price Auction with Risk-Averse BiddersZhiqiang Zhuang, Kewen Wang and Zhe WangAuctions + Voting8
A Redistribution Framework for Diffusion AuctionsSizhe Gu, Yao Zhang, Yida Zhao and Dengji ZhaoAuctions + Voting9
Sybil-Proof Diffusion Auction in Social NetworksHongyin Chen, Xiaotie Deng, Ying Wang, Yue Wu and Dengji ZhaoAuctions + Voting10
Representing and Reasoning about AuctionsMunyque Mittelmann, Sylvain Bouveret and Laurent PerrusselAuctions + Voting11
Revisiting the Distortion of Distributed VotingAris Filos-Ratsikas and Alexandros VoudourisAuctions + Voting12
Bounded Approval Ballots: Balancing Expressiveness and Simplicity for Multiwinner ElectionsDorothea Baumeister, Linus Boes, Christian Laußmann and Simon ReyAuctions + Voting13
On the Distortion of Single Winner Elections with Aligned CandidatesDimitris Fotakis and Laurent GourvesAuctions + Voting14
SAT-based Judgment AggregationAri Conati, Andreas Niskanen and Matti JärvisaloAuctions + Voting15
GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual ExplanationsTobias Huber, Maximilian Demmler, Silvan Mertes, Matthew Olson and Elisabeth AndréLearning with Humans and Robots69
Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative ExplorationChao Yu, Xinyi Yang, Jiaxuan Gao, Jiayu Chen, Yunfei Li, Jijia Liu, Yunfei Xiang, Ruixin Huang, Huazhong Yang, Yi Wu and Yu WangLearning with Humans and Robots70
Dec-AIRL: Decentralized Adversarial IRL for Human-Robot TeamingPrasanth Sengadu Suresh, Yikang Gui and Prashant DoshiLearning with Humans and Robots71
Structural Attention-based Recurrent Variational Autoencoder for Highway Vehicle Anomaly DetectionNeeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson and Katherine Driggs-CampbellLearning with Humans and Robots81
Controlled Diversity with Preference: Towards Learning a Diverse Set of Desired SkillsMaxence Hussonnois, Thommen Karimpanal George and Santu RanaLearning with Humans and Robots82
Learning from Multiple Independent Advisors in Multi-agent Reinforcement LearningSriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson and Mark CrowleyLearning with Humans and Robots83
Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMOYangkun Chen, Joseph Suarez, Junjie Zhang, Chenghui Yu, Bo Wu, Hanmo Chen, Hengman Zhu, Rui Du, Shanliang Qian, Shuai Liu, Weijun Hong, Jinke He, Yibing Zhang, Liang Zhao, Clare Zhu, Julian Togelius, Sharada Mohanty, Jiaxin Chen, Xiu Li, Xiaolong Zhu and Phillip IsolaEngineering Multiagent Systems157
SE4AI issues on Designing a Social Media Agent: Agile Use Case design for Behavioral Game TheoryFrancisco Marcondes, José João Almeida and Paulo NovaisEngineering Multiagent Systems158
Modeling Application Scenarios for Responsible Autonomy using Computational TranscendenceJayati Deshmukh, Nikitha Adivi and Srinath SrinivasaEngineering Multiagent Systems159
Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use CasesJérémy Lemée, Samuele Burattini, Simon Mayer and Andrei CiorteaEngineering Multiagent Systems160
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential LoadsVincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull and Antoine Lesage-LandryInnovative Applications138
The Swiss GambitÁgnes Cseh, Pascal Führlich and Pascal LenznerInnovative Applications139
An Adversarial Strategic Game for Machine Learning as a Service using System FeaturesGuoxin Sun, Tansu Alpcan, Andrew Cullen, Seyit Camtepe and Benjamin RubinsteinInnovative Applications140
Optimizing Crop Management with Reinforcement Learning and Imitation LearningRan Tao, Pan Zhao, Jing Wu, Nicolas F. Martin, Matthew T. Harrison, Carla Ferreira, Zahra Kalantari and Naira HovakimyanInnovative Applications141
A Novel Aggregation Framework for the Efficient Integration of Distributed Energy Resources in the Smart GridStavros Orfanoudakis and Georgios ChalkiadakisInnovative Applications142
Near Optimal Strategies for Honeypots Placement in Dynamic and Large Active Directory NetworksQuang Huy Ngo, Mingyu Guo and Hung NguyenInnovative Applications143
A Novel Demand Response Model and Method for Peak Reduction in Smart Grids -- PowerTACSanjay Chandlekar, Arthik Boroju, Shweta Jain and Sujit GujarInnovative Applications144
Shopping Assistance for Everyone: Dynamic Query Generation On a Semantic Digital Twin As a Basis for Autonomous Shopping AssistanceMichaela Kümpel, Jonas Dech, Alina Hawkin and Michael BeetzInnovative Applications145
Counterfactually Fair Dynamic Assignment: A Case Study on PolicingTasfia Mashiat, Xavier Gitiaux, Huzefa Rangwala and Sanmay DasInnovative Applications146
A Cloud-Based Solution for Multi-Agent Traffic Control SystemsChikadibia Ihejimba, Behnan Torabi and Rym Z. WenksternInnovative Applications147
Balancing Fairness and Efficiency in Transport Network Design through Reinforcement LearningDimitris Michailidis, Sennay Ghebreab and Fernando SantosInnovative Applications148
From Abstractions to Grounded Languages for Robust Coordination of Task Planning RobotsYu ZhangRobotics112
Idleness Estimation for Distributed Multiagent Patrolling StrategiesMehdi Othmani-Guibourg, Jean-Loup Farges and Amal El Fallah SeghrouchniRobotics113
Simpler rather than Challenging: Design of Non-Dyadic Human-Robot Collaboration to Mediate Concurrent Human-Human TasksFrancesco Semeraro, Jon Carberry and Angelo CangelosiRobotics114
Learning to Self-Reconfigure for Freeform Modular Robots via Altruism Multi-Agent Reinforcement LearningLei Wu, Bin Guo, Qiuyun Zhang, Zhuo Sun, Jieyi Zhang and Zhiwen YuRobotics115
Learning Multiple Tasks with Non-stationary Interdependencies in Autonomous RobotsAlejandro Romero, Gianluca Baldassarre, Richard Duro and Vieri Giuliano SantucciRobotics116
A Lattice Model of 3D Environments For Provable ManipulationJohn Harwell, London Lowmanstone and Maria GiniRobotics117
HoLA Robots: Mitigating Plan-Deviation Attacks in Multi-Robot Systems with Co-Observations and Horizon-Limiting AnnouncementsKacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao LiRobotics118
Online Re-Planning and Adaptive Parameter Update for Multi-Agent Path Finding with Stochastic Travel TimesAtsuyoshi Kita, Nobuhiro Suenari, Masashi Okada and Tadahiro TaniguchiRobotics119
RTransNav: Relation-wise Transformer Network for More Successful Object Goal NavigationKang Zhou, Chi Guo, Huyin Zhang and Wenfei GuoRobotics120
Multi-Agent Pickup and Delivery in Presence of Another Team of RobotsBenedetta Flammini, Davide Azzalini and Francesco AmigoniRobotics121
Reward Relabelling for combined Reinforcement and Imitation Learning on sparse-reward tasksJesús Bujalance Martín and Fabien MoutardeRobotics122
Connectivity Enhanced Safe Neural Network Planner for Lane Changing in Mixed TrafficXiangguo Liu, Ruochen Jiao, Bowen Zheng, Dave Liang and Qi ZhuRobotics123
Bringing Diversity to Autonomous Vehicles: An Interpretable Multi-vehicle Decision-making and Planning FrameworkLicheng Wen, Pinlong Cai, Daocheng Fu, Song Mao and Yikang LiRobotics124
Loss of Distributed Coverage Using Lazy Agents Operating Under Discrete, Local, Event-Triggered CommunicationEdward Vickery and Aditya ParanjapeRobotics125
Multi-Agent Path Finding via Reinforcement Learning with Hybrid RewardCheng Zhao, Liansheng Zhuang, Haonan Liu, Yihong Huang and Yang JianRobotics126
Multi-Agent Pickup and Delivery with Task Probability DistributionAndrea Di Pietro, Nicola Basilico and Francesco AmigoniRobotics127
Minimally Constraining Line-of-Sight Connectivity Maintenance for Collision-free Multi-Robot Networks under UncertaintyYupeng Yang, Yiwei Lyu and Wenhao LuoRobotics128
Multi-Agent Path Finding with Time Windows: Preliminary ResultsJianqi Gao, Qi Liu, Shiyu Chen, Kejian Yan, Xinyi Li and Yanjie LiRobotics129
Two Level Actor-Critic Using Multiple TeachersSu Zhang, Srijita Das, Sriram Ganapathi Subramanian and Matthew E. TaylorLearning and Adaptation84
Provably Efficient Offline RL with OptionsXiaoyan Hu and Ho-fung LeungLearning and Adaptation85
Learning to Perceive in Deep Model-Free Reinforcement LearningGonçalo Querido, Alberto Sardinha and Francisco MeloLearning and Adaptation86
SCRIMP: Scalable Communication for Reinforcement- and Imitation-Learning-Based Multi-Agent PathfindingYutong Wang, Bairan Xiang, Shinan Huang and Guillaume SartorettiLearning and Adaptation87
Learning Group-Level Information Integration in Multi-Agent CommunicationXiangrui Meng and Ying TanLearning and Adaptation88
Learnability with PAC Semantics for Multi-agent BeliefsIonela Mocanu, Vaishak Belle and Brendan JubaLearning and Adaptation89
Improving Cooperative Multi-Agent Exploration via Surprise Minimization and Social Influence MaximizationMingyang Sun, Yaqing Hou, Jie Kang, Haiyin Piao, Yifeng Zeng, Hongwei Ge and Qiang ZhangLearning and Adaptation90
Learning to Operate in Open Worlds by Adapting Planning ModelsWiktor Piotrowski, Roni Stern, Yoni Sher, Jacob Le, Matthew Klenk, Johan de Kleer and Shiwali MohanLearning and Adaptation91
End-to-End Optimization and Learning for Multiagent EnsemblesJames Kotary, Vincenzo Di Vito and Ferdinando FiorettoLearning and Adaptation92
Optimal Decoy Resource Allocation for Proactive Defense in Probabilistic Attack GraphsHaoxiang Ma, Shuo Han, Nandi Leslie, Charles Kamhoua and Jie FuLearning and Adaptation93
Referential communication in heterogeneous communities of pre-trained visual deep networksMatéo Mahaut, Roberto Dessì, Francesca Franzon and Marco BaroniLearning and Adaptation94
A Learning Approach to Complex Contagion Influence MaximizationHaipeng Chen, Bryan Wilder, Wei Qiu, Bo An, Eric Rice and Milind TambeLearning and Adaptation95
Analyzing the Sensitivity to Policy-Value Decoupling in Deep Reinforcement Learning GeneralizationNasik Muhammad Nafi, Raja Farrukh Ali and William HsuLearning and Adaptation96
Reinforcement Learning with Depreciating AssetsTaylor Dohmen and Ashutosh TrivediLearning and Adaptation97
Matching Options to Tasks using Option-Indexed Hierarchical Reinforcement LearningKushal Chauhan, Soumya Chatterjee, Akash Reddy, Aniruddha S, Balaraman Ravindran and Pradeep ShenoyLearning and Adaptation98
DGPO: Discovering Multiple Strategies with Diversity-Guided Policy OptimizationWenze Chen, Shiyu Huang, Yuan Chiang, Ting Chen and Jun ZhuLearning and Adaptation99
Accelerating Neural MCTS Algorithms using Neural Sub-Net StructuresPrashank Kadam, Ruiyang Xu and Karl LieberherrLearning and Adaptation100
Provably Efficient Convergence of Primal-Dual Actor-Critic with Nonlinear Function ApproximationJing Dong, Li Shen, Yinggan Xu and Baoxiang WangLearning and Adaptation103
Achieving near-optimal regrets in confounded contextual banditsXueping Gong and Jiheng ZhangLearning and Adaptation104
Towards multi-agent learning of causal networksStefano Mariani, Franco Zambonelli and Pasquale RosetiLearning and Adaptation105
Proportional Fairness in Obnoxious Facility LocationHaris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani and Toby WalshSocial Choice and Cooperative Game Theory31
Distortion in Attribute Approval Committee ElectionsDorothea Baumeister and Linus BoesSocial Choice and Cooperative Game Theory32
Relaxations of Envy-Freeness Over GraphsJustin Payan, Rik Sengupta and Vignesh ViswanathanSocial Choice and Cooperative Game Theory40
Fairly Allocating (Contiguous) Dynamic Indivisible Items with Few AdjustmentsMingwei YangSocial Choice and Cooperative Game Theory41
Measuring a Priori Voting Power - Taking Delegations SeriouslyRachael Colley, Théo Delemazure and Hugo GilbertSocial Choice and Cooperative Game Theory42
Sampling-Based Winner Prediction in District-Based ElectionsDebajyoti Kar, Palash Dey and Swagato SanyalSocial Choice and Cooperative Game Theory43
Cedric: A Collaborative DDoS Defense System Using CreditJiawei Li, Hui Wang and Jilong WangSocial Choice and Cooperative Game Theory44
Social Aware Coalition Formation with Bounded Coalition SizeChaya Levinger, Amos Azaria and Noam HazonSocial Choice and Cooperative Game Theory45
Repeatedly Matching Items to Agents Fairly and EfficientlyShivika Narang and Ioannis CaragiannisSocial Choice and Cooperative Game Theory46
The complexity of minimizing envy in house allocationJayakrishnan Madathil, Neeldhara Misra and Aditi SethiaSocial Choice and Cooperative Game Theory47
Error in the Euclidean Preference ModelLuke Thorburn, Maria Polukarov and Carmine VentreSocial Choice and Cooperative Game Theory48
Distance Hypergraph Polymatrix Coordination GamesAlessandro AloisioSocial Choice and Cooperative Game Theory56
Search versus Search for Collapsing Electoral Control TypesBenjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David NarvĂĄez, Conor Taliancich and Henry B. WellesSocial Choice and Cooperative Game Theory57
Does Delegating Votes Protect Against Pandering Candidates?Xiaolin Sun, Jacob Masur, Benjamin Abramowitz, Nicholas Mattei and Zizhan ZhengSocial Choice and Cooperative Game Theory58
Resilient Fair Allocation of Indivisible GoodsDolev Mutzari, Yonatan Aumann and Sarit KrausSocial Choice and Cooperative Game Theory59
Stability of Weighted Majority Voting under Estimated WeightsShaojie Bai, Dongxia Wang, Muller Tim, Peng Cheng and Jiming ChenSocial Choice and Cooperative Game Theory60
Indivisible Participatory Budgeting with Multiple Degrees of Sophistication of ProjectsGogulapati SreedurgaSocial Choice and Cooperative Game Theory61
Incentivizing Sequential Crowdsourcing SystemsYuan LuoMarkets, Auctions, and Non-Cooperative Game Theory62
No-regret Learning Dynamics for Sequential Correlated EquilibriaHugh ZhangMarkets, Auctions, and Non-Cooperative Game Theory63
Fair Pricing for Time-Flexible Smart Energy MarketsRoland Saur, Han La Poutré and Neil Yorke-SmithMarkets, Auctions, and Non-Cooperative Game Theory64
Budget-Feasible Mechanism Design for Cost-Benefit Optimization in Gradual Service ProcurementFarzaneh Farhadi, Maria Chli and Nicholas R. JenningsMarkets, Auctions, and Non-Cooperative Game Theory73
Analysis of a Learning Based Algorithm for Budget PacingMax Springer and Mohammadtaghi HajiaghayiMarkets, Auctions, and Non-Cooperative Game Theory74
Finding Optimal Nash Equilibria in Multiplayer Games via Correlation PlansYouzhi Zhang, Bo An and V.S. SubrahmanianMarkets, Auctions, and Non-Cooperative Game Theory75
Diffusion Multi-unit Auctions with Diminishing Marginal Utility BuyersHaolin Liu, Xinyuan Lian and Dengji ZhaoMarkets, Auctions, and Non-Cooperative Game Theory76
Improving Quantal Cognitive Hierarchy Model Through Iterative Population LearningYuhong Xu, Shih-Fen Cheng and Xinyu ChenMarkets, Auctions, and Non-Cooperative Game Theory77
A Nash-Bargaining-Based Mechanism for One-Sided Matching Markets under Dichotomous UtilitiesJugal Garg, Thorben Tröbst and Vijay VaziraniMarkets, Auctions, and Non-Cooperative Game Theory78
Differentially Private Diffusion Auction: The Single-unit CaseFengjuan Jia, Mengxiao Zhang, Jiamou Liu and Bakh KhoussainovMarkets, Auctions, and Non-Cooperative Game Theory79
Learning in teams: peer evaluation for fair assessment of individual contributionsFedor DuzhinMarkets, Auctions, and Non-Cooperative Game Theory80

Day 3

Models of Anxiety for Agent Deliberation: The Benefits of Anxiety-Sensitive AgentsArvid Horned and Loïs VanhéeBlue Sky111
Social Choice Around Decentralized Autonomous Organizations: On the Computational Social Choice of Digital CommunitiesNimrod TalmonBlue Sky112
Value Inference in Sociotechnical SystemsEnrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar and Pradeep K. MurukannaiahBlue Sky113
Presenting Multiagent Challenges in Team Sports AnalyticsDavid Radke and Alexi OrchardBlue Sky114
Communication Meaning: Foundations and Directions for Systems ResearchAmit Chopra and Samuel ChristieBlue Sky115
The Rule–Tool–User Nexus in Digital Collective DecisionsZoi Terzopoulou, Marijn A. Keijzer, Gogulapati Sreedurga and Jobst HeitzigBlue Sky116
Epistemic Side Effects: An AI Safety ProblemToryn Q. Klassen, Parand Alizadeh Alamdari and Sheila A. McIlraithBlue Sky117
Citizen-Centric Multiagent SystemsSebastian Stein and Vahid YazdanpanahBlue Sky118
Non-Obvious Manipulability for Single-Parameter Agents and Bilateral TradeThomas Archbold, Bart de Keijzer and Carmine VentreMechanism Design126
Mechanism Design for Improving Accessibility to Public FacilitiesHau Chan and Chenhao WangMechanism Design127
Explicit Payments for Obviously Strategyproof MechanismsDiodato Ferraioli and Carmine VentreMechanism Design128
Bilevel Entropy based Mechanism Design for Balancing Meta in Video GamesSumedh Pendurkar, Chris Chow, Luo Jie and Guni SharonMechanism Design129
IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social DilemmasBengisu Guresti, Abdullah Vanlioglu and Nazim Kemal UreMechanism Design130
Settling the Distortion of Distributed Facility LocationAris Filos-Ratsikas, Panagiotis Kanellopoulos, Alexandros Voudouris and Rongsen ZhangMechanism Design131
Cost Sharing under Private Valuation and Connection ControlTianyi Zhang, Junyu Zhang, Sizhe Gu and Dengji ZhaoMechanism Design132
Facility Location Games with ThresholdsHouyu Zhou, Guochuan Zhang, Lili Mei and Minming LiMechanism Design133
Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation | Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding and Xiaowei Huang | Robotics | 81
Byzantine Resilience at Swarm Scale: A Decentralized Blocklist from Inter-robot Accusations | Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao Li | Robotics | 82
Stigmergy-based, Dual-Layer Coverage of Unknown Regions | Ori Rappel, Michael Amir and Alfred Bruckstein | Robotics | 83
Mitigating Imminent Collision for Multi-robot Navigation: A TTC-force Reward Shaping Approach | Jinlin Chen, Jiannong Cao, Zhiqin Cheng and Wei Li | Robotics | 84
Gathering of Anonymous Agents | John Augustine, Arnhav Datar and Nischith Shadagopan M N | Robotics | 85
Safe Deep Reinforcement Learning by Verifying Task-Level Properties | Enrico Marchesini, Luca Marzari, Alessandro Farinelli and Christopher Amato | Robotics | 86
Decentralized Safe Navigation for Multi-agent Systems via Risk-aware Weighted Buffered Voronoi Cells | Yiwei Lyu, John Dolan and Wenhao Luo | Robotics | 87
Heterogeneous Multi-Robot Reinforcement Learning | Matteo Bettini, Ajay Shankar and Amanda Prorok | Robotics | 88
Random Majority Opinion Diffusion: Stabilization Time, Absorbing States, and Influential Nodes | Ahad N. Zehmakan | Social Networks | 134
Axiomatic Analysis of Medial Centrality Measures | Wiktoria Kosny and Oskar Skibski | Social Networks | 135
Online Influence Maximization under Decreasing Cascade Model | Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao and Shuai Li | Social Networks | 136
Node Conversion Optimization in Multi-hop Influence Networks | Jie Zhang, Yuezhou Lv and Zihe Wang | Social Networks | 137
Decentralized core-periphery structure in social networks accelerates cultural innovation in agent-based modeling | Jesse Milzman and Cody Moser | Social Networks | 138
Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source | Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith and George Skretas | Social Networks | 139
Enabling Imitation-Based Cooperation in Dynamic Social Networks | Jacques Bara, Paolo Turrini and Giulia Andrighetto | Social Networks | 140
The Grapevine Web: Analysing the Spread of False Information in Social Networks with Corrupted Sources | Jacques Bara, Charlie Pilgrim, Paolo Turrini and Stanislav Zhydkov | Social Networks | 141
Differentiable Agent-based Epidemiology | Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Arnau Quera-Bofarull, Balaji Krishnamurthy, B. Aditya Prakash and Ramesh Raskar | Simulations | 89
Social Distancing via Social Scheduling | Deepesh Kumar Lall, Garima Shakya and Swaprava Nath | Simulations | 90
Don't Simulate Twice: one-shot sensitivity analyses via automatic differentiation | Arnau Quera-Bofarull, Ayush Chopra, Joseph Aylett-Bullock, Carolina Cuesta-Lazaro, Ani Calinescu, Ramesh Raskar and Mike Wooldridge | Simulations | 91
Markov Aggregation for Speeding Up Agent-Based Movement Simulations | Bernhard Geiger, Alireza Jahani, Hussain Hussain and Derek Groen | Simulations | 92
Agent-Based Modeling of Human Decision-makers Under Uncertain Information During Supply Chain Shortages | Nutchanon Yongsatianchot, Noah Chicoine, Jacqueline Griffin, Ozlem Ergun and Stacy Marsella | Simulations | 93
Simulating panic amplification in crowds via a density-emotion interaction | Erik van Haeringen and Charlotte Gerritsen | Simulations | 94
Modelling Agent Decision Making in Agent-based Simulation - Analysis Using an Economic Technology Uptake Model | Franziska Klügl and Hildegunn Kyvik Nordås | Simulations | 95
Emotion contagion in agent-based simulations of crowds: a systematic review | Erik van Haeringen, Charlotte Gerritsen and Koen Hindriks | Simulations | 96
Learning Inter-Agent Synergies in Asymmetric Multiagent Systems | Gaurav Dixit and Kagan Tumer | Multiagent Reinforcement Learning III | 23
Asymptotic Convergence and Performance of Multi-Agent Q-learning Dynamics | Aamal Hussain, Francesco Belardinelli and Georgios Piliouras | Multiagent Reinforcement Learning III | 8
Model-based Dynamic Shielding for Safe and Efficient Multi-agent Reinforcement Learning | Wenli Xiao, Yiwei Lyu and John Dolan | Multiagent Reinforcement Learning III | 24
Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning | Jihwan Oh, Joonkee Kim, Minchan Jeong and Se-Young Yun | Multiagent Reinforcement Learning III | 9
Counter-Example Guided Policy Refinement in Multi-agent Reinforcement Learning | Briti Gangopadhyay, Pallab Dasgupta and Soumyajit Dey | Multiagent Reinforcement Learning III | 25
Prioritized Tasks Mining for Multi-Task Cooperative Multi-Agent Reinforcement Learning | Yang Yu, Qiyue Yin, Junge Zhang and Kaiqi Huang | Multiagent Reinforcement Learning III | 10
M3: Modularization for Multi-task and Multi-agent Offline Pre-training | Linghui Meng, Jingqing Ruan, Xuantang Xiong, Xiyun Li, Xi Zhang, Dengpeng Xing and Bo Xu | Multiagent Reinforcement Learning III | 26
The Importance of Credo in Multiagent Learning | David Radke, Kate Larson and Tim Brecht | Norms | 121
Contextual Integrity for Argumentation-based Privacy Reasoning | Gideon Ogunniye and Nadin Kokciyan | Norms | 122
Predicting privacy preferences for smart devices as norms | Marc Serramia, William Seymour, Natalia Criado and Michael Luck | Norms | 123
Agent-directed runtime norm synthesis | Andreasa Morris Martin, Marina De Vos, Julian Padget and Oliver Ray | Norms | 124
Emergence of Norms in Interactions with Complex Rewards | Dhaminda Abeywickrama, Nathan Griffiths, Zhou Xu and Alex Mouzakitis | Norms | 125
User Device Interaction Prediction via Relational Gated Graph Attention Network and Intent-aware Encoder | Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Wenxin Tang, Runjie Zhou and Yong Jiang | Graph Neural Networks + Transformers | 4
Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations | Gregory Everett, Ryan Beal, Tim Matthews, Joseph Early, Timothy Norman and Sarvapali Ramchurn | Graph Neural Networks + Transformers | 20
Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation | Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang and Yu Wang | Graph Neural Networks + Transformers | 5
Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning | Ryan Kortvelesy, Steven Morad and Amanda Prorok | Graph Neural Networks + Transformers | 21
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem | Peiwang Tang and Xianchao Zhang | Graph Neural Networks + Transformers | 6
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems | Matteo Gallici, Mario Martin and Ivan Masmitja | Graph Neural Networks + Transformers | 22
Intelligent Onboard Routing in Stochastic Dynamic Environments using Transformers | Rohit Chowdhury, Raswanth Murugan and Deepak Subramani | Graph Neural Networks + Transformers | 7
Characterizations of Sequential Valuation Rules | Chris Dong and Patrick Lederer | Voting I | 11
Collecting, Classifying, Analyzing, and Using Real-World Ranking Data | Niclas Boehmer and Nathan Schaar | Voting I | 27
Margin of Victory for Weighted Tournament Solutions | Michelle Döring and Jannik Peters | Voting I | 12
Bribery Can Get Harder in Structured Multiwinner Approval Election | Bartosz Kusek, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk and Dušan Knop | Voting I | 28
Strategyproof Social Decision Schemes on Super Condorcet Domains | Felix Brandt, Patrick Lederer and Sascha Tausch | Voting I | 13
Separating and Collapsing Electoral Control Types | Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David Narváez, Conor Taliancich and Henry B. Welles | Voting I | 29
The Distortion of Approval Voting with Runoff | Soroush Ebadian, Mohamad Latifian and Nisarg Shah | Voting I | 14
On the Complexity of the Two-Stage Majority Rule | Yongjie Yang | Voting II | 43
Fairness in Participatory Budgeting via Equality of Resources | Jan Maly, Simon Rey, Ulle Endriss and Martin Lackner | Voting II | 59
Free-Riding in Multi-Issue Decisions | Martin Lackner, Jan Maly and Oliviero Nardi | Voting II | 44
k-prize Weighted Voting Game | Wei-Chen Lee, David Hyland, Alessandro Abate, Edith Elkind, Jiarui Gan, Julian Gutierrez, Paul Harrenstein and Michael Wooldridge | Voting II | 60
Computing the Best Policy That Survives a Vote | Andrei Constantinescu and Roger Wattenhofer | Voting II | 45
Voting by Axioms | Marie Christin Schmidtlein and Ulle Endriss | Voting II | 61
A Hotelling-Downs game for strategic candidacy with binary issues | Javier Maass, Vincent Mousseau and Anaëlle Wilczynski | Voting II | 46
Voting with Limited Energy: A Study of Plurality and Borda | Zoi Terzopoulou | Voting II | 62
Revealed multi-objective utility aggregation in human driving | Atrisha Sarkar, Kate Larson and Krzysztof Czarnecki | Multi-objective Planning and Learning | 1
A Brief Guide to Multi-Objective Reinforcement Learning and Planning | Conor F Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Kallstrom, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowe, Gabriel Ramos, Marcello Restelli, Peter Vamplew and Diederik M. Roijers | Multi-objective Planning and Learning | 2
Welfare and Fairness in Multi-objective Reinforcement Learning | Ziming Fan, Nianli Peng, Muhang Tian and Brandon Fain | Multi-objective Planning and Learning | 3
Preference-Based Multi-Objective Multi-Agent Path Finding | Florence Ho and Shinji Nakadai | Multi-objective Planning and Learning | 17
Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization | Lucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé and Bruno C. da Silva | Multi-objective Planning and Learning | 18
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility | Zhaori Guo, Timothy Norman and Enrico Gerding | Multi-objective Planning and Learning | 19
Worst-Case Adaptive Submodular Cover | Jing Yuan and Shaojie Tang | Deep Learning | 33
Minimax Strikes Back | Quentin Cohen-Solal and Tristan Cazenave | Deep Learning | 49
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning | Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew Taylor, Mykola Pechenizkiy and Decebal Constantin Mocanu | Deep Learning | 34
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning | Woojun Kim and Youngchul Sung | Deep Learning | 50
Learning Rewards to Optimize Global Performance Metrics in Deep Reinforcement Learning | Junqi Qian, Paul Weng and Chenmien Tan | Deep Learning | 35
A Deep Reinforcement Learning Approach for Online Parcel Assignment | Hao Zeng, Qiong Wu, Kunpeng Han, Junying He and Haoyuan Hu | Deep Learning | 51
CoRaL: Continual Representation Learning for Overcoming Catastrophic Forgetting | Mohammad Yasar and Tariq Iqbal | Deep Learning | 36
FedMM: A Communication Efficient Solver for Federated Adversarial Domain Adaptation | Yan Shen, Jian Du, Han Zhao, Zhanghexuan Ji, Chunwei Ma and Mingchen Gao | Adversarial Learning + Social Networks + Causal Graphs | 146
Adversarial Link Prediction in Spatial Networks | Michał Tomasz Godziszewski, Yevgeniy Vorobeychik and Tomasz Michalak | Adversarial Learning + Social Networks + Causal Graphs | 142
Distributed Mechanism Design in Social Networks | Haoxin Liu, Yao Zhang and Dengji Zhao | Adversarial Learning + Social Networks + Causal Graphs | 143
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks | Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla and Goran Radanovic | Adversarial Learning + Social Networks + Causal Graphs | 144
How to Turn an MAS into a Graphical Causal Model | H. Van Dyke Parunak | Adversarial Learning + Social Networks + Causal Graphs | 145
Agent-based Simulation of District-based Elections with Heterogeneous Populations | Adway Mitra | Modelling and Simulation of Societies | 97
Deep Learning-based Spatially Explicit Emulation of an Agent-Based Simulator for Pandemic in a City | Varun Madhavan, Adway Mitra and Partha Pratim Chakrabarti | Modelling and Simulation of Societies | 98
A Decentralized Agent-Based Task Scheduling Framework for Handling Uncertain Events in Fog Computing | Yikun Yang, Fenghui Ren and Minjie Zhang | Modelling and Simulation of Societies | 99
Co-evolution of social and non-social guilt in structured populations | Theodor Cimpeanu, Luís Moniz Pereira and The Anh Han | Modelling and Simulation of Societies | 100
Phantom - A RL-driven Multi-Agent Framework to Model Complex Systems | Leo Ardon, Jared Vann, Deepeka Garg, Thomas Spooner and Sumitra Ganesh | Modelling and Simulation of Societies | 101
Simulation Model with Side Trips at a Large-Scale Event | Ryo Niwa, Shunki Takami, Shusuke Shigenaka, Masaki Onishi, Wataru Naito and Tetsuo Yasutaka | Modelling and Simulation of Societies | 102
The Price of Algorithmic Pricing: Investigating Collusion in a Market Simulation with AI Agents | Michael Schlechtinger, Damaris Kosack, Heiko Paulheim, Thomas Fetzer and Franz Krause | Modelling and Simulation of Societies | 103
Crowd simulation incorporating a route choice model and similarity evaluation using real large-scale data | Ryo Nishida, Masaki Onishi and Koichi Hashimoto | Modelling and Simulation of Societies | 104
Capturing Hiders with Moving Obstacles | Ayushman Panda and Kamalakar Karlapalem | Modelling and Simulation of Societies | 105
COBAI: a generic agent-based model of human behaviors centered on contexts and interactions | Maëlle Beuret, Irene Foucherot, Christian Gentil and Joël Savelli | Modelling and Simulation of Societies | 106
Learning Solutions in Large Economic Networks using Deep Multi-Agent Reinforcement Learning | Michael Curry, Alexander Trott, Soham Phade, Yu Bai and Stephan Zheng | Modelling and Simulation of Societies | 107
Opinion Dynamics in Populations of Converging and Polarizing Agents | Anshul Toshniwal and Fernando P. Santos | Modelling and Simulation of Societies | 108
On a Voter Model with Context-Dependent Opinion Adoption | Luca Becchetti, Vincenzo Bonifaci, Emilio Cruciani and Francesco Pasquale | Modelling and Simulation of Societies | 109
Cognitive Bias-Aware Dissemination Strategies for Opinion Dynamics with External Information Sources | Abdullah Al Maruf, Luyao Niu, Bhaskar Ramasubramanian, Andrew Clark and Radha Poovendran | Modelling and Simulation of Societies | 110
Representation-based Individual Fairness in k-clustering | Debajyoti Kar, Mert Kosan, Debmalya Mandal, Sourav Medya, Arlei Silva, Palash Dey and Swagato Sanyal | Coordination, Organisations, Institutions, and Norms | 147
S&F: Sources and Facts Reliability Evaluation Method | Quentin Elsaesser, Patricia Everaere and Sébastien Konieczny | Coordination, Organisations, Institutions, and Norms | 148
Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization | Xiangsen Wang and Xianyuan Zhan | Coordination, Organisations, Institutions, and Norms | 149
Learning Optimal “Pigovian Tax” in Sequential Social Dilemmas | Yun Hua, Shang Gao, Wenhao Li, Bo Jin, Xiangfeng Wang and Hongyuan Zha | Coordination, Organisations, Institutions, and Norms | 150
PACCART: Reinforcing Trust in Multiuser Privacy Agreement Systems | Daan Di Scala and Pinar Yolum | Coordination, Organisations, Institutions, and Norms | 151
Explain to Me: Towards Understanding Privacy Decisions | Gonul Ayci, Arzucan Ozgur, Murat Sensoy and Pinar Yolum | Coordination, Organisations, Institutions, and Norms | 152
The Resilience Game: A New Formalization of Resilience for Groups of Goal-Oriented Autonomous Agents | Michael A. Goodrich, Jennifer Leaf, Julie A. Adams and Matthias Scheutz | Coordination, Organisations, Institutions, and Norms | 153
Differentially Private Network Data Collection for Influence Maximization | M. Amin Rahimian, Fang-Yi Yu and Carlos Hurtado | Coordination, Organisations, Institutions, and Norms | 154
Inferring Implicit Trait Preferences from Demonstrations of Task Allocation in Heterogeneous Teams | Vivek Mallampati and Harish Ravichandar | Coordination, Organisations, Institutions, and Norms | 155
From Scripts to RL Environments: Towards Imparting Commonsense Knowledge to RL Agents | Abhinav Joshi, Areeb Ahmad, Umang Pandey and Ashutosh Modi | Learning and Adaptation | 37
Hierarchical Reinforcement Learning with Attention Reward | Sihong Luo, Jinghao Chen, Zheng Hu, Chunhong Zhang and Benhui Zhuang | Learning and Adaptation | 38
FedHQL: Federated Heterogeneous Q-Learning | Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan and Bryan Kian Hsiang Low | Learning and Adaptation | 39
Know Your Enemy: Identifying and Adapting to Adversarial Attacks in Deep Reinforcement Learning | Seán Caulfield Curley, Karl Mason and Patrick Mannion | Learning and Adaptation | 40
Transformer Actor-Critic with Regularization: Automated Stock Trading using Reinforcement Learning | Namyeong Lee and Jun Moon | Learning and Adaptation | 41
Model-Based Actor-Critic for Multi-Objective Reinforcement Learning with Dynamic Utility Functions | Johan Källström and Fredrik Heintz | Learning and Adaptation | 53
Relaxed Exploration Constrained Reinforcement Learning | Shahaf Shperberg, Bo Liu and Peter Stone | Learning and Adaptation | 54
Causality Detection for Efficient Multi-Agent Reinforcement Learning | Rafael Pina, Varuna De Silva and Corentin Artaud | Learning and Adaptation | 55
Diversity Through Exclusion (DTE): Niche Identification for Reinforcement Learning through Value-Decomposition | Peter Sunehag, Alexander Vezhnevets, Edgar Duéñez-Guzmán, Igor Mordatch and Joel Leibo | Learning and Adaptation | 56
Temporally Layered Architecture for Adaptive, Distributed and Continuous Control | Devdhar Patel, Joshua Russell, Francesca Walsh, Tauhidur Rahman, Terrence Sejnowski and Hava Siegelmann | Learning and Adaptation | 57
Multi-objective Reinforcement Learning in Factored MDPs with Graph Neural Networks | Marc Vincent, Amal El Fallah Seghrouchni, Vincent Corruble, Narayan Bernardin, Rami Kassab and Frédéric Barbaresco | Learning and Adaptation | 65
An Analysis of Connections Between Regret Minimization and Actor Critic Methods in Cooperative Settings | Chirag Chhablani and Ian Kash | Learning and Adaptation | 66
Attention-Based Recurrency for Multi-Agent Reinforcement Learning under State Uncertainty | Thomy Phan, Fabian Ritz, Jonas Nüßlein, Michael Kölle, Thomas Gabor and Claudia Linnhoff-Popien | Learning and Adaptation | 67
A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication | Nancirose Piazza and Vahid Behzadan | Learning and Adaptation | 68
Defensive Collaborative Learning: Protecting Objective Privacy in Data Sharing | Cynthia Huang and Pascal Poupart | Learning and Adaptation | 69
Neuro-Symbolic World Models for Adapting to Open World Novelty | Jonathan Balloch, Zhiyu Lin, Robert Wright, Mustafa Hussain, Aarun Srinivas, Xiangyu Peng, Julia Kim and Mark Riedl | Learning and Adaptation | 70
Modeling Dynamic Environments with Scene Graph Memory | Andrey Kurenkov, Michael Lingelbach, Tanmay Agarwal, Chengshu Li, Emily Jin, Ruohan Zhang, Fei-Fei Li, Jiajun Wu, Silvio Savarese and Roberto Martín-Martín | Learning and Adaptation | 71
Group Fair Clustering Revisited -- Notions and Efficient Algorithm | Shivam Gupta, Ganesh Ghalme, Narayanan C. Krishnan and Shweta Jain | Learning and Adaptation | 72
LTL-Based Non-Markovian Inverse Reinforcement Learning | Alvaro Velasquez, Ashutosh Gupta, Ashutosh Trivedi, Krishna S, Mohammad Afzal and Sankalp Gambhir | Learning and Adaptation | 73
The Parameterized Complexity of Welfare Guarantees in Schelling Segregation | Argyrios Deligkas, Eduard Eiben and Tiger-Lily Goldsmith | Social Choice and Cooperative Game Theory | 15
Fair Chore Division under Binary Supermodular Costs | Siddharth Barman, Vishnu Narayan and Paritosh Verma | Social Choice and Cooperative Game Theory | 16
Deliberation as Evidence Disclosure: A Tale of Two Protocol Types | Julian Chingoma and Adrian Haret | Social Choice and Cooperative Game Theory | 30
How Does Fairness Affect the Complexity of Gerrymandering? | Sandip Banerjee, Rajesh Chitnis and Abhiruk Lahiri | Social Choice and Cooperative Game Theory | 31
Individual-Fair and Group-Fair Social Choice Rules under Single-Peaked Preferences | Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy and Yadari Narahari | Social Choice and Cooperative Game Theory | 32
Maximin share Allocations for Assignment Valuations | Pooja Kulkarni, Rucha Kulkarni and Ruta Mehta | Social Choice and Cooperative Game Theory | 47
Computational Complexity of Verifying the Group No-show Paradox | Farhad Mohsin, Qishen Han, Sikai Ruan, Pin-Yu Chen, Francesca Rossi and Lirong Xia | Social Choice and Cooperative Game Theory | 48
Optimal Capacity Modification for Many-To-One Matching Problems | Jiehua Chen and Gergely Csáji | Social Choice and Cooperative Game Theory | 63
Learning to Explain Voting Rules | Inwon Kang, Qishen Han and Lirong Xia | Social Choice and Cooperative Game Theory | 64
MMS Allocations of Chores with Connectivity Constraints: New Methods and New Results | Mingyu Xiao, Guoliang Qiu and Sen Huang | Social Choice and Cooperative Game Theory | 75
Group Fairness in Peer Review | Haris Aziz, Evi Micha and Nisarg Shah | Social Choice and Cooperative Game Theory | 76
Altruism in Facility Location Problems | Houyu Zhou, Hau Chan and Minming Li | Social Choice and Cooperative Game Theory | 77
Transfer Learning based Agent for Automated Negotiation | Siqi Chen, Qisong Sun, Heng You, Tianpei Yang and Jianye Hao | Markets, Auctions, and Non-Cooperative Game Theory | 78
Single-Peaked Jump Schelling Games | Tobias Friedrich, Pascal Lenzner, Louise Molitor and Lars Seifert | Markets, Auctions, and Non-Cooperative Game Theory | 89
Defining deception in structural causal games | Francis Rhys Ward, Francesco Belardinelli and Francesca Toni | Markets, Auctions, and Non-Cooperative Game Theory | 80
Game Model Learning for Mean Field Games | Yongzhao Wang and Michael Wellman | Markets, Auctions, and Non-Cooperative Game Theory | 156
Two-phase security games | Andrzej Nagórko, Paweł Ciosmak and Tomasz Michalak | Markets, Auctions, and Non-Cooperative Game Theory | 157
Stationary Equilibrium of Mean Field Games with Congestion-dependent Sojourn Times | Costas Courcoubetis and Antonis Dimakis | Markets, Auctions, and Non-Cooperative Game Theory | 158
Last-mile Collaboration: A Decentralized Mechanism with Bounded Performance Guarantees and Implementation Strategies | Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan and Panagiotis Angeloudis | Markets, Auctions, and Non-Cooperative Game Theory | 159
Deep Learning-Powered Iterative Combinatorial Auctions with Active Learning | Benjamin Estermann, Stefan Kramer, Roger Wattenhofer and Ye Wang | Markets, Auctions, and Non-Cooperative Game Theory | 160
Revenue Maximization Mechanisms for an Uninformed Mediator with Communication Abilities | Zhikang Fan and Weiran Shen | Markets, Auctions, and Non-Cooperative Game Theory | 161

Demo Sessions

Time | Title | Authors

Day 1 (Wed) Demo Sessions: Morning

Day 1 (Wed) Morning | TDD for AOP: Test-Driven Development for Agent-Oriented Programming | Cleber Amaral, Jomi Fred Hubner and Timotheus Kampik
Demonstrating Performance Benefits of Human-Swarm Teaming | William Hunt, Jack Ryan, Ayodeji O Abioye, Sarvapali D Ramchurn and Mohammad D Soorati
Robust JaCaMo Applications via Exceptions and Accountability | Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi
Real Time Gesturing in Embodied Agents for Dynamic Content Creation | Hazel Watson-Smith, Felix Marcon Swadel, Jo Hutton, Kirstin Marcon, Mark Sagar, Shane Blackett, Tiago Ribeiro, Travers Biddle and Tim Wu
A Web-based Tool for Detecting Argument Validity and Novelty | Sandrine Chausson, Ameer Saadat-Yazdi, Xue Li, Jeff Z. Pan, Vaishak Belle, Nadin Kokciyan and Bjorn Ross

Day 1 (Wed) Demo Sessions: Afternoon

Day 1 (Wed) Afternoon | TDD for AOP: Test-Driven Development for Agent-Oriented Programming | Cleber Amaral, Jomi Fred Hubner and Timotheus Kampik
Demonstrating Performance Benefits of Human-Swarm Teaming | William Hunt, Jack Ryan, Ayodeji O Abioye, Sarvapali D Ramchurn and Mohammad D Soorati
The influence maximisation game | Sukankana Chakraborty, Sebastian Stein, Ananthram Swami, Matthew Jones and Lewis Hill
Interaction-Oriented Programming: Intelligent, Meaning-Based Multiagent Systems | Amit Chopra, Samuel Christie and Munindar P. Singh
Improvement and Evaluation of the Policy Legibility in Reinforcement Learning | Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao and Xiaohan Huang
Visualizing Logic Explanations for Social Media Moderation | Marc Roig Vilamala, Dave Braines, Federico Cerutti and Alun Preece

Day 2 (Thu) Demo Sessions: Morning

Day 2 (Thu) Morning | Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance | Mara Cairo, Graham Doerksen, Bevin Eldaphonse, Johannes Gunther, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Payam Mousavi, Sean Murphy, Laura Petrich, Sahir, Jubair Sheikh, Talat Syed and Matthew E. Taylor
Hiking up that HILL with Cogment-Verse: Train & operate multi-agent systems learning from Humans | Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars and Matthew E. Taylor
The influence maximisation game | Sukankana Chakraborty, Sebastian Stein, Ananthram Swami, Matthew Jones and Lewis Hill
Interaction-Oriented Programming: Intelligent, Meaning-Based Multiagent Systems | Amit Chopra, Samuel Christie and Munindar P. Singh
Improvement and Evaluation of the Policy Legibility in Reinforcement Learning | Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao and Xiaohan Huang
Visualizing Logic Explanations for Social Media Moderation | Marc Roig Vilamala, Dave Braines, Federico Cerutti and Alun Preece

Day 2 (Thu) Demo Sessions: Afternoon

Day 2 (Thu) Afternoon | Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance | Mara Cairo, Graham Doerksen, Bevin Eldaphonse, Johannes Gunther, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Payam Mousavi, Sean Murphy, Laura Petrich, Sahir, Jubair Sheikh, Talat Syed and Matthew E. Taylor
Hiking up that HILL with Cogment-Verse: Train & operate multi-agent systems learning from Humans | Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars and Matthew E. Taylor
Robust JaCaMo Applications via Exceptions and Accountability | Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi
Real Time Gesturing in Embodied Agents for Dynamic Content Creation | Hazel Watson-Smith, Felix Marcon Swadel, Jo Hutton, Kirstin Marcon, Mark Sagar, Shane Blackett, Tiago Ribeiro, Travers Biddle and Tim Wu
A Web-based Tool for Detecting Argument Validity and Novelty | Sandrine Chausson, Ameer Saadat-Yazdi, Xue Li, Jeff Z. Pan, Vaishak Belle, Nadin Kokciyan and Bjorn Ross