Programme
Download the AAMAS 2023 Booklet
Overview
Time | Monday May 29 | Tuesday May 30 | Wednesday May 31 | Thursday June 1 | Friday June 2 |
8:00-8:30 | Registration opens (CentrEd at ExCeL, Level 0) | Registration opens (CentrEd at ExCeL, Level 0) | Registration opens (South Halls Entrance S1) | Registration opens (South Halls Entrance S1) | Registration opens (South Halls Entrance S1) |
8:30-8:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Opening Session | ||
8:45-9:00 | Agents and the Industry Panel (Panellists: Kate Larson, Peter Stone, Milind Tambe, and Manuela Veloso) |
9:00-10:00 | Keynote: Karl Tuyls | Keynote: Edith Elkind | |||
10:00-10:45 | Coffee Break | Coffee Break | Coffee Break + Poster Session 1 + Demo 1 | Coffee Break + Poster Session 3 + Demo 3 + Card Games Competition | Coffee Break + Poster Session 5 |
10:45-12:30 | DC + Workshops + Tutorials | Workshops + Tutorials | Technical Sessions 1 | Technical Sessions 3 + Card Games Competition | Technical Sessions 5 |
12:30-14:00 | Lunch Break | Lunch Break | Lunch Break + Diversity Event (Platinum Suite 5-7) | Lunch Break | Lunch Break |
14:00-15:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Technical Sessions 2 | Technical Sessions 4, Victor Lesser Dissertation Award Talk: Jiaoyang Li + Negotiating Agents Competition | Technical Sessions 6 |
15:45-16:30 | Coffee Break | Coffee Break | Coffee Break + Poster Session 2 + Demo 2 | Coffee Break + Poster Session 4 + Demo 4 + Negotiating Agents Competition | Coffee Break + Poster Session 6 |
16:30-16:45 | DC + Workshops + Tutorials | Workshops + Tutorials | Keynote: Yejin Choi | Award Session | Community Meeting + Closing Session |
16:45-17:30 | Keynote: Iain Couzin | ||||
17:30-17:45 | |||||
17:45-18:30 | |||||
18:30- | Opening Reception (South Halls Entrance S2) | Banquet (The Brewery) |
See also the workshop and tutorial pages.
You can find the Doctoral Consortium (DC) programme details here.
Detailed Schedule
Technical Sessions Overview
Technical Sessions
Time | Title | Authors |
Blue Sky (Chair: Michael Winikoff)
Day 3 (Fri), 10:45 - 12:30 | Models of Anxiety for Agent Deliberation: The Benefits of Anxiety-Sensitive Agents | Arvid Horned and Loïs Vanhée |
Social Choice Around Decentralized Autonomous Organizations: On the Computational Social Choice of Digital Communities | Nimrod Talmon | |
Value Inference in Sociotechnical Systems | Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar and Pradeep K. Murukannaiah | |
Presenting Multiagent Challenges in Team Sports Analytics | David Radke and Alexi Orchard | |
Communication Meaning: Foundations and Directions for Systems Research | Amit Chopra and Samuel Christie | |
The Rule–Tool–User Nexus in Digital Collective Decisions | Zoi Terzopoulou, Marijn A. Keijzer, Gogulapati Sreedurga and Jobst Heitzig | |
Epistemic Side Effects: An AI Safety Problem | Toryn Q. Klassen, Parand Alizadeh Alamdari and Sheila A. McIlraith | |
Citizen-Centric Multiagent Systems | Sebastian Stein and Vahid Yazdanpanah | |
Engineering Multiagent Systems (Chair: Louise Dennis)
Day 2 (Thu), 10:45 - 12:30 | Kiko: Programming Agents to Enact Interaction Models | Samuel Christie, Munindar P. Singh and Amit Chopra |
CraftEnv: A Flexible Collective Robotic Construction Environment for Multi-Agent Reinforcement Learning | Rui Zhao, Xu Liu, Yizheng Zhang, Minghao Li, Cheng Zhou, Shuai Li and Lei Han | |
Feedback-Guided Intention Scheduling for BDI Agents | Michael Dann, John Thangarajah and Minyi Li | |
A Behaviour-Driven Approach for Testing Requirements via User and System Stories in Agent Systems | Sebastian Rodriguez, John Thangarajah and Michael Winikoff | |
ML-MAS: a Hybrid AI Framework for Self-Driving Vehicles | Hilal Al Shukairi and Rafael C. Cardoso | |
Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems | Danai Vachtsevanou, Andrei Ciortea, Simon Mayer and Jérémy Lemée | |
MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue Systems | Débora Cristina Engelmann, Alison R. Panisson, Renata Vieira, Jomi Fred Hübner, Viviana Mascardi and Rafael H. Bordini | |
Mandrake: Multiagent Systems as a Basis for Programming Fault-Tolerant Decentralized Applications | Samuel Christie, Amit Chopra and Munindar P. Singh | |
Multiagent Path Finding (Chair: Jiaoyang Li)
Day 2 (Thu), 10:45 - 12:30 | Anonymous Multi-Agent Path Finding with Individual Deadlines | Gilad Fine, Dor Atzmon and Noa Agmon |
Learn to solve the min-max multiple traveling salesmen problem with reinforcement learning | Junyoung Park, Changhyun Kwon and Jinkyoo Park | |
Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation | Hikaru Asano, Ryo Yonetani, Mai Nishimura and Tadashi Kozuno | |
Improved Complexity Results and an Efficient Solution for Connected Multi-Agent Path Finding | Isseïnie Calviac, Ocan Sankur and Francois Schwarzentruber | |
Optimally Solving the Multiple Watchman Route Problem with Heuristic Search | Yaakov Livne, Dor Atzmon, Shawn Skyler, Eli Boyarski, Amir Shapiro and Ariel Felner | |
Distributed Planning with Asynchronous Execution with Local Navigation for Multi-agent Pickup and Delivery Problem | Yuki Miyashita, Tomoki Yamauchi and Toshiharu Sugawara | |
Energy-aware UAV Path Planning with Adaptive Speed | Jonathan Diller and Qi Han | |
Coordination of Multiple Robots along Given Paths with Bounded Junction Complexity | Mikkel Abrahamsen, Tzvika Geft, Dan Halperin and Barak Ugav | |
Innovative Applications (Chair: Shih-Fen Cheng)
Day 2 (Thu), 14:00 - 15:45 | Efficient Interactive Recommendation with Huffman Tree-based Policy Learning | Longxiang Shi, Zilin Zhang, Shoujin Wang, Binbin Zhou, Minghui Wu, Cheng Yang and Shijian Li |
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane | Shivendra Agrawal, Suresh Nayak, Ashutosh Naik and Bradley Hayes | |
Preference-Aware Delivery Planning for Last-Mile Logistics | Qian Shao and Shih-Fen Cheng | |
Multi-Agent Reinforcement Learning with Safety Layer for Active Voltage Control | Yufeng Shi, Mingxiao Feng, Minrui Wang, Wengang Zhou and Houqiang Li | |
Multi-agent Signalless Intersection Management with Dynamic Platoon Formation | Phuriwat Worrawichaipat, Enrico Gerding, Ioannis Kaparias and Sarvapali Ramchurn | |
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control | Harsh Goel, Yifeng Zhang, Mehul Damani and Guillaume Sartoretti | |
Model-Based Reinforcement Learning for Auto-Bidding in Display Advertising | Shuang Chen, Qisen Xu, Liang Zhang, Yongbo Jin, Wenhao Li and Linjian Mo | |
Human-Agent Teams (Chair: Birgit Lugrin)
Day 1 (Wed), 10:45 - 12:30 | Establishing Shared Query Understanding in an Open Multi-Agent System | Nikolaos Kondylidis, Ilaria Tiddi and Annette ten Teije |
Communicating Agent Intentions for Human-Agent Decision Making under Uncertainty | Julie Porteous, Alan Lindsay and Fred Charles | |
Trusting artificial agents: communication trumps performance | Marin Le Guillou, Laurent Prévot and Bruno Berberian | |
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their Behavior | Kate Candon, Jesse Chen, Yoony Kim, Zoe Hsu, Nathan Tsoi and Marynel Vázquez | |
On Subset Selection of Multiple Humans To Improve Human-AI Team Accuracy | Sagalpreet Singh, Shweta Jain and Shashi Shekhar Jha | |
Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations | Lujain Ibrahim, Mohammad M. Ghassemi and Tuka Alhanai | |
Automated Task-Time Interventions to Improve Teamwork using Imitation Learning | Sangwon Seo, Bing Han and Vaibhav V Unhelkar | |
Should my agent lie for me? A study on humans' attitudes towards deceptive AI | Stefan Sarkadi, Peidong Mei and Edmond Awad | |
Knowledge Representation and Reasoning I (Chair: Alessio Lomuscio)
Day 1 (Wed), 10:45 - 12:30 | A Logic of Only-Believing over Arbitrary Probability Distributions | Qihui Feng, Daxin Liu, Vaishak Belle and Gerhard Lakemeyer |
A Deontic Logic of Knowingly Complying | Carlos Areces, Valentin Cassano, Pablo Castro, Raul Fervari and Andrés R. Saravia | |
Learning Logic Specifications for Soft Policy Guidance in POMCP | Giulio Mazzi, Daniele Meli, Alberto Castellini and Alessandro Farinelli | |
Strategic (Timed) Computation Tree Logic | Jaime Arias, Wojciech Jamroga, Wojciech Penczek, Laure Petrucci and Teofil Sidoruk | |
Attention! Dynamic Epistemic Logic Models of (In)attentive Agents | Gaia Belardinelli and Thomas Bolander | |
(Arbitrary) Partial Communication | Rustam Galimullin and Fernando R. Velazquez-Quesada | |
Epistemic Abstract Argumentation Framework: Formal Foundations, Computation and Complexity | Gianvincenzo Alfano, Sergio Greco, Francesco Parisi and Irina Trubitsyna | |
Actions, Continuous Distributions and Meta-Beliefs | Vaishak Belle | |
Knowledge Representation and Reasoning II (Chair: Brian Logan)
Day 1 (Wed), 14:00 - 15:45 | Provable Optimization of Quantal Response Leader-Follower Games with Exponentially Large Action Spaces | Jinzhao Li, Daniel Fink, Christopher Wood, Carla P. Gomes and Yexiang Xue |
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory | Masoud Tabatabaei and Wojciech Jamroga | |
Synthesis of Resource-Aware Controllers Against Rational Agents | Rodica Condurache, Catalin Dima, Youssouf Oualhadj and Nicolas Troquard | |
Computationally Feasible Strategies | Catalin Dima and Wojtek Jamroga | |
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information | Angelo Ferrando and Vadim Malvone | |
Mechanism Design (Chair: Minming Li)
Day 3 (Fri), 14:00 - 15:45 | Non-Obvious Manipulability for Single-Parameter Agents and Bilateral Trade | Thomas Archbold, Bart de Keijzer and Carmine Ventre |
Mechanism Design for Improving Accessibility to Public Facilities | Hau Chan and Chenhao Wang | |
Explicit Payments for Obviously Strategyproof Mechanisms | Diodato Ferraioli and Carmine Ventre | |
Bilevel Entropy based Mechanism Design for Balancing Meta in Video Games | Sumedh Pendurkar, Chris Chow, Luo Jie and Guni Sharon | |
IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas | Bengisu Guresti, Abdullah Vanlioglu and Nazim Kemal Ure | |
Settling the Distortion of Distributed Facility Location | Aris Filos-Ratsikas, Panagiotis Kanellopoulos, Alexandros Voudouris and Rongsen Zhang | |
Cost Sharing under Private Valuation and Connection Control | Tianyi Zhang, Junyu Zhang, Sizhe Gu and Dengji Zhao | |
Facility Location Games with Thresholds | Houyu Zhou, Guochuan Zhang, Lili Mei and Minming Li | |
Planning (Chair: Filippo Bistaffa)
Day 1 (Wed), 10:45 - 12:30 | Ask and You Shall be Served: Representing and Solving Multi-Agent Optimization Problems with Service Requesters and Providers | Maya Lavie, Tehila Caspi, Omer Lev and Roie Zivan |
Fairness Driven Efficient Algorithms for Sequenced Group Trip Planning Query Problem | Napendra Solanki, Shweta Jain, Suman Banerjee and Yayathi Pavan Kumar S | |
Domain-Independent Deceptive Planning | Adrian Price, Ramon Fraga Pereira, Peta Masters and Mor Vered | |
CAMS: Collision Avoiding Max-Sum for Mobile Sensor Teams | Arseni Pertzovskiy, Roie Zivan and Noa Agmon | |
Risk-Constrained Planning for Multi-Agent Systems with Shared Resources | Anna Gautier, Marc Rigter, Bruno Lacerda, Nick Hawes and Michael Wooldridge | |
Quantitative Planning with Action Deception in Concurrent Stochastic Games | Chongyang Shi, Shuo Han and Jie Fu | |
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs | Stelios Triantafyllou and Goran Radanovic | |
On-line Estimators for Ad-hoc Task Execution: Learning types and parameters of teammates for effective teamwork | Matheus Aparecido Do Carmo Alves, Elnaz Shafipour Yourdshahi, Amokh Varma, Leandro Soriano Marcolino, Jó Ueyama and Plamen Angelov | |
Reinforcement Learning (Chair: Diederik M. Roijers)
Day 2 (Thu), 10:45 - 12:30 | Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning | Durgesh Kalwar, Omkar Shelke, Somjit Nath, Hardik Meisheri and Harshad Khadilkar |
FedFormer: Contextual Federation with Attention in Reinforcement Learning | Liam Hebert, Lukasz Golab, Pascal Poupart and Robin Cohen | |
Diverse Policy Optimization for Structured Action Space | Wenhao Li, Baoxiang Wang, Shanchao Yang and Hongyuan Zha | |
Enhancing Reinforcement Learning Agents with Local Guides | Paul Daoudi, Bogdan Robu, Christophe Prieur, Ludovic Dos Santos and Merwan Barlier | |
Scalar reward is not enough | Peter Vamplew, Ben Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik Roijers, Conor Hayes, Friedrik Hentz, Patrick Mannion, Pieter Libin, Richard Dazeley and Cameron Foale | |
Targeted Search Control in AlphaZero for Effective Policy Improvement | Alexandre Trudeau and Michael Bowling | |
Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics Models | Tom Haider, Karsten Roscher, Felippe Schmoeller da Roza and Stephan Günnemann | |
Knowledge Compilation for Constrained Combinatorial Action Spaces in Reinforcement Learning | Jiajing Ling, Moritz Lukas Schuler, Akshat Kumar and Pradeep Varakantham | |
Robotics (Chair: Francesco Amigoni)
Day 2 (Thu), 14:00 - 15:45 | Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation | Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding and Xiaowei Huang |
Byzantine Resilience at Swarm Scale: A Decentralized Blocklist from Inter-robot Accusations | Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao Li | |
Stigmergy-based, Dual-Layer Coverage of Unknown Regions | Ori Rappel, Michael Amir and Alfred Bruckstein | |
Mitigating Imminent Collision for Multi-robot Navigation: A TTC-force Reward Shaping Approach | Jinlin Chen, Jiannong Cao, Zhiqin Cheng and Wei Li | |
Safe Deep Reinforcement Learning by Verifying Task-Level Properties | Enrico Marchesini, Luca Marzari, Alessandro Farinelli and Christopher Amato | |
Decentralized Safe Navigation for Multi-agent Systems via Risk-aware Weighted Buffered Voronoi Cells | Yiwei Lyu, John Dolan and Wenhao Luo | |
Heterogeneous Multi-Robot Reinforcement Learning | Matteo Bettini, Ajay Shankar and Amanda Prorok | |
Gathering of Anonymous Agents | John Augustine, Arnhav Datar and Nischith Shadagopan M N | |
Matching (Chair: Swaprava Nath)
Day 2 (Thu), 10:45 - 12:30 | Best of Both Worlds Fairness under Entitlements | Haris Aziz, Aditya Ganguly and Evi Micha |
Probabilistic Rationing with Categorized Priorities: Processing Reserves Fairly and Efficiently | Haris Aziz | |
Semi-Popular Matchings and Copeland Winners | Telikepalli Kavitha and Rohit Vaish | |
Host Community Respecting Refugee Housing | Dušan Knop and Šimon Schierreich | |
Online matching with delays and stochastic arrival times | Mathieu Mari, Michał Pawłowski, Runtian Ren and Piotr Sankowski | |
Adapting Stable Matchings to Forced and Forbidden Pairs | Niclas Boehmer and Klaus Heeger | |
Stable Marriage in Euclidean Space | Yinghui Wen, Zhongyi Zhang and Jiong Guo | |
A Map of Diverse Synthetic Stable Roommates Instances | Niclas Boehmer, Klaus Heeger and Stanisław Szufa | |
Social Networks (Chair: Tomasz Michalak)
Day 3 (Fri), 14:00 - 15:45 | Random Majority Opinion Diffusion: Stabilization Time, Absorbing States, and Influential Nodes | Ahad N. Zehmakan |
Axiomatic Analysis of Medial Centrality Measures | Wiktoria Kosny and Oskar Skibski | |
Online Influence Maximization under Decreasing Cascade Model | Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao and Shuai Li | |
Node Conversion Optimization in Multi-hop Influence Networks | Jie Zhang, Yuezhou Lv and Zihe Wang | |
Decentralized core-periphery structure in social networks accelerates cultural innovation in agent-based modeling | Jesse Milzman and Cody Moser | |
Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source | Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith and George Skretas | |
Enabling Imitation-Based Cooperation in Dynamic Social Networks | Jacques Bara, Paolo Turrini and Giulia Andrighetto | |
The Grapevine Web: Analysing the Spread of False Information in Social Networks with Corrupted Sources | Jacques Bara, Charlie Pilgrim, Paolo Turrini and Stanislav Zhydkov | |
Simulations (Chair: Samarth Swarup)
Day 3 (Fri), 10:45 - 12:30 | Differentiable Agent-based Epidemiology | Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Arnau Quera-Bofarull, Balaji Krishnamurthy, B. Aditya Prakash and Ramesh Raskar |
Social Distancing via Social Scheduling | Deepesh Kumar Lall, Garima Shakya and Swaprava Nath | |
Don't Simulate Twice: one-shot sensitivity analyses via automatic differentiation | Arnau Quera-Bofarull, Ayush Chopra, Joseph Aylett-Bullock, Carolina Cuesta-Lazaro, Ani Calinescu, Ramesh Raskar and Mike Wooldridge | |
Markov Aggregation for Speeding Up Agent-Based Movement Simulations | Bernhard Geiger, Alireza Jahani, Hussain Hussain and Derek Groen | |
Agent-Based Modeling of Human Decision-makers Under Uncertain Information During Supply Chain Shortages | Nutchanon Yongsatianchot, Noah Chicoine, Jacqueline Griffin, Ozlem Ergun and Stacy Marsella | |
Simulating panic amplification in crowds via a density-emotion interaction | Erik van Haeringen and Charlotte Gerritsen | |
Modelling Agent Decision Making in Agent-based Simulation - Analysis Using an Economic Technology Uptake Model | Franziska Klügl and Hildegunn Kyvik Nordås | |
Emotion contagion in agent-based simulations of crowds: a systematic review | Erik van Haeringen, Charlotte Gerritsen and Koen Hindriks | |
Multiagent Reinforcement Learning I (Chair: Frans Oliehoek)
Day 1 (Wed), 10:45 - 12:30 | Trust Region Bounds for Decentralized PPO Under Non-stationarity | Mingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann and Shimon Whiteson |
Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement | Jiachen Yang, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden Petersen, Daniel Faissol and Robert Anderson | |
Adaptive Learning Rates for Multi-Agent Reinforcement Learning | Jiechuan Jiang and Zongqing Lu | |
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning | Shanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang and Yong Liu | |
A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement Learning | Woojun Kim, Whiyoung Jung, Myungsik Cho and Youngchul Sung | |
Mediated Multi-Agent Reinforcement Learning | Dmitry Ivanov, Ilya Zisman and Kirill Chernyshev | |
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement Learning | Yucong Zhang and Chao Yu | |
TiZero: Mastering Multi-Agent Football with Curriculum Learning and Self-Play | Fanqi Lin, Shiyu Huang, Tim Pearce, Wenze Chen and Wei-Wei Tu | |
Multiagent Reinforcement Learning II (Chair: Maria Gini)
Day 1 (Wed), 14:00 - 15:45 | AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning | Xuefeng Wang, Xinran Li, Jiawei Shao and Jun Zhang |
Learning Structured Communication for Multi-Agent Reinforcement Learning | Junjie Sheng, Xiangfeng Wang, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung-Hui Chang and Hongyuan Zha | |
Model-based Sparse Communication in Multi-agent Reinforcement Learning | Shuai Han, Mehdi Dastani and Shihan Wang | |
Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL | Phillip J.K. Christoffersen, Andreas Haupt and Dylan Hadfield-Menell | |
The Benefits of Power Regularization in Cooperative Reinforcement Learning | Michelle Li and Michael Dennis | |
MAC-PO: Multi-Agent Experience Replay via Collective Priority Optimization | Yongsheng Mei, Hanhan Zhou, Tian Lan, Guru Venkataramani and Peng Wei | |
Self-Motivated Multi-Agent Exploration | Shaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu and De-Chuan Zhan | |
Sequential Cooperative Multi-Agent Reinforcement Learning | Yifan Zang, Jinmin He, Kai Li, Haobo Fu, Qiang Fu and Junliang Xing | |
Multiagent Reinforcement Learning III (Chair: Chris Amato)
Day 3 (Fri), 10:45 - 12:30 | Learning Inter-Agent Synergies in Asymmetric Multiagent Systems | Gaurav Dixit and Kagan Tumer |
Asymptotic Convergence and Performance of Multi-Agent Q-learning Dynamics | Aamal Hussain, Francesco Belardinelli and Georgios Piliouras | |
Model-based Dynamic Shielding for Safe and Efficient Multi-agent Reinforcement Learning | Wenli Xiao, Yiwei Lyu and John Dolan | |
Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning | Jihwan Oh, Joonkee Kim, Minchan Jeong and Se-Young Yun | |
Counter-Example Guided Policy Refinement in Multi-agent Reinforcement Learning | Briti Gangopadhyay, Pallab Dasgupta and Soumyajit Dey | |
Prioritized Tasks Mining for Multi-Task Cooperative Multi-Agent Reinforcement Learning | Yang Yu, Qiyue Yin, Junge Zhang and Kaiqi Huang | |
M3: Modularization for Multi-task and Multi-agent Offline Pre-training | Linghui Meng, Jingqing Ruan, Xuantang Xiong, Xiyun Li, Xi Zhang, Dengpeng Xing and Bo Xu | |
Equilibria and Complexities of Games (Chair: The Anh Han)
Day 1 (Wed), 10:45 - 12:30 | Equilibria and Convergence in Fire Sale Games | Nils Bertschinger, Martin Hoefer, Simon Krogmann, Pascal Lenzner, Steffen Schuldenzucker and Lisa Wilhelmi |
Bridging the Gap Between Single and Multi Objective Games | Willem Röpke, Carla Groenland, Roxana Radulescu, Ann Nowe and Diederik M. Roijers | |
Is Nash Equilibrium Approximator Learnable? | Zhijian Duan, Wenhan Huang, Dinghuai Zhang, Yali Du, Jun Wang, Yaodong Yang and Xiaotie Deng | |
Learning the Stackelberg Equilibrium in a Newsvendor Game | Nicolò Cesa-Bianchi, Tommaso Cesari, Takayuki Osogami, Marco Scarsini and Segev Wasserkrug | |
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity | Jiehua Chen, Gergely Csáji, Sanjukta Roy and Sofia Simola | |
Debt Transfers in Financial Networks: Complexity and Equilibria | Panagiotis Kanellopoulos, Maria Kyropoulou and Hao Zhou | |
A Study of Nash Equilibria in Multi-Objective Normal-Form Games | Willem Röpke, Diederik M. Roijers, Ann Nowe and Roxana Radulescu | |
Learning Properties in Simulation-Based Games | Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueria and Amy Greenwald | |
Humans and AI Agents (Chair: Reyhan Aydogan)
Day 1 (Wed), 14:00 - 15:45 | PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination | Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang and Yali Du |
Semi-Autonomous Systems with Contextual Competence Awareness | Saaduddin Mahmud, Connor Basich and Shlomo Zilberstein | |
Joint Engagement Classification using Video Augmentation Techniques for Multi-person HRI in the wild | Yubin Kim, Huili Chen, Sharifa Algohwinem, Cynthia Breazeal and Hae Won Park | |
Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning | Haochen Wu, Pedro Sequeira and David Pynadath | |
Persuading to Prepare for Quitting Smoking with a Virtual Coach: Using States and User Characteristics to Predict Behavior | Nele Albers, Mark A. Neerincx and Willem-Paul Brinkman | |
Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation | Yushan Qian, Bo Wang, Shangzhao Ma, Wu Bin, Shuo Zhang, Dongming Zhao, Kun Huang and Yuexian Hou | |
Generating Stylistic and Personalized Dialogues for Virtual Agents in Narratives | Weilai Xu, Fred Charles and Charlie Hargood | |
Reducing Racial Bias by Interacting with Virtual Agents: An Intervention in Virtual Reality | David Obremski, Ohenewa Bediako Akuffo, Leonie Lücke, Miriam Semineth, Sarah Tomiczek, Hanna-Finja Weichert and Birgit Lugrin | |
Planning + Task/Resource Allocation (Chair: Roie Zivan)
Day 1 (Wed), 14:00 - 15:45 | Online Coalitional Skill Formation | Saar Cohen and Noa Agmon |
Multi-Agent Consensus-based Bundle Allocation for Multi-mode Composite Tasks | Gauthier Picard | |
Allocation Problem in Remote Teleoperation: Online Matching with Offline Reusable Resources and Delayed Assignments | Osnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov and Sarit Kraus | |
Optimal Coalition Structures for Probabilistically Monotone Partition Function Games | Shaheen Fatima and Michael Wooldridge | |
A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task Density | Grace Cai, Noble Harasha and Nancy Lynch | |
Abstracting Noisy Robot Programs | Till Hofmann and Vaishak Belle | |
Structural Credit Assignment-Guided Coordinated MCTS: An Efficient and Scalable Method for Online Multiagent Planning | Qian Che, Wanyuan Wang, Fengchen Wang, Tianchi Qiao, Xiang Liu, Jiuchuan Jiang, Bo An and Yichuan Jiang | |
Strategic Planning for Flexible Agent Availability in Large Taxi Fleets | Rajiv Ranjan Kumar, Pradeep Varakantham and Shih-Fen Cheng | |
Learning in Games (Chair: Makoto Yokoo)
Day 2 (Thu), 10:45 - 12:30 | Empirical Game-Theoretic Analysis for Mean Field Games | Yongzhao Wang and Michael Wellman |
Differentiable Arbitrating in Zero-sum Markov Games | Jing Wang, Meichen Song, Feng Gao, Boyi Liu, Zhaoran Wang and Yi Wu | |
Learning Parameterized Families of Games | Madelyn Gatchel and Bryce Wiedenbeck | |
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games | Zelai Xu, Yancheng Liang, Chao Yu, Yu Wang and Yi Wu | |
Multiplicative Weights Updates for Extensive Form Games | Chirag Chhablani, Michael Sullins and Ian Kash | |
A Hybrid Framework of Reinforcement Learning and Physics-Informed Deep Learning for Spatiotemporal Mean Field Games | Xu Chen, Shuo Liu and Xuan Di | |
Adversarial Inverse Reinforcement Learning for Mean Field Games | Yang Chen, Libo Zhang, Jiamou Liu and Michael Witbrock | |
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories | Jingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire Tomlin and David Fridovich-Keil | |
Fair Allocations (Chair: Ulle Endriss)
Day 1 (Wed), 10:45 - 12:30 | Fair Allocation of Two Types of Chores | Haris Aziz, Jeremy Lindsay, Angus Ritossa and Mashbat Suzuki |
Fairly Dividing Mixtures of Goods and Chores under Lexicographic Preferences | Hadi Hosseini, Sujoy Sikdar, Rohit Vaish and Lirong Xia | |
Graphical House Allocation | Hadi Hosseini, Justin Payan, Rik Sengupta, Rohit Vaish and Vignesh Viswanathan | |
Approximation Algorithm for Computing Budget-Feasible EF1 Allocations | Jiarui Gan, Bo Li and Xiaowei Wu | |
Yankee Swap: a Fast and Simple Fair Allocation Mechanism for Matroid Rank Valuations | Vignesh Viswanathan and Yair Zick | |
Fairness in the Assignment Problem with Uncertain Priorities | Zeyu Shen, Zhiyi Wang, Xingyu Zhu, Brandon Fain and Kamesh Munagala | |
Possible Fairness for Allocating Indivisible Resources | Haris Aziz, Bo Li, Shiji Xing and Yu Zhou | |
Efficient Nearly-Fair Division with Capacity Constraints | Hila Shoshan, Noam Hazon and Erel Segal-Halevi | |
Fair Allocations + Public Goods Games (Chair: Hadi Hosseini)
Day 1 (Wed), 14:00 - 15:45 | Equitability and Welfare Maximization for Allocating Indivisible Items | Ankang Sun, Bo Chen and Xuan Vinh Doan |
Best of Both Worlds: Agents with Entitlements | Martin Hoefer, Marco Schmalhofer and Giovanna Varricchio | |
Mitigating Skewed Bidding for Conference Paper Assignment | Inbal Rozenzweig, Reshef Meir, Nicholas Mattei and Ofra Amir | |
Price of Anarchy in a Double-Sided Critical Distribution System | David Sychrovský, Jakub Černý, Sylvain Lichau and Martin Loebl | |
Improved EFX approximation guarantees under ordinal-based assumptions | Evangelos Markakis and Christodoulos Santorinaios | |
Assigning Agents to Increase Network-Based Neighborhood Diversity | Zirou Qiu, Andrew Yuan, Chen Chen, Madhav Marathe, S.S. Ravi, Daniel Rosenkrantz, Richard Stearns and Anil Vullikanti | |
Altruism, Collectivism and Egalitarianism: On a Variety of Prosocial Behaviors in Binary Networked Public Goods Games | Jichen Li, Xiaotie Deng, Yukun Cheng, Yuqi Pan, Xuanzhi Xia, Zongjun Yang and Jan Xie | |
The Role of Space, Density and Migration in Social Dilemmas | Jacques Bara, Fernando P. Santos and Paolo Turrini | |
Multi-Armed Bandits + Monte Carlo Tree Search (Chair: Tom Cesari)
Day 2 (Thu), 14:00 - 15:45 | Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless Bandits | Abheek Ghosh, Dheeraj Nagaraj, Manish Jain and Milind Tambe |
Avoiding Starvation of Arms in Restless Multi-Armed Bandits | Dexun Li and Pradeep Varakantham | |
Restless Multi-Armed Bandits for Maternal and Child Health: Results from Decision-Focused Learning | Shresth Verma, Aditya Mate, Kai Wang, Neha Madhiwalla, Aparna Hegde, Aparna Taneja and Milind Tambe | |
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks | Arpita Biswas, Jackson Killian, Paula Rodriguez Diaz, Susobhan Ghosh and Milind Tambe | |
On Regret-optimal Cooperative Nonstochastic Multi-armed Bandits | Jialin Yi and Milan Vojnovic | |
Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics | Siddharth Chandak, Ilai Bistritz and Nicholas Bambos | |
ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search | Dixant Mittal, Siddharth Aravindan and Wee Sun Lee | |
Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal Methods | Debraj Chakraborty, Damien Busatto-Gaston, Jean-François Raskin and Guillermo Perez | |
Reinforcement and Imitation Learning (Chair: Matt Taylor)
Day 2 (Thu), 14:00 - 15:45 | Decentralized model-free reinforcement learning in stochastic games with average-reward objective | Romain Cravic, Nicolas Gast and Bruno Gaujal |
Less Is More: Refining Datasets for Offline Reinforcement Learning with Reward Machines | Haoyuan Sun and Feng Wu | |
A Self-Organizing Neuro-Fuzzy Q-Network: Systematic Design with Offline Hybrid Learning | John Hostetter, Mark Abdelshiheed, Tiffany Barnes and Min Chi | |
Learning to Coordinate from Offline Datasets with Uncoordinated Behavior Policies | Jinming Ma and Feng Wu | |
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal-Conditioning | Caroline Wang, Garrett Warnell and Peter Stone | |
How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement | Xuhui Liu, Feng Xu, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Ruifeng Chen, Zongzhang Zhang and Yang Yu | |
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games | The Viet Bui, Tien Mai and Thanh Nguyen | |
Curriculum Offline Reinforcement Learning | Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao and Jiang Bian | |
Norms (Chair: Pradeep Murukannaiah)
Day 3 (Fri), 14:00 - 15:45 | The Importance of Credo in Multiagent Learning | David Radke, Kate Larson and Tim Brecht |
Contextual Integrity for Argumentation-based Privacy Reasoning | Gideon Ogunniye and Nadin Kokciyan | |
Predicting privacy preferences for smart devices as norms | Marc Serramia, William Seymour, Natalia Criado and Michael Luck | |
Agent-directed runtime norm synthesis | Andreasa Morris Martin, Marina De Vos, Julian Padget and Oliver Ray | |
Emergence of Norms in Interactions with Complex Rewards | Dhaminda Abeywickrama, Nathan Griffiths, Zhou Xu and Alex Mouzakitis | |
Graph Neural Networks + Transformers (Chair: Ann Nowe)
Day 3 (Fri), 10:45 - 12:30 | User Device Interaction Prediction via Relational Gated Graph Attention Network and Intent-aware Encoder | Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Wenxin Tang, Runjie Zhou and Yong Jiang |
Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations | Gregory Everett, Ryan Beal, Tim Matthews, Joseph Early, Timothy Norman and Sarvapali Ramchurn | |
Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation | Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang and Yu Wang | |
Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning | Ryan Kortvelesy, Steven Morad and Amanda Prorok | |
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem | Peiwang Tang and Xianchao Zhang | |
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems | Matteo Gallici, Mario Martin and Ivan Masmitja | |
Intelligent Onboard Routing in Stochastic Dynamic Environments using Transformers | Rohit Chowdhury, Raswanth Murugan and Deepak Subramani | |
Voting I (Chair: Alan Tsang)
Day 3 (Fri), 10:45 - 12:30 | Characterizations of Sequential Valuation Rules | Chris Dong and Patrick Lederer |
Collecting, Classifying, Analyzing, and Using Real-World Ranking Data | Niclas Boehmer and Nathan Schaar | |
Margin of Victory for Weighted Tournament Solutions | Michelle Döring and Jannik Peters | |
Bribery Can Get Harder in Structured Multiwinner Approval Election | Bartosz Kusek, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk and Dušan Knop | |
Strategyproof Social Decision Schemes on Super Condorcet Domains | Felix Brandt, Patrick Lederer and Sascha Tausch | |
Separating and Collapsing Electoral Control Types | Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David Narváez, Conor Taliancich and Henry B. Welles | |
The Distortion of Approval Voting with Runoff | Soroush Ebadian, Mohamad Latifian and Nisarg Shah | |
Voting II (Chair: Reshef Meir)
Day 3 (Fri), 14:00 - 15:45 | On the Complexity of the Two-Stage Majority Rule | Yongjie Yang |
Fairness in Participatory Budgeting via Equality of Resources | Jan Maly, Simon Rey, Ulle Endriss and Martin Lackner | |
Free-Riding in Multi-Issue Decisions | Martin Lackner, Jan Maly and Oliviero Nardi | |
k-prize Weighted Voting Game | Wei-Chen Lee, David Hyland, Alessandro Abate, Edith Elkind, Jiarui Gan, Julian Gutierrez, Paul Harrenstein and Michael Wooldridge | |
Computing the Best Policy That Survives a Vote | Andrei Constantinescu and Roger Wattenhofer | |
Voting by Axioms | Marie Christin Schmidtlein and Ulle Endriss | |
A Hotelling-Downs game for strategic candidacy with binary issues | Javier Maass, Vincent Mousseau and Anaëlle Wilczynski | |
Voting with Limited Energy: A Study of Plurality and Borda | Zoi Terzopoulou | |
Multi-objective Planning and Learning (Chair: Gauthier Picard)
Day 3 (Fri), 14:00 - 15:45 | Revealed multi-objective utility aggregation in human driving | Atrisha Sarkar, Kate Larson and Krzysztof Czarnecki |
A Brief Guide to Multi-Objective Reinforcement Learning and Planning | Conor F Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Kallstrom, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowe, Gabriel Ramos, Marcello Restelli, Peter Vamplew and Diederik M. Roijers | |
Welfare and Fairness in Multi-objective Reinforcement Learning | Ziming Fan, Nianli Peng, Muhang Tian and Brandon Fain | |
Preference-Based Multi-Objective Multi-Agent Path Finding | Florence Ho and Shinji Nakadai | |
Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization | Lucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé and Bruno C. da Silva | |
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility | Zhaori Guo, Timothy Norman and Enrico Gerding | |
Auctions + Voting (Chair: Noam Hazon)
Day 2 (Thu), 14:00 - 15:45 | Price of Anarchy for First Price Auction with Risk-Averse Bidders | Zhiqiang Zhuang, Kewen Wang and Zhe Wang |
A Redistribution Framework for Diffusion Auctions | Sizhe Gu, Yao Zhang, Yida Zhao and Dengji Zhao | |
Sybil-Proof Diffusion Auction in Social Networks | Hongyin Chen, Xiaotie Deng, Ying Wang, Yue Wu and Dengji Zhao | |
Representing and Reasoning about Auctions | Munyque Mittelmann, Sylvain Bouveret and Laurent Perrussel | |
Revisiting the Distortion of Distributed Voting | Aris Filos-Ratsikas and Alexandros Voudouris | |
Bounded Approval Ballots: Balancing Expressiveness and Simplicity for Multiwinner Elections | Dorothea Baumeister, Linus Boes, Christian Laußmann and Simon Rey | |
On the Distortion of Single Winner Elections with Aligned Candidates | Dimitris Fotakis and Laurent Gourves | |
SAT-based Judgment Aggregation | Ari Conati, Andreas Niskanen and Matti Järvisalo | |
Learning with Humans and Robots (Chair: Jonathan Gratch)
Day 2 (Thu), 10:45 - 12:30 | GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations | Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew Olson and Elisabeth André |
Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative Exploration | Chao Yu, Xinyi Yang, Jiaxuan Gao, Jiayu Chen, Yunfei Li, Jijia Liu, Yunfei Xiang, Ruixin Huang, Huazhong Yang, Yi Wu and Yu Wang | |
Dec-AIRL: Decentralized Adversarial IRL for Human-Robot Teaming | Prasanth Sengadu Suresh, Yikang Gui and Prashant Doshi | |
Structural Attention-based Recurrent Variational Autoencoder for Highway Vehicle Anomaly Detection | Neeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson and Katherine Driggs-Campbell | |
Controlled Diversity with Preference: Towards Learning a Diverse Set of Desired Skills | Maxence Hussonnois, Thommen Karimpanal George and Santu Rana | |
Learning from Multiple Independent Advisors in Multi-agent Reinforcement Learning | Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson and Mark Crowley | |
Behavioral and Algorithmic Game Theory (Chair: Zoi Terzopoulou)
Day 1 (Wed), 14:00 - 15:45 | Non-strategic Econometrics (for Initial Play) | Daniel Chui, Jason Hartline and James Wright |
Efficient Stackelberg Strategies for Finitely Repeated Games | Natalie Collina, Eshwar Ram Arunachaleswaran and Michael Kearns | |
Learning Density-Based Correlated Equilibria for Markov Games | Libo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock and Jiamou Liu | |
IRS: An Incentive-compatible Reward Scheme for Algorand | Maizi Liao, Wojciech Golab and Seyed Majid Zahedi | |
Data Structures for Deviation Payoffs | Bryce Wiedenbeck and Erik Brinkman | |
Deep Learning (Chair: Joydeep Biswas)
Day 3 (Fri), 14:00 - 15:45 | Worst-Case Adaptive Submodular Cover | Jing Yuan and Shaojie Tang |
Minimax Strikes Back | Quentin Cohen-Solal and Tristan Cazenave | |
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning | Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew Taylor, Mykola Pechenizkiy and Decebal Constantin Mocanu | |
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning | Woojun Kim and Youngchul Sung | |
Learning Rewards to Optimize Global Performance Metrics in Deep Reinforcement Learning | Junqi Qian, Paul Weng and Chenmien Tan | |
A Deep Reinforcement Learning Approach for Online Parcel Assignment | Hao Zeng, Qiong Wu, Kunpeng Han, Junying He and Haoyuan Hu | |
CoRaL: Continual Representation Learning for Overcoming Catastrophic Forgetting | Mohammad Yasar and Tariq Iqbal | |
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare | Ge Gao, Song Ju, Markel Sanz Ausin and Min Chi | |
Adversarial Learning + Social Networks + Causal Graphs (Chair: Paolo Turrini)
Day 3 (Fri), 10:45 - 12:30 | Adversarial Link Prediction in Spatial Networks | Michał Tomasz Godziszewski, Yevgeniy Vorobeychik and Tomasz Michalak |
Distributed Mechanism Design in Social Networks | Haoxin Liu, Yao Zhang and Dengji Zhao | |
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks | Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla and Goran Radanovic | |
How to Turn an MAS into a Graphical Causal Model | H. Van Dyke Parunak | |
FedMM: A Communication Efficient Solver for Federated Adversarial Domain Adaptation | Yan Shen, Jian Du, Han Zhao, Zhanghexuan Ji, Chunwei Ma and Mingchen Gao | |
Best Dissertation Talk (Chair: Paolo Turrini)
Day 2 (Thu), 14:00 - 15:45 | Efficient and Effective Techniques for Large-Scale Multi-Agent Path Finding | Jiaoyang Li |
Poster Sessions
Time | Title | Authors | Theme | Poster Board ID |
Day 1 | Establishing Shared Query Understanding in an Open Multi-Agent System | Nikolaos Kondylidis, Ilaria Tiddi and Annette ten Teije | Human-Agent Teams | 121 |
Communicating Agent Intentions for Human-Agent Decision Making under Uncertainty | Julie Porteous, Alan Lindsay and Fred Charles | Human-Agent Teams | 122 | |
Trusting artificial agents: communication trumps performance | Marin Le Guillou, Laurent Prévot and Bruno Berberian | Human-Agent Teams | 123 | |
Nonverbal Human Signals Can Help Autonomous Agents Infer Human Preferences for Their Behavior | Kate Candon, Jesse Chen, Yoony Kim, Zoe Hsu, Nathan Tsoi and Marynel Vázquez | Human-Agent Teams | 124 | |
On Subset Selection of Multiple Humans To Improve Human-AI Team Accuracy | Sagalpreet Singh, Shweta Jain and Shashi Shekhar Jha | Human-Agent Teams | 125 | |
Do Explanations Improve the Quality of AI-assisted Human Decisions? An Algorithm-in-the-Loop Analysis of Factual & Counterfactual Explanations | Lujain Ibrahim, Mohammad M. Ghassemi and Tuka Alhanai | Human-Agent Teams | 126 | |
Automated Task-Time Interventions to Improve Teamwork using Imitation Learning | Sangwon Seo, Bing Han and Vaibhav V Unhelkar | Human-Agent Teams | 127 | |
Should my agent lie for me? A study on humans' attitudes towards deceptive AI | Stefan Sarkadi, Peidong Mei and Edmond Awad | Human-Agent Teams | 128 | |
A Logic of Only-Believing over Arbitrary Probability Distributions | Qihui Feng, Daxin Liu, Vaishak Belle and Gerhard Lakemeyer | Knowledge Representation and Reasoning I | 49 | |
A Deontic Logic of Knowingly Complying | Carlos Areces, Valentin Cassano, Pablo Castro, Raul Fervari and Andrés R. Saravia | Knowledge Representation and Reasoning I | 50 | |
Learning Logic Specifications for Soft Policy Guidance in POMCP | Giulio Mazzi, Daniele Meli, Alberto Castellini and Alessandro Farinelli | Knowledge Representation and Reasoning I | 51 | |
Strategic (Timed) Computation Tree Logic | Jaime Arias, Wojciech Jamroga, Wojciech Penczek, Laure Petrucci and Teofil Sidoruk | Knowledge Representation and Reasoning I | 52 | |
Attention! Dynamic Epistemic Logic Models of (In)attentive Agents | Gaia Belardinelli and Thomas Bolander | Knowledge Representation and Reasoning I | 53 | |
(Arbitrary) Partial Communication | Rustam Galimullin and Fernando R. Velazquez-Quesada | Knowledge Representation and Reasoning I | 65 | |
Epistemic Abstract Argumentation Framework: Formal Foundations, Computation and Complexity | Gianvincenzo Alfano, Sergio Greco, Francesco Parisi and Irina Trubitsyna | Knowledge Representation and Reasoning I | 66 | |
Actions, Continuous Distributions and Meta-Beliefs | Vaishak Belle | Knowledge Representation and Reasoning I | 67 | |
Provable Optimization of Quantal Response Leader-Follower Games with Exponentially Large Action Spaces | Jinzhao Li, Daniel Fink, Christopher Wood, Carla P. Gomes and Yexiang Xue | Knowledge Representation and Reasoning II | 68 | |
Playing to Learn, or to Keep Secret: Alternating-Time Logic Meets Information Theory | Masoud Tabatabaei and Wojciech Jamroga | Knowledge Representation and Reasoning II | 69 | |
Synthesis of Resource-Aware Controllers Against Rational Agents | Rodica Condurache, Catalin Dima, Youssouf Oualhadj and Nicolas Troquard | Knowledge Representation and Reasoning II | 81 | |
Computationally Feasible Strategies | Catalin Dima and Wojtek Jamroga | Knowledge Representation and Reasoning II | 82 | |
Towards the Verification of Strategic Properties in Multi-Agent Systems with Imperfect Information | Angelo Ferrando and Vadim Malvone | Knowledge Representation and Reasoning II | 83 | |
Ask and You Shall be Served: Representing and Solving Multi-Agent Optimization Problems with Service Requesters and Providers | Maya Lavie, Tehila Caspi, Omer Lev and Roie Zivan | Planning | 84 | |
Fairness Driven Efficient Algorithms for Sequenced Group Trip Planning Query Problem | Napendra Solanki, Shweta Jain, Suman Banerjee and Yayathi Pavan Kumar S | Planning | 85 | |
Domain-Independent Deceptive Planning | Adrian Price, Ramon Fraga Pereira, Peta Masters and Mor Vered | Planning | 86 | |
CAMS: Collision Avoiding Max-Sum for Mobile Sensor Teams | Arseni Pertzovskiy, Roie Zivan and Noa Agmon | Planning | 87 | |
Risk-Constrained Planning for Multi-Agent Systems with Shared Resources | Anna Gautier, Marc Rigter, Bruno Lacerda, Nick Hawes and Michael Wooldridge | Planning | 88 | |
Quantitative Planning with Action Deception in Concurrent Stochastic Games | Chongyang Shi, Shuo Han and Jie Fu | Planning | 89 | |
Towards Computationally Efficient Responsibility Attribution in Decentralized Partially Observable MDPs | Stelios Triantafyllou and Goran Radanovic | Planning | 90 | |
On-line Estimators for Ad-hoc Task Execution: Learning types and parameters of teammates for effective teamwork | Matheus Aparecido Do Carmo Alves, Elnaz Shafipour Yourdshahi, Amokh Varma, Leandro Soriano Marcolino, Jó Ueyama and Plamen Angelov | Planning | 91 | |
Trust Region Bounds for Decentralized PPO Under Non-stationarity | Mingfei Sun, Sam Devlin, Jacob Beck, Katja Hofmann and Shimon Whiteson | Multiagent Reinforcement Learning I | 1 | |
Multi-Agent Reinforcement Learning for Adaptive Mesh Refinement | Jiachen Yang, Ketan Mittal, Tarik Dzanic, Socratis Petrides, Brendan Keith, Brenden Petersen, Daniel Faissol and Robert Anderson | Multiagent Reinforcement Learning I | 2 | |
Adaptive Learning Rates for Multi-Agent Reinforcement Learning | Jiechuan Jiang and Zongqing Lu | Multiagent Reinforcement Learning I | 3 | |
Adaptive Value Decomposition with Greedy Marginal Contribution Computation for Cooperative Multi-Agent Reinforcement Learning | Shanqi Liu, Yujing Hu, Runze Wu, Dong Xing, Yu Xiong, Changjie Fan, Kun Kuang and Yong Liu | Multiagent Reinforcement Learning I | 4 | |
A Variational Approach to Mutual Information-Based Coordination for Multi-Agent Reinforcement Learning | Woojun Kim, Whiyoung Jung, Myungsik Cho and Youngchul Sung | Multiagent Reinforcement Learning I | 5 | |
Mediated Multi-Agent Reinforcement Learning | Dmitry Ivanov, Ilya Zisman and Kirill Chernyshev | Multiagent Reinforcement Learning I | 6 | |
EXPODE: EXploiting POlicy Discrepancy for Efficient Exploration in Multi-agent Reinforcement Learning | Yucong Zhang and Chao Yu | Multiagent Reinforcement Learning I | 7 | |
TiZero: Mastering Multi-Agent Football with Curriculum Learning and Self-Play | Fanqi Lin, Shiyu Huang, Tim Pearce, Wenze Chen and Wei-Wei Tu | Multiagent Reinforcement Learning I | 8 | |
AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning | Xuefeng Wang, Xinran Li, Jiawei Shao and Jun Zhang | Multiagent Reinforcement Learning II | 9 | |
Learning Structured Communication for Multi-Agent Reinforcement Learning | Junjie Sheng, Xiangfeng Wang, Bo Jin, Wenhao Li, Jun Wang, Junchi Yan, Tsung-Hui Chang and Hongyuan Zha | Multiagent Reinforcement Learning II | 10 | |
Model-based Sparse Communication in Multi-agent Reinforcement Learning | Shuai Han, Mehdi Dastani and Shihan Wang | Multiagent Reinforcement Learning II | 17 | |
Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL | Phillip J.K. Christoffersen, Andreas Haupt and Dylan Hadfield-Menell | Multiagent Reinforcement Learning II | 18 | |
The Benefits of Power Regularization in Cooperative Reinforcement Learning | Michelle Li and Michael Dennis | Multiagent Reinforcement Learning II | 19 | |
MAC-PO: Multi-Agent Experience Replay via Collective Priority Optimization | Yongsheng Mei, Hanhan Zhou, Tian Lan, Guru Venkataramani and Peng Wei | Multiagent Reinforcement Learning II | 20 | |
Self-Motivated Multi-Agent Exploration | Shaowei Zhang, Jiahan Cao, Lei Yuan, Yang Yu and De-Chuan Zhan | Multiagent Reinforcement Learning II | 21 | |
Sequential Cooperative Multi-Agent Reinforcement Learning | Yifan Zang, Jinmin He, Kai Li, Haobo Fu, Qiang Fu and Junliang Xing | Multiagent Reinforcement Learning II | 22 | |
Equilibria and Convergence in Fire Sale Games | Nils Bertschinger, Martin Hoefer, Simon Krogmann, Pascal Lenzner, Steffen Schuldenzucker and Lisa Wilhelmi | Equilibria and Complexities of Games | 11 | |
Bridging the Gap Between Single and Multi Objective Games | Willem Röpke, Carla Groenland, Roxana Radulescu, Ann Nowe and Diederik M. Roijers | Equilibria and Complexities of Games | 12 | |
Is Nash Equilibrium Approximator Learnable? | Zhijian Duan, Wenhan Huang, Dinghuai Zhang, Yali Du, Jun Wang, Yaodong Yang and Xiaotie Deng | Equilibria and Complexities of Games | 13 | |
Learning the Stackelberg Equilibrium in a Newsvendor Game | Nicolò Cesa-Bianchi, Tommaso Cesari, Takayuki Osogami, Marco Scarsini and Segev Wasserkrug | Equilibria and Complexities of Games | 14 | |
Hedonic Games With Friends, Enemies, and Neutrals: Resolving Open Questions and Fine-Grained Complexity | Jiehua Chen, Gergely Csáji, Sanjukta Roy and Sofia Simola | Equilibria and Complexities of Games | 15 | |
Debt Transfers in Financial Networks: Complexity and Equilibria | Panagiotis Kanellopoulos, Maria Kyropoulou and Hao Zhou | Equilibria and Complexities of Games | 16 | |
A Study of Nash Equilibria in Multi-Objective Normal-Form Games | Willem Röpke, Diederik M. Roijers, Ann Nowe and Roxana Radulescu | Equilibria and Complexities of Games | 27 | |
Learning Properties in Simulation-Based Games | Cyrus Cousins, Bhaskar Mishra, Enrique Areyan Viqueria and Amy Greenwald | Equilibria and Complexities of Games | 28 | |
PECAN: Leveraging Policy Ensemble for Context-Aware Zero-Shot Human-AI Coordination | Xingzhou Lou, Jiaxian Guo, Junge Zhang, Jun Wang, Kaiqi Huang and Yali Du | Humans and AI Agents | 129 | |
Semi-Autonomous Systems with Contextual Competence Awareness | Saaduddin Mahmud, Connor Basich and Shlomo Zilberstein | Humans and AI Agents | 130 | |
Joint Engagement Classification using Video Augmentation Techniques for Multi-person HRI in the wild | Yubin Kim, Huili Chen, Sharifa Algohwinem, Cynthia Breazeal and Hae Won Park | Humans and AI Agents | 131 | |
Multiagent Inverse Reinforcement Learning via Theory of Mind Reasoning | Haochen Wu, Pedro Sequeira and David Pynadath | Humans and AI Agents | 132 | |
Persuading to Prepare for Quitting Smoking with a Virtual Coach: Using States and User Characteristics to Predict Behavior | Nele Albers, Mark A. Neerincx and Willem-Paul Brinkman | Humans and AI Agents | 133 | |
Think Twice: A Human-like Two-stage Conversational Agent for Emotional Response Generation | Yushan Qian, Bo Wang, Shangzhao Ma, Wu Bin, Shuo Zhang, Dongming Zhao, Kun Huang and Yuexian Hou | Humans and AI Agents | 134 | |
Generating Stylistic and Personalized Dialogues for Virtual Agents in Narratives | Weilai Xu, Fred Charles and Charlie Hargood | Humans and AI Agents | 135 | |
Reducing Racial Bias by Interacting with Virtual Agents: An Intervention in Virtual Reality | David Obremski, Ohenewa Bediako Akuffo, Leonie Lücke, Miriam Semineth, Sarah Tomiczek, Hanna-Finja Weichert and Birgit Lugrin | Humans and AI Agents | 136 | |
Online Coalitional Skill Formation | Saar Cohen and Noa Agmon | Planning + Task/Resource Allocation | 92 | |
Multi-Agent Consensus-based Bundle Allocation for Multi-mode Composite Tasks | Gauthier Picard | Planning + Task/Resource Allocation | 93 | |
Allocation Problem in Remote Teleoperation: Online Matching with Offline Reusable Resources and Delayed Assignments | Osnat Ackerman Viden, Yohai Trabelsi, Pan Xu, Karthik Abinav Sankararaman, Oleg Maksimov and Sarit Kraus | Planning + Task/Resource Allocation | 94 | |
Optimal Coalition Structures for Probabilistically Monotone Partition Function Games | Shaheen Fatima and Michael Wooldridge | Planning + Task/Resource Allocation | 95 | |
A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task Density | Grace Cai, Noble Harasha and Nancy Lynch | Planning + Task/Resource Allocation | 96 | |
Abstracting Noisy Robot Programs | Till Hofmann and Vaishak Belle | Planning + Task/Resource Allocation | 97 | |
Structural Credit Assignment-Guided Coordinated MCTS: An Efficient and Scalable Method for Online Multiagent Planning | Qian Che, Wanyuan Wang, Fengchen Wang, Tianchi Qiao, Xiang Liu, Jiuchuan Jiang, Bo An and Yichuan Jiang | Planning + Task/Resource Allocation | 98 | |
Strategic Planning for Flexible Agent Availability in Large Taxi Fleets | Rajiv Ranjan Kumar, Pradeep Varakantham and Shih-Fen Cheng | Planning + Task/Resource Allocation | 99 | |
Fair Allocation of Two Types of Chores | Haris Aziz, Jeremy Lindsay, Angus Ritossa and Mashbat Suzuki | Fair Allocations | 43 | |
Fairly Dividing Mixtures of Goods and Chores under Lexicographic Preferences | Hadi Hosseini, Sujoy Sikdar, Rohit Vaish and Lirong Xia | Fair Allocations | 44 | |
Graphical House Allocation | Hadi Hosseini, Justin Payan, Rik Sengupta, Rohit Vaish and Vignesh Viswanathan | Fair Allocations | 45 | |
Approximation Algorithm for Computing Budget-Feasible EF1 Allocations | Jiarui Gan, Bo Li and Xiaowei Wu | Fair Allocations | 46 | |
Yankee Swap: a Fast and Simple Fair Allocation Mechanism for Matroid Rank Valuations | Vignesh Viswanathan and Yair Zick | Fair Allocations | 47 | |
Fairness in the Assignment Problem with Uncertain Priorities | Zeyu Shen, Zhiyi Wang, Xingyu Zhu, Brandon Fain and Kamesh Munagala | Fair Allocations | 48 | |
Possible Fairness for Allocating Indivisible Resources | Haris Aziz, Bo Li, Shiji Xing and Yu Zhou | Fair Allocations | 58 | |
Efficient Nearly-Fair Division with Capacity Constraints | Hila Shoshan, Noam Hazon and Erel Segal-Halevi | Fair Allocations | 59 | |
Equitability and Welfare Maximization for Allocating Indivisible Items | Ankang Sun, Bo Chen and Xuan Vinh Doan | Fair Allocations + Public Goods Games | 60 | |
Best of Both Worlds: Agents with Entitlements | Martin Hoefer, Marco Schmalhofer and Giovanna Varricchio | Fair Allocations + Public Goods Games | 61 | |
Mitigating Skewed Bidding for Conference Paper Assignment | Inbal Rozenzweig, Reshef Meir, Nicholas Mattei and Ofra Amir | Fair Allocations + Public Goods Games | 62 | |
Price of Anarchy in a Double-Sided Critical Distribution System | David Sychrovský, Jakub Černý, Sylvain Lichau and Martin Loebl | Fair Allocations + Public Goods Games | 63 | |
Improved EFX approximation guarantees under ordinal-based assumptions | Evangelos Markakis and Christodoulos Santorinaios | Fair Allocations + Public Goods Games | 64 | |
Assigning Agents to Increase Network-Based Neighborhood Diversity | Zirou Qiu, Andrew Yuan, Chen Chen, Madhav Marathe, S.S. Ravi, Daniel Rosenkrantz, Richard Stearns and Anil Vullikanti | Fair Allocations + Public Goods Games | 74 | |
Altruism, Collectivism and Egalitarianism: On a Variety of Prosocial Behaviors in Binary Networked Public Goods Games | Jichen Li, Xiaotie Deng, Yukun Cheng, Yuqi Pan, Xuanzhi Xia, Zongjun Yang and Jan Xie | Fair Allocations + Public Goods Games | 75 | |
The Role of Space, Density and Migration in Social Dilemmas | Jacques Bara, Fernando P. Santos and Paolo Turrini | Fair Allocations + Public Goods Games | 76 | |
Non-strategic Econometrics (for Initial Play) | Daniel Chui, Jason Hartline and James Wright | Behavioral and Algorithmic Game Theory | 29 | |
Efficient Stackelberg Strategies for Finitely Repeated Games | Natalie Collina, Eshwar Ram Arunachaleswaran and Michael Kearns | Behavioral and Algorithmic Game Theory | 30 | |
Learning Density-Based Correlated Equilibria for Markov Games | Libo Zhang, Yang Chen, Toru Takisaka, Bakh Khoussainov, Michael Witbrock and Jiamou Liu | Behavioral and Algorithmic Game Theory | 31 | |
IRS: An Incentive-compatible Reward Scheme for Algorand | Maizi Liao, Wojciech Golab and Seyed Majid Zahedi | Behavioral and Algorithmic Game Theory | 32 | |
Data Structures for Deviation Payoffs | Bryce Wiedenbeck and Erik Brinkman | Behavioral and Algorithmic Game Theory | 42 | |
Evaluating a mechanism for explaining BDI agent behaviour | Michael Winikoff and Galina Sidorenko | Humans and AI / Human-Agent Interaction | 137 | |
Learning Manner of Execution from Partial Corrections | Mattias Appelgren and Alex Lascarides | Humans and AI / Human-Agent Interaction | 138 | |
What Do You Care About: Inferring Values from Emotions | Jieting Luo, Mehdi Dastani, Thomas Studer and Beishui Liao | Humans and AI / Human-Agent Interaction | 139 | |
`Why didn't you allocate this task to them?' Negotiation-Aware Explicable Task Allocation and Contrastive Explanation Generation | Zahra Zahedi, Sailik Sengupta and Subbarao Kambhampati | Humans and AI / Human-Agent Interaction | 140 | |
Explaining agent preferences and behavior: integrating reward decomposition and contrastive highlights | Yael Septon, Yotam Amitai and Ofra Amir | Humans and AI / Human-Agent Interaction | 141 | |
Explanation Styles for Trustworthy Autonomous Systems | David Robb, Xingkun Liu and Helen Hastie | Humans and AI / Human-Agent Interaction | 142 | |
Modeling the Interpretation of Animations to Help Improve Emotional Expression | Taíssa Ribeiro, Ricardo Rodrigues and Carlos Martinho | Humans and AI / Human-Agent Interaction | 143 | |
Artificial prediction markets present a novel opportunity for human-AI collaboration | Tatiana Chakravorti, Vaibhav Singh, Michael McLaughlin, Robert Fraleigh, Christopher Griffin, Anthony Kwasnica, David Pennock, C. Lee Giles and Sarah Rajtmajer | Humans and AI / Human-Agent Interaction | 144 | |
Causal Explanations for Sequential Decision Making Under Uncertainty | Samer Nashed, Saaduddin Mahmud, Claudia Goldman and Shlomo Zilberstein | Humans and AI / Human-Agent Interaction | 145 | |
Hierarchical Reinforcement Learning with Human-AI Collaborative Sub-Goals Optimization | Haozhe Ma, Thanh Vinh Vo and Tze Yun Leong | Humans and AI / Human-Agent Interaction | 146 | |
Context-aware agents based on Psychological Archetypes for Teamwork | Anupama Arukgoda, Erandi Lakshika, Michael Barlow and Kasun Gunawardana | Humans and AI / Human-Agent Interaction | 147 | |
Personalized Agent Explanations for Human-Agent Teamwork: Adapting Explanations to User Trust, Workload, and Performance | Ruben Verhagen, Mark Neerincx, Can Parlar, Marin Vogel and Myrthe Tielman | Humans and AI / Human-Agent Interaction | 148 | |
A Teachable Agent to Enhance Elderly's Ikigai | Ping Chen, Xinjia Yu, Su Fang Lim and Zhiqi Shen | Humans and AI / Human-Agent Interaction | 149 | |
Improving Human-Robot Team Performance with Proactivity and Shared Mental Models | Gwen Edgar, Matthias Scheutz and Matthew Mcwilliams | Humans and AI / Human-Agent Interaction | 150 | |
Towards Explaining Sequences of Actions in Multi-Agent Deep Reinforcement Learning Models | Khaing Phyo Wai, Minghong Geng, Budhitama Subagdja, Shubham Pateria and Ah-Hwee Tan | Humans and AI / Human-Agent Interaction | 151 | |
Learning Constraints From Human Stop-feedback in Reinforcement Learning | Silvia Poletti, Alberto Testolin and Sebastian Tschiatschek | Humans and AI / Human-Agent Interaction | 152 | |
Goal Alignment: Re-analyzing Value Alignment Problems Using Human-Aware AI | Malek Mechergui and Sarath Sreedharan | Humans and AI / Human-Agent Interaction | 153 | |
Effectiveness of Teamwork-Level Interventions through Decision-Theoretic Reasoning in a Minecraft Search-and-Rescue Task | David Pynadath, Nik Gurney, Sarah Kenny, Rajay Kumar, Stacy Marsella, Haley Matuszak, Hala Mostafa, Pedro Sequeira, Volkan Ustun and Peggy Wu | Humans and AI / Human-Agent Interaction | 154 | |
Leveraging Hierarchical Reinforcement Learning for Ad-hoc Teaming | Stéphane Aroca-Ouellette, Miguel Aroca-Ouellette, Upasana Biswas, Katharina Kann and Alessandro Roncone | Humans and AI / Human-Agent Interaction | 155 | |
Asynchronous Communication Aware Multi-Agent Task Allocation | Ben Rachmut, Sofia Amador Nelke and Roie Zivan | Knowledge Representation, Reasoning, and Planning | 100 | |
Towards Robust Contrastive Explanations for Human-Neural Multi-agent Systems | Francesco Leofante and Alessio Lomuscio | Knowledge Representation, Reasoning, and Planning | 101 | |
Visual Explanations for Defence in Abstract Argumentation | Sylvie Doutre, Théo Duchatelle and Marie-Christine Lagasquie-Schiex | Knowledge Representation, Reasoning, and Planning | 102 | |
Minimising Task Tardiness for Multi-Agent Pickup and Delivery | Saravanan Ramanathan, Yihao Liu, Xueyan Tang, Wentong Cai and Jingning Li | Knowledge Representation, Reasoning, and Planning | 103 | |
Probabilistic Deduction as a Probabilistic Extension of Assumption-based Argumentation | Xiuyi Fan | Knowledge Representation, Reasoning, and Planning | 104 | |
Bayes-Adaptive Monte-Carlo Planning for Type-Based Reasoning in Large Partially Observable, Multi-Agent Environments | Jonathon Schwartz and Hanna Kurniawati | Knowledge Representation, Reasoning, and Planning | 105 | |
Blame Attribution for Multi-Agent Pathfinding Execution Failures | Avraham Natan, Roni Stern and Meir Kalech | Knowledge Representation, Reasoning, and Planning | 106 | |
A Semantic Approach to Decidability in Epistemic Planning | Alessandro Burigana, Paolo Felli, Marco Montali and Nicolas Troquard | Knowledge Representation, Reasoning, and Planning | 107 | |
Forward-PECVaR Algorithm: Exact Evaluation for CVaR SSPs | Willy Reis, Denis Pais, Valdinei Freire and Karina Delgado | Knowledge Representation, Reasoning, and Planning | 108 | |
Explainable Ensemble Classification Model based on Argumentation | Nadia Abchiche-Mimouni, Leila Amgoud and Farida Zehraoui | Knowledge Representation, Reasoning, and Planning | 109 | |
Updating Action Descriptions and Plans for Cognitive Agents | Peter Stringer, Rafael C. Cardoso, Clare Dixon, Michael Fisher and Louise Dennis | Knowledge Representation, Reasoning, and Planning | 110 | |
Argument-based Explanation Functions | Leila Amgoud, Philippe Muller and Henri Trenquier | Knowledge Representation, Reasoning, and Planning | 111 | |
A Formal Framework for Deceptive Topic Planning in Information-Seeking Dialogues | Andreas Brännström, Virginia Dignum and Juan Carlos Nieves | Knowledge Representation, Reasoning, and Planning | 112 | |
Memoryless Adversaries in Imperfect Information Games | Dhananjay Raju, Georgios Bakirtzis and Ufuk Topcu | Knowledge Representation, Reasoning, and Planning | 113 | |
Bounded and Unbounded Verification of RNN-Based Agents in Non-deterministic Environments | Mehran Hosseini and Alessio Lomuscio | Knowledge Representation, Reasoning, and Planning | 114 | |
Methods and Mechanisms for Interactive Novelty Handling in Adversarial Environments | Tung Thai, Utkarsh Soni, Mudit Verma, Sriram Gopalakrishnan, Ming Shen, Mayank Garg, Ayush Kalani, Nakul Vaidya, Subbarao Kambhampati, Neeraj Varshney, Chitta Baral, Jivko Sinapov and Matthias Scheutz | Knowledge Representation, Reasoning, and Planning | 115 | |
One-Shot Learning from a Demonstration with Hierarchical Latent Language | Nathaniel Weir, Xingdi Yuan, Marc-Alexandre Côté, Matthew Hausknecht, Romain Laroche, Ida Momennejad, Harm Van Seijen and Benjamin Van Durme | Knowledge Representation, Reasoning, and Planning | 116 | |
Emergent Compositional Concept Communication through Mutual Information in Multi-Agent Teams | Seth Karten, Siva Kailas and Katia Sycara | Knowledge Representation, Reasoning, and Planning | 117 | |
Reasoning about Uncertainty in AgentSpeak using Dynamic Epistemic Logic | Michael Vezina, Babak Esfandiari, François Schwarzentruber and Sandra Morley | Knowledge Representation, Reasoning, and Planning | 118 | |
Towards Optimal and Scalable Evacuation Planning Using Data-driven Agent Based Models | Kazi Ashik Islam, Da Qi Chen, Madhav Marathe, Henning Mortveit, Samarth Swarup and Anil Vullikanti | Knowledge Representation, Reasoning, and Planning | 119 | |
Intention Progression with Maintenance Goals | Di Wu, Yuan Yao, Natasha Alechina, Brian Logan and John Thangarajah | Knowledge Representation, Reasoning, and Planning | 120 | |
Safety Guarantees in Multi-agent Learning via Trapping Regions | Aleksander Czechowski and Frans Oliehoek | Learning and Adaptation | 23 | |
Multi-Team Fitness Critics For Robust Teaming | Joshua Cook, Tristan Scheiner and Kagan Tumer | Learning and Adaptation | 24 | |
Multi-Agent Deep Reinforcement Learning for High-Frequency Multi-Market Making | Pankaj Kumar | Learning and Adaptation | 25 | |
TA-Explore: Teacher-Assisted Exploration for Facilitating Fast Reinforcement Learning | Ali Beikmohammadi and Sindri Magnússon | Learning and Adaptation | 26 | |
Which way is `right'?: Uncovering limitations of Vision-and-Language Navigation models | Meera Hahn, James M. Rehg and Amit Raj | Learning and Adaptation | 33 | |
Learning Individual Difference Rewards in Multi-Agent Reinforcement Learning | Chen Yang, Guangkai Yang and Junge Zhang | Learning and Adaptation | 34 | |
TiLD: Third-person Imitation Learning by Estimating Domain Cognitive Differences of Visual Demonstrations | Zixuan Chen, Wenbin Li, Yang Gao and Yiyu Chen | Learning and Adaptation | 35 | |
Off-Beat Multi-Agent Reinforcement Learning | Wei Qiu, Weixun Wang, Rundong Wang, Bo An, Yujing Hu, Svetlana Obraztsova, Zinovi Rabinovich, Jianye Hao, Yingfeng Chen and Changjie Fan | Learning and Adaptation | 36 | |
AJAR: An Argumentation-based Judging Agents Framework for Ethical Reinforcement Learning | Benoît Alcaraz, Olivier Boissier, Rémy Chaput and Christopher Leturc | Learning and Adaptation | 37 | |
Never Worse, Mostly Better: Stable Policy Improvement in Deep Reinforcement Learning | Pranav Khanna, Guy Tennenholtz, Nadav Merlis, Shie Mannor and Chen Tessler | Learning and Adaptation | 38 | |
Selectively Sharing Experiences Improves Multi-Agent Reinforcement Learning | Matthias Gerstgrasser, Tom Danino and Sarah Keren | Learning and Adaptation | 39 | |
The challenge of redundancy on multi-agent value factorisation | Siddarth Singh and Benjamin Rosman | Learning and Adaptation | 40 | |
Robust Ordinal Regression for Collaborative Preference Learning with Opinion Synergies | Mohamed Ouaguenouni, Hugo Gilbert, Meltem Ozturk and Olivier Spanjaard | Learning and Adaptation | 41 | |
Off-the-Grid MARL: Datasets and Baselines for Offline Multi-Agent Reinforcement Learning | Juan Claude Formanek, Asad Jeewa, Arnu Pretorius and Jonathan Shock | Learning and Adaptation | 54 | |
Search-Improved Game-Theoretic Multiagent Reinforcement Learning in General and Negotiation Games | Zun Li, Marc Lanctot, Kevin McKee, Luke Marris, Ian Gemp, Daniel Hennes, Kate Larson, Yoram Bachrach, Michael Wellman and Paul Muller | Learning and Adaptation | 55 | |
Grey-box Adversarial Attack on Communication in Multi-agent Reinforcement Learning | Xiao Ma and Wu-Jun Li | Learning and Adaptation | 56 | |
Reward-Machine-Guided, Self-Paced Reinforcement Learning | Cevahir Koprulu and Ufuk Topcu | Learning and Adaptation | 57 | |
Centralized Cooperative Exploration Policy for Continuous Control Tasks | Chao Li, Chen Gong, Xinwen Hou, Yu Liu and Qiang He | Learning and Adaptation | 70 | |
Do As You Teach: A Multi-Teacher Approach to Self-Play in Deep Reinforcement Learning | Chaitanya Kharyal, Tanmay Sinha, Sai Krishna Gottipati, Fatemeh Abdollahi, Srijita Das and Matthew Taylor | Learning and Adaptation | 71 | |
PORTAL: Automatic Curricula Generation for Multiagent Reinforcement Learning | Jizhou Wu, Tianpei Yang, Xiaotian Hao, Jianye Hao, Yan Zheng, Weixun Wang and Matthew E. Taylor | Learning and Adaptation | 72 | |
AI-driven Prices for Externalities and Sustainability in Production Markets | Panayiotis Danassis, Aris Filos-Ratsikas, Haipeng Chen, Milind Tambe and Boi Faltings | Learning and Adaptation | 73 | |
For One and All: Individual and Group Fairness in the Allocation of Indivisible Goods | Jonathan Scarlett, Nicholas Teh and Yair Zick | Social Choice and Cooperative Game Theory | 78 | |
Matching Algorithms under Diversity-Based Reservations | Haris Aziz, Sean Morota Chu and Zhaohong Sun | Social Choice and Cooperative Game Theory | 79 | |
Social Mechanism Design: A Low-Level Introduction | Benjamin Abramowitz and Nicholas Mattei | Social Choice and Cooperative Game Theory | 80 | |
Online 2-stage Stable Matching | Evripidis Bampis, Bruno Escoffier and Paul Youssef | Social Choice and Cooperative Game Theory | 161 | |
Strategic Play By Resource-Bounded Agents in Security Games | Xinming Liu and Joseph Halpern | Markets, Auctions, and Non-Cooperative Game Theory | 157 | |
Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid Methodology | Zijian Shi and John Cartlidge | Markets, Auctions, and Non-Cooperative Game Theory | 158 | |
Regularization for Strategy Exploration in Empirical Game-Theoretic Analysis | Yongzhao Wang and Michael Wellman | Markets, Auctions, and Non-Cooperative Game Theory | 159 | |
A Scalable Opponent Model Using Bayesian Learning for Automated Bilateral Multi-Issue Negotiation | Shengbo Chang and Katsuhide Fujita | Markets, Auctions, and Non-Cooperative Game Theory | 160 | |
Modeling Robustness in Decision-Focused Learning as a Stackelberg Game | Sonja Johnson-Yu, Kai Wang, Jessie Finocchiaro, Aparna Taneja and Milind Tambe | Markets, Auctions, and Non-Cooperative Game Theory | 156 | |
Fair Facility Location for Socially Equitable Representation | Helen Sternbach and Sara Cohen | Coordination, Organisations, Institutions, and Norms | 77 | |
Day 2 | | | | 
Kiko: Programming Agents to Enact Interaction Models | Samuel Christie, Munindar P. Singh and Amit Chopra | Engineering Multiagent Systems | 149 | 
CraftEnv: A Flexible Collective Robotic Construction Environment for Multi-Agent Reinforcement Learning | Rui Zhao, Xu Liu, Yizheng Zhang, Minghao Li, Cheng Zhou, Shuai Li and Lei Han | Engineering Multiagent Systems | 150 | |
Feedback-Guided Intention Scheduling for BDI Agents | Michael Dann, John Thangarajah and Minyi Li | Engineering Multiagent Systems | 151 | |
A Behaviour-Driven Approach for Testing Requirements via User and System Stories in Agent Systems | Sebastian Rodriguez, John Thangarajah and Michael Winikoff | Engineering Multiagent Systems | 152 | |
ML-MAS: a Hybrid AI Framework for Self-Driving Vehicles | Hilal Al Shukairi and Rafael C. Cardoso | Engineering Multiagent Systems | 153 | |
Signifiers as a First-class Abstraction in Hypermedia Multi-Agent Systems | Danai Vachtsevanou, Andrei Ciortea, Simon Mayer and Jérémy Lemée | Engineering Multiagent Systems | 154 | |
MAIDS - a Framework for the Development of Multi-Agent Intentional Dialogue Systems | Débora Cristina Engelmann, Alison R. Panisson, Renata Vieira, Jomi Fred Hübner, Viviana Mascardi and Rafael H. Bordini | Engineering Multiagent Systems | 155 | |
Mandrake: Multiagent Systems as a Basis for Programming Fault-Tolerant Decentralized Applications | Samuel Christie, Amit Chopra and Munindar P. Singh | Engineering Multiagent Systems | 156 | |
Anonymous Multi-Agent Path Finding with Individual Deadlines | Gilad Fine, Dor Atzmon and Noa Agmon | Multiagent Path Finding | 101 | |
Learn to solve the min-max multiple traveling salesmen problem with reinforcement learning | Junyoung Park, Changhyun Kwon and Jinkyoo Park | Multiagent Path Finding | 102 | |
Counterfactual Fairness Filter for Fair-Delay Multi-Robot Navigation | Hikaru Asano, Ryo Yonetani, Mai Nishimura and Tadashi Kozuno | Multiagent Path Finding | 106 | |
Improved Complexity Results and an Efficient Solution for Connected Multi-Agent Path Finding | Isseïnie Calviac, Ocan Sankur and Francois Schwarzentruber | Multiagent Path Finding | 107 | |
Optimally Solving the Multiple Watchman Route Problem with Heuristic Search | Yaakov Livne, Dor Atzmon, Shawn Skyler, Eli Boyarski, Amir Shapiro and Ariel Felner | Multiagent Path Finding | 108 | |
Distributed Planning with Asynchronous Execution with Local Navigation for Multi-agent Pickup and Delivery Problem | Yuki Miyashita, Tomoki Yamauchi and Toshiharu Sugawara | Multiagent Path Finding | 109 | |
Energy-aware UAV Path Planning with Adaptive Speed | Jonathan Diller and Qi Han | Multiagent Path Finding | 110 | |
Coordination of Multiple Robots along Given Paths with Bounded Junction Complexity | Mikkel Abrahamsen, Tzvika Geft, Dan Halperin and Barak Ugav | Multiagent Path Finding | 111 | |
Efficient Interactive Recommendation with Huffman Tree-based Policy Learning | Longxiang Shi, Zilin Zhang, Shoujin Wang, Binbin Zhou, Minghui Wu, Cheng Yang and Shijian Li | Innovative Applications | 131 | |
HOPE: Human-Centric Off-Policy Evaluation for E-Learning and Healthcare | Ge Gao, Song Ju, Markel Sanz Ausin and Min Chi | Innovative Applications | 52 | |
ShelfHelp: Empowering Humans to Perform Vision-Independent Manipulation Tasks with a Socially Assistive Robotic Cane | Shivendra Agrawal, Suresh Nayak, Ashutosh Naik and Bradley Hayes | Innovative Applications | 132 | |
Preference-Aware Delivery Planning for Last-Mile Logistics | Qian Shao and Shih-Fen Cheng | Innovative Applications | 133 | |
Multi-Agent Reinforcement Learning with Safety Layer for Active Voltage Control | Yufeng Shi, Mingxiao Feng, Minrui Wang, Wengang Zhou and Houqiang Li | Innovative Applications | 134 | |
Multi-agent Signalless Intersection Management with Dynamic Platoon Formation | Phuriwat Worrawichaipat, Enrico Gerding, Ioannis Kaparias and Sarvapali Ramchurn | Innovative Applications | 135 | |
SocialLight: Distributed Cooperation Learning towards Network-Wide Traffic Signal Control | Harsh Goel, Yifeng Zhang, Mehul Damani and Guillaume Sartoretti | Innovative Applications | 136 | |
Model-Based Reinforcement Learning for Auto-Bidding in Display Advertising | Shuang Chen, Qisen Xu, Liang Zhang, Yongbo Jin, Wenhao Li and Linjian Mo | Innovative Applications | 137 | |
Follow your Nose: Using General Value Functions for Directed Exploration in Reinforcement Learning | Durgesh Kalwar, Omkar Shelke, Somjit Nath, Hardik Meisheri and Harshad Khadilkar | Reinforcement Learning | 1 | |
FedFormer: Contextual Federation with Attention in Reinforcement Learning | Liam Hebert, Lukasz Golab, Pascal Poupart and Robin Cohen | Reinforcement Learning | 2 | |
Diverse Policy Optimization for Structured Action Space | Wenhao Li, Baoxiang Wang, Shanchao Yang and Hongyuan Zha | Reinforcement Learning | 3 | |
Enhancing Reinforcement Learning Agents with Local Guides | Paul Daoudi, Bogdan Robu, Christophe Prieur, Ludovic Dos Santos and Merwan Barlier | Reinforcement Learning | 4 | |
Scalar reward is not enough | Peter Vamplew, Ben Smith, Johan Källström, Gabriel Ramos, Roxana Rădulescu, Diederik Roijers, Conor Hayes, Fredrik Heintz, Patrick Mannion, Pieter Libin, Richard Dazeley and Cameron Foale | Reinforcement Learning | 5 | 
Targeted Search Control in AlphaZero for Effective Policy Improvement | Alexandre Trudeau and Michael Bowling | Reinforcement Learning | 6 | |
Out-of-Distribution Detection for Reinforcement Learning Agents with Probabilistic Dynamics Models | Tom Haider, Karsten Roscher, Felippe Schmoeller da Roza and Stephan Günnemann | Reinforcement Learning | 7 | |
Knowledge Compilation for Constrained Combinatorial Action Spaces in Reinforcement Learning | Jiajing Ling, Moritz Lukas Schuler, Akshat Kumar and Pradeep Varakantham | Reinforcement Learning | 17 | |
Best of Both Worlds Fairness under Entitlements | Haris Aziz, Aditya Ganguly and Evi Micha | Matching | 16 | |
Probabilistic Rationing with Categorized Priorities: Processing Reserves Fairly and Efficiently | Haris Aziz | Matching | 24 | |
Semi-Popular Matchings and Copeland Winners | Telikepalli Kavitha and Rohit Vaish | Matching | 25 | |
Host Community Respecting Refugee Housing | Dušan Knop and Šimon Schierreich | Matching | 26 | |
Online matching with delays and stochastic arrival times | Mathieu Mari, Michał Pawłowski, Runtian Ren and Piotr Sankowski | Matching | 27 | |
Adapting Stable Matchings to Forced and Forbidden Pairs | Niclas Boehmer and Klaus Heeger | Matching | 28 | |
Stable Marriage in Euclidean Space | Yinghui Wen, Zhongyi Zhang and Jiong Guo | Matching | 29 | |
A Map of Diverse Synthetic Stable Roommates Instances | Niclas Boehmer, Klaus Heeger and Stanisław Szufa | Matching | 30 | |
Empirical Game-Theoretic Analysis for Mean Field Games | Yongzhao Wang and Michael Wellman | Learning in Games | 18 | |
Differentiable Arbitrating in Zero-sum Markov Games | Jing Wang, Meichen Song, Feng Gao, Boyi Liu, Zhaoran Wang and Yi Wu | Learning in Games | 19 | |
Learning Parameterized Families of Games | Madelyn Gatchel and Bryce Wiedenbeck | Learning in Games | 20 | |
Fictitious Cross-Play: Learning Global Nash Equilibrium in Mixed Cooperative-Competitive Games | Zelai Xu, Yancheng Liang, Chao Yu, Yu Wang and Yi Wu | Learning in Games | 21 | |
Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories | Jingqi Li, Chih-Yuan Chiu, Lasse Peters, Somayeh Sojoudi, Claire Tomlin and David Fridovich-Keil | Learning in Games | 22 | |
Multiplicative Weights Updates for Extensive Form Games | Chirag Chhablani, Michael Sullins and Ian Kash | Learning in Games | 23 | |
A Hybrid Framework of Reinforcement Learning and Physics-Informed Deep Learning for Spatiotemporal Mean Field Games | Xu Chen, Shuo Liu and Xuan Di | Learning in Games | 33 | |
Adversarial Inverse Reinforcement Learning for Mean Field Games | Yang Chen, Libo Zhang, Jiamou Liu and Michael Witbrock | Learning in Games | 34 | |
Indexability is Not Enough for Whittle: Improved, Near-Optimal Algorithms for Restless Bandits | Abheek Ghosh, Dheeraj Nagaraj, Manish Jain and Milind Tambe | Multi-Armed Bandits + Monte Carlo Tree Search | 52 | |
Avoiding Starvation of Arms in Restless Multi-Armed Bandits | Dexun Li and Pradeep Varakantham | Multi-Armed Bandits + Monte Carlo Tree Search | 53 | |
Restless Multi-Armed Bandits for Maternal and Child Health: Results from Decision-Focused Learning | Shresth Verma, Aditya Mate, Kai Wang, Neha Madhiwalla, Aparna Hegde, Aparna Taneja and Milind Tambe | Multi-Armed Bandits + Monte Carlo Tree Search | 54 | |
Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks | Arpita Biswas, Jackson Killian, Paula Rodriguez Diaz, Susobhan Ghosh and Milind Tambe | Multi-Armed Bandits + Monte Carlo Tree Search | 55 | |
On Regret-optimal Cooperative Nonstochastic Multi-armed Bandits | Jialin Yi and Milan Vojnovic | Multi-Armed Bandits + Monte Carlo Tree Search | 65 | |
Equilibrium Bandits: Learning Optimal Equilibria of Unknown Dynamics | Siddharth Chandak, Ilai Bistritz and Nicholas Bambos | Multi-Armed Bandits + Monte Carlo Tree Search | 66 | |
ExPoSe: Combining State-Based Exploration with Gradient-Based Online Search | Dixant Mittal, Siddharth Aravindan and Wee Sun Lee | Multi-Armed Bandits + Monte Carlo Tree Search | 67 | |
Formally-Sharp DAgger for MCTS: Lower-Latency Monte Carlo Tree Search using Data Aggregation with Formal Methods | Debraj Chakraborty, Damien Busatto-Gaston, Jean-François Raskin and Guillermo Perez | Multi-Armed Bandits + Monte Carlo Tree Search | 68 | |
Curriculum Offline Reinforcement Learning | Yuanying Cai, Chuheng Zhang, Hanye Zhao, Li Zhao and Jiang Bian | Reinforcement and Imitation Learning | 35 | 
Decentralized model-free reinforcement learning in stochastic games with average-reward objective | Romain Cravic, Nicolas Gast and Bruno Gaujal | Reinforcement and Imitation Learning | 36 | 
Less Is More: Refining Datasets for Offline Reinforcement Learning with Reward Machines | Haoyuan Sun and Feng Wu | Reinforcement and Imitation Learning | 37 | 
A Self-Organizing Neuro-Fuzzy Q-Network: Systematic Design with Offline Hybrid Learning | John Hostetter, Mark Abdelshiheed, Tiffany Barnes and Min Chi | Reinforcement and Imitation Learning | 38 | 
Learning to Coordinate from Offline Datasets with Uncoordinated Behavior Policies | Jinming Ma and Feng Wu | Reinforcement and Imitation Learning | 39 | 
D-Shape: Demonstration-Shaped Reinforcement Learning via Goal-Conditioning | Caroline Wang, Garrett Warnell and Peter Stone | Reinforcement and Imitation Learning | 49 | 
How To Guide Your Learner: Imitation Learning with Active Adaptive Expert Involvement | Xuhui Liu, Feng Xu, Xinyu Zhang, Tianyuan Liu, Shengyi Jiang, Ruifeng Chen, Zongzhang Zhang and Yang Yu | Reinforcement and Imitation Learning | 50 | 
Imitating Opponent to Win: Adversarial Policy Imitation Learning in Two-player Competitive Games | The Viet Bui, Tien Mai and Thanh Nguyen | Reinforcement and Imitation Learning | 51 | 
Price of Anarchy for First Price Auction with Risk-Averse Bidders | Zhiqiang Zhuang, Kewen Wang and Zhe Wang | Auctions + Voting | 8 | |
A Redistribution Framework for Diffusion Auctions | Sizhe Gu, Yao Zhang, Yida Zhao and Dengji Zhao | Auctions + Voting | 9 | |
Sybil-Proof Diffusion Auction in Social Networks | Hongyin Chen, Xiaotie Deng, Ying Wang, Yue Wu and Dengji Zhao | Auctions + Voting | 10 | |
Representing and Reasoning about Auctions | Munyque Mittelmann, Sylvain Bouveret and Laurent Perrussel | Auctions + Voting | 11 | |
Revisiting the Distortion of Distributed Voting | Aris Filos-Ratsikas and Alexandros Voudouris | Auctions + Voting | 12 | |
Bounded Approval Ballots: Balancing Expressiveness and Simplicity for Multiwinner Elections | Dorothea Baumeister, Linus Boes, Christian Laußmann and Simon Rey | Auctions + Voting | 13 | |
On the Distortion of Single Winner Elections with Aligned Candidates | Dimitris Fotakis and Laurent Gourves | Auctions + Voting | 14 | |
SAT-based Judgment Aggregation | Ari Conati, Andreas Niskanen and Matti Järvisalo | Auctions + Voting | 15 | |
GANterfactual-RL: Understanding Reinforcement Learning Agents' Strategies through Visual Counterfactual Explanations | Tobias Huber, Maximilian Demmler, Silvan Mertes, Matthew Olson and Elisabeth André | Learning with Humans and Robots | 69 | |
Asynchronous Multi-Agent Reinforcement Learning for Efficient Real-Time Multi-Robot Cooperative Exploration | Chao Yu, Xinyi Yang, Jiaxuan Gao, Jiayu Chen, Yunfei Li, Jijia Liu, Yunfei Xiang, Ruixin Huang, Huazhong Yang, Yi Wu and Yu Wang | Learning with Humans and Robots | 70 | |
Dec-AIRL: Decentralized Adversarial IRL for Human-Robot Teaming | Prasanth Sengadu Suresh, Yikang Gui and Prashant Doshi | Learning with Humans and Robots | 71 | |
Structural Attention-based Recurrent Variational Autoencoder for Highway Vehicle Anomaly Detection | Neeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson and Katherine Driggs-Campbell | Learning with Humans and Robots | 81 | |
Controlled Diversity with Preference : Towards Learning a Diverse Set of Desired Skills | Maxence Hussonnois, Thommen Karimpanal George and Santu Rana | Learning with Humans and Robots | 82 | |
Learning from Multiple Independent Advisors in Multi-agent Reinforcement Learning | Sriram Ganapathi Subramanian, Matthew E. Taylor, Kate Larson and Mark Crowley | Learning with Humans and Robots | 83 | |
Benchmarking Robustness and Generalization in Multi-Agent Systems: A Case Study on Neural MMO | Yangkun Chen, Joseph Suarez, Junjie Zhang, Chenghui Yu, Bo Wu, Hanmo Chen, Hengman Zhu, Rui Du, Shanliang Qian, Shuai Liu, Weijun Hong, Jinke He, Yibing Zhang, Liang Zhao, Clare Zhu, Julian Togelius, Sharada Mohanty, Jiaxin Chen, Xiu Li, Xiaolong Zhu and Phillip Isola | Engineering Multiagent Systems | 157 | |
SE4AI issues on Designing a Social Media Agent: Agile Use Case design for Behavioral Game Theory | Francisco Marcondes, José João Almeida and Paulo Novais | Engineering Multiagent Systems | 158 | |
Modeling Application Scenarios for Responsible Autonomy using Computational Transcendence | Jayati Deshmukh, Nikitha Adivi and Srinath Srinivasa | Engineering Multiagent Systems | 159 | |
Domain-Expert Configuration of Hypermedia Multi-Agent Systems in Industrial Use Cases | Jérémy Lemée, Samuele Burattini, Simon Mayer and Andrei Ciortea | Engineering Multiagent Systems | 160 | |
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads | Vincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull and Antoine Lesage Landry | Innovative Applications | 138 | |
The Swiss Gambit | Ágnes Cseh, Pascal Führlich and Pascal Lenzner | Innovative Applications | 139 | |
An Adversarial Strategic Game for Machine Learning as a Service using System Features | Guoxin Sun, Tansu Alpcan, Andrew Cullen, Seyit Camtepe and Benjamin Rubinstein | Innovative Applications | 140 | |
Optimizing Crop Management with Reinforcement Learning and Imitation Learning | Ran Tao, Pan Zhao, Jing Wu, Nicolas F. Martin, Matthew T. Harrison, Carla Ferreira, Zahra Kalantari and Naira Hovakimyan | Innovative Applications | 141 | |
A Novel Aggregation Framework for the Efficient Integration of Distributed Energy Resources in the Smart Grid | Stavros Orfanoudakis and Georgios Chalkiadakis | Innovative Applications | 142 | |
Near Optimal Strategies for Honeypots Placement in Dynamic and Large Active Directory Networks | Quang Huy Ngo, Mingyu Guo and Hung Nguyen | Innovative Applications | 143 | |
A Novel Demand Response Model and Method for Peak Reduction in Smart Grids -- PowerTAC | Sanjay Chandlekar, Arthik Boroju, Shweta Jain and Sujit Gujar | Innovative Applications | 144 | |
Shopping Assistance for Everyone: Dynamic Query Generation On a Semantic Digital Twin As a Basis for Autonomous Shopping Assistance | Michaela Kümpel, Jonas Dech, Alina Hawkin and Michael Beetz | Innovative Applications | 145 | |
Counterfactually Fair Dynamic Assignment: A Case Study on Policing | Tasfia Mashiat, Xavier Gitiaux, Huzefa Rangwala and Sanmay Das | Innovative Applications | 146 | |
A Cloud-Based Solution for Multi-Agent Traffic Control Systems | Chikadibia Ihejimba, Behnam Torabi and Rym Z. Wenkstern | Innovative Applications | 147 | 
Balancing Fairness and Efficiency in Transport Network Design through Reinforcement Learning | Dimitris Michailidis, Sennay Ghebreab and Fernando Santos | Innovative Applications | 148 | |
From Abstractions to Grounded Languages for Robust Coordination of Task Planning Robots | Yu Zhang | Robotics | 112 | |
Idleness Estimation for Distributed Multiagent Patrolling Strategies | Mehdi Othmani-Guibourg, Jean-Loup Farges and Amal El Fallah Seghrouchni | Robotics | 113 | |
Simpler rather than Challenging: Design of Non-Dyadic Human-Robot Collaboration to Mediate Concurrent Human-Human Tasks | Francesco Semeraro, Jon Carberry and Angelo Cangelosi | Robotics | 114 | |
Learning to Self-Reconfigure for Freeform Modular Robots via Altruism Multi-Agent Reinforcement Learning | Lei Wu, Bin Guo, Qiuyun Zhang, Zhuo Sun, Jieyi Zhang and Zhiwen Yu | Robotics | 115 | |
Learning Multiple Tasks with Non-stationary Interdependencies in Autonomous Robots | Alejandro Romero, Gianluca Baldassarre, Richard Duro and Vieri Giuliano Santucci | Robotics | 116 | |
A Lattice Model of 3D Environments For Provable Manipulation | John Harwell, London Lowmanstone and Maria Gini | Robotics | 117 | |
HoLA Robots: Mitigating Plan-Deviation Attacks in Multi-Robot Systems with Co-Observations and Horizon-Limiting Announcements | Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao Li | Robotics | 118 | |
Online Re-Planning and Adaptive Parameter Update for Multi-Agent Path Finding with Stochastic Travel Times | Atsuyoshi Kita, Nobuhiro Suenari, Masashi Okada and Tadahiro Taniguchi | Robotics | 119 | |
RTransNav: Relation-wise Transformer Network for More Successful Object Goal Navigation | Kang Zhou, Chi Guo, Huyin Zhang and Wenfei Guo | Robotics | 120 | 
Multi-Agent Pickup and Delivery in Presence of Another Team of Robots | Benedetta Flammini, Davide Azzalini and Francesco Amigoni | Robotics | 121 | |
Reward Relabelling for combined Reinforcement and Imitation Learning on sparse-reward tasks | Jesús Bujalance Martín and Fabien Moutarde | Robotics | 122 | |
Connectivity Enhanced Safe Neural Network Planner for Lane Changing in Mixed Traffic | Xiangguo Liu, Ruochen Jiao, Bowen Zheng, Dave Liang and Qi Zhu | Robotics | 123 | |
Bringing Diversity to Autonomous Vehicles: An Interpretable Multi-vehicle Decision-making and Planning Framework | Licheng Wen, Pinlong Cai, Daocheng Fu, Song Mao and Yikang Li | Robotics | 124 | |
Loss of Distributed Coverage Using Lazy Agents Operating Under Discrete, Local, Event-Triggered Communication | Edward Vickery and Aditya Paranjape | Robotics | 125 | |
Multi-Agent Path Finding via Reinforcement Learning with Hybrid Reward | Cheng Zhao, Liansheng Zhuang, Haonan Liu, Yihong Huang and Yang Jian | Robotics | 126 | |
Multi-Agent Pickup and Delivery with Task Probability Distribution | Andrea Di Pietro, Nicola Basilico and Francesco Amigoni | Robotics | 127 | |
Minimally Constraining Line-of-Sight Connectivity Maintenance for Collision-free Multi-Robot Networks under Uncertainty | Yupeng Yang, Yiwei Lyu and Wenhao Luo | Robotics | 128 | |
Multi-Agent Path Finding with Time Windows: Preliminary Results | Jianqi Gao, Qi Liu, Shiyu Chen, Kejian Yan, Xinyi Li and Yanjie Li | Robotics | 129 | |
Two Level Actor-Critic Using Multiple Teachers | Su Zhang, Srijita Das, Sriram Ganapathi Subramanian and Matthew E. Taylor | Learning and Adaptation | 84 | |
Provably Efficient Offline RL with Options | Xiaoyan Hu and Ho-fung Leung | Learning and Adaptation | 85 | |
Learning to Perceive in Deep Model-Free Reinforcement Learning | Gonçalo Querido, Alberto Sardinha and Francisco Melo | Learning and Adaptation | 86 | |
SCRIMP: Scalable Communication for Reinforcement- and Imitation-Learning-Based Multi-Agent Pathfinding | Yutong Wang, Bairan Xiang, Shinan Huang and Guillaume Sartoretti | Learning and Adaptation | 87 | |
Learning Group-Level Information Integration in Multi-Agent Communication | Xiangrui Meng and Ying Tan | Learning and Adaptation | 88 | |
Learnability with PAC Semantics for Multi-agent Beliefs | Ionela Mocanu, Vaishak Belle and Brendan Juba | Learning and Adaptation | 89 | |
Improving Cooperative Multi-Agent Exploration via Surprise Minimization and Social Influence Maximization | Mingyang Sun, Yaqing Hou, Jie Kang, Haiyin Piao, Yifeng Zeng, Hongwei Ge and Qiang Zhang | Learning and Adaptation | 90 | |
Learning to Operate in Open Worlds by Adapting Planning Models | Wiktor Piotrowski, Roni Stern, Yoni Sher, Jacob Le, Matthew Klenk, Johan de Kleer and Shiwali Mohan | Learning and Adaptation | 91 | |
End-to-End Optimization and Learning for Multiagent Ensembles | James Kotary, Vincenzo Di Vito and Ferdinando Fioretto | Learning and Adaptation | 92 | |
Optimal Decoy Resource Allocation for Proactive Defense in Probabilistic Attack Graphs | Haoxiang Ma, Shuo Han, Nandi Leslie, Charles Kamhoua and Jie Fu | Learning and Adaptation | 93 | |
Referential communication in heterogeneous communities of pre-trained visual deep networks | Matéo Mahaut, Roberto Dessì, Francesca Franzon and Marco Baroni | Learning and Adaptation | 94 | |
A Learning Approach to Complex Contagion Influence Maximization | Haipeng Chen, Bryan Wilder, Wei Qiu, Bo An, Eric Rice and Milind Tambe | Learning and Adaptation | 95 | |
Analyzing the Sensitivity to Policy-Value Decoupling in Deep Reinforcement Learning Generalization | Nasik Muhammad Nafi, Raja Farrukh Ali and William Hsu | Learning and Adaptation | 96 | |
Reinforcement Learning with Depreciating Assets | Taylor Dohmen and Ashutosh Trivedi | Learning and Adaptation | 97 | |
Matching Options to Tasks using Option-Indexed Hierarchical Reinforcement Learning | Kushal Chauhan, Soumya Chatterjee, Akash Reddy, Aniruddha S, Balaraman Ravindran and Pradeep Shenoy | Learning and Adaptation | 98 | |
DGPO: Discovering Multiple Strategies with Diversity-Guided Policy Optimization | Wenze Chen, Shiyu Huang, Yuan Chiang, Ting Chen and Jun Zhu | Learning and Adaptation | 99 | |
Accelerating Neural MCTS Algorithms using Neural Sub-Net Structures | Prashank Kadam, Ruiyang Xu and Karl Lieberherr | Learning and Adaptation | 100 | |
Provably Efficient Convergence of Primal-Dual Actor-Critic with Nonlinear Function Approximation | Jing Dong, Li Shen, Yinggan Xu and Baoxiang Wang | Learning and Adaptation | 103 | |
Achieving near-optimal regrets in confounded contextual bandits | Xueping Gong and Jiheng Zhang | Learning and Adaptation | 104 | |
Towards multi-agent learning of causal networks | Stefano Mariani, Franco Zambonelli and Pasquale Roseti | Learning and Adaptation | 105 | |
Proportional Fairness in Obnoxious Facility Location | Haris Aziz, Alexander Lam, Bo Li, Fahimeh Ramezani and Toby Walsh | Social Choice and Cooperative Game Theory | 31 | |
Distortion in Attribute Approval Committee Elections | Dorothea Baumeister and Linus Boes | Social Choice and Cooperative Game Theory | 32 | |
Relaxations of Envy-Freeness Over Graphs | Justin Payan, Rik Sengupta and Vignesh Viswanathan | Social Choice and Cooperative Game Theory | 40 | |
Fairly Allocating (Contiguous) Dynamic Indivisible Items with Few Adjustments | Mingwei Yang | Social Choice and Cooperative Game Theory | 41 | |
Measuring a Priori Voting Power - Taking Delegations Seriously | Rachael Colley, Théo Delemazure and Hugo Gilbert | Social Choice and Cooperative Game Theory | 42 | |
Sampling-Based Winner Prediction in District-Based Elections | Debajyoti Kar, Palash Dey and Swagato Sanyal | Social Choice and Cooperative Game Theory | 43 | |
Cedric: A Collaborative DDoS Defense System Using Credit | Jiawei Li, Hui Wang and Jilong Wang | Social Choice and Cooperative Game Theory | 44 | |
Social Aware Coalition Formation with Bounded Coalition Size | Chaya Levinger, Amos Azaria and Noam Hazon | Social Choice and Cooperative Game Theory | 45 | |
Repeatedly Matching Items to Agents Fairly and Efficiently | Shivika Narang and Ioannis Caragiannis | Social Choice and Cooperative Game Theory | 46 | |
The complexity of minimizing envy in house allocation | Jayakrishnan Madathil, Neeldhara Misra and Aditi Sethia | Social Choice and Cooperative Game Theory | 47 | |
Error in the Euclidean Preference Model | Luke Thorburn, Maria Polukarov and Carmine Ventre | Social Choice and Cooperative Game Theory | 48 | |
Distance Hypergraph Polymatrix Coordination Games | Alessandro Aloisio | Social Choice and Cooperative Game Theory | 56 | |
Search versus Search for Collapsing Electoral Control Types | Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David Narváez, Conor Taliancich and Henry B. Welles | Social Choice and Cooperative Game Theory | 57 | |
Does Delegating Votes Protect Against Pandering Candidates? | Xiaolin Sun, Jacob Masur, Benjamin Abramowitz, Nicholas Mattei and Zizhan Zheng | Social Choice and Cooperative Game Theory | 58 | |
Resilient Fair Allocation of Indivisible Goods | Dolev Mutzari, Yonatan Aumann and Sarit Kraus | Social Choice and Cooperative Game Theory | 59 | |
Stability of Weighted Majority Voting under Estimated Weights | Shaojie Bai, Dongxia Wang, Tim Muller, Peng Cheng and Jiming Chen | Social Choice and Cooperative Game Theory | 60 | 
Indivisible Participatory Budgeting with Multiple Degrees of Sophistication of Projects | Gogulapati Sreedurga | Social Choice and Cooperative Game Theory | 61 | |
Incentivizing Sequential Crowdsourcing Systems | Yuan Luo | Markets, Auctions, and Non-Cooperative Game Theory | 62 | |
No-regret Learning Dynamics for Sequential Correlated Equilibria | Hugh Zhang | Markets, Auctions, and Non-Cooperative Game Theory | 63 | |
Fair Pricing for Time-Flexible Smart Energy Markets | Roland Saur, Han La Poutré and Neil Yorke-Smith | Markets, Auctions, and Non-Cooperative Game Theory | 64 | |
Budget-Feasible Mechanism Design for Cost-Benefit Optimization in Gradual Service Procurement | Farzaneh Farhadi, Maria Chli and Nicholas R. Jennings | Markets, Auctions, and Non-Cooperative Game Theory | 73 | |
Analysis of a Learning Based Algorithm for Budget Pacing | Max Springer and Mohammadtaghi Hajiaghayi | Markets, Auctions, and Non-Cooperative Game Theory | 74 | |
Finding Optimal Nash Equilibria in Multiplayer Games via Correlation Plans | Youzhi Zhang, Bo An and V.S. Subrahmanian | Markets, Auctions, and Non-Cooperative Game Theory | 75 | |
Diffusion Multi-unit Auctions with Diminishing Marginal Utility Buyers | Haolin Liu, Xinyuan Lian and Dengji Zhao | Markets, Auctions, and Non-Cooperative Game Theory | 76 | |
Improving Quantal Cognitive Hierarchy Model Through Iterative Population Learning | Yuhong Xu, Shih-Fen Cheng and Xinyu Chen | Markets, Auctions, and Non-Cooperative Game Theory | 77 | |
A Nash-Bargaining-Based Mechanism for One-Sided Matching Markets under Dichotomous Utilities | Jugal Garg, Thorben Tröbst and Vijay Vazirani | Markets, Auctions, and Non-Cooperative Game Theory | 78 | |
Differentially Private Diffusion Auction: The Single-unit Case | Fengjuan Jia, Mengxiao Zhang, Jiamou Liu and Bakh Khoussainov | Markets, Auctions, and Non-Cooperative Game Theory | 79 | |
Learning in teams: peer evaluation for fair assessment of individual contributions | Fedor Duzhin | Markets, Auctions, and Non-Cooperative Game Theory | 80 | |
Day 3 | | | | 
Models of Anxiety for Agent Deliberation: The Benefits of Anxiety-Sensitive Agents | Arvid Horned and Loïs Vanhée | Blue Sky | 111 | 
Social Choice Around Decentralized Autonomous Organizations: On the Computational Social Choice of Digital Communities | Nimrod Talmon | Blue Sky | 112 | |
Value Inference in Sociotechnical Systems | Enrico Liscio, Roger Lera-Leri, Filippo Bistaffa, Roel I. J. Dobbe, Catholijn M. Jonker, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar and Pradeep K. Murukannaiah | Blue Sky | 113 | |
Presenting Multiagent Challenges in Team Sports Analytics | David Radke and Alexi Orchard | Blue Sky | 114 | |
Communication Meaning: Foundations and Directions for Systems Research | Amit Chopra and Samuel Christie | Blue Sky | 115 | |
The Rule–Tool–User Nexus in Digital Collective Decisions | Zoi Terzopoulou, Marijn A. Keijzer, Gogulapati Sreedurga and Jobst Heitzig | Blue Sky | 116 | |
Epistemic Side Effects: An AI Safety Problem | Toryn Q. Klassen, Parand Alizadeh Alamdari and Sheila A. McIlraith | Blue Sky | 117 | |
Citizen-Centric Multiagent Systems | Sebastian Stein and Vahid Yazdanpanah | Blue Sky | 118 | |
Non-Obvious Manipulability for Single-Parameter Agents and Bilateral Trade | Thomas Archbold, Bart de Keijzer and Carmine Ventre | Mechanism Design | 126 | |
Mechanism Design for Improving Accessibility to Public Facilities | Hau Chan and Chenhao Wang | Mechanism Design | 127 | |
Explicit Payments for Obviously Strategyproof Mechanisms | Diodato Ferraioli and Carmine Ventre | Mechanism Design | 128 | |
Bilevel Entropy based Mechanism Design for Balancing Meta in Video Games | Sumedh Pendurkar, Chris Chow, Luo Jie and Guni Sharon | Mechanism Design | 129 | |
IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas | Bengisu Guresti, Abdullah Vanlioglu and Nazim Kemal Ure | Mechanism Design | 130 | |
Settling the Distortion of Distributed Facility Location | Aris Filos-Ratsikas, Panagiotis Kanellopoulos, Alexandros Voudouris and Rongsen Zhang | Mechanism Design | 131 | |
Cost Sharing under Private Valuation and Connection Control | Tianyi Zhang, Junyu Zhang, Sizhe Gu and Dengji Zhao | Mechanism Design | 132 | |
Facility Location Games with Thresholds | Houyu Zhou, Guochuan Zhang, Lili Mei and Minming Li | Mechanism Design | 133 | |
Decentralised and Cooperative Control of Multi-Robot Systems through Distributed Optimisation | Yi Dong, Zhongguo Li, Xingyu Zhao, Zhengtao Ding and Xiaowei Huang | Robotics | 81 | |
Byzantine Resilience at Swarm Scale: A Decentralized Blocklist from Inter-robot Accusations | Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru and Wenchao Li | Robotics | 82 | |
Stigmergy-based, Dual-Layer Coverage of Unknown Regions | Ori Rappel, Michael Amir and Alfred Bruckstein | Robotics | 83 | |
Mitigating Imminent Collision for Multi-robot Navigation: A TTC-force Reward Shaping Approach | Jinlin Chen, Jiannong Cao, Zhiqin Cheng and Wei Li | Robotics | 84 | |
Gathering of Anonymous Agents | John Augustine, Arnhav Datar and Nischith Shadagopan M N | Robotics | 85 | |
Safe Deep Reinforcement Learning by Verifying Task-Level Properties | Enrico Marchesini, Luca Marzari, Alessandro Farinelli and Christopher Amato | Robotics | 86 | |
Decentralized Safe Navigation for Multi-agent Systems via Risk-aware Weighted Buffered Voronoi Cells | Yiwei Lyu, John Dolan and Wenhao Luo | Robotics | 87 | |
Heterogeneous Multi-Robot Reinforcement Learning | Matteo Bettini, Ajay Shankar and Amanda Prorok | Robotics | 88 | |
Random Majority Opinion Diffusion: Stabilization Time, Absorbing States, and Influential Nodes | Ahad N. Zehmakan | Social Networks | 134 | |
Axiomatic Analysis of Medial Centrality Measures | Wiktoria Kosny and Oskar Skibski | Social Networks | 135 | |
Online Influence Maximization under Decreasing Cascade Model | Fang Kong, Jize Xie, Baoxiang Wang, Tao Yao and Shuai Li | Social Networks | 136 | |
Node Conversion Optimization in Multi-hop Influence Networks | Jie Zhang, Yuezhou Lv and Zihe Wang | Social Networks | 137 | |
Decentralized core-periphery structure in social networks accelerates cultural innovation in agent-based modeling | Jesse Milzman and Cody Moser | Social Networks | 138 | |
Being an Influencer is Hard: The Complexity of Influence Maximization in Temporal Graphs with a Fixed Source | Argyrios Deligkas, Eduard Eiben, Tiger-Lily Goldsmith and George Skretas | Social Networks | 139 | |
Enabling Imitation-Based Cooperation in Dynamic Social Networks | Jacques Bara, Paolo Turrini and Giulia Andrighetto | Social Networks | 140 | |
The Grapevine Web: Analysing the Spread of False Information in Social Networks with Corrupted Sources | Jacques Bara, Charlie Pilgrim, Paolo Turrini and Stanislav Zhydkov | Social Networks | 141 | |
Differentiable Agent-based Epidemiology | Ayush Chopra, Alexander Rodríguez, Jayakumar Subramanian, Arnau Quera-Bofarull, Balaji Krishnamurthy, B. Aditya Prakash and Ramesh Raskar | Simulations | 89 | |
Social Distancing via Social Scheduling | Deepesh Kumar Lall, Garima Shakya and Swaprava Nath | Simulations | 90 | |
Don't Simulate Twice: one-shot sensitivity analyses via automatic differentiation | Arnau Quera-Bofarull, Ayush Chopra, Joseph Aylett-Bullock, Carolina Cuesta-Lazaro, Ani Calinescu, Ramesh Raskar and Mike Wooldridge | Simulations | 91 | |
Markov Aggregation for Speeding Up Agent-Based Movement Simulations | Bernhard Geiger, Alireza Jahani, Hussain Hussain and Derek Groen | Simulations | 92 | |
Agent-Based Modeling of Human Decision-makers Under Uncertain Information During Supply Chain Shortages | Nutchanon Yongsatianchot, Noah Chicoine, Jacqueline Griffin, Ozlem Ergun and Stacy Marsella | Simulations | 93 | |
Simulating panic amplification in crowds via a density-emotion interaction | Erik van Haeringen and Charlotte Gerritsen | Simulations | 94 | |
Modelling Agent Decision Making in Agent-based Simulation - Analysis Using an Economic Technology Uptake Model | Franziska Klügl and Hildegunn Kyvik Nordås | Simulations | 95 | |
Emotion contagion in agent-based simulations of crowds: a systematic review | Erik van Haeringen, Charlotte Gerritsen and Koen Hindriks | Simulations | 96 | |
Learning Inter-Agent Synergies in Asymmetric Multiagent Systems | Gaurav Dixit and Kagan Tumer | Multiagent Reinforcement Learning III | 23 | |
Asymptotic Convergence and Performance of Multi-Agent Q-learning Dynamics | Aamal Hussain, Francesco Belardinelli and Georgios Piliouras | Multiagent Reinforcement Learning III | 8 | |
Model-based Dynamic Shielding for Safe and Efficient Multi-agent Reinforcement Learning | Wenli Xiao, Yiwei Lyu and John Dolan | Multiagent Reinforcement Learning III | 24 | |
Toward Risk-based Optimistic Exploration for Cooperative Multi-Agent Reinforcement Learning | Jihwan Oh, Joonkee Kim, Minchan Jeong and Se-Young Yun | Multiagent Reinforcement Learning III | 9 | |
Counter-Example Guided Policy Refinement in Multi-agent Reinforcement Learning | Briti Gangopadhyay, Pallab Dasgupta and Soumyajit Dey | Multiagent Reinforcement Learning III | 25 | |
Prioritized Tasks Mining for Multi-Task Cooperative Multi-Agent Reinforcement Learning | Yang Yu, Qiyue Yin, Junge Zhang and Kaiqi Huang | Multiagent Reinforcement Learning III | 10 | |
M3: Modularization for Multi-task and Multi-agent Offline Pre-training | Linghui Meng, Jingqing Ruan, Xuantang Xiong, Xiyun Li, Xi Zhang, Dengpeng Xing and Bo Xu | Multiagent Reinforcement Learning III | 26 | |
The Importance of Credo in Multiagent Learning | David Radke, Kate Larson and Tim Brecht | Norms | 121 | |
Contextual Integrity for Argumentation-based Privacy Reasoning | Gideon Ogunniye and Nadin Kokciyan | Norms | 122 | |
Predicting privacy preferences for smart devices as norms | Marc Serramia, William Seymour, Natalia Criado and Michael Luck | Norms | 123 | |
Agent-directed runtime norm synthesis | Andreasa Morris Martin, Marina De Vos, Julian Padget and Oliver Ray | Norms | 124 | |
Emergence of Norms in Interactions with Complex Rewards | Dhaminda Abeywickrama, Nathan Griffiths, Zhou Xu and Alex Mouzakitis | Norms | 125 | |
User Device Interaction Prediction via Relational Gated Graph Attention Network and Intent-aware Encoder | Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Wenxin Tang, Runjie Zhou and Yong Jiang | Graph Neural Networks + Transformers | 4 | |
Inferring Player Location in Sports Matches: Multi-Agent Spatial Imputation from Limited Observations | Gregory Everett, Ryan Beal, Tim Matthews, Joseph Early, Timothy Norman and Sarvapali Ramchurn | Graph Neural Networks + Transformers | 20 | |
Learning Graph-Enhanced Commander-Executor for Multi-Agent Navigation | Xinyi Yang, Shiyu Huang, Yiwen Sun, Yuxiang Yang, Chao Yu, Wei-Wei Tu, Huazhong Yang and Yu Wang | Graph Neural Networks + Transformers | 5 | |
Permutation-Invariant Set Autoencoders with Fixed-Size Embeddings for Multi-Agent Learning | Ryan Kortvelesy, Steven Morad and Amanda Prorok | Graph Neural Networks + Transformers | 21 | |
Infomaxformer: Maximum Entropy Transformer for Long Time-Series Forecasting Problem | Peiwang Tang and Xianchao Zhang | Graph Neural Networks + Transformers | 6 | |
TransfQMix: Transformers for Leveraging the Graph Structure of Multi-Agent Reinforcement Learning Problems | Matteo Gallici, Mario Martin and Ivan Masmitja | Graph Neural Networks + Transformers | 22 | |
Intelligent Onboard Routing in Stochastic Dynamic Environments using Transformers | Rohit Chowdhury, Raswanth Murugan and Deepak Subramani | Graph Neural Networks + Transformers | 7 | |
Characterizations of Sequential Valuation Rules | Chris Dong and Patrick Lederer | Voting I | 11 | |
Collecting, Classifying, Analyzing, and Using Real-World Ranking Data | Niclas Boehmer and Nathan Schaar | Voting I | 27 | |
Margin of Victory for Weighted Tournament Solutions | Michelle Döring and Jannik Peters | Voting I | 12 | |
Bribery Can Get Harder in Structured Multiwinner Approval Election | Bartosz Kusek, Robert Bredereck, Piotr Faliszewski, Andrzej Kaczmarczyk and Dušan Knop | Voting I | 28 | |
Strategyproof Social Decision Schemes on Super Condorcet Domains | Felix Brandt, Patrick Lederer and Sascha Tausch | Voting I | 13 | |
Separating and Collapsing Electoral Control Types | Benjamin Carleton, Michael C. Chavrimootoo, Lane A. Hemaspaandra, David Narváez, Conor Taliancich and Henry B. Welles | Voting I | 29 | |
The Distortion of Approval Voting with Runoff | Soroush Ebadian, Mohamad Latifian and Nisarg Shah | Voting I | 14 | |
On the Complexity of the Two-Stage Majority Rule | Yongjie Yang | Voting II | 43 | |
Fairness in Participatory Budgeting via Equality of Resources | Jan Maly, Simon Rey, Ulle Endriss and Martin Lackner | Voting II | 59 | |
Free-Riding in Multi-Issue Decisions | Martin Lackner, Jan Maly and Oliviero Nardi | Voting II | 44 | |
k-prize Weighted Voting Game | Wei-Chen Lee, David Hyland, Alessandro Abate, Edith Elkind, Jiarui Gan, Julian Gutierrez, Paul Harrenstein and Michael Wooldridge | Voting II | 60 | |
Computing the Best Policy That Survives a Vote | Andrei Constantinescu and Roger Wattenhofer | Voting II | 45 | |
Voting by Axioms | Marie Christin Schmidtlein and Ulle Endriss | Voting II | 61 | |
A Hotelling-Downs game for strategic candidacy with binary issues | Javier Maass, Vincent Mousseau and Anaëlle Wilczynski | Voting II | 46 | |
Voting with Limited Energy: A Study of Plurality and Borda | Zoi Terzopoulou | Voting II | 62 | |
Revealed multi-objective utility aggregation in human driving | Atrisha Sarkar, Kate Larson and Krzysztof Czarnecki | Multi-objective Planning and Learning | 1 | |
A Brief Guide to Multi-Objective Reinforcement Learning and Planning | Conor F Hayes, Roxana Radulescu, Eugenio Bargiacchi, Johan Kallstrom, Matthew Macfarlane, Mathieu Reymond, Timothy Verstraeten, Luisa Zintgraf, Richard Dazeley, Fredrik Heintz, Enda Howley, Athirai A. Irissappane, Patrick Mannion, Ann Nowe, Gabriel Ramos, Marcello Restelli, Peter Vamplew and Diederik M. Roijers | Multi-objective Planning and Learning | 2 | |
Welfare and Fairness in Multi-objective Reinforcement Learning | Ziming Fan, Nianli Peng, Muhang Tian and Brandon Fain | Multi-objective Planning and Learning | 3 | |
Preference-Based Multi-Objective Multi-Agent Path Finding | Florence Ho and Shinji Nakadai | Multi-objective Planning and Learning | 17 | |
Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization | Lucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé and Bruno C. da Silva | Multi-objective Planning and Learning | 18 | |
MADDM: Multi-Advisor Dynamic Binary Decision-Making by Maximizing the Utility | Zhaori Guo, Timothy Norman and Enrico Gerding | Multi-objective Planning and Learning | 19 | |
Worst-Case Adaptive Submodular Cover | Jing Yuan and Shaojie Tang | Deep Learning | 33 | |
Minimax Strikes Back | Quentin Cohen-Solal and Tristan Cazenave | Deep Learning | 49 | |
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning | Bram Grooten, Ghada Sokar, Shibhansh Dohare, Elena Mocanu, Matthew Taylor, Mykola Pechenizkiy and Decebal Constantin Mocanu | Deep Learning | 34 | |
Parameter Sharing with Network Pruning for Scalable Multi-Agent Deep Reinforcement Learning | Woojun Kim and Youngchul Sung | Deep Learning | 50 | |
Learning Rewards to Optimize Global Performance Metrics in Deep Reinforcement Learning | Junqi Qian, Paul Weng and Chenmien Tan | Deep Learning | 35 | |
A Deep Reinforcement Learning Approach for Online Parcel Assignment | Hao Zeng, Qiong Wu, Kunpeng Han, Junying He and Haoyuan Hu | Deep Learning | 51 | |
CoRaL: Continual Representation Learning for Overcoming Catastrophic Forgetting | Mohammad Yasar and Tariq Iqbal | Deep Learning | 36 | |
FedMM: A Communication Efficient Solver for Federated Adversarial Domain Adaptation | Yan Shen, Jian Du, Han Zhao, Zhanghexuan Ji, Chunwei Ma and Mingchen Gao | Adversarial Learning + Social Networks + Causal Graphs | 146 | |
Adversarial Link Prediction in Spatial Networks | Michał Tomasz Godziszewski, Yevgeniy Vorobeychik and Tomasz Michalak | Adversarial Learning + Social Networks + Causal Graphs | 142 | |
Distributed Mechanism Design in Social Networks | Haoxin Liu, Yao Zhang and Dengji Zhao | Adversarial Learning + Social Networks + Causal Graphs | 143 | |
Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks | Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla and Goran Radanovic | Adversarial Learning + Social Networks + Causal Graphs | 144 | |
How to Turn an MAS into a Graphical Causal Model | H. Van Dyke Parunak | Adversarial Learning + Social Networks + Causal Graphs | 145 | |
Agent-based Simulation of District-based Elections with Heterogeneous Populations | Adway Mitra | Modelling and Simulation of Societies | 97 | |
Deep Learning-based Spatially Explicit Emulation of an Agent-Based Simulator for Pandemic in a City | Varun Madhavan, Adway Mitra and Partha Pratim Chakrabarti | Modelling and Simulation of Societies | 98 | |
A Decentralized Agent-Based Task Scheduling Framework for Handling Uncertain Events in Fog Computing | Yikun Yang, Fenghui Ren and Minjie Zhang | Modelling and Simulation of Societies | 99 | |
Co-evolution of social and non-social guilt in structured populations | Theodor Cimpeanu, Luís Moniz Pereira and The Anh Han | Modelling and Simulation of Societies | 100 | |
Phantom - A RL-driven Multi-Agent Framework to Model Complex Systems | Leo Ardon, Jared Vann, Deepeka Garg, Thomas Spooner and Sumitra Ganesh | Modelling and Simulation of Societies | 101 | |
Simulation Model with Side Trips at a Large-Scale Event | Ryo Niwa, Shunki Takami, Shusuke Shigenaka, Masaki Onishi, Wataru Naito and Tetsuo Yasutaka | Modelling and Simulation of Societies | 102 | |
The Price of Algorithmic Pricing: Investigating Collusion in a Market Simulation with AI Agents | Michael Schlechtinger, Damaris Kosack, Heiko Paulheim, Thomas Fetzer and Franz Krause | Modelling and Simulation of Societies | 103 | |
Crowd simulation incorporating a route choice model and similarity evaluation using real large-scale data | Ryo Nishida, Masaki Onishi and Koichi Hashimoto | Modelling and Simulation of Societies | 104 | |
Capturing Hiders with Moving Obstacles | Ayushman Panda and Kamalakar Karlapalem | Modelling and Simulation of Societies | 105 | |
COBAI: a generic agent-based model of human behaviors centered on contexts and interactions | Maëlle Beuret, Irene Foucherot, Christian Gentil and Joël Savelli | Modelling and Simulation of Societies | 106 | |
Learning Solutions in Large Economic Networks using Deep Multi-Agent Reinforcement Learning | Michael Curry, Alexander Trott, Soham Phade, Yu Bai and Stephan Zheng | Modelling and Simulation of Societies | 107 | |
Opinion Dynamics in Populations of Converging and Polarizing Agents | Anshul Toshniwal and Fernando P. Santos | Modelling and Simulation of Societies | 108 | |
On a Voter Model with Context-Dependent Opinion Adoption | Luca Becchetti, Vincenzo Bonifaci, Emilio Cruciani and Francesco Pasquale | Modelling and Simulation of Societies | 109 | |
Cognitive Bias-Aware Dissemination Strategies for Opinion Dynamics with External Information Sources | Abdullah Al Maruf, Luyao Niu, Bhaskar Ramasubramanian, Andrew Clark and Radha Poovendran | Modelling and Simulation of Societies | 110 | |
Representation-based Individual Fairness in k-clustering | Debajyoti Kar, Mert Kosan, Debmalya Mandal, Sourav Medya, Arlei Silva, Palash Dey and Swagato Sanyal | Coordination, Organisations, Institutions, and Norms | 147 | |
S&F: Sources and Facts Reliability Evaluation Method | Quentin Elsaesser, Patricia Everaere and Sébastien Konieczny | Coordination, Organisations, Institutions, and Norms | 148 | |
Offline Multi-Agent Reinforcement Learning with Coupled Value Factorization | Xiangsen Wang and Xianyuan Zhan | Coordination, Organisations, Institutions, and Norms | 149 | |
Learning Optimal “Pigovian Tax” in Sequential Social Dilemmas | Yun Hua, Shang Gao, Wenhao Li, Bo Jin, Xiangfeng Wang and Hongyuan Zha | Coordination, Organisations, Institutions, and Norms | 150 | |
PACCART: Reinforcing Trust in Multiuser Privacy Agreement Systems | Daan Di Scala and Pinar Yolum | Coordination, Organisations, Institutions, and Norms | 151 | |
Explain to Me: Towards Understanding Privacy Decisions | Gonul Ayci, Arzucan Ozgur, Murat Sensoy and Pinar Yolum | Coordination, Organisations, Institutions, and Norms | 152 | |
The Resilience Game: A New Formalization of Resilience for Groups of Goal-Oriented Autonomous Agents | Michael A. Goodrich, Jennifer Leaf, Julie A. Adams and Matthias Scheutz | Coordination, Organisations, Institutions, and Norms | 153 | |
Differentially Private Network Data Collection for Influence Maximization | M. Amin Rahimian, Fang-Yi Yu and Carlos Hurtado | Coordination, Organisations, Institutions, and Norms | 154 | |
Inferring Implicit Trait Preferences from Demonstrations of Task Allocation in Heterogeneous Teams | Vivek Mallampati and Harish Ravichandar | Coordination, Organisations, Institutions, and Norms | 155 | |
From Scripts to RL Environments: Towards Imparting Commonsense Knowledge to RL Agents | Abhinav Joshi, Areeb Ahmad, Umang Pandey and Ashutosh Modi | Learning and Adaptation | 37 | |
Hierarchical Reinforcement Learning with Attention Reward | Sihong Luo, Jinghao Chen, Zheng Hu, Chunhong Zhang and Benhui Zhuang | Learning and Adaptation | 38 | |
FedHQL: Federated Heterogeneous Q-Learning | Flint Xiaofeng Fan, Yining Ma, Zhongxiang Dai, Cheston Tan and Bryan Kian Hsiang Low | Learning and Adaptation | 39 | |
Know Your Enemy: Identifying and Adapting to Adversarial Attacks in Deep Reinforcement Learning | Seán Caulfield Curley, Karl Mason and Patrick Mannion | Learning and Adaptation | 40 | |
Transformer Actor-Critic with Regularization: Automated Stock Trading using Reinforcement Learning | Namyeong Lee and Jun Moon | Learning and Adaptation | 41 | |
Model-Based Actor-Critic for Multi-Objective Reinforcement Learning with Dynamic Utility Functions | Johan Källström and Fredrik Heintz | Learning and Adaptation | 53 | |
Relaxed Exploration Constrained Reinforcement Learning | Shahaf Shperberg, Bo Liu and Peter Stone | Learning and Adaptation | 54 | |
Causality Detection for Efficient Multi-Agent Reinforcement Learning | Rafael Pina, Varuna De Silva and Corentin Artaud | Learning and Adaptation | 55 | |
Diversity Through Exclusion (DTE): Niche Identification for Reinforcement Learning through Value-Decomposition | Peter Sunehag, Alexander Vezhnevets, Edgar Duéñez-Guzmán, Igor Mordatch and Joel Leibo | Learning and Adaptation | 56 | |
Temporally Layered Architecture for Adaptive, Distributed and Continuous Control | Devdhar Patel, Joshua Russell, Francesca Walsh, Tauhidur Rahman, Terrence Sejnowski and Hava Siegelmann | Learning and Adaptation | 57 | |
Multi-objective Reinforcement Learning in Factored MDPs with Graph Neural Networks | Marc Vincent, Amal El Fallah Seghrouchni, Vincent Corruble, Narayan Bernardin, Rami Kassab and Frédéric Barbaresco | Learning and Adaptation | 65 | |
An Analysis of Connections Between Regret Minimization and Actor Critic Methods in Cooperative Settings | Chirag Chhablani and Ian Kash | Learning and Adaptation | 66 | |
Attention-Based Recurrency for Multi-Agent Reinforcement Learning under State Uncertainty | Thomy Phan, Fabian Ritz, Jonas Nüßlein, Michael Kölle, Thomas Gabor and Claudia Linnhoff-Popien | Learning and Adaptation | 67 | |
A Theory of Mind Approach as Test-Time Mitigation Against Emergent Adversarial Communication | Nancirose Piazza and Vahid Behzadan | Learning and Adaptation | 68 | |
Defensive Collaborative Learning: Protecting Objective Privacy in Data Sharing | Cynthia Huang and Pascal Poupart | Learning and Adaptation | 69 | |
Neuro-Symbolic World Models for Adapting to Open World Novelty | Jonathan Balloch, Zhiyu Lin, Robert Wright, Mustafa Hussain, Aarun Srinivas, Xiangyu Peng, Julia Kim and Mark Riedl | Learning and Adaptation | 70 | |
Modeling Dynamic Environments with Scene Graph Memory | Andrey Kurenkov, Michael Lingelbach, Tanmay Agarwal, Chengshu Li, Emily Jin, Ruohan Zhang, Fei-Fei Li, Jiajun Wu, Silvio Savarese and Roberto Martín-Martín | Learning and Adaptation | 71 | |
Group Fair Clustering Revisited -- Notions and Efficient Algorithm | Shivam Gupta, Ganesh Ghalme, Narayanan C. Krishnan and Shweta Jain | Learning and Adaptation | 72 | |
LTL-Based Non-Markovian Inverse Reinforcement Learning | Alvaro Velasquez, Ashutosh Gupta, Ashutosh Trivedi, Krishna S, Mohammad Afzal and Sankalp Gambhir | Learning and Adaptation | 73 | |
The Parameterized Complexity of Welfare Guarantees in Schelling Segregation | Argyrios Deligkas, Eduard Eiben and Tiger-Lily Goldsmith | Social Choice and Cooperative Game Theory | 15 | |
Fair Chore Division under Binary Supermodular Costs | Siddharth Barman, Vishnu Narayan and Paritosh Verma | Social Choice and Cooperative Game Theory | 16 | |
Deliberation as Evidence Disclosure: A Tale of Two Protocol Types | Julian Chingoma and Adrian Haret | Social Choice and Cooperative Game Theory | 30 | |
How Does Fairness Affect the Complexity of Gerrymandering? | Sandip Banerjee, Rajesh Chitnis and Abhiruk Lahiri | Social Choice and Cooperative Game Theory | 31 | |
Individual-Fair and Group-Fair Social Choice Rules under Single-Peaked Preferences | Gogulapati Sreedurga, Soumyarup Sadhukhan, Souvik Roy and Yadati Narahari | Social Choice and Cooperative Game Theory | 32 | |
Maximin share Allocations for Assignment Valuations | Pooja Kulkarni, Rucha Kulkarni and Ruta Mehta | Social Choice and Cooperative Game Theory | 47 | |
Computational Complexity of Verifying the Group No-show Paradox | Farhad Mohsin, Qishen Han, Sikai Ruan, Pin-Yu Chen, Francesca Rossi and Lirong Xia | Social Choice and Cooperative Game Theory | 48 | |
Optimal Capacity Modification for Many-To-One Matching Problems | Jiehua Chen and Gergely Csáji | Social Choice and Cooperative Game Theory | 63 | |
Learning to Explain Voting Rules | Inwon Kang, Qishen Han and Lirong Xia | Social Choice and Cooperative Game Theory | 64 | |
MMS Allocations of Chores with Connectivity Constraints: New Methods and New Results | Mingyu Xiao, Guoliang Qiu and Sen Huang | Social Choice and Cooperative Game Theory | 75 | |
Group Fairness in Peer Review | Haris Aziz, Evi Micha and Nisarg Shah | Social Choice and Cooperative Game Theory | 76 | |
Altruism in Facility Location Problems | Houyu Zhou, Hau Chan and Minming Li | Social Choice and Cooperative Game Theory | 77 | |
Transfer Learning based Agent for Automated Negotiation | Siqi Chen, Qisong Sun, Heng You, Tianpei Yang and Jianye Hao | Markets, Auctions, and Non-Cooperative Game Theory | 78 | |
Single-Peaked Jump Schelling Games | Tobias Friedrich, Pascal Lenzner, Louise Molitor and Lars Seifert | Markets, Auctions, and Non-Cooperative Game Theory | 89 | |
Defining deception in structural causal games | Francis Rhys Ward, Francesco Belardinelli and Francesca Toni | Markets, Auctions, and Non-Cooperative Game Theory | 80 | |
Game Model Learning for Mean Field Games | Yongzhao Wang and Michael Wellman | Markets, Auctions, and Non-Cooperative Game Theory | 156 | |
Two-phase security games | Andrzej Nagórko, Paweł Ciosmak and Tomasz Michalak | Markets, Auctions, and Non-Cooperative Game Theory | 157 | |
Stationary Equilibrium of Mean Field Games with Congestion-dependent Sojourn Times | Costas Courcoubetis and Antonis Dimakis | Markets, Auctions, and Non-Cooperative Game Theory | 158 | |
Last-mile Collaboration: A Decentralized Mechanism with Bounded Performance Guarantees and Implementation Strategies | Keyang Zhang, Jose Javier Escribano Macias, Dario Paccagnan and Panagiotis Angeloudis | Markets, Auctions, and Non-Cooperative Game Theory | 159 | |
Deep Learning-Powered Iterative Combinatorial Auctions with Active Learning | Benjamin Estermann, Stefan Kramer, Roger Wattenhofer and Ye Wang | Markets, Auctions, and Non-Cooperative Game Theory | 160 | |
Revenue Maximization Mechanisms for an Uninformed Mediator with Communication Abilities | Zhikang Fan and Weiran Shen | Markets, Auctions, and Non-Cooperative Game Theory | 161 | |
Demo Sessions
Time | Title | Authors |
Day 1 (Wed) Demo Sessions: Morning | ||
Day 1 (Wed) Morning | TDD for AOP: Test-Driven Development for Agent-Oriented Programming | Cleber Amaral, Jomi Fred Hubner and Timotheus Kampik |
Demonstrating Performance Benefits of Human-Swarm Teaming | William Hunt, Jack Ryan, Ayodeji O Abioye, Sarvapali D Ramchurn and Mohammad D Soorati | |
Robust JaCaMo Applications via Exceptions and Accountability | Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi | |
Real Time Gesturing in Embodied Agents for Dynamic Content Creation | Hazel Watson-Smith, Felix Marcon Swadel, Jo Hutton, Kirstin Marcon, Mark Sagar, Shane Blackett, Tiago Ribeiro, Travers Biddle and Tim Wu | |
A Web-based Tool for Detecting Argument Validity and Novelty | Sandrine Chausson, Ameer Saadat-Yazdi, Xue Li, Jeff Z. Pan, Vaishak Belle, Nadin Kokciyan and Bjorn Ross | |
Day 1 (Wed) Demo Sessions: Afternoon | ||
Day 1 (Wed) Afternoon | TDD for AOP: Test-Driven Development for Agent-Oriented Programming | Cleber Amaral, Jomi Fred Hubner and Timotheus Kampik |
Demonstrating Performance Benefits of Human-Swarm Teaming | William Hunt, Jack Ryan, Ayodeji O Abioye, Sarvapali D Ramchurn and Mohammad D Soorati | |
The influence maximisation game | Sukankana Chakraborty, Sebastian Stein, Ananthram Swami, Matthew Jones and Lewis Hill | |
Interaction-Oriented Programming: Intelligent, Meaning-Based Multiagent Systems | Amit Chopra, Samuel Christie and Munindar P. Singh | |
Improvement and Evaluation of the Policy Legibility in Reinforcement Learning | Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao and Xiaohan Huang | |
Visualizing Logic Explanations for Social Media Moderation | Marc Roig Vilamala, Dave Braines, Federico Cerutti and Alun Preece | |
Day 2 (Thu) Demo Sessions: Morning | ||
Day 2 (Thu) Morning | Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance | Mara Cairo, Graham Doerksen, Bevin Eldaphonse, Johannes Gunther, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Payam Mousavi, Sean Murphy, Laura Petrich, Sahir, Jubair Sheikh, Talat Syed and Matthew E. Taylor |
Hiking up that HILL with Cogment-Verse: Train & operate multi-agent systems learning from Humans | Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars and Matthew E. Taylor | |
The influence maximisation game | Sukankana Chakraborty, Sebastian Stein, Ananthram Swami, Matthew Jones and Lewis Hill | |
Interaction-Oriented Programming: Intelligent, Meaning-Based Multiagent Systems | Amit Chopra, Samuel Christie and Munindar P. Singh | |
Improvement and Evaluation of the Policy Legibility in Reinforcement Learning | Yanyu Liu, Yifeng Zeng, Biyang Ma, Yinghui Pan, Huifan Gao and Xiaohan Huang | |
Visualizing Logic Explanations for Social Media Moderation | Marc Roig Vilamala, Dave Braines, Federico Cerutti and Alun Preece | |
Day 2 (Thu) Demo Sessions: Afternoon | ||
Day 2 (Thu) Afternoon | Multi-Robot Warehouse Optimization: Leveraging Machine Learning for Improved Performance | Mara Cairo, Graham Doerksen, Bevin Eldaphonse, Johannes Gunther, Nikolai Kummer, Jordan Maretzki, Gupreet Mohhar, Payam Mousavi, Sean Murphy, Laura Petrich, Sahir, Jubair Sheikh, Talat Syed and Matthew E. Taylor |
Hiking up that HILL with Cogment-Verse: Train & operate multi-agent systems learning from Humans | Sai Krishna Gottipati, Luong-Ha Nguyen, Clodéric Mars and Matthew E. Taylor | |
Robust JaCaMo Applications via Exceptions and Accountability | Matteo Baldoni, Cristina Baroglio, Roberto Micalizio and Stefano Tedeschi | |
Real Time Gesturing in Embodied Agents for Dynamic Content Creation | Hazel Watson-Smith, Felix Marcon Swadel, Jo Hutton, Kirstin Marcon, Mark Sagar, Shane Blackett, Tiago Ribeiro, Travers Biddle and Tim Wu | |
A Web-based Tool for Detecting Argument Validity and Novelty | Sandrine Chausson, Ameer Saadat-Yazdi, Xue Li, Jeff Z. Pan, Vaishak Belle, Nadin Kokciyan and Bjorn Ross |