
Programming Agents via Evaluative Feedback - Michael Littman

Speaker: 
Michael Littman
Event date: 
Thursday, 17 November, 2022 - 16:30
Location: 
Online (Zoom): https://uniroma1.zoom.us/j/99323809469?pwd=QklieXJtMkFzZkQvRUhvMWNON2tSUT09
Contact: 
Antonio Di Stasio: [email protected]

Abstract: Reinforcement-learning agents optimize their behavior from evaluative feedback. This talk focuses on how we're improving reinforcement-learning algorithms by building a better understanding of how people can provide this feedback through telling, training, and teaching. Specifically, it will cover linear temporal logic as a task representation, arguing that some behaviors are difficult to express with traditional rewards. It will also summarize work that attempts to characterize the ways human users deliver evaluative feedback, along with novel algorithms that can use this feedback effectively as a way of enabling end-user training of AI systems.
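To make the contrast concrete, the following is a minimal, hypothetical sketch (not taken from the talk): a per-step scalar reward can be "gamed" by a trajectory that briefly violates a safety condition, whereas a temporal-logic-style specification such as "eventually reach the goal and never enter a hazard" is judged over the whole trajectory. All names here (Step, shaped_reward, satisfies_spec) are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    at_goal: bool
    in_hazard: bool

def shaped_reward(step: Step) -> float:
    """Per-step scalar reward: +1 at the goal, -1 in a hazard, 0 otherwise.
    A trajectory that briefly touches a hazard can still accumulate a
    non-negative return, masking the violation."""
    if step.in_hazard:
        return -1.0
    return 1.0 if step.at_goal else 0.0

def satisfies_spec(trajectory: List[Step]) -> bool:
    """Checks an LTL-style property (F goal AND G not-hazard) on a finite trace:
    the goal is eventually reached and no hazard is ever entered.
    This whole-trajectory requirement has no direct per-step reward encoding."""
    eventually_goal = any(s.at_goal for s in trajectory)
    always_safe = all(not s.in_hazard for s in trajectory)
    return eventually_goal and always_safe

if __name__ == "__main__":
    risky = [Step(False, False), Step(False, True), Step(True, False)]
    safe = [Step(False, False), Step(False, False), Step(True, False)]
    print(sum(shaped_reward(s) for s in risky))          # 0.0: penalty cancels out
    print(satisfies_spec(risky), satisfies_spec(safe))   # False True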


Bio: Michael L. Littman is a University Professor of Computer Science at Brown University, studying machine learning and decision making under uncertainty. He has earned multiple university-level awards for teaching, and his research on reinforcement learning, probabilistic planning, and automated crossword-puzzle solving has been recognized with three best-paper awards and three influential paper awards. Littman is co-director of Brown's Humanity Centered Robotics Initiative and a Fellow of the Association for the Advancement of Artificial Intelligence and the Association for Computing Machinery. He is also a Fellow of the American Association for the Advancement of Science's Leshner Leadership Institute for Public Engagement with Science, focusing on Artificial Intelligence.
