Human-in-the-Loop Bandit Learning (English Speech)

Thursday, May 16, 2019 -
15:40 to 17:00
R102, Der Tian Hall, National Taiwan University / No. 1, Sec. 4, Roosevelt Rd., Taipei

 

Topic: Human-in-the-Loop Bandit Learning (English Speech)

 

Speaker: Prof. Chien-Ju Ho (Washington University in St. Louis, USA)

 

Date: Thursday, May 16th, 2019

Time: 15:40 - 17:00

Venue: R102, CSIE Der Tian Hall, National Taiwan University (NTU)

 

Abstract

Bandit learning is a sequential decision-making framework in which only partial feedback is observable. In the standard stochastic bandit setting, the learner chooses an action at each time step and observes a reward drawn independently from a distribution associated with the chosen action. The learner's goal is to maximize the cumulative reward obtained from the chosen actions over time. Over the past few decades, an extensive literature on bandit problems has developed. However, bandit learning is increasingly used to make decisions in human-in-the-loop systems, such as online advertising and user-generated content platforms, where the common assumption of independent rewards may no longer hold. In this talk, I discuss my recent work on the design of bandit algorithms with humans in the loop, in the context of crowdsourcing markets and user-generated content platforms.
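To make the standard stochastic bandit setting above concrete, here is a minimal sketch of the classic UCB1 algorithm (this is a textbook illustration of the framework, not the speaker's own method; the arm distributions and horizon are hypothetical):

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """UCB1 in the stochastic bandit setting: at each time step pick the
    arm maximizing empirical mean plus a confidence bonus, observe one
    reward drawn from that arm's distribution, and update the estimate."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts = [0] * k       # number of times each arm has been pulled
    means = [0.0] * k      # empirical mean reward of each arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # pull each arm once to initialize estimates
        else:
            arm = max(range(k),
                      key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = reward_fns[arm](rng)               # observe partial feedback
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean
        total += r
    return total, counts

# Two Bernoulli arms with success probabilities 0.3 and 0.7;
# the learner should concentrate its pulls on the better arm.
arms = [lambda rng: float(rng.random() < 0.3),
        lambda rng: float(rng.random() < 0.7)]
reward, counts = ucb1(arms, horizon=2000)
```

Note that each reward here is drawn independently given the chosen arm; the human-in-the-loop settings discussed in the talk are precisely those where this independence assumption breaks down.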

Biography

Chien-Ju Ho is an assistant professor of Computer Science & Engineering at Washington University in St. Louis. Previously, he was a postdoctoral associate at Cornell University. He earned his PhD in Computer Science from the University of California, Los Angeles in 2015 and spent three years visiting the EconCS group at Harvard from 2012 to 2015. He received the Google Outstanding Graduate Research Award at UCLA in 2015, and his work was nominated for the Best Paper Award at WWW 2015. His research centers on the design and analysis of human-in-the-loop systems, drawing on techniques from machine learning, algorithmic economics, optimization, and online behavioral social science. He is interested in developing realistic models of human behavior and studying how these models influence the design of machine learning algorithms and incentive mechanisms.