Economic Hierarchical Reinforcement Learning

My senior thesis, advised by David C. Parkes, applied economic approaches to hierarchical reinforcement learning.  The thesis won a Thomas T. Hoopes Prize for outstanding undergraduate research.  We published a conference paper based on this work, titled Economic Hierarchical Q-Learning, at AAAI'08.

Abstract

Hierarchical state decompositions address the curse of dimensionality in Q-learning methods for reinforcement learning (RL) but can suffer from suboptimality. To address this, we introduce the Economic Hierarchical Q-Learning (EHQ) algorithm for hierarchical RL. EHQ uses subsidies to align interests, so that agents that would otherwise converge to a merely recursively optimal policy are instead motivated to act hierarchically optimally. The essential idea is that a parent pays a child the relative value, to the rest of the system, of "returning the world" in one state rather than another. The resulting learning framework is simple compared to other algorithms that achieve hierarchical optimality. Additionally, EHQ encapsulates the relevant value tradeoffs faced across the hierarchy at each node and requires minimal data exchange between nodes. We provide no theoretical proof of hierarchical optimality but demonstrate empirical success with EHQ.
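
To make the subsidy idea concrete, below is a minimal Python sketch of a parent paying a child for the exit state it produces. Everything here is illustrative rather than taken from the paper: the Subtask class, the ToyEnv corridor, and the use of a fixed baseline exit when computing the relative value are all assumptions, and the actual EHQ accounting over a MAXQ-style hierarchy differs in its details.

import random
from collections import defaultdict


class Subtask:
    """One node in a two-level hierarchy. Hypothetical sketch: each node
    runs ordinary tabular Q-learning over its own (state, action) pairs."""

    def __init__(self, actions, alpha=0.1, gamma=1.0, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)], default 0
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def value(self, state):
        return max(self.q[(state, a)] for a in self.actions)

    def update(self, s, a, r, s_next, done):
        target = r if done else r + self.gamma * self.value(s_next)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])


def subsidy(parent, exit_state, baseline_state):
    """Parent's payment for "returning the world" in exit_state rather
    than in a fixed baseline exit: the relative value of the two exits.
    (The fixed baseline is a simplification assumed for this sketch.)"""
    return parent.value(exit_state) - parent.value(baseline_state)


class ToyEnv:
    """Placeholder corridor: states 0..4, actions move left or right,
    and the child subtask terminates at either end. Purely illustrative."""

    def step(self, s, a):
        s_next = s + a
        done = s_next in (0, 4)
        return s_next, -1.0, done   # unit cost per step


def run_child(child, parent, env, start_state, baseline_state=0):
    """Run the child to termination. On the final step the environment
    reward is augmented with the parent's subsidy, so the child's greedy
    policy internalizes which exit the rest of the system prefers."""
    s = start_state
    while True:
        a = child.choose(s)
        s_next, r, done = env.step(s, a)
        if done:
            r += subsidy(parent, s_next, baseline_state)
        child.update(s, a, r, s_next, done)
        if done:
            return s_next
        s = s_next


if __name__ == "__main__":
    env, child = ToyEnv(), Subtask(actions=[-1, +1])
    parent = Subtask(actions=[None])
    # Suppose the parent has already learned that getting the world back
    # in state 4 is worth 10 more than getting it back in state 0.
    parent.q[(4, None)] = 10.0
    for _ in range(500):
        run_child(child, parent, env, start_state=2)
    # Without the subsidy both exits cost the same; with it, the child's
    # greedy choice at state 2 heads for the exit the parent values.
    print("greedy action at state 2:",
          max([-1, +1], key=lambda a: child.q[(2, a)]))

Note that the payment arrives only at termination and depends only on the parent's values over exit states, leaving the child's internal learning untouched; that locality is one way to read the abstract's claim of minimal data exchange between nodes.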

Full Text: Economic Hierarchical Q-Learning