Straightforward hierarchical RL for concurrent discovery of sub-policies and their controller.
I wonder if that happens every time...
Extending DQN to produce estimates of return distributions, and an exploration of why this helps learning.
Better imitation learning with self-correcting policies via negative sampling.
CNNs trained in "the usual way" tend to learn something different from what you might expect: they learn to recognize textures (local structure) rather than shapes (global structure).
Pre-training using a generative model of pre-recorded trajectories and bias correction.
Beginning a new series highlighting a few interesting RL papers on the arXiv each week. This week: simple curriculum learning, learning to interact with humans, and warm-starting RL with propositional logic.
My worked-out differentiation from reading Ilya Sutskever on the biological plausibility of Boltzmann machines.
The purpose statement and introduction to Computable AI.