A Computational Theory of Learning Flexible Reward-Seeking Behavior with Place Cells
- Title: A Computational Theory of Learning Flexible Reward-Seeking Behavior with Place Cells
- Speaker: Yuanxiang Gao (UESTC)
- Date: May 12, 2022, 10:30
- Venue: https://meeting.tencent.com/dm/D5ppqaeVTt6n (Meeting number: 247-976-505, password: 02215)
Although AI has drawn much attention nowadays, I believe the "golden key" to better AI is still hidden in biological brains. In this talk, I will present my recent attempts to integrate neuroscientific observations about, e.g., the hippocampus, striatum, and ventral tegmental area into a computational theory that reveals how the coordination of underlying mechanisms within rodent brains determines their acquisition of flexible strategies in behaviors such as maze-running. Specifically, the proposed computational theory explains how synaptic plasticity of place cells and medium spiny neurons supports the efficient learning of reward-seeking behavior through hippocampal replay. The proposed theory is implemented in a high-fidelity maze-running virtual rat in the MuJoCo physics simulator. The learning efficiency of this rat is over two orders of magnitude better than that of a rat using a neuroscience-inspired reinforcement learning algorithm, the deep Q-network (DQN). Through presenting this work, I will show that incorporating state-of-the-art neuroscientific understanding of learning, memory, and motivation into the design loop of an artificial agent is important for building artificial intelligence that matches the behavioral performance of animals.
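To give a rough flavor of the kind of mechanism the abstract describes, the sketch below shows one simple way place-cell features, a striatal-like value readout, and offline replay of stored experience could be combined. It is an illustrative toy, not the speaker's actual model; the place-field width, learning rate, replay schedule, and 1-D track are all assumptions made only for demonstration.

```python
import numpy as np

# Illustrative sketch only: a linear "striatal" value readout over Gaussian
# place-cell features, trained by TD updates that are also applied offline
# to stored transitions ("replay"). All constants are assumptions for
# demonstration; this is not the speaker's actual model.

N_CELLS = 64            # number of place cells tiling a 1-D track
SIGMA = 0.05            # place-field width (assumed)
ALPHA, GAMMA = 0.1, 0.95
centers = np.linspace(0.0, 1.0, N_CELLS)

def place_activity(x):
    """Gaussian place-cell population response to position x in [0, 1]."""
    return np.exp(-(x - centers) ** 2 / (2 * SIGMA ** 2))

w = np.zeros(N_CELLS)   # synaptic weights onto a value-coding readout unit
replay_buffer = []      # stored (x, x_next, reward) transitions

def td_update(x, x_next, reward):
    """One TD(0) weight update on the place-cell -> readout synapses."""
    global w
    phi, phi_next = place_activity(x), place_activity(x_next)
    delta = reward + GAMMA * w @ phi_next - w @ phi   # dopamine-like error
    w += ALPHA * delta * phi

# Online experience: the agent runs toward a reward at the end of the track.
x = 0.0
while x < 1.0:
    x_next = min(x + 0.05, 1.0)
    reward = 1.0 if x_next >= 1.0 else 0.0
    replay_buffer.append((x, x_next, reward))
    td_update(x, x_next, reward)
    x = x_next

# Offline "replay": revisiting stored transitions propagates the reward
# signal backward much faster than additional online runs would.
for _ in range(20):
    for x, x_next, reward in replay_buffer:
        td_update(x, x_next, reward)

print("Value estimate near start vs. goal:",
      w @ place_activity(0.0), w @ place_activity(1.0))
```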
About the speaker:
Dr. Yuanxiang Gao is a researcher in communication and information systems at the University of Electronic Science and Technology of China (UESTC), Chengdu. He received his B.E. and Ph.D. degrees from UESTC in 2014 and 2021, respectively. From 2017 to 2018, he was a visiting researcher at the University of Toronto (UoT), Toronto, Ontario, Canada. Dr. Gao's research interests span several frontier topics, including system-level computational models of the neural basis of learning reward-seeking behavior and deep reinforcement learning algorithms for handling complexity in a variety of application scenarios. Two of his previous works were published at top artificial intelligence (AI) conferences, NeurIPS and ICML. Currently, Dr. Gao actively serves as a reviewer for these conferences.