Presentation Details

Poster

Neural Network Modeling

Date 2014/9/11
Time 11:00 - 12:00
Venue Poster / Exhibition (Event Hall B)

Information Maximization of Neural Networks in Reinforcement Learning

  • P1-371
  • Takashi Hayakawa:1,3, Takeshi Kaneko:1, Toshio Aoyagi:2,3
  • 1: Dept Morphol Brain Sci, Kyoto Univ, Kyoto, Japan  2: Dept App Analysis & Comp Dynamics, Kyoto Univ, Kyoto, Japan  3: JST CREST, Saitama, Japan

Neural circuits in mammalian brains interact with the environment, receiving sensory signals and sending motor outputs that drive behavior. In the field of reinforcement learning, it has long been asked how such an interacting neural system acquires reward from the environment. It is well recognized that, during reinforcement learning, a neural system must explore the environment in search of novel rewarding experiences as well as exploit the reward predicted from past experience. Although biologically plausible learning mechanisms for exploiting predicted reward have been studied intensively, there is little literature on biologically plausible learning mechanisms for exploration of the environment. In the present study, we derive a novel, biologically implementable learning rule for exploration that maximizes the mutual information between the neural network and its environment. We then show numerically that the network after learning displays persistent firing activity similar to that observed in real cortical circuits. We also discuss the similarity between the derived learning rule and the spike-timing-dependent plasticity of real cortical neurons.
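The abstract's central idea, combining a reward-driven (exploitation) update with an information-maximizing (exploration) update, can be sketched in code. The toy example below is not the authors' derived rule: it uses a policy-entropy bonus as a crude surrogate for the mutual-information objective, applied to a simple three-armed bandit rather than a spiking network; the setting, parameter values, and all names are assumptions made for illustration only.

# Illustrative sketch only: the abstract's derived information-maximization
# rule is not reproduced here. A policy-entropy bonus stands in for the
# information-maximizing exploration term, added to a REINFORCE-style
# exploitation gradient on a three-armed bandit. All parameter values are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # hidden Bernoulli reward probabilities
theta = np.zeros(3)                       # action preferences (policy parameters)
alpha, beta = 0.1, 0.05                   # learning rate, exploration weight

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for t in range(5000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)
    r = float(rng.random() < true_means[a])      # sample reward from chosen arm

    # Exploitation: REINFORCE gradient of expected reward w.r.t. theta
    grad_logpi = -pi.copy()
    grad_logpi[a] += 1.0
    grad_exploit = r * grad_logpi

    # Exploration: gradient of the policy entropy H(pi) w.r.t. theta,
    # dH/dtheta_j = -pi_j * (log pi_j + H); this keeps action sampling broad
    H = -np.sum(pi * np.log(pi))
    grad_explore = -pi * (np.log(pi) + H)

    theta += alpha * (grad_exploit + beta * grad_explore)

print("learned policy:", softmax(theta))

With the exploration weight beta set to zero the policy tends to commit early to whichever arm happens to pay off first; the entropy term delays that commitment, which is the qualitative role the abstract assigns to its information-maximization rule.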
