
Presentation Details

Poster

Reward and Decision Making

Date: 2014/9/12
Time: 14:00 - 15:00
Venue: Poster / Exhibition (Event Hall B)

Investigation of action-dependent state prediction in the mouse parietal cortex with two-photon microscopy

  • P2-226
  • Akihiro Funamizu:1, Bernd Kuhn:1, Kenji Doya:1
  • 1: Okinawa Inst Sci Tech, Okinawa, Japan

Model-based decision making requires a representation of predicted states that is updated by an action-dependent state transition model. To investigate its neural implementation, we conducted a virtual sound navigation task with mice and recorded the neuronal activity of the posterior parietal cortex (PPC) with two-photon microscopy.
A mouse was head-restrained for two-photon imaging and maneuvered a spherical treadmill. Twelve speakers around the treadmill provided a virtual sound environment: the direction and amplitude of sound pulses emulated the location of a sound source, which moved according to the mouse's locomotion on the treadmill. When the mouse reached the sound source and licked a spout, it received a water reward. The task consisted of two conditions: a continuous condition, in which the guiding sound was presented continuously, and an intermittent condition, in which the sound was presented only intermittently.
In both conditions, mice increased licking as they approached the sound source, indicating that (i) they recognized the sound-source position in the virtual environment and (ii) they predicted the reward given at the sound source. In the intermittent condition, anticipatory licking increased even when the sound was omitted, suggesting that mice updated the predicted sound-source position from their own actions, without auditory feedback.
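The inferred computation can be sketched as a simple dead-reckoning update, in which the predicted distance to the sound source shrinks with each locomotion step even when no sound is heard. This is a minimal illustrative sketch, not the authors' model: the function name, units, and noise term are all hypothetical.

```python
import numpy as np

def predict_state(distance, step, noise_std=0.0, rng=None):
    """Action-dependent state transition (hypothetical simplification):
    the predicted distance to the sound source shrinks by the distance
    the mouse advances, even without auditory feedback."""
    rng = rng or np.random.default_rng(0)
    return max(distance - step + rng.normal(0.0, noise_std), 0.0)

# Simulate an intermittent-condition trial: during silent periods the
# estimate is updated purely from the animal's own locomotion.
distance = 30.0           # cm to the virtual sound source (assumed units)
for _ in range(5):        # five locomotion steps with the sound omitted
    distance = predict_state(distance, step=5.0)
print(distance)           # → 5.0
```

With a nonzero `noise_std`, the prediction would drift during silent periods, which is one reason intermittent feedback remains useful for correcting the estimate.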
We optically recorded calcium transients of layer 2/3 neurons in the PPC with the genetically encoded indicator GCaMP6f, delivered by adeno-associated virus (AAV). From the population activity, we decoded the distance to the sound source with the least absolute shrinkage and selection operator (LASSO). A decoder trained on data from the continuous condition successfully decoded the sound-source distance during no-sound periods of the intermittent condition. We also trained a decoder of the time remaining to reach the sound source and found that PPC neurons represented distance better than timing. LASSO extracted the neurons relevant for distance coding; these neurons were distributed homogeneously across the PPC. These results suggest that PPC neurons represent the distance to the sound source not only from current auditory inputs but also by dynamically updating the estimate with an action-dependent state transition model.
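As an illustration of the decoding approach, the sketch below fits a LASSO regression by cyclic coordinate descent to synthetic "population activity" and recovers a sparse set of distance-coding neurons. The data dimensions, regularization strength, and ground-truth weights are invented for the example; the authors' actual preprocessing and fitting procedure may differ.

```python
import numpy as np

def lasso_cd(X, y, alpha=0.05, n_iter=200):
    """Minimise (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate
    descent with soft-thresholding; the sparse w marks the neurons
    LASSO deems relevant for distance coding."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]       # residual excluding neuron j
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w

# Synthetic calcium data: only the first 3 of 20 neurons encode distance.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))                # frames x neurons (dF/F)
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.05 * rng.standard_normal(300)  # distance to source

w = lasso_cd(X, y)
print(np.flatnonzero(np.abs(w) > 0.1))  # the first three neurons are selected
```

The L1 penalty drives the coefficients of uninformative neurons to exactly zero, which is what lets LASSO double as a feature selector for identifying distance-coding cells in the imaged population.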

Copyright © Neuroscience2014. All Rights Reserved.