Vikram Gadagkar is a postdoctoral fellow in Jesse Goldberg's lab at the Department of Neurobiology and Behavior, Cornell University. He holds a B.S. in physics, chemistry, and mathematics from Bangalore University, an M.S. in physics from the Indian Institute of Science, and a Ph.D. in physics from Cornell University.

For his graduate research, he worked with Séamus Davis to investigate the existence of a proposed new state of matter: the putative supersolid. Solid helium cooled to near absolute zero was predicted to be a supersolid, simultaneously a solid and a superfluid. Using a Superconducting Quantum Interference Device (SQUID)-based torsional oscillator and ultra-low-vibration techniques, Gadagkar and his colleagues demonstrated that solid helium, while exhibiting many interesting properties, is not a supersolid. He then developed a model based on lattice defects (dislocations) to explain the experimental observations.

During his graduate years, he became interested in how networks of neurons in the brain produce behavior, much as networks of dislocations produce emergent physical phenomena. He found a kindred spirit in Jesse Goldberg, who also saw computation as the key link between neural circuits and behavior. After graduating, he teamed up with Jesse to help set up a brand-new systems neuroscience lab at Cornell.

Using a combination of awake-behaving electrophysiology, advanced cellular-resolution imaging, and network models, Gadagkar aims to identify computational principles underlying trial-and-error learning. Dopamine neurons are known to mediate trial-and-error learning in reward-seeking animals by encoding reward prediction error: they are activated by better-than-predicted reward outcomes and suppressed by worse-than-predicted ones. But do dopamine neurons also play a role in learning to speak or to play an instrument, skills that are not acquired for external rewards? Gadagkar is investigating the generality of neural reinforcement mechanisms by recording from dopamine neurons in singing birds, testing whether performance prediction error is encoded in the same way as reward prediction error.