Harvard University - Department of Molecular & Cellular Biology

DAVID COX SEEKS ORDER IN THE CHAOS OF FLEETING IMAGES

by Cathryn Delude

November 18th, 2012

David Cox

If you wanted to learn how a car works, you could look at that marvel of hybrid engineering, the Toyota Prius. But you might learn more about the fundamentals of automobiles from a carburetor-and-spark-plugs Ford, says David Cox, a new assistant professor in Harvard’s Department of Molecular and Cellular Biology (MCB). Cox studies how animals develop the ability to perceive objects as stable entities in a world of fleeting and ever-varying images. Object recognition is a form of cognition, since the brain must learn general identities and categories from individual examples. To Cox, primates are Priuses and rodents are old Fords. For most visual neuroscientists, rodents are blind creatures navigating their world by sniffs and whiskers alone. But Cox believes rodents have visual abilities that can help unlock a key question of visual perception.

The Invariance Problem

Cox began studying high-level vision in monkeys and humans as the first graduate student of James DiCarlo, who in 2002 was a new member of MIT’s newly founded McGovern Institute for Brain Research and today heads the MIT Department of Brain and Cognitive Sciences. Together, they made important discoveries about the brain’s solution to the problem of “invariance,” which still plagues scientists trying to design artificial vision systems.

Invariance refers to the fact that we can identify an object, even when it appears in a view we have never seen. For example, a coffee cup casts infinitely many different images on our retina, depending on its distance, angle, orientation, illumination, and relationship to other objects. The problem is compounded for articulated or flexible (jointed or spongy) objects. Yet we effortlessly recognize these views as the same object – as do many animals down the evolutionary family tree. Early artificial intelligence researchers did not appreciate the enormity of this task until they tried to mimic it.

Cox’s approach is an iterative process of deciphering how biological systems achieve invariance and then reverse-engineering that system for computers. “The ultimate test for demonstrating that we understand how the visual system works is to reproduce its capabilities – including invariance – artificially,” he explains. He has trained monkeys on tasks designed to investigate visual processing, and recorded from neurons in the inferotemporal cortex (IT, where visual object recognition occurs) as they performed these visual tasks. In complementary human studies and extensive computer simulations, he tests hypotheses about how the visual system might be working.

How and When Do We Learn Invariance?

“We wanted to know whether invariance is innate, or if it is learned through experience,” says Cox. He hypothesized that the brain might need particular kinds of experience to build a visual system that exhibits invariance. To test this hypothesis, he exploited the fact that we continually move our eyes in rapid shifts, called saccades, hundreds of times per minute (though we are rarely aware of it). In the middle of one of these frequent saccades, we are effectively blind to changes in the visual scene. So Cox devised a method for “tricking” the brain by carefully tracking eye movements in real time and making objects disappear, or surreptitiously swapping them, in mid-saccade. In monkeys, he found that IT neurons did not recognize an object if its image fell on a part of the retina that had not seen the image before. In humans, when he swapped objects in mid-saccade, the volunteers confused the two different objects as the same, an effect later demonstrated in monkeys as well.

Those experiments indicate that invariance is learned – but at what stage of life? Understanding this requires developmental studies. But baby monkeys are too dependent on maternal care for researchers to control their visual environment, and they are expensive and hard to train. Cox wanted an animal that was easier to control, bred and developed faster, and was amenable to the ever-growing armamentarium of molecular and genetic tools. On all scores, rodents fit the bill.

Rebooting Assumptions

After Cox received his PhD from MIT in 2007, he accepted a 5-year appointment as a Principal Investigator at the Rowland Institute at Harvard, near Kendall Square. He set up his lab, attracted graduate students and postdoctoral fellows, and bought some rats.

At that time, most neuroscientists assumed that rats are primarily good for studying other senses – like how tactile stimulation from whiskers creates a neural map of the world. Although many lab strains of rats are heavily inbred, with pink eyes and poor vision, other strains have normal, pigmented eyes. However, few researchers had investigated the rodent visual cortex and how it parses the high-level structure of the visual world. Cox did just that, and in a 2009 paper in PNAS he demonstrated that rats possess surprisingly sophisticated visual systems. He proposed rodent vision as a fresh and powerful model with tremendous experimental potential.

“The rodent gives us a chance to reboot,” he says. “It forces us to reassess our prior assumptions and relicenses us to ask questions we should have asked before.” Cox developed libraries of images to serve as visual stimuli and devised setups for tracking saccades and recording from neurons in rats. He began scaling up the labor-intensive approach of having graduate students listen for hours to the static of single neurons, replacing it with a high-throughput robotic setup built around a 32-channel silicon electrode array that can autonomously run multiple experiments in parallel.

Thrilled to Be at MCB

Cox brought this setup to MCB, joined by five of his lab members (several previous members now have their own labs). His lab is continuing to build computational models inspired by visual biology to test their understanding of the biology, leveraging advances in parallel computing to begin to approach the scale of biological systems. He hopes that combining reverse- and forward-engineering approaches will accelerate both domains.

In addition, Cox is developing advanced tools for the precise placement of multiple electrodes, coupled with an intra-operative structured-illumination 3D-imaging technique. This system can generate detailed online maps of the rodent’s skull surface, so that he can place electrodes precisely at chosen locations during experimental sessions days and even weeks later. He will also be using a fully automated infrared video eye-tracking system that he developed to quickly and accurately calibrate eye movements and spontaneous saccades even in an uncooperative rodent. This tool uses the optical geometry of the cornea and computer-controlled motorized stages to rapidly estimate the geometry of the eye relative to the camera, which will be invaluable in rodent vision research.

Cox ultimately wants to unlock the computational and developmental underpinnings of rodent object recognition to shed a brighter light on how we humans recognize visual objects with such unconscious ease, despite the impressive computational feat this requires of our brain.

But his first duty at MCB was to play jazz piano at Markus Meister’s going-away party in July. Meister, now at the California Institute of Technology, also studied vision, but with a different focus.

Cox loves being part of MCB. He is excited to be surrounded by people doing a wide range of research different from his own. “My new colleagues each see problems differently and have different technical solutions, and that widens my perspective. It’s a huge opportunity for me to leverage a wealth of molecular techniques enabled by a rodent model and to collaborate with world-class colleagues.”

It is also a homecoming for Cox. As an undergraduate concentrating in Biology and Psychology, he took many classes with professors who are now his senior colleagues. He also married his lab partner from his undergraduate introductory biology class. They now have two children, aged 3 years and 5 months.

His older child uses an iPad, and the slick interfaces and seamless interaction of that device are a world apart from the computers Cox played with as a child. His father brought home a computer from work with floppy disks the size of dinner plates and a green phosphor screen with a blinking command prompt. “I learned to program and read at the same time because of that computer,” says Cox. As we learn more about the brain and how to reproduce its remarkable abilities, Cox foresees a world where computers become even more intuitive and “aware” of their surroundings. “It’s hard to even imagine the computers that my grandchildren will grow up using.”
 
While sorting through boxes from Cox’s own childhood, his mother recently dug up a picture he drew in second grade. The caption ran, “When I grow up I want to be a scientist because then I will know everything and will get all the computers I want.” His lab now has a lot of computers, but he does not yet know everything. For instance, he would love to know what his new baby’s visual cortex is making of the world and how it is developing over time. But he’ll just have to make do with baby rats.

Read more in SEAS press release