“…We have nothing like brains found in the wild yet, but we have truly complex neural systems solving complex behaviors.”


An interview with Timothy Lillicrap

Adjunct Professor, CoMPLEX, University College London

Staff Research Scientist, Google DeepMind

By: Michelle Johnson (11/8/19)

How did you come to this particular field of research?

Originally, I had a fascination with the natural world and being surprised by nature. Then I slowly became more and more interested in brains, which are maybe one of the most interesting parts of the natural world. Even though I’m looking mostly at artificial brains now, I’m still interested in brains in general and the question of how the tools that you use for understanding artificial systems can be used to help us understand real brains.

What are some of the questions you’re thinking about right now?

At a high level, over the last couple of years we've started to build complex neural systems that can perform complex behaviors. I would say we have nothing like brains found in the wild yet, but we have truly complex neural systems solving complex behaviors. I think part of what's exciting is that, for those systems, we're starting to see more concretely what it means to understand them and what it means to build them. For me, that says something about what our explanations ought to look like in neuroscience in the long run.

When we describe how we build an agent that's trained with machine learning, in some sense we have a global, synoptic view of how the whole thing works. I think looking at that and thinking about it carefully might help us clarify what kinds of explanations we should be seeking for animal and human brains. The kinds of explanations that pop up over and over when we try to explain the agents we build in machine learning are often developed in the context of a particular behavior. But when you pan out, most of the interesting descriptions of these agents are somewhat behavior-agnostic.

Was there a specific point when you decided to enter the industry sector of research?

Definitely. At the time I switched, it was relatively early on in DeepMind, and I was certainly thinking about going into academia. That was my default; I was finishing a postdoc and starting to look around at academic positions. [The switch] did feel pretty weird and driven by chance. First of all, I thought, "I am more interested in machine learning, but there are still plenty of academic labs where I could do that." I think what convinced me was that there was a small but growing set of people in industry who believed more strongly in what we could accomplish in the near future using machine learning, and who were serious about pursuing it.

What advice would you give to young researchers?

At a low level, I would encourage you to do more math. If anything, I wish I had done more. Having mathematical competency does two things: it enables you to be good at certain kinds of analysis in science, and it frees you from being intimidated or confused by parts of the field that are important and involve a lot of mathematics. One of the good things about having a decent level of mathematical proficiency is that you can often say, "This thing, which is confusing because it's quite mathematical, actually likely isn't all that important." I think that's hard to determine if you're not fluent.

