This is the third post in our weekly series of interviews with the Rotman Institute’s postdoctoral fellows. Last week’s interview with Tommaso Bruni can be found here, and the interview with Alida Liberman can be found here.

Catherine Stinson is a postdoctoral fellow at the Rotman Institute of Philosophy. Prior to joining the Rotman Institute, she completed her PhD at the University of Pittsburgh in 2013 and then went on to work at Ryerson University. During her PhD she held a predoctoral fellowship at the Max Planck Institute for the History of Science, and afterwards a postdoctoral fellowship at the Werner Reichardt Centre for Integrative Neuroscience at the University of Tübingen. She specializes in the philosophy of science, with a focus on psychology, neuroscience, computational sciences, and, more recently, psychiatry.

Amy Wuest: Many of your areas of expertise are familiar subjects to philosophers, but the philosophy of computation stands out as an emerging field of study. Can you introduce our readers to this subject and explain some of the questions that motivate this area of research?

Catherine Stinson: There is a fairly long history of philosophical work on computation in areas like logic and philosophy of mind, where the focus is on questions like, “What is computable?” and “Are minds (like) computers?” As an undergrad I was very interested in those questions.

In the last 15 years or so, there has been increasing interest in computational models and simulations as experimental methods or types of scientific models. This is what I’m more interested in these days. Some of the basic questions are: “How can we learn about the real world by studying an abstract, simplified computer model? Can simulations tell us anything new about the world the way experiments can? What role do models play in scientific discovery and theorizing?”

Some of the practical challenges are figuring out whether we can trust what climate simulations tell us about climate change, whether models of how disease epidemics spread are reliable, and whether models of how new drugs act on diseased tissues are correct. I think that in many cases we can trust these models just as much as we can trust experiments, but nobody seems to have a good answer for why computer models are just as good, or how exactly they manage to latch onto what’s true in these real-world systems.
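For a sense of what such a model looks like, here is a minimal sketch of an SIR (susceptible–infected–recovered) simulation, the textbook ancestor of the epidemic models mentioned above. All of its parameters (the transmission rate, the recovery rate, the time horizon) are illustrative assumptions rather than anything from the interview:

```python
# A minimal SIR ("susceptible-infected-recovered") epidemic model.
# All parameter values here are illustrative assumptions.

def simulate_sir(beta=0.3, gamma=0.1, i0=0.01, days=160):
    """Forward-Euler integration of the SIR equations with a 1-day step."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i   # contacts between S and I
        new_recoveries = gamma * i      # infected recover at rate gamma
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    trajectory = simulate_sir()
    peak_day = max(range(len(trajectory)), key=lambda d: trajectory[d][1])
    print(f"infections peak on day {peak_day} "
          f"at {trajectory[peak_day][1]:.1%} of the population")
```

The philosophical puzzle is exactly the one Stinson describes: the program manipulates a couple of rates and three numbers, and yet, when its assumptions hold well enough, its predicted peak can track the peak of a real outbreak.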

AW: How does this research connect to your interest in scientific explanation?

CS: Almost without exception, what scientists use to explain things are models of one kind or another. These can be physical models, like ball-and-stick models of chemical structure, or tubs of glycerin and water as models of fluid dynamics in galaxy formation; mathematical models, like the ideal gas law or ecological models of population growth; or computational models, which are used for just about everything these days. In one obvious sense, models explain by helping to communicate knowledge in a simpler and more visceral way than a few paragraphs of text can. What I want to know is how it is that they manage to capture what’s really going on. Sometimes they probably don’t capture what’s really going on, especially if they’re not intended for a scientific audience, but in the cases I’m interested in, models explain in the sense of demonstrating some of the forces or mechanisms or causes or properties that make the thing we’re explaining happen.

Some people think that models are purely metaphorical or fictional, and that we interact with them the same way we would when we’re drawing a conclusion about our own lives from the moral in a novel or a fable. I find this idea very unsatisfying. I think there’s something deeper going on than just literary interpretation (not that that can’t be deep too).

I think that when a model really explains something, it does so by instantiating some of the same forces or mechanisms or causes or properties that are operating in the system being explained. The ball-and-stick model of a chemical’s structure can partially explain how the chemical behaves in virtue of being made up of the same shapes in the same arrangement, with congruent angles. That might mean that they can be fit together into the same geometric structures, for example, despite the difference of scale.
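To make the scale-invariance point concrete, here’s a quick sketch (my own illustration, not an example from the interview) that computes the angle between two bonds of a tetrahedral, methane-like structure at two very different scales; the scale factors are arbitrary stand-ins for a desktop model and an actual molecule:

```python
import math

def angle_deg(u, v):
    """Angle between two 3-D vectors, in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

# Bond directions of a tetrahedral, methane-like structure, taken as the
# vertices of a regular tetrahedron centred on the middle atom.
bonds = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

# Arbitrary scale factors standing in for a plastic model and a molecule:
for scale in (5.0, 1.09e-8):
    u = tuple(scale * c for c in bonds[0])
    v = tuple(scale * c for c in bonds[1])
    print(f"scale {scale:g}: bond angle = {angle_deg(u, v):.2f} degrees")
```

The angle comes out at roughly 109.47 degrees either way, which is the sense in which the model and the molecule share congruent angles despite the difference of scale.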

AW: How does this play out in neuroscience?

CS: One thing that’s really challenging about neuroscience is that the brain is this extremely complex system with many parts connected in complicated ways, that all fit together at multiple scales, from molecules, cells, circuits, up to systems, and then social and cultural influences on top of all that. And it’s also really hard to study for practical reasons, because it’s this very delicate structure that it’s really hard to experiment on without killing or seriously harming the animal or person who is using that brain.

Over the last couple of decades we’ve been discovering that complex biological systems like the brain are a lot more difficult to understand than non-biological systems, and might require different sorts of explanations. All of the working parts in a brain, or any other biological organ, are dependent on and interconnected with a bunch of other parts, so it’s not enough to just figure out what one of the parts, say a single neuron, does in isolation. When that neuron is connected to other neurons it does totally different things, and sometimes when it’s stimulated by one of its neighbours, it acts very differently than when it’s stimulated by another. So explanations can’t be built up by first understanding the smallest parts in isolation and then simply adding together their effects. It’s not that there aren’t any regularities to be found; it’s that the regularities (and exceptions) at multiple scales need to be combined. The big problem, I think, is to figure out how to combine many partial models that explain different bits of the problem at different scales. This is one place where computational models can probably help a lot.
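Here is a toy illustration of that context dependence, a bare-bones leaky integrate-and-fire neuron with made-up parameters (not a model from Stinson’s work): the very same cell fires steadily when its neighbour’s synapse is excitatory and stays silent when the synapse is inhibitory, so no description of the cell in isolation settles what it does.

```python
def spikes_from_neighbour(weight, steps=1000, tau=10.0, threshold=1.0):
    """Count spikes of a leaky integrate-and-fire unit driven by one neighbour.

    The neighbour fires every 20 time steps; `weight` is the sign and
    strength of its synapse onto our unit. All numbers are illustrative.
    """
    v, spikes = 0.0, 0
    for t in range(steps):
        drive = weight if t % 20 == 0 else 0.0  # neighbour's spike arrives
        v += -v / tau + drive                   # leak plus synaptic input
        if v >= threshold:                      # membrane crosses threshold
            spikes += 1
            v = 0.0                             # reset after a spike
    return spikes

print("excitatory neighbour:", spikes_from_neighbour(+1.5), "spikes")
print("inhibitory neighbour:", spikes_from_neighbour(-1.5), "spikes")
```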

AW: Much of your research addresses the role of mechanisms in neuroscience and psychiatry. What are mechanisms, in the context of these fields of study, and how do they relate to computational models?

CS: Mechanisms are a newly revived way of understanding scientific explanation, particularly in biology and neuroscience. The idea is that biologists explain things by describing the working parts involved, and how the activities of those parts all fit together into an organized unit called a mechanism. Mechanisms are typically made up of smaller parts that are themselves mechanisms, and work together to form larger mechanisms. This is seen as a better way of capturing the complex interactions between parts in biological phenomena, as well as the fact that the components of these systems don’t usually act in entirely predictable, regular ways. Masses fall according to strict laws, but muscles contract and organisms reproduce in more higgledy-piggledy ways that are somewhat regular, but admit of many exceptions.

I think this is a pretty good way of thinking about explanation in neuroscience, although there are still lots of disagreements to be ironed out among the people supporting this sort of view. I’m really not sure yet whether this will turn out to be a useful way of thinking about psychiatry, but it’s an option that is starting to be considered.

So far I’m one of only 2 or 3 people that I know of who are working on how mechanistic explanation relates to computational models. What I’ve argued so far is that what was going on in a particular approach to artificial intelligence called connectionism starting around the 1980s was an attempt to use computational models to discover the mechanisms of the mind and brain. This work predates the revival of mechanistic explanation, and the people using this approach had a lot of trouble describing exactly what it was that they were up to, and why anyone should believe that it might work. So I’m retrospectively offering a description of how that approach to artificial intelligence is supposed to work in terms of the discovery of mechanisms.

But the details of that case also tell us something important about mechanistic explanation. The prototypical examples of mechanisms in biology are highly detailed; they might explain how synaptic transmission occurs as the result of a depolarizing pulse causing calcium ions to flow into the presynaptic terminal, triggering the release of neurotransmitters into the space between cells, which then bind to receptors on the postsynaptic terminal, for example. Connectionist models, on the other hand, are often highly idealized, abstract models; for example, they might have a single unit stand in for a population of hundreds of neurons. Earlier accounts of mechanistic explanation have trouble dealing with abstract, idealized models like that, but there is increasing awareness that not all mechanistic explanations can be highly detailed ones.
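To see how a single unit can stand in for a population, consider the following sketch (my own illustration, with assumed numbers): the average response of 200 noisy threshold neurons sharing an input is closely tracked by one sigmoid unit, the basic building block of connectionist models.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def population_rate(drive, n_neurons=200):
    """Fraction of a noisy population firing, given a shared input drive."""
    fired = sum(1 for _ in range(n_neurons)
                if random.gauss(drive, 1.0) > 0.0)  # noisy threshold neurons
    return fired / n_neurons

random.seed(0)
print(" drive | population | single sigmoid unit")
for drive in (-2, -1, 0, 1, 2):
    # sigmoid(1.702 * x) is a standard approximation to the normal CDF,
    # i.e. to the expected fraction of the population above threshold
    print(f"{drive:6} | {population_rate(drive):10.2f} |"
          f" {sigmoid(1.702 * drive):7.2f}")
```

The two columns agree closely, which is the sense in which the one abstract unit earns its keep as a stand-in for hundreds of neurons.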

AW: In a recent paper—”Mechanisms in psychology: ripping nature at its seams”—you discuss mental mechanisms and argue against an integrative account of explanation in psychology and neuroscience. Can you describe that work for our readers?

CS: The account I argue against is one that suggests that we can build up from neural mechanisms into higher and higher levels, and simultaneously decompose mental mechanisms into tasks and subtasks, such that eventually the two will meet in the middle, forming one unified hierarchy of mechanisms running from molecules up to mind. I agree that that’s something like what cognitive neuroscience is aiming at, and I’m on board with there not being a strict metaphysical barrier between brain and mind, but I’m skeptical that it can work as described. For one thing, I’m doubtful that there is just one unified hierarchy of mechanisms to be discovered. It’s true that some mechanisms have parts that are themselves mechanisms, but in other cases the parts of one mechanism won’t map neatly onto the parts of another. Relatedly, there is no reason to believe that all of the partial explanations that models provide will fall into line without cross-cutting one another.

The main issue is that entities at multiple scales all work together in complex organizations. Which entities are involved can change over time, and many (if not all) of those entities will also play roles in totally different mechanisms. When things are complexly interconnected in this way, there isn’t going to be one unified story about how it all works. There will be many overlapping models that can be partially connected together, but not completely. For instance, under certain restricted conditions, some parts of the complex mess might act as though there are neat part-whole relationships between mechanisms and sub-mechanisms, but those relationships are always going to get messy near the edges.

Some of the cases I’m most interested in are where there are mechanisms that seem to work in more or less the same way in several very different contexts. I’m tempted to say that what is making a difference is not just the particular details in those cases, but also something like an abstract mechanism.

AW: Can you illustrate that with an example?

CS: Sure. One simple example is that there are geometric facts about how spheres and cylinders can be most closely packed. The way armies pile up cannonballs is based on those facts. The way fibre-optic cables are bundled is based on those facts. In the brain there are also things a lot like cylindrical wires in bundles, such as tracts of axon collaterals in white matter. We can model those bundles as perfectly cylindrical shapes and approximate the number of nearest neighbours and the distances between fibres; from that we can get estimates of things like electric charge, the chemical composition of extracellular fluid, and so on. Although the axon collaterals aren’t perfect cylinders, and there are many other specific details about them that may also be relevant to how they behave, part of the story, I think, comes from the mathematical facts about the close packing of cylinders. One thing I’m working on is how to combine abstract mathematical models like that with the specific details of the situation in a formal account of scientific explanation. Existing accounts of explanation tend to favour just one or the other type.
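The geometric facts in question are easy to state: in the densest (hexagonal) packing, each cylinder touches six neighbours, and the bundle fills π/(2√3), about 90.7 percent, of the cross-sectional area. The sketch below applies them to a white-matter tract modelled as perfect cylinders; the axon radius and tract size are assumptions chosen purely for illustration:

```python
import math

# Hexagonal close packing of identical cylinders (circles in cross-section):
packing_fraction = math.pi / (2 * math.sqrt(3))  # ~0.9069, the densest possible
nearest_neighbours = 6                           # each fibre touches six others

# Upper bound on fibre count in a tract cross-section modelled as perfect
# cylinders. The axon radius and tract size are illustrative assumptions.
axon_radius_um = 0.5
tract_area_um2 = 100.0 * 100.0
max_fibres = packing_fraction * tract_area_um2 / (math.pi * axon_radius_um ** 2)

print(f"packing fraction: {packing_fraction:.4f}")
print(f"nearest neighbours per fibre: {nearest_neighbours}")
print(f"max fibres in a 100 x 100 micron cross-section: {max_fibres:.0f}")
```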

AW: At the Rotman Institute, you’ve collaborated with both Professor Jackie Sullivan and Professor Tim Bayne. How do these partnerships enhance your research?

CS: Jackie and I just finished writing a book chapter together, about mechanisms in neuroscience. I wouldn’t have had that opportunity without her, and we ended up bringing together topics that we hadn’t thought of in that light before. Each of us was familiar with a different set of historical cases, and when we brought them all together in order, we ended up with a surprisingly coherent story. That story ended up illustrating some points that are not well understood in the literature about mechanisms, and also clarifying some historical points. It was one of those ideal collaborations where, when one of us got stuck, the other took over and found a way out. We work on closely related topics, but have pretty different perspectives on them. It’s really helpful to hear what someone else thinks sometimes, instead of being stuck in my own head. It’s impossible to know which parts of your argument are convincing and which aren’t without hearing another perspective.

Tim and I haven’t written any papers together, but he helped me put together the syllabus for a course I just finished teaching at U of T about delusions. His papers are some of the best things on the topic for undergrads to read, because he’s a very skilled, clear writer. We have also worked together organizing events with visiting speakers, and he has been one of the most regular members of a reading group I’m running, so we end up in heated (but friendly) discussions fairly regularly. He’s definitely more of a philosopher of mind than I am, but a rare one who is genuinely concerned with how mind relates to brain. Talking to Tim has made me realize more about what philosophy of mind has to offer to philosophy of neuroscience.

Both Jackie and Tim have also been invaluable in offering career advice, and have very generously promoted my interests.