Research Spotlight: Victor Barranca and Applied Mathematics (Part I)

This is the ninth interview in the series Research Spotlight, in which I share conversations that I have with faculty regarding their research, their journeys within their fields, and their fields in a broader context.

Victor Barranca is an assistant professor in the Department of Mathematics and Statistics.

Aidan Reddy: Generally, how would you describe your research?

Victor Barranca: My field is applied mathematics. In particular, I mostly study mathematical neuroscience, a subset of applied math.

Applied math research generally involves three main components: a mathematical modeling phase, some mathematical analysis, and then often computer simulations. Since the research is “applied,” it often requires taking some problem arising in the sciences and phrasing it in a mathematical language. Once you have a mathematical phrasing of the problem, you can use all sorts of mathematical tools to try to come up with an interesting solution.

Applied math research is interdisciplinary in the sense that it allows you to look at interesting problems that lie outside the realm of mathematics, and then use a fairly diverse mathematical toolbox to arrive at and ultimately interpret a quantitative solution. In mathematical neuroscience in particular, the area of application is, of course, neuroscience, and generally that involves using math to understand the nature of cognition.

In some sense, math neuroscience is still a frontier, much like classical mechanics was in the seventeenth century. The laws of physics can describe the visible world around us extraordinarily well. We can look at a spring-mass system and write down a differential equation pretty concisely to describe how that mass is moving around given a certain forcing. We can then generalize these same ideas in describing the dynamics of a more complicated system, such as a roller coaster. The analogous question in math neuroscience is, “Can we use physical laws to describe the way our brain works?” On the one hand, we can certainly still use differential equations, just like in classical mechanics, to describe, for example, the voltage dynamics of a neuron (brain cell). However, the mathematical description becomes much hairier when we look at a large network of neurons that are all communicating with one another. How can we develop a mathematical framework to decipher such a complex system? How can we translate physical observations into concise and useful mathematical expressions?

One mathematical modeling tool is differential equations, but there are all sorts of other frameworks we can use too. We can view the brain as a network of interacting neurons, akin to friends communicating in a social network, and then we can use graph theory to examine, for example, how neuronal clusters in the brain might have specific functional roles. Or, instead, we might use probability to describe the uncertain nature of perception. Optical illusions suggest that our senses aren’t completely accurate. So, it is natural to also ask whether there is a probabilistic component to information processing in the brain and how to concisely quantify it. These are but a few of the different mathematical perspectives we might provide in the realm of neuroscience.
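To make the spring-mass example above concrete, here is a minimal numerical sketch of the differential equation it describes: a mass m on a spring with stiffness k and damping c obeys Newton's second law, m·x″ = −k·x − c·x′, which we can integrate step by step. All parameter values are illustrative, not from the interview.

```python
def simulate_spring_mass(m=1.0, k=4.0, c=0.5, x0=1.0, v0=0.0, dt=1e-3, steps=20000):
    """Integrate a damped spring-mass system with the explicit Euler method.

    Returns the list of positions x(t) at each time step.
    """
    x, v = x0, v0
    positions = [x]
    for _ in range(steps):
        a = (-k * x - c * v) / m  # Newton's second law: force / mass
        x += dt * v               # update position using current velocity
        v += dt * a               # update velocity using current acceleration
        positions.append(x)
    return positions

traj = simulate_spring_mass()
# Damping steadily removes energy, so the oscillation amplitude shrinks:
# the final position is far closer to equilibrium than the initial stretch.
```

The same pattern — write down a law of motion, then step it forward in time — is what carries over, in far hairier form, to the voltage dynamics of networks of neurons.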

AR: You’ve mentioned graph theory a few times. What is graph theory?

VB: Graph theory, in broad brush strokes, treats objects as dots. Let’s assume I don’t necessarily care about the detailed properties of these objects. So, when I look at a neuron, if I’m going to take a graph theory approach, I’m going to abstract away its spatial dimensions. Neurons are small, so I can approximate them as points in space. I can draw lines between these dots if the neurons interact. Representing all such objects and all such interactions gives us a graph. These graphs have really elegant mathematical characterizations, and, in turn, we can learn interesting properties about physical systems based on these graph theoretical measures. Now, if we were to make the problem more complicated, I could put dynamics on this graph, which is what a lot of my research does. Neurons are objects in an interconnected network, but these objects do stuff that changes with time, so we can describe their actions with nonlinear systems of differential equations. Graph theory, in essence, allows us to boil these kinds of problems down to just the key components of who’s interacting with whom.
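The "dots and lines" abstraction can be sketched in a few lines of code: neurons become numbered nodes, and an edge (i, j) records only that neuron i interacts with neuron j, with all biophysical detail abstracted away. The wiring below is a made-up toy network, purely for illustration.

```python
# Toy neuronal network: 5 neurons (nodes), edges mean "i synapses onto j".
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 2)]
n = 5

# Adjacency list: for each neuron, the neurons it talks to.
adjacency = {i: [] for i in range(n)}
for src, dst in edges:
    adjacency[src].append(dst)

# Degree counts summarize "who interacts with whom" without any dynamics:
# out-degree = connections sent, in-degree = connections received.
out_degree = {i: len(adjacency[i]) for i in range(n)}
in_degree = {i: 0 for i in range(n)}
for _, dst in edges:
    in_degree[dst] += 1
# Here neuron 2 receives the most connections, making it a candidate "hub".
```

Measures like these degrees are the simplest of the graph-theoretical characterizations mentioned above; putting dynamics on the graph means additionally attaching a differential equation to each node.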

AR: So, would you say it’s mostly about the relationships between things, as opposed to the things themselves?

VB: Exactly! The theory itself is often about the relationships between the things. We can also put labels on these objects or assign functional roles, but then we have new questions and alternative mathematical tools we can apply. Abstractly, with graph theory, we can answer network science questions like, “Can I identify communities of highly interconnected objects? Similarly, can I split these objects into functional groups?”

AR: What research projects are you working on now, specifically?

VB: Broadly speaking, my research largely seeks to connect the structure of a network to its function. It doesn’t have to be a brain network necessarily; the same tools can be applied to similar scientific problems. Let me give you a specific example. It is currently very difficult to measure the connectivity of a neuronal network — that is, who’s talking to whom. Knowing that information can be helpful in diagnosing and treating brain disorders, for example. What turns out to be a lot easier is measuring the dynamics of these neurons — for example, measuring their voltages. So, one facet of my research asks the question, “Can I use measurements of neuronal dynamics (say voltage) to reconstruct the connectivity of a neuronal network?” Taking advantage of sparse structure in the network connectivity and mathematically tangible approximations of neuronal dynamics, my research suggests the answer is “yes.” I can also switch my perspective a bit and answer related questions like, “Can I take measurements of the dynamics of these neurons and figure out thought, for example, or reconstruct visual stimuli?” Using a mathematical framework, I seek to describe a sort of input-output mapping in the brain — such that if I undergo a specific sensation, I can then infer the evoked voltage dynamics, or vice versa. If I have an accurate mapping, then I can answer all sorts of questions like, “Can I predict, based on neuronal dynamics, a certain brain disorder? Can I develop a prosthetic device that can input an image and evoke the appropriate voltage dynamics to allow us to be able to ‘see’ that image?”
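The role of sparsity in making reconstruction tractable can be illustrated with a toy sketch. This is the general compressed-sensing idea — recovering a sparse unknown from fewer measurements than unknowns via a greedy matching-pursuit loop — not the actual method, model, or data from Prof. Barranca’s research; the matrix and vectors below are invented for illustration.

```python
def matching_pursuit(A, y, n_nonzero=2):
    """Greedily recover a sparse vector x from linear measurements y = A x.

    A is a list of rows (m measurements, n unknowns); y has length m.
    Each pass picks the column that best explains the remaining residual.
    """
    m, n = len(A), len(A[0])
    residual = [float(v) for v in y]
    x_hat = [0.0] * n
    for _ in range(n_nonzero):
        best_j, best_coef, best_gain = 0, 0.0, -1.0
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            dot = sum(c * r for c, r in zip(col, residual))
            nrm2 = sum(c * c for c in col)
            gain = dot * dot / nrm2  # drop in squared residual if column j is chosen
            if gain > best_gain:
                best_j, best_coef, best_gain = j, dot / nrm2, gain
        col = [A[i][best_j] for i in range(m)]
        x_hat[best_j] += best_coef
        residual = [r - best_coef * c for r, c in zip(residual, col)]
    return x_hat

# Toy problem: 4 possible connection strengths, but only 3 measurements.
# The true connectivity [2, 0, -3, 0] is sparse, so recovery still succeeds.
A = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
y = [2, 0, -3]
x_hat = matching_pursuit(A, y, n_nonzero=2)
```

The underdetermined system here has infinitely many solutions; it is the sparsity assumption that singles out the right one — the same principle that lets measured voltage dynamics pin down network connectivity.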

AR: Besides vision and image processing, what are other big areas in mathematical neuroscience that people are working on?

VB: As in applied math in general, the areas people work in are extremely diverse. Now, while I may study vision, you could use these same tools — these same dynamical systems, nonlinear dynamics, and network science ideas — to look at almost any sensory system. So, I might ask the question, “How can I explain a particular optical illusion?” Well, you can also ask a similar question: “How might I explain, for example, the way humans localize a particular sound based on the dynamics of certain neurons?” Using an alternative mathematical model and an adapted mathematical toolbox can provide a very elegant and intuitive answer.

Completely distinct mathematical tools provide other types of insights. One, I believe, was even discussed in a geometry class here at Swarthmore. Computational geometry, or topological data analysis, can provide interesting characterizations of both the unreasonable efficiency and the complexity of the brain.

Such tools allow us to look at the structure of a particular network of neurons, and be able to answer questions like, “Are there hubs of neurons in the brain that have certain biological functions? Can I provide informative classifications of brain architecture?”

On yet another end of the mathematical neuroscience spectrum is the theory of learning. Here we ask questions such as, “How does the connectivity of our brain change over time as we process new information, and can we provide mathematical rules to describe these changes?” The implications here are limitless, ranging from the treatment of brain damage to the development of effective artificial intelligence.

Stay tuned for the second half of this interview, to be published tomorrow.
