Thursday, November 17, 2005

NSF Grant: Background

In this series of blog postings, I'm going to summarize my process of going from early draft to final product in preparing a grant proposal for the National Science Foundation's Mathematical Biology Program. We'll see if there is any value in doing this after the fact. In this post, I'll go over the background behind the proposal, including both a vague outline of the science (non-technical, so I'll dispense with references) and the basics of my proposal preparation process (such as it is) to date.

It's often said that preparing a grant proposal can take six months to a year of work. I'm sure that some people actually spend that much time focused purely on developing a proposal, but that doesn't seem like a very attractive way to spend my time. Instead, I've spent my time (well, yes, more than 6-12 months) actually doing research. At this point, I've got a firm understanding of the biological, dynamical, and computing/theoretical basis for my work, I have preliminary results that demonstrate that I can do the work and that this is at least a feasible line of inquiry, and I enjoyed the process. One of the reasons you go into academia is to do research; why spend maybe 1% of your life merely working on proposing to do research? Of course, if you need a billion-dollar spacecraft to do your research, then things (such as your planning time horizon) are a bit different. Computer scientists are, by comparison, cheap dates.

So, what's my research on? I want to understand brains as computing devices. Now, this is a tall order, so I've personally "settled" for trying to understand how very small networks of nerve cells (neurons) do their thing. You may have heard of all sorts of exciting results in the neurosciences, and how researchers are closing in on the secrets of the brain. Couple this with the first real non-industrial commercial (and near-commercial) robots, and it really looks like we're close. I'm not so confident of that.

First of all, progress in robotics is misleading, because most of it is in sensors, actuators, and processing power. Little of it derives from any deep understanding of biological information processing. On the other hand, it is certainly true that we have entered a golden age in the neurosciences, with an amazing array of tools that allow us to gather all sorts of information about brain, network, and cell function. You may have seen "pictures of the brain working": functional magnetic resonance imaging (fMRI) that shows how active different areas of the brain are while a person does some task. This is often accompanied (for non-technical consumption) by an article that talks about how much this tells us about how the brain works. OK, I'm contrary by nature, but in my opinion this tells us very little about how the brain works. Yes, it narrows the focus of our inquiry from the entire brain to maybe 10% of it. Yes, it tells us something about which areas are active at the same time, and even sequences of activation in time. But we're still talking about the activity of hundreds of millions, even billions, of neurons. To me, this is just the peeling of the outer skin of the onion.

I don't want it to seem that I'm saying that these investigations aren't worth it or that they don't produce great data; they do. It's just that the brain is so incredibly complex: the most complex object in the known universe. A research data flow of petabytes per year over decades may only scratch the surface of what we need to learn before we understand how the brain works (assuming we're capable of assimilating so great a flux). This complexity may very well extend to the smallest level: while many researchers consider individual neurons to be simple devices (in a computational sense), this is really just an assumption. The fact is that a good simulation of a neuron, including its shape and the interactions of its internal molecular machinery with its external electrical and chemical activity, is a job for a hefty supercomputer. If this structural complexity shows through to a neuron's computational complexity, then we're suddenly dealing with a brain as a complex network of billions of supercomputers.

There's so much complexity in nervous systems that entire aspects of them are almost ignored, or at least given second-class status in the search for understanding. Just one example: I've mentioned nerve cells, or neurons, as making up nervous systems and brains. But there's another class of cells that's actually more numerous in our brains: glial cells. They're usually dismissed as serving only structural and physiological functions: scaffolding and waste disposal. But they can produce external electrical and chemical activity. What does it do to our view of neural computation if glial cells play an important role?

Anyway, my research focus for this grant proposal is on error correction coding: looking at how characteristics of the output of one neuron could be used to allow other neurons to recover faster if an error occurs. Think of it as the neural equivalent of a CD still being playable despite the fact that it's scratched. This is a small enough scale for me to be able to wrap my mind around it. Whether this is reflective of the true complexity of the subject matter, or merely the complexity of my mind, is another story.
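To make the CD analogy concrete, here's a toy illustration of engineered error correction, nothing neural, and not anything from the proposal itself, just the coding-theory idea: a three-fold repetition code in Python, where the decoder recovers the original message even when one bit per triple is flipped in transit.

```python
def encode(bits):
    """Repetition code: transmit each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    return [1 if sum(received[i:i+3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)           # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                      # a transmission error flips one bit
assert decode(sent) == message   # the receiver still recovers the message
```

Engineered schemes like this rely on an explicit decoding algorithm; the interesting question for neurons is whether anything playing an analogous role falls out of their ordinary dynamics.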

So, back to the grant proposal. I know what I want to do, I can explain why I think it's important, I can relate it to biology and mathematics, and I have a lot of material I've already written about the subject (four conference papers and a journal paper). Writing should be a piece of cake, right? I give myself three months to do it, working part-time, of course (I still have teaching and a journal paper to finish writing while I'm doing this). More on this to come...



  1. What's an error from a neuron's point of view? How does a neuron know it has made an error?

  2. Just like the CD (or any transmitter), the source neuron doesn't "know" about the error; the receiver does. In engineered systems, there are explicit algorithms that detect and correct errors. My hypothesis is that error correction may be implicit in the fundamental behavior of neurons, at least under certain conditions. So, my definition of correction is the time it takes the receiver to return to the behavior it would have shown if the error hadn't occurred. In other words, errors have aftereffects at the receiving end, and error correction means these aftereffects have a shorter duration.

    As for what counts as an error, without going too much into the biology, I'm considering it to be a transient transmission failure. There is some literature that supports the idea that this would be the most common kind of "error". Of course, that assumes that these failures aren't "purposeful".
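To make the "duration of aftereffects" definition above concrete, here's a minimal sketch entirely of my own construction (a generic leaky integrator, not the actual model in the proposal): the receiver is driven by a regular input train twice, once with and once without a single dropped input, and we measure how long the two state trajectories stay measurably apart.

```python
import math

def receiver_trace(inputs, tau=10.0, weight=1.0, dt=1.0, steps=100):
    """Leaky integrator: state v decays with time constant tau
    and jumps by `weight` whenever an input arrives."""
    v, trace = 0.0, []
    for t in range(steps):
        v *= math.exp(-dt / tau)   # passive leak
        if t in inputs:
            v += weight            # presynaptic input arrives
        trace.append(v)
    return trace

inputs = set(range(0, 100, 5))     # regular presynaptic spike train
faulty = inputs - {40}             # one transient transmission failure

clean = receiver_trace(inputs)
errored = receiver_trace(faulty)

# "Recovery time": how long after the failure the receiver's trajectory
# stays measurably different from the error-free trajectory.
threshold = 0.01
diverged = [t for t in range(40, 100)
            if abs(clean[t] - errored[t]) > threshold]
recovery_time = (max(diverged) - 40 + 1) if diverged else 0
```

In this linear toy the aftereffect just decays exponentially, so recovery time is set directly by tau; the interesting cases are nonlinear ones, where properties of the source neuron's output could plausibly shorten it.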