Biocurious is a weblog about biology, quantified.

First OpenWorm article published

by Andre on 13 November 2014

[Image: worm nervous system]

The OpenWorm team is trying to simulate an entire animal. It’s a big goal, but C. elegans is the right place to start. I had the pleasure of meeting a good portion of the team at their meeting in London last week, and I’m definitely impressed with what they’ve managed to do so far. You can read their latest (and, I believe, first!) update in Frontiers in Computational Neuroscience here.

You can find out more on the page of their (extremely successful) Kickstarter campaign. The emphasis there is on something concrete that they can give back to contributors, but the longer-term goal remains the development of a flexible platform for worm simulations. The most exciting aspect for me is the behavioural validation: to check whether the simulation is working and to optimize its parameters, they will compare its output to known worm behaviour.

In principle, that’s not much different from comparing a mutant worm to the wild type, which I spend a lot of my time thinking about. We’ve recently sent them the data from Ev’s behavioural database paper, and they’ve kindly agreed to serve it, including the videos; at the moment we only serve the segmented videos ourselves, via our YouTube channel.

Our next-generation tracker is going to generate even better data, not only for our purposes in behavioural genetics but also for OpenWorm’s simulation, because we’re going to include a few stimuli rather than just looking at spontaneous behaviour. That kind of input-output data will be more useful for model optimization. Of course, what they really want is comprehensive optogenetics and imaging data coupled with quantitative behavioural data. That’s not quite available yet, but some groups are getting very close.
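
To make the behavioural-validation idea concrete, here’s a minimal sketch of the kind of loop involved. To be clear, the toy random-walk “simulation” and the two summary features below are my own stand-ins, not anything from OpenWorm’s codebase: the point is just the structure of simulating, summarising behaviour, and adjusting parameters until simulated and observed statistics match.

    import numpy as np
    from scipy.optimize import minimize

    def behavioural_features(tracks):
        # Two toy summary statistics of a 2D trajectory; a real feature set
        # would include speed, bending angles, reversal rates, etc.
        speeds = np.linalg.norm(np.diff(tracks, axis=0), axis=1)
        net = np.linalg.norm(tracks[-1] - tracks[0])
        straightness = net / speeds.sum()  # ~1 for a straight path, ~0 for a diffusive one
        return np.array([speeds.mean(), straightness])

    def simulate(params, n_steps=2000, seed=0):
        # Stand-in "worm": a persistent random walk whose step size and
        # heading persistence play the role of model parameters.
        step_size, persistence = params
        rng = np.random.default_rng(seed)
        wiggle = max(1.0 - persistence, 1e-3)  # guard: optimizer may probe persistence >= 1
        headings = np.cumsum(rng.normal(0.0, wiggle, n_steps))
        steps = step_size * np.column_stack([np.cos(headings), np.sin(headings)])
        return np.cumsum(steps, axis=0)

    def mismatch(params, target):
        # Behavioural validation as a cost: distance between simulated and
        # observed summary statistics. Fixing the simulation seed keeps the
        # objective deterministic for the derivative-free optimizer.
        return np.sum((behavioural_features(simulate(params)) - target) ** 2)

    # Pretend these features came from a real tracking experiment.
    observed = behavioural_features(simulate([1.0, 0.8], seed=42))

    fit = minimize(mismatch, x0=[0.5, 0.5], args=(observed,), method="Nelder-Mead")
    print("fitted step size, persistence:", fit.x)  # roughly recovers [1.0, 0.8]

The real thing is of course far harder (the feature space is much richer and the simulations expensive), but the structure of simulate, summarise, compare, and adjust is the same.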




Worm Art

by Andre on 30 September 2014

[Image: worm tracks on an agar plate]




Bill Bialek on theory in the age of Big Data

by Andre on 29 September 2014

One of the most talked-about Big Data projects of the last year or so has been the BRAIN initiative in the US. It was prompted by the incredible advances being made in technologies to image and manipulate neural activity optically. In its early incarnation, the goal was to record the activity of every neuron in a brain over time. With billions and billions of neurons, that would most assuredly lead to a big pile of data.

In this context, Bill Bialek was invited to give some opening remarks on the second day of the Kavli/NSF symposium on the BRAIN initiative. Thankfully, they’ve made the video available, and it’s one of the most lucid expositions of the essential role that theory still plays in the age of Big Data that I’ve seen. He focusses on the brain and behaviour, but you could substitute cell biology and ‘omics without changing the message much.

Highlights from the talk:

Data mining [is] popular. But miners know gold when they see it. Theory is the source of explicit, testable hypotheses about what is golden in your data.

He goes on to make an important related point: even the algorithms you choose to apply represent implicit hypotheses about the data and strongly shape what you will find and how you will be able to interpret it.
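
A toy illustration of that point (mine, not Bialek’s): run k-means on data with no cluster structure at all and it will still confidently report clusters, because the algorithm presupposes they exist.

    import numpy as np
    from sklearn.cluster import KMeans

    # 500 points drawn uniformly from the unit square: no clusters by construction.
    rng = np.random.default_rng(0)
    points = rng.uniform(size=(500, 2))

    # k-means embodies the implicit hypothesis that the data consist of k
    # compact, roughly spherical clusters. It will "find" them regardless.
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(points)
    print(np.bincount(labels))  # five well-populated groups, carved out of pure noise

The choice of algorithm determined the shape of the answer before the data were even looked at.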

If the goal is to explain behaviour in terms of neurons, synapses, molecules… we run the risk that the ingredients of explanation will outstrip the phenomena we are trying to explain.

So we need better quantitative characterisations of behaviour and better theories of what brains do.

Suppose I showed you a movie of what 10 000 water molecules are doing as they wander around in that liquid. I do not believe that by staring at hours of that movie you would ever induce the concept of wet. You can’t just look! It doesn’t work.

Bigger data will never solve this.



