Biocurious is a weblog about biology, quantified.

TAGC Automated Tracking for Quantitative Phenotyping Workshop: Summary and Themes

by Andre on 18 July 2016

The workshop presenters, in order of appearance:

Harold Burgess, NIH

Megan Carey, Champalimaud

Gordon Berman, Emory

Ben de Bivort, Harvard

The goal of the workshop was to bring together researchers from different model organism communities who use tracking for quantitative phenotyping. Niko Tinbergen defined behaviour as “the total movements made by the intact animal” in 1955 (review here), but this definition doesn’t tell us directly how animal movement should be measured, nor which aspects to focus on. Now is an interesting time to revisit these basic ethological questions in light of exciting progress in imaging and computer vision. Each model species has its idiosyncrasies, but several challenges are shared by us all, and these emerged as themes of the workshop in the presentations and the discussion afterwards. Here is my view of a few of them:

Tracking and dimensionality reduction
All of the workshop participants use some form of dimensionality reduction in their representations of animal behaviour. For me, this has been based either on PCA of worm shapes (Stephens et al.) or on a direct discretisation into a set of template postures (Schwarz et al.), followed by a search for behavioural motifs (Brown et al. and Gomez-Marin et al.). Zebrafish larvae show bursty behaviour with fairly distinct starts and ends (part of a rich behavioural repertoire [pdf]). Harry uses the kinematics of these bouts to classify them into different kinds of locomotion (Burgess et al.), which can then be used to understand how fish phototax (Fernandes et al.) or perform escape manoeuvres when startled (Yokogawa et al.).
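
As a toy illustration of the posture-PCA idea (not the published pipelines; the skeleton representation, frame counts, and mode number here are made up for illustration): represent each posture as tangent angles along the body, then take the top principal components as “eigenworm” modes.

```python
import numpy as np

def posture_angles(skeleton):
    """Convert an (N, 2) array of skeleton points into N-1 tangent
    angles, with the mean subtracted to discard overall orientation."""
    dx = np.diff(skeleton[:, 0])
    dy = np.diff(skeleton[:, 1])
    angles = np.arctan2(dy, dx)
    return angles - angles.mean()

def eigenpostures(angle_series, n_modes=4):
    """PCA on a (frames x angles) posture matrix: returns the top
    'eigenworm' modes and the per-frame amplitudes along them."""
    centred = angle_series - angle_series.mean(axis=0)
    # SVD gives the principal axes without forming the covariance matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = vt[:n_modes]
    amplitudes = centred @ modes.T
    return modes, amplitudes

# Toy data: 200 frames of a sinusoidal "worm" carrying a travelling wave,
# which a posture PCA should capture almost entirely with two modes.
frames = np.array([np.sin(np.linspace(0, 2 * np.pi, 48) + phase)
                   for phase in np.linspace(0, 4 * np.pi, 200)])
modes, amps = eigenpostures(frames, n_modes=4)
```

The appeal of this representation is that each frame collapses from dozens of skeleton coordinates to a handful of mode amplitudes, which is what makes the downstream motif searches tractable.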

For limbed animals like flies, detailed tracking of body parts remains challenging. Gordon gets around this by directly analysing pixel values in movies, followed by a couple of steps of dimensionality reduction (first on the pixel values and then on their variation). The resulting map of adult fly behaviour space shows clear evidence of stereotypy and has made it possible to analyse the structure of fly behaviour by looking at the transitions between stereotyped states (Berman et al.). Ben has also studied the walking of adult flies in detail, using fluorescent particles to tag and track the legs (Kain et al.), and has recently done an extensive study of unsupervised methods to find stereotyped patterns in these data (Todd et al.). Megan uses an SVM coupled with temporal information to accurately track the paws, tail, and nose of mice. Using a mirror, they collect both a side and a bottom view of the mice from a single camera (Machado et al.). Precise tracking of multiple points was essential for their studies of incoordination in pcd mutants (more on that below). They’ve now added a (transparent!) split treadmill system, in which each side of the mouse must move at a different speed, to study how the mice respond to perturbation and how they learn to compensate, or not.
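
The two-stage reduction in Gordon’s approach can be sketched in a few lines of numpy. This is a simplified stand-in, not the published method (which uses wavelet spectra and t-SNE rather than a second PCA), with random pixels in place of a real movie and made-up dimensions throughout:

```python
import numpy as np

def pca(data, n_components):
    """Project the rows of `data` onto their top principal components."""
    centred = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

# Stage 1: compress each frame's pixels down to a few postural modes.
rng = np.random.default_rng(0)
movie = rng.random((500, 32 * 32))        # 500 frames of 32x32 pixels
postural = pca(movie, n_components=10)     # frames x 10

# Stage 2: describe how those modes vary in time (here, a sliding
# window of recent values stacked per frame), then reduce again to a
# low-dimensional map in which stereotyped behaviours form clusters.
window = 5
dynamics = np.stack([postural[i - window:i].ravel()
                     for i in range(window, len(postural))])
embedded = pca(dynamics, n_components=2)   # frames x 2 behaviour map
```

The key design choice is that the second stage operates on temporal variation of the first stage’s output, so the final map captures dynamics rather than static postures.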

Finally, as Ben nicely illustrated in his work on flies in Y-mazes, sometimes low resolution is fine and can allow for much higher throughput. Collecting truly massive datasets (over 100 million decisions recorded so far!) has allowed them to look in detail at variability in behaviour, which they’ve recently published in two papers focussing on the neural and genetic mechanisms (Buchanan et al. and Ayroles et al.).

Handling large data sets
Everyone is processing large video streams and using different approaches to deal with the data. Harry takes perhaps the most efficient approach and processes larval body shape in real time, so that the volume of data actually saved to disk is relatively small (just a few angles rather than millions of pixel intensities). This is similar to the approach taken in Swierczek et al. for worm tracking. This is especially important for fish, since they’re recording at 500-1000 fps when looking at the detailed kinematics of the tail. For some high-resolution experiments, following a single animal with a motorised stage makes it possible to record behaviour in a large area without a large field of view. This is something both Gordon (Berman et al.) and I (Yemini et al.) talked about briefly. Megan reported two variants of restricted locomotion, one in which mice walk along a short track for imaging (Machado et al.) and the other using a treadmill. Ben has tethered flies and allowed them to walk on a floating ball, basically a treadmill for flies (Kain et al.).
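
To see why saving a few angles instead of pixels matters, here is a quick back-of-envelope calculation (the frame size, frame rate, and angle count are hypothetical round numbers, not figures from the talks):

```python
# Data rate of raw 8-bit 1-megapixel video vs. ten float32 tail angles
# per frame, both at a high-speed 700 fps, over one hour of recording.
fps = 700
pixels = 1024 * 1024
seconds_per_hour = 3600

raw_bytes_per_hour = fps * pixels * seconds_per_hour       # ~2.6 TB
angle_bytes_per_hour = fps * 10 * 4 * seconds_per_hour     # ~100 MB
compression_factor = raw_bytes_per_hour / angle_bytes_per_hour
```

At these (made-up but plausible) numbers, real-time shape extraction shrinks the stored data by a factor of tens of thousands, which is what makes hour-long high-speed recordings practical at all.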

We are currently taking a slightly different approach where we record multiple individuals in a large field of view, but zero the background around the worms. We can thus keep high-fidelity images of the worms for later analysis or manual review, while greatly reducing the volume of data that needs to be saved. You can see the current version and progress on Avelino Javer’s GitHub page.
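
A minimal sketch of the background-zeroing idea, assuming scipy is available and using a toy frame in place of real video (the threshold and padding here are made-up parameters, not the values in Avelino’s tracker):

```python
import numpy as np
from scipy import ndimage

def zero_background(frame, threshold, pad=3):
    """Zero every pixel outside a padded bounding box around each
    foreground blob, keeping full image detail on the animals."""
    labels, n_blobs = ndimage.label(frame > threshold)
    out = np.zeros_like(frame)
    for box in ndimage.find_objects(labels):
        padded = tuple(slice(max(s.start - pad, 0), s.stop + pad)
                       for s in box)
        out[padded] = frame[padded]
    return out

# Toy frame: dim background noise with two bright "worms".
rng = np.random.default_rng(1)
frame = rng.integers(0, 5, size=(100, 100)).astype(np.uint8)
frame[20:25, 10:40] = 200
frame[60:80, 55:60] = 200
cleaned = zero_background(frame, threshold=100)
```

Because the zeroed frames are mostly constant, they compress extremely well with standard lossless codecs, while the pixels on and around each animal survive untouched for later analysis or manual review.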

Multidimensional phenotyping and being surprised by your data
One of the clearest illustrations of the importance of collecting multiple independent dimensions of behaviour came from Megan’s talk. When trying to understand how pcd mice differ from the wild type, they found that they were slower and smaller, raising the possibility that this was the main kind of difference between them. But, by using a careful characterisation of the wild-type speed and size relationship, they were able to show that the pcd mice were not simply on a different part of the speed/size curve, but were indeed uncoordinated. Multiple independent behavioural dimensions were also critical in Harry’s use of multiple kinematic features to classify fish behaviours.
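
The speed/size argument can be made concrete with a toy residual analysis (entirely synthetic numbers, and a simple linear fit standing in for the actual characterisation in the study):

```python
import numpy as np

# Synthetic wild-type data: speed scales roughly linearly with size.
rng = np.random.default_rng(2)
wt_size = rng.uniform(1.0, 2.0, 200)
wt_speed = 3.0 * wt_size + rng.normal(0.0, 0.1, 200)

# Characterise the wild-type speed/size relationship with a line fit.
slope, intercept = np.polyfit(wt_size, wt_speed, 1)

def residual(size, speed):
    """Distance from the wild-type curve: an animal that is merely
    small and slow sits near zero; a truly uncoordinated one does not."""
    return speed - (slope * size + intercept)
```

An animal that is simply on a lower part of the curve has a residual near zero; an animal that is genuinely uncoordinated falls well off the curve, and that distinction is invisible if speed and size are never measured together.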

Having a multidimensional view of behaviour is also great for using unsupervised approaches to behaviour analysis. Gordon summed up the usefulness of this as giving you the ability to be surprised by your data, a bit like a forward-behaviour screen (analogous to a forward genetic screen).

Enabling technology and accessibility
After the talks, one of the discussion points was about what technology is currently enabling tracking and analysis. The main ones were the availability of open source software for computer vision (and, increasingly, for behaviour analysis itself), the low cost of accurate sensors, and the possibility of rapidly prototyping hardware components using 3D printing to make an instrument that does just what you want. This is being strikingly illustrated to me today at a wonderful Champalimaud summer school where Isaac Bianco is currently getting the students to build fish tracking setups from scratch. They’ll be tracking centroid and orientation in real time by sundown!

I do still think it’s important to increase the accessibility of tracking technology, though, because many labs that would like to take advantage of these approaches aren’t always set up to implement them, even if the software and designs are open source. I think the way around this is a commercial tracker based on open source hardware and software: it can be adapted to specific groups’ needs if they like, but it will work out of the box for groups that just want to get some tracking data. The reason I believe this must be commercially available is the much-bemoaned state of academic technology more generally: the incentives are not right to make things usable or to provide support, and it’s difficult to get funding for this unglamorous task. See, for example, here.

Overall, it was an exciting workshop and I would like to thank the speakers again, as well as the audience for taking part and the GSA for organising TAGC. I thought it was great to have the different model organisms in the same place.
