Biocurious is a weblog about biology, quantified.

Persistence length

by PhilipJ on 15 July 2006

I’m home in Newfoundland for a much-needed break from the lab, and given my distance from school right now (Google Maps says around 7500 km, if I were to try and drive back), I’ve been thinking a lot about life in the lab and graduate school in general, and what it is we’re meant to get out of this experience, particularly given my issues trying to re-transform my pPIU1 plasmid a month or so ago. I had an extremely strange result: plasmid DNA I had previously harvested from cells carrying an ampicillin resistance gene no longer seemed to confer ampicillin resistance when I tried to transform fresh E. coli cells with it. This assumes the plasmid DNA was able to enter the cells during the transformation protocol, but a control with another plasmid showed the protocol should have worked just fine.

This takes me back to the “biology is weird” theme. Things which work just fine today can stop working tomorrow, and it is rarely obvious why. This is particularly true when you’re working in an interdisciplinary environment like a biophysics lab, where you have the optics junkie playing with cells. What I’ve been thinking about in particular is: how “long” (in the time domain) should your tracking-down-oddities persistence length be?

I’ve been given two very different answers from two very different people. The PI of the group we collaborate with said that one of the things she thought was critical to graduate school was learning when to give up on something that doesn’t make sense, when it isn’t central to your project. My end goal is to have an endless source of this particular piece of DNA, and if I’ve tried all the standard controls to make sure I’m not doing something wrong, that the cells aren’t “bad” (ambiguous as that is), and so on, and if I have a backup at her lab, then I should by all means stop trying to understand why, get some cells from the frozen stocks left in her lab, and get back on track.

A fairly different take on the problem comes from Isaac, a commenter on the Pain in the USS piece, who said (and I’m combining a couple of comments here),

Debugging is a pretty key skill. It’s very tempting to move on, but catching problems and being observant will help you in the long run. Experiential knowledge will tell you when some new observation is important, even in some of the most prosaic of techniques.

I’m not quite done with graduate school, but looking back on things, the most important thing to me so far has been figuring out why things haven’t worked. Ideally, grad school is a training ground, where you are supposed to screw up. Maybe in a few years my opinions will change.

I’ll be the first to agree that debugging is an extremely important skill, since most of the work we experimental scientists do is repeating experiments over and over again. Being able to understand why something isn’t behaving the same way today as it did yesterday is critical when the instruments we use are complicated, and living cells make an optical tweezers instrument look simple. But I’m going to have to disagree with Isaac for a moment: debugging a problem is often not an easy task, and the “persistence length” for this process should depend almost entirely on the importance of the confusion. As Isaac mentioned, graduate school is a place where we’re “supposed” to screw up, but if screwing up is all I end up doing (and it is extremely easy to spend a lot of time trying to track down problems), I don’t think I’ll feel like I’ve accomplished anything, degree in hand or not.



  1. Fred Ross    3937 days ago    #

    I’m also trying to get genetic engineering working after a background in physics, but I’m actually working in a biology lab where they’re trying to start doing really quantitative microscopy.

    The postdoc charged with getting me up to speed has me producing an enormous number of backups: the DNA from a miniprep goes into the freezer. Once it’s confirmed and transformed into E. coli, some of those cells get plated out on agar and kept at four degrees while I maxiprep the rest. From the overnight culture of E. coli for the maxiprep, a couple of Eppendorfs of cells go into -20 with glycerol as a cryoprotectant. And then the DNA goes into minus twenty, and I transfect and clone from that. If anything goes wrong, I start stepping back through the backups.

    Finding out why things don’t work is certainly an important part of grad school, but at the same time molecular biology is essentially a game of chance. Every step of the process is probabilistic, and the sum total of steps just has to drive the probability of success high enough that it will work. Given this framework, debugging a program isn’t the proper way to look at it. Instead the analogy is maintaining a data center: what kind of resources do you have to throw at it to reach a certain level of confidence that it’s not all going down the tubes? And when a particular copy goes bad, you don’t desperately try to fix that copy; you just check it carefully to make sure it is dead, then throw it away and replace it from your other copies.
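    The redundancy argument here can be sketched numerically. Assuming (purely for illustration; the failure rate and copy count below are made up, and the copies are treated as independent, which real freezer stocks only approximate) that each stored copy goes bad with some probability p, keeping n copies drops the chance of total loss to p to the power n:

    ```python
    # Illustrative sketch of the backup argument: with n independent
    # copies, each failing with probability p, everything is lost only
    # if all n copies fail at once.
    def p_total_loss(p_copy_fails: float, n_copies: int) -> float:
        """Probability that every one of n independent copies is bad."""
        return p_copy_fails ** n_copies

    # Hypothetical numbers: even a flaky 20% failure rate per copy
    # becomes a 0.16% risk of total loss with four copies
    # (plate, glycerol stock, miniprep DNA, maxiprep DNA).
    print(p_total_loss(0.2, 4))
    ```

    The point being that resources go into adding copies, not into resuscitating any one dead copy.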


  2. kstrna    3934 days ago    #

    I think part of the learning process is figuring out what to debug and what not to. As Fred pointed out, it’s about knowing where to devote your resources, where not to, and for how long.

    You think molecular biology is fun? Try protein purification, especially overexpression. Evolution is usually not your friend.

