Who Needs Labs?
Whenever a new drug molecule comes along, industrious biochemists set to work figuring out its mechanism of action. The path of least resistance is for every specialized lab to toss some of the drug into its favorite bioassay and see if anything happens. Usually something does, making it hard to know which target is actually critical to the desired pharmacology.
My personal favorite is the anticancer drug doxorubicin because I spent the better part of my scientific life trying to figure out how it works. Checking PubMed recently, I found 47,563 papers on this drug, thus rendering a precise definition of mechanism quite difficult.
My own view is that any effective drug has one (and usually only one) mode of action, and that everything else it does to cellular biochemistry is noise at best and a source of toxicity at worst. Alas, not everyone agrees: proponents of “polypharmacology” posit that it is a suite of effects that produces the desired result for human health.
If the issue of drug mechanism can’t be settled here, newly published work at least makes it easier to predict potential toxicities. Best of all, you don’t even need to dirty your hands with experiments; calculations on a computer will do.
Keiser et al. examined the structures of 3,665 FDA-approved drugs and then compared them computationally with a library of structures known to bind defined target molecules (Nature 462 [12 November 2009], 175–181). This generated thousands of predicted new drug–target interactions, a subset of which was tested in the laboratory and found to be mostly accurate.
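Comparisons of this kind rest on standard chemical-similarity arithmetic: molecules are encoded as bit fingerprints and scored by the Tanimoto (Jaccard) coefficient. As a toy illustration only, the sketch below uses made-up fingerprints, not real molecules, and omits the statistical machinery of the actual published method.

```python
# Illustrative Tanimoto (Jaccard) similarity between two chemical
# fingerprints, represented here as sets of "on" bit positions.
# The fingerprints are invented toy data, not derived from real drugs.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Fraction of shared bits: |A ∩ B| / |A ∪ B|."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

drug_fp   = {1, 4, 7, 9, 15, 22}         # hypothetical query drug
ligand_fp = {1, 4, 9, 15, 22, 30, 31}    # hypothetical known ligand

score = tanimoto(drug_fp, ligand_fp)
print(f"Tanimoto similarity: {score:.2f}")  # 5 shared / 8 total = 0.62
```

A high score against ligands of a given target flags that target as a candidate off-target interaction, which can then be checked at the bench.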
The meaning? Computational biology truly can drive socially useful experiments in defining side effects of therapeutic agents. Oh shucks, I guess you do need to do actual experiments.