We rely on a multitude of chemicals for nearly every aspect of contemporary life. In fact, it’s hard to conceive of life in the 21st century without the drugs, materials, foods, fuels, and other chemicals that make it all happen.
There is also general agreement that society needs to protect citizens from the toxic repercussions of chemicals. Hence, in the U.S. we have TSCA (the Toxic Substances Control Act) and in the European Union there is REACH (Registration, Evaluation, Authorization, and Restriction of Chemicals).
But how do we decide which chemicals are dangerous and which are not? A recent thought piece by Thomas Hartung of Johns Hopkins University sheds helpful light on the nature of the scientific problem (Nature 460, 208–212; July 9, 2009). The essence:
- Experiments on human beings are unethical.
- Animal experiments often don’t mimic human responses.
- Current animal-testing strategies yield too many false positives.
- An incorrect choice of endpoint can bias both the results and the conclusions.
- Acute toxicity is relatively rare among chemicals, which makes it especially hard to detect with imperfect models (a bit of arithmetic after this list shows why).
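Why does rarity matter so much? Consider some purely illustrative numbers (mine, not Hartung’s): suppose 1 in 100 chemicals is genuinely toxic, and a screening test catches 95% of the toxic ones while wrongly flagging 10% of the safe ones. Screening 10,000 chemicals then yields about 95 true alarms and about 990 false ones, or more than ten false alarms for every real hazard. The rarer true toxicity is, the worse that ratio gets.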
So what to do?
Hartung doubts that traditional toxicity testing will radically improve our ability to make wise choices. Thus, he proposes an “integrated testing strategy” that relies less on animals and more on cell cultures, computer modeling, and emerging insights from systems biology. This would require a complete retooling of traditional toxicology, but it would be high-throughput, cheaper than current methodologies, and able to build on the immense knowledge base of modern chemistry and biology.
Alas, regulation is not only about science but also about politics. My colleagues Jody Roberts and Gwen Ottinger have written about this dimension of the problem.
So perhaps together we can harmonize good policy and hard science. Worth a try, don’t you think?