
REFORMING EPA’S HUMAN HEALTH RISK ASSESSMENTS

Posted on March 14, 2012 by Angus Macbeth

Risk assessments carried out under EPA’s IRIS program have drawn critical notice in recent months. The human health risk assessments that EPA performs across a range of programs merit attention, given their broad practical impact: they form the basis, for instance, for Superfund cleanups and RCRA corrective actions. But because they constitute guidance, they are not subject to judicial review when they are published and have received little scrutiny from lawyers. Here are four aspects of how EPA typically conducts human health risk assessments that deserve attention and reform:

1. Publication Bias. In conducting a human health risk assessment, EPA starts by performing a literature search and assembling the scientific papers that report a chemical’s effects, or lack of effects, on humans and relevant animal species. This appears to be a fair way to review the scientific understanding of the chemical’s possible effects on humans and animals, but it fails to take account of publication bias. This well-known phenomenon favors publication of studies finding “positive” results – an association between the chemical and a biological effect – over those that do not. In risk assessments, the determination of a dose below which there is no observable effect is very important, and reviewing only the published literature can be highly misleading on that central issue. See, e.g., Sena et al., “Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy,” PLoS Biol 8(3): e1000344 (2010) (“published results of interventions in animal models of stroke overstate their efficacy by around one third.”). EPA needs to capture the results of research showing that, at given doses, a chemical has no effect on human or animal biological systems. A start in that direction would be to require researchers who receive government support to report such results.
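
The effect of such selective publication can be seen in a minimal simulation – a hypothetical sketch in Python, with the effect size, study count, and group size assumed purely for illustration. Many small studies of a modest true effect are run, but only those reaching statistical significance are “published,” and the average published estimate substantially overstates the truth, the same pattern Sena et al. documented for the stroke literature.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.3            # modest real effect (standardized units) - assumed for illustration
n_studies, n_per_group = 2000, 20

all_estimates, published = [], []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_group)
    control = rng.normal(0.0, 1.0, n_per_group)
    estimate = treated.mean() - control.mean()
    all_estimates.append(estimate)
    # Publication bias: only studies reaching p < 0.05 see print.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        published.append(estimate)

print(f"True effect:                    {true_effect:.2f}")
print(f"Mean estimate, all studies:     {np.mean(all_estimates):.2f}")
print(f"Mean estimate, published only:  {np.mean(published):.2f}")  # markedly inflated
```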

2. Multiple Comparisons. A researcher studying, say, the neurodevelopmental effect of a chemical on children or rats can have the treated subjects perform 20 different tests; at a 95% confidence level, the researcher finds one association, which is written up and published without any report of the other tests that showed no association. But having made 20 comparisons at the 95% confidence level, the researcher is likely to have turned up at least one spurious association – the result of random chance alone; if the tests are independent, the probability of at least one false positive is roughly 64% (1 – 0.95^20). And if one does not know how many tests or comparisons were made, there is no basis for a fair judgment of the strength or weight to give the reported positive result. No requirement in law or custom directs researchers to report the number of comparisons they made, and publication bias discourages the ambitious academic from reporting a large number of comparisons, which would lead sober analysts to put less weight on the positive results reported. EPA needs to know how many comparisons a researcher made and what the results were. This could be achieved in large measure by requiring that government-supported researchers report such data; in addition, EPA could simply ask researchers to provide this information before relying on the published results in a weight-of-the-evidence review.
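
To make that arithmetic concrete, here is a minimal simulation in the same hypothetical Python vein (the group sizes and the 20-test design are assumed for illustration): the treated and control groups are drawn from the same distribution, so there is no real effect at all, yet roughly two thirds of the simulated studies still produce at least one “significant” association.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_tests, n_per_group = 2000, 20, 30

studies_with_a_finding = 0
for _ in range(n_studies):
    # Both groups come from the SAME distribution: there is no real effect,
    # so any p-value below 0.05 is the product of chance alone.
    p_values = [
        stats.ttest_ind(rng.normal(size=n_per_group),
                        rng.normal(size=n_per_group)).pvalue
        for _ in range(n_tests)
    ]
    if min(p_values) < 0.05:
        studies_with_a_finding += 1

print(f"Share of studies with at least one 'significant' association: "
      f"{studies_with_a_finding / n_studies:.0%}")   # about 64%, matching 1 - 0.95**20
```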

3. Meta-Analysis. In a weight-of-the-evidence review, replication of results carries great weight in persuading the reviewer that the results are sound; conversely, failure to replicate results markedly reduces the weight a study will be given. Telling whether results have or have not been replicated depends on the studies having used common metrics – e.g., administering the same dose under the same conditions at the same age. This is very rarely done, which erects barriers to an accurate determination of the weight that should be given to experimental results. See, e.g., Goodman et al., “Using Systematic Reviews and Meta-Analyses to Support Regulatory Decision Making for Neurotoxicants: Lessons Learned from a Case Study of PCBs,” 118 Env. Health Perspectives 728 (2010). Again, the federal agencies that support research financially should require that experiments be conducted and reported with sufficient common metrics to allow effective meta-analysis. Of course, this would not preclude measuring and reporting whatever else the authors chose.
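
As a small illustration of why common metrics matter, the following sketch (again Python, with entirely hypothetical numbers) pools three studies by fixed-effect, inverse-variance weighting. Pooling of this kind is possible only because each study reports its effect estimate and standard error on the same dose and outcome scale; if the doses, ages, or endpoints differed, the numbers could not be combined this way.

```python
import numpy as np

# Hypothetical effect estimates (change in outcome per unit dose) and
# standard errors from three studies that used a common dose metric.
effects  = np.array([-1.8, -2.4, -1.1])
std_errs = np.array([ 0.9,  1.2,  0.7])

# Fixed-effect (inverse-variance) pooling: more precise studies get more weight.
weights       = 1.0 / std_errs**2
pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se     = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled_effect:.2f} "
      f"(95% CI {pooled_effect - 1.96*pooled_se:.2f} to {pooled_effect + 1.96*pooled_se:.2f})")
```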

4. Review of Data Relied on in Critical Studies. EPA typically relies on one or a few “critical studies” in performing its analysis and reaching conclusions about the risks a chemical presents to human health. EPA reviews the published reports in peer-reviewed journals carefully, but it very rarely asks to see the underlying data. To a lawyer, this seems perverse – a bias against examining the very data said to support the Agency’s conclusions. Even without any falsification, there are a number of ways to present data that affect its ultimate implications; the choice of statistical treatment is the most obvious example. Human health risk assessments are of major importance to public health and frequently result in many millions of dollars of expenditure by companies guarding against the risks EPA identifies. It is clearly important to make these judgments as accurate as possible. In these circumstances, at least for the critical studies, the Agency should routinely ask that the data underlying the published article be produced; it should then examine those data and rely on the reported results only where they are fully supported by the data.

Dealing with these four issues should contribute significantly to producing human health risk assessments that would command the respect of the knowledgeable public.

Tags: IRIS, human health, risk assessment

