House / Blowback 423
says, “We have to do something” (Armstrong, 2006; Mathews, 2005; Zimmerman & Tomsho,
2005). A dozen journal editors have proposed new standards for publication, such as full data disclosure from clinical trials, transparency about financial interests, submission of original designs so reviewers can see how a study has been modified, and signed statements from authors that they have written the report.
Professional medical associations have proposed codes of ethics that include restrictions
on gifts and financial ties, and some universities have issued rules about authorship and drug
company ties (Mayor, 2003). As usual, student groups have been strongest in advocating ethical behavior (Moynihan, 2003b). One recommendation is that sponsors not be allowed to
overrule researchers when it comes to publishing negative findings (Garattini, Bertele, &
Bassi, 2003). Of course, drug companies are resistant to such changes (Moynihan, 2003b).
Finally, our conception of rigor in evaluation studies is far too weak to contend with these
potent threats to impartiality. In reviewing the drug literature, I encountered several comments
that the clinical trials were highly biased but that the methodologies were good. Here’s one
statement: “A recent systematic review of the impact of financial conflicts on biomedical
research found that studies financed by industry, although as rigorous as other studies, always
found outcomes favourable to the sponsoring company” (Lexchin et al., 2003, p. 1167). How
can you have a rigorous methodology that yields biased results?
Apparently, the researchers mean that the study is randomized, has concealed allocation, and is double-blinded. No doubt these features are important. But they do little to counter deliberate biases that arise from opportunistic choice of comparator, dosage, administration, time frame, selection of surrogate endpoints, cherry-picked data analyses, selective reporting, sponsorship, and financial ties. Our notion of rigor and good method seems to be based on a narrow, almost formulaic conception. The companies are deliberately designing studies to take on the trappings of rigor while biasing the study by other means. In other words, they are gaming our conceptions of validity and bias control, and the FDA approval process.
When evaluation started as a professional practice, it was natural that evaluators would
conceive biases in field studies as resulting from the same sources that social psychologists
encountered in lab experiments. In Campbell and Stanley's (1963) seminal work on experimental design, the sources of invalidity are maturation, mortality, instrumentation, selection,
repeated testing, and so on, all of which are important. However, such a list is inadequate for
what we face in a less innocent time.
Lab studies occur in contexts not heavily infused with politics; evaluations occur in settings with powerful political forces. Drug evaluation is riddled with politics, and those politics profoundly affect the findings. We need conceptions of bias control that address these
sources of bias. We need to acknowledge vested interests in evaluations and quit pretending
that they don’t affect findings. Otherwise, we look foolish or complicit.
Why should we bother? There have always been those who say we should conduct studies
the way those who pay for them want. I find that course unacceptable for professional, ethical, and personal reasons. I believe the profession is at serious risk here. The world doesn't need evaluators who have no credibility any more than it needs auditors who have none. R.I.P., Arthur Andersen. As for ethics, what about those 100,000 people who had heart attacks and strokes because the Vioxx evaluation was handled improperly? Surely, we must have ethical concern for the welfare of patients.
Recently, Alan Ryan (2004) introduced me at the Canadian Evaluation Society by saying that over the years I have reminded evaluators of their moral responsibility and alerted them to the dangers of being seduced by the agendas of those in power. Those are some of the
kindest things anyone has said about me. Others say I am just a pain in the ass—a dull one at that.
Downloaded from http://aje.sagepub.com at University of Victoria on December 10, 2008