'Observational' Studies Can Produce Skewed Results

Caution is called for when doctors read results of such trials, new research suggests

TUESDAY, Jan. 16, 2007 (HealthDay News) -- The results of so-called observational studies can vary greatly, depending on the type of statistical analysis the researchers use.

So caution needs to be exercised when interpreting the results of these types of studies, claim the authors of new research published in the Jan. 17 issue of the Journal of the American Medical Association.

Randomized, controlled studies are considered the gold standard of medical science. In such trials, patients are randomly assigned to receive either the treatment or a placebo. The participants are then monitored for a certain period of time to determine the results.

But such studies are expensive, difficult to conduct and often involve ethical challenges.

Observational studies are a less expensive, less cumbersome alternative: patients are simply enrolled and observed in a natural setting, rather than in a research setting such as a hospital.

"We need to be more skeptical" of observational studies, said Therese A. Stukel, lead author of the new research. "You just can't throw a standard model at it and assume you're going to get a correct result. None of this stuff is written in stone."

"The patients haven't been randomized" in an observational study, added Stukel, a professor of community and family medicine at Dartmouth Medical School and a senior scientist at the Institute for Clinical Evaluative Sciences in Toronto. "They have been selected by physicians, and differences in outcome could be due to treatment, or due to the patients you selected."

For example, physicians often select healthier patients for surgery. That could skew the results, she said.

Generally, factors that can be measured -- such as income or age -- are accounted for in observational studies. But standard statistical models can't account for unmeasurable factors such as a physician's own selection bias, Stukel said.
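That selection problem can be made concrete with a small simulation. The sketch below uses entirely made-up synthetic data (not the study's Medicare records): an unmeasured factor `u` -- think of it as a physician's unrecorded sense that a patient is healthy enough for a procedure -- drives both who gets treated and how patients fare, so a naive comparison of treated vs. untreated patients badly overstates the true benefit.

```python
# Illustrative simulation only -- synthetic data, not the study's analysis.
# An unmeasured confounder u raises both the chance of treatment and the
# outcome, so comparing group means exaggerates the treatment effect.
import random
import statistics

random.seed(0)
TRUE_EFFECT = 1.0        # assumed true benefit of treatment in this toy model
n = 200_000

u = [random.gauss(0, 1) for _ in range(n)]     # unmeasured "health" factor
# Healthier patients (higher u) are more likely to be selected for treatment.
treated = [1 if ui + random.gauss(0, 1) > 0 else 0 for ui in u]
# The outcome depends on treatment AND on the unmeasured factor.
outcome = [TRUE_EFFECT * t + 2 * ui + random.gauss(0, 1)
           for t, ui in zip(treated, u)]

# Naive "observational" estimate: difference in mean outcomes between groups.
mean_treated = statistics.fmean(y for y, t in zip(outcome, treated) if t == 1)
mean_control = statistics.fmean(y for y, t in zip(outcome, treated) if t == 0)
naive_estimate = mean_treated - mean_control
```

With these numbers the naive estimate lands far above the true effect of 1.0, even though nothing was measured incorrectly -- the bias comes purely from who was selected into treatment, which is exactly the trap Stukel describes.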

For the new study, the authors used four different analytic methods on the same set of research data to see if and how the results varied. The methods were: "multivariable model risk adjustment"; "propensity score risk adjustment"; "propensity-based matching"; and "instrumental variable analysis."

The first three methods are standard statistical tools. Instrumental variable analysis attempts to adjust for unmeasurable factors. "The key is that it behaves like randomization," Stukel explained.
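A minimal sketch of the instrumental-variable idea, again on synthetic data (the paper's actual instrument and data are not reproduced here): an instrument `z` nudges patients toward or away from treatment but has no direct path to the outcome, so -- like the coin flip in a randomized trial -- it lets us recover the true effect even when an unmeasured confounder `u` is present.

```python
# Two-stage least squares on synthetic data -- an illustration of the
# instrumental-variable idea, not the study's actual analysis.
import random

random.seed(1)
TRUE_EFFECT = 1.0
n = 200_000

z = [random.gauss(0, 1) for _ in range(n)]   # instrument (affects treatment only)
u = [random.gauss(0, 1) for _ in range(n)]   # unmeasured confounder
# Treatment intensity depends on both the instrument and the confounder.
treat = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]
# The outcome depends on treatment and (invisibly) on the confounder.
y = [TRUE_EFFECT * ti + ui + random.gauss(0, 1) for ti, ui in zip(treat, u)]

def slope(xs, ys):
    """Least-squares slope of ys on xs (with an intercept)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    return sxy / sxx

naive = slope(treat, y)              # confounded: biased away from 1.0
# Stage 1: predict treatment from the instrument alone.
b1 = slope(z, treat)
treat_hat = [b1 * zi for zi in z]
# Stage 2: regress the outcome on the predicted treatment.
iv_estimate = slope(treat_hat, y)
```

The naive regression overshoots the true effect because `u` leaks into both treatment and outcome, while the two-stage estimate lands close to 1.0 -- the same pattern the study found when the standard models said 50 percent but the instrumental-variable analysis said 16 percent.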

The study included 122,124 elderly patients on Medicare who had been hospitalized with a heart attack in 1994 or 1995 and were eligible for cardiac catheterization -- a procedure in which a tube or catheter is inserted into a vessel in the arm or leg and then on into the heart or coronary arteries.

The patients who underwent cardiac catheterization were younger and had had less severe heart attacks than those who did not. All participants were followed for seven years.

The three standard statistical models showed a 50 percent decrease in mortality within 30 days of the procedure among those undergoing cardiac catheterization.

"This mortality decrease is too favorable," Stukel said. "No cardiologist believes it. In fact, recent randomized trials show an 8 percent to 21 percent decrease in mortality. We knew that 50 percent was completely off the scale."

But the instrumental variable analysis showed only a 16 percent relative decrease in mortality, which was well within the range of the randomized studies.

"The bottom line is there are plenty of situations where standard methods work, and typically they work when we're selecting patients into two treatment groups where the groups are the same and the risks are the same," Stukel said. "The classic situation where they don't work is where you're looking at surgical vs. non-surgical treatments, where you need to be healthier to survive surgery and you need to survive long enough to get the surgery. So, if you die early, it may look like you weren't chosen for the trial."

"It would be nice to have a bottom line that every study should be a randomized, controlled study. But there are financial and ethical impediments so we still need observational studies," she continued. "But we need to be cautious and we need to think hard about whether that comparison is a fair comparison."

More information

To find out more on how clinical trials are conducted, visit the U.S. National Institutes of Health.
