Study Challenges Cancer Screening Tests' Ability to Save Lives

But cancer expert says the study's conclusions are flawed

TUESDAY, Feb. 5, 2002 (HealthDayNews) -- In the wake of controversy over the validity of mammography comes a new study that challenges the way in which all cancer screenings are analyzed -- including those used to detect the earliest signs of breast, colon and lung cancer.

The study, conducted at Dartmouth-Hitchcock Medical Center in New Hampshire, claims that the way in which death rates are measured during studies of cancer screenings could be wrong. As a result, some of those screenings may be credited with saving more lives than they actually do.

"What we found is that the basic formula used to calculate the effectiveness of a screening to save lives may be flawed, due to one of two possible biases that are not being taken into account," says Dr. William C. Black, professor of radiology at Dartmouth Medical School.

If Black is right, then it's possible that a good deal of the preventive health recommendations made over the past 20 years may be misleading -- or even wrong.

But some cancer experts believe it's Black's findings that are wrong.

"His study raises some interesting points. However, the research documented in this paper fails to support the bias theory that he claims undermines the studies -- the data just isn't there," says Dr. Alfred Neugut, co-director of the Cancer Prevention Center at Columbia Presbyterian Medical Center.

Black's theory holds that nearly all cancer screening studies measure effectiveness by counting only the deaths caused by the disease being studied. As a result, researchers risk either overestimating or underestimating the number of deaths attributable to that disease.

The problem, he says, can be traced to one of two types of biases not previously identified.

In the first one, which Black calls the "sticky-diagnosis bias," deaths are falsely attributed to the cancer being studied simply because that cancer was discovered by the screening.

In the reverse situation, a problem Black identifies as "slippery linkage bias," deaths caused by procedures performed to diagnose or treat the cancer are attributed to some other cause rather than to the disease itself. For instance, if a woman dies of a heart attack after surgery for breast cancer, she ends up with "heart attack" as her cause of death -- rather than breast cancer.

It is this second problem of "slippery linkage bias" that Black believes occurs most frequently. And it accounts for a good deal of the erroneous information garnered from studies of screening results.

Neugut, however, points out that almost all major studies are required to have extremely accurate classifications of the cause of any death, making it very difficult for an error to slip through the cracks.

What's more, he argues, "slippery linkage bias" is a far-fetched scenario.

"What the [Dartmouth] study is suggesting here is that a woman might die of an invasive treatment for breast cancer and that this would not be counted as a breast-cancer death," Neugut says.

"I don't know of very many women that die after having a biopsy or even a lumpectomy. So, it's hard to believe that this could be occurring enough times to skew study results examining the effectiveness of screening for breast cancer," he adds.

Black's study looked at 12 cancer screening trials -- six involving mammography, three focusing on fecal occult blood screenings for colon cancer, and three that looked at X-ray screening as a means of preventing death from lung cancer.

Of the 12 trials, five were shown to have discrepancies when comparing the number of overall deaths to the number of deaths due to the disease being studied -- the tip-off, Black says, that the study results were skewed.

"If the number of disease-specific deaths go up in a specific group, then the all-cause mortality rates should rise as well," says Black.

Likewise, he says, if the number of disease-specific deaths is lower in one group than another, then the all-cause mortality rates should be lower as well.

"If the numbers don't match -- as they did not in five of these trials -- then you know you have a problem, either a flaw in the study or a flaw in the way the results were tabulated," Black says.

Black's study results are published today in the Journal of the National Cancer Institute.

Black does offer a solution to the problem: make certain that all cancer screening studies report the overall number of deaths that occurred during the study as well as the deaths attributed to the disease being studied -- and check that the two measures agree.

What would also help, he adds, is more careful determination of the exact cause of death at the hospital level, where death certificates are generated.

His ultimate goal: "To make sure that people know whether or not a screening is truly valuable to them, and to only have those screenings that are going to make a difference in their lives," he says.

Neugut agrees with this objective.

"It's important to realize that just because a screening is available, it doesn't mean it's mandatory. Whether or not a cancer screen will add years to your life depends on your personal health history and your risk profile for disease, and not just the overall effectiveness of the screen," Neugut says.

What to Do: To learn more about screening for specific cancers, visit The National Cancer Institute. For a list of screening recommendations by gender and age, visit The University of California cancer screening site.
