TUESDAY, Dec. 29, 2009 (HealthDay News) -- The U.S. Food and Drug Administration may not be as stringent in evaluating devices as it is in approving drugs.
According to a report in the Dec. 23/30 issue of the Journal of the American Medical Association, approval of cardiovascular devices often sails through based on studies that are not randomized or blinded, and sometimes even on the basis of one study alone.
"While we did not expect that all studies would be or should be randomized and blinded, we were surprised that the numbers of such studies were as low as they were and that there were many devices approved on the basis of a single study," said study senior author Dr. Rita Redberg, a professor of medicine at the University of California, San Francisco.
Redberg is also a member of the FDA Circulatory System Devices Panel, which advises the FDA on regulatory issues pertaining to these kinds of devices.
For its part, the FDA on Tuesday issued a statement in which it said the study presented an "inaccurate picture" of the clinical trials used in the agency's heart device review process, noting that devices and drugs are not the same thing and require different processes for approval.
One outside expert agreed that the researchers' conclusions were too harsh.
"I have been through the [device approval] process often enough with the FDA to know that these types of criticism don't reflect the typical standard of FDA behavior," said Dr. Kirk Garratt, clinical director of interventional cardiovascular research at Lenox Hill Hospital in New York City.
The problem with this study, Garratt pointed out, may be that the authors tried to look at too many different types of devices, not all of which have the same impact on the patient and not all of which can be randomized or blinded.
Still, there might be room for improvement, another expert added.
"The FDA has a tough job, and they do a pretty good job," added Dr. Charles Lowenstein, head of cardiology at the University of Rochester Medical Center. "This article is concerning because the results suggest that the FDA sometimes accepts studies that aren't as rigorous as possible... The FDA should rely on the best studies possible, not the best studies available."
According to the study authors, device safety has largely slipped through the cracks, even as attention on drug safety mounts.
Meanwhile, there has been an explosion in the number of cardiac devices out there. According to background information in the article, some 350,000 pacemakers, 140,000 implantable cardioverter-defibrillators and more than 1 million stents were implanted in the United States in 2008.
And in a recent decision, the U.S. Supreme Court ruled that once a device is approved, consumers can no longer sue for safety or effectiveness problems.
The authors of this study reviewed 123 summaries on safety and effectiveness for 78 premarket approvals of high-risk devices (those that are life-sustaining and usually permanent) that took place between January 2000 and December 2007.
Many studies did not meet gold standards: only 27 percent were randomized and only 14 percent were blinded.
Also, some two-thirds of premarket approvals were granted on the basis of just one study, suggesting limited evidence for their safety and efficacy.
Only half of the studies included control groups, and 31 percent were retrospective (considered less reliable than prospective studies).
And many studies were not conducted in the United States, raising questions about how valid they are for a U.S. patient population.
But the FDA countered that it had "serious concerns" with the study's approach.
"Congress has recognized that devices are not drugs and device trials require consideration of unique design considerations," the agency said in its statement. For example, the finding that 65 percent of premarket approvals came after just one study was cited by the researchers as a weakness in FDA procedure. "However, in many cases, the mechanism of action of devices, physical and local, is understood and predictable... As a result, a single pivotal device study may be all that is needed to demonstrate safety and effectiveness," the FDA statement said.
The agency also noted that the randomized, controlled trials used for drug testing are often not appropriate when testing devices. Patients cannot ethically be "blinded" to whether they are receiving a stent, a bypass or a drug for a heart problem, for example. And very large-scale trials are also impractical for expensive and experimental devices such as heart valve implants, the FDA noted. "These examples demonstrate that device clinical trials often must incorporate practical realities that are not present with standard drug development," the agency said.
The study did not specifically address whether or not the current approval process has led to actual harm in patients, although Redberg said this is a question "we may address in future work."
But for now, she said, "I think clear requirements from the FDA on the strength of data required for device approval, in terms of type of study, type of endpoint, length of follow-up, completeness of data and design of study would help, as well as post-marketing data requirements."
"We are clearly entering a period of time during which there is going to be increased scrutiny about what we do," Garratt added. "I welcome that, but am concerned that the scrutiny might be misdirected. The implication from the paper is that the FDA is being too shoddy and sloppy in its approval process, allowing too many devices to get approved and it's running up the health-care bill. I don't think that's fair. We've seen gigantic improvements in patient outcomes."
The American Heart Association has more on implantable medical devices.