Posted by ed_uk on February 18, 2005, at 10:01:19
From The Pharmaceutical Journal.
Concerns about the reporting of clinical trial results in medical journals have recently been expressed in a variety of forums.
For example, one of the topics of a speech given by Richard Smith, former editor of the BMJ, was the ways in which clinical trial results can be presented so as to favour companies seeking marketing approval for new drugs.1
Dr Smith’s presentation was made as he received the 2004 award from HealthWatch, a registered charity that aims to support and encourage the scientific testing of conventional, complementary and alternative medicine and therapies. Examples highlighted by Dr Smith include:
· Comparing a new drug with a placebo or too low a dose of a competitor’s drug
· Comparing a new drug with too high a dose of a competitor’s drug, so that it looks as though the new drug has fewer side effects
· Carrying out only a small-scale trial comparing a new drug with a better but more expensive treatment, so that the underpowered results show “no significant difference” between the two agents, thereby making the new drug appear to be good value for money
· Reporting just the end-points of a clinical trial in which a new drug performed well
· Reporting just the results of particular groups of patients, or patients from particular centres in a multi-centre trial, in which a new drug performed well
· Not publishing clinical trial results at all, if there is no subgroup that does well
· Publishing good studies more than once by including, for example, different outcome measurements or the results from different follow-up periods in separate papers
· Publishing positive results in major journals and negative or neutral results in minor journals
As an illustration of these methods being used, Dr Smith cited one particular drug about which there were publications describing 84 trials on 11,980 patients. In fact, there were only 70 trials involving 8,645 patients, but 17 per cent of the trials had been published more than once. This was not identifiable from the published studies. He went on to demonstrate how the multiple publication of trials could increase apparent effectiveness, using an example of how 16 trials showing a number needed to treat (NNT) of nine could be presented as 25 trials showing an NNT of five, by duplicating the results of the most favourable trials.
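The direction of this effect can be sketched with a toy calculation. The trial numbers below are invented (they are not the data from Dr Smith’s example), and simple sample-size weighting stands in for real meta-analytic weighting, but the sketch shows how re-counting a favourable trial drags the pooled NNT down:

```python
# Hypothetical illustration of how duplicate publication inflates
# apparent effectiveness. All trial figures here are invented.

def pooled_nnt(trials):
    """Crude pooled NNT: 1 / (sample-size-weighted absolute risk reduction).

    Each trial is (n, control_event_rate, treatment_event_rate).
    Weighting by sample size is a simplification of real
    meta-analytic methods.
    """
    total_n = sum(n for n, _, _ in trials)
    arr = sum(n * (c - t) for n, c, t in trials) / total_n
    return 1 / arr

# Three invented trials: one very favourable, two modest.
trials = [
    (200, 0.30, 0.10),  # favourable: ARR = 0.20, NNT = 5
    (200, 0.30, 0.22),  # modest: ARR = 0.08
    (200, 0.30, 0.24),  # modest: ARR = 0.06
]
print(round(pooled_nnt(trials), 1))      # unique trials only: ~8.8

# "Publish" the favourable trial twice more under different guises:
duplicated = trials + [trials[0], trials[0]]
print(round(pooled_nnt(duplicated), 1))  # pooled NNT shrinks: ~6.8
```

A reader of the combined literature would see the lower NNT, with nothing in the individual reports revealing that the same patients had been counted three times.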
In a similar vein, members of Parliament heard just before Christmas from Richard Horton, editor of The Lancet, about ways in which pharmaceutical companies attempt to influence the editing of clinical trial reports they have submitted for publication. Dr Horton told the House of Commons Select Committee inquiry, set up to investigate the influence of the pharmaceutical industry, that drug companies sometimes imply a link between the submission of a paper and the reprint revenue it could bring a medical journal. After submission, authors or sponsors might then try to steer the peer review process in directions that are less critical of their drug.2
Dr Horton gave as an example an unnamed company that had threatened to withdraw a paper on a cyclo-oxygenase-2 inhibitor because it believed that the review process was over-critical. The company stopped interfering after the paper’s authors had been told that the paper would be rejected unless the company backed off, he said.
There has also been criticism of the way in which clinical trials are reported and analysed once a drug has received marketing approval. A recent paper in The Lancet concluded that concerns about the cardiac safety of rofecoxib should have been evident by the end of 2000, and hence that the drug could have been withdrawn earlier than it was.3 The study was based on the results of 18 randomised controlled trials and 11 observational studies incorporating 20,000 patients. It should, however, be pointed out that the manufacturers of rofecoxib did not agree with this assertion, stating that the authors had inappropriately combined heterogeneous data, which runs counter to the basic principles of meta-analysis.4
Among the conclusions that seem appropriate to draw from all this is that those carrying out meta-analyses of clinical trial reports, or reviewing such literature with a view to prescribing a drug or informing clinical practice, should read the equivalent of the “small print” in the original papers. Whatever the vigilance exercised by pharmacists and others carrying out such activities, a fair reflection of a drug’s safety and efficacy will be hard to achieve if trial results have been published more than once (this not being evident from the reports) or if unfavourable results have not been published at all. Dr Smith’s suggestions, which include a register of all trials (so that unfavourable trials do not “disappear”), critical review of trial protocols by independent experts, publication of trials in online journals that do not rely on commercial sponsorship for revenue, and more public funding for clinical trials, seem particularly worthwhile aims.
poster:ed_uk
thread:459814
URL: http://www.dr-bob.org/babble/20050217/msgs/459814.html