Annals of Cardiac Anaesthesia
EDITORIAL  
Year : 2011  |  Volume : 14  |  Issue : 1  |  Page : 3-5
Evidence based medicine: Can everything be evident?


Mukul C Kapoor
Department of Anaesthesiology, Command Hospital (CC), Lucknow, UP, India

Date of Web Publication: 31-Dec-2010
 

How to cite this article:
Kapoor MC. Evidence based medicine: Can everything be evident?. Ann Card Anaesth 2011;14:3-5



"90 percent of the published medical information that doctors rely on is flawed." John Ioannidis [1]

The practice of medicine is an "art," in which individual expertise and technique need to be nurtured to ultimately achieve the goal of a higher standard of patient care. The importance of clinical research for the practice of medicine is immense and undeniable. In 1991, the American College of Physicians (ACP) launched the ACP Journal Club, in which articles with valid results, published in various journals, were abridged to one page and published along with an expert commentary. The ACP Journal Club soon commanded a high readership and became extremely popular. In 1995, the ACP and the BMJ Publishing Group collaborated to publish a journal, Evidence Based Medicine (EBM). This led to the evolution of EBM in all spheres of clinical medicine. EBM and clinical guidelines have become the order of the day in the last decade. The stated rationale for promoting them is to provide a stronger scientific foundation for clinical work; to achieve consistency, efficiency, effectiveness, quality and safety in medical care; and to limit idiosyncrasies. [2]

EBM is defined as "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients," [3] whereas clinical guidelines are defined as "systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances." [4] The exponential growth of the Cochrane Collaboration suggests that practicing EBM is highly fashionable. [5]

In the last decade, a number of high-profile studies and guidelines brought about path-breaking changes in the way Anaesthesiology and Critical Care are practiced. These studies and practice guidelines were supported by large randomized controlled trials (RCTs) and meta-analyses, considered the "gold standard" of EBM. However, a number of them were found wanting after being initially accepted with much fanfare. The "Surviving Sepsis Guidelines - 2004," [6] the "tight glucose control" regime [7] and studies on the efficacy and anti-inflammatory effects of aprotinin [8] are some of the prominent ones that were soon found to be flawed.

Controversy, confusion, disappointment and uncertainty ensue when the results of clinical research on the effectiveness of interventions are subsequently contradicted, especially when high-impact research is involved. [9],[10],[11] There is concern that false claims are made in the majority of published research. [12] Ioannidis examined 49 original clinical research studies published in high-impact-factor specialty journals and cited more than 1000 times in the literature. Of the 45 that claimed an effective intervention, seven (16%) were contradicted by subsequent studies, while seven others (16%) had reported stronger effects than those found subsequently; only 20 (44%) were replicated, and only 11 (24%) remained largely unchallenged. [11] If such a large proportion of the most acclaimed research proves untrustworthy, the scope and impact of the problem is undeniable.
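The percentages quoted above only work out if the denominator is taken as the 45 of the 49 highly cited studies that claimed an effective intervention, the subset analysed in the cited JAMA paper; a quick arithmetic check makes this explicit (the grouping into four mutually exclusive outcomes is an assumption drawn from the figures quoted here):

```python
# Sanity check of the proportions attributed to Ioannidis, JAMA 2005. [11]
# Assumption: percentages are computed over the 45 of 49 highly cited
# studies that claimed an effective intervention.
claimed_effective = 45
outcomes = {
    "contradicted": 7,
    "stronger initial effects": 7,
    "replicated": 20,
    "largely unchallenged": 11,
}

# The four groups account for all 45 studies.
assert sum(outcomes.values()) == claimed_effective

for label, count in outcomes.items():
    pct = round(100 * count / claimed_effective)
    print(f"{label}: {count}/{claimed_effective} = {pct}%")
```

Note that 7/45 rounds to the 16% quoted, whereas 7/49 would give 14%.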

Clinicians are being inundated by tidal waves of guidelines. [13] With more than 1000 guidelines created annually, calls for evolving "guidelines" for "clinical guidelines" have been raised. [14] RCTs may not be representative of the entire spectrum of a disease or procedure and its management. RCTs are often entered directly into meta-analyses without further evaluation of quality. [15] The proponents of meta-analysis are dismayed that many RCTs were scientifically unsatisfactory despite a "gold standard" status. [16],[17],[18] Horlocker and Brown in an editorial stated that, "Performing a systematic review is much like weaving cloth. Although the design may be the same, depending on the age, strength, and quantity of the material, the final product may be either silk, brocade or polyester. While both cover the subject, they vary greatly in intrinsic worth. As clinicians, it is our responsibility to achieve the best fit." [5]

The common errors in framing of RCTs and their selection for the meta-analyses are enumerated below:

  1. Only certain important symptoms and other clinical variables are considered to identify subgroups. Trial information often omits clinical details that may be crucial for many other therapeutic decisions. For example, pathologic and radiologic data are readily accepted, but clinically observed data are subjected to tests of observer variability. [19]
  2. Diagnostic and inclusion/exclusion criteria considered in different RCTs may not be exact.
  3. A review may not examine the disease or process in totality. For example, a recent Cochrane review examining the effectiveness of antithrombotic therapy for prevention of thromboembolism after surgery for hip fracture concluded that while heparin and low-molecular-weight heparin protected against deep vein thrombosis, there was insufficient evidence to confirm protection against pulmonary embolism or overall benefit. [20]
  4. In many clinical areas, RCTs are either not pertinent or not possible. Trials aimed at investigating agents with noxious, disease-causing effects (such as smoking or alcohol) may be deemed unethical. Even if deemed ethical, such trials would probably be unable to recruit enough volunteer subjects. [15]
  5. All evidence is not equally good. Some studies are better designed than others, and sample sizes and levels of statistical significance vary. There have been attempts to assign a level to each piece of evidence to indicate its quality, but there are still differences within a level. [21]
  6. Even if the quality of the evidence is classified properly, evidence can be contradictory. How many weaker studies are needed to overcome the results of a stronger study? The answer is subjective. [21]
  7. The results are classified according to the randomly assigned regimens and not the regimens that were actually received. Additional therapeutic decisions and interventions and the subsequent consequences are ignored.
  8. The patient's clinical state and response determine adjustment of an individual dosage or discontinuation of an agent and its replacement with another. Decisions to initiate or stop oxygen therapy, mechanical ventilation, blood transfusion, inotropes or potassium/magnesium supplementation will almost always depend on individual pathophysiologic status and not on published guidelines.
  9. Personal preferences, psychosocial factors and comfort must be individualized and cannot be suitably guided by published reports for an "average" patient. The subjects in drug trials and other studies are seldom an exact match for the patients that a doctor is treating. The trial may have been performed using patients in a different age range, physical condition, race, time frame and with/without comorbid diseases. [21]


Other factors cited to influence the results of the studies are:

  1. The choice and delegation of "authority" in the EBM processes may not stand on their own merits but may be influenced by who commissioned or supported the work or who did it. [15]
  2. The medical publication process may be biased, as journals are accountable to a publisher and to a sponsoring medical society. Publication of a study, or otherwise, may depend on: Who decides what to accept or reject? Who determines what information will be disseminated? Who chooses the peer reviewers? What are their qualifications and credentials? Who determines which topics receive meta-analytic appraisals and authoritative decisions? [15] It is also generally difficult to publish negative trials.
  3. Funding pressures weaken the reliability of medical research. To secure funding and tenured positions, researchers have to get their work published in well-regarded journals, where rejection rates are high. The studies that tend to make the grade are those with eye-catching findings. [1]
  4. Statistics may be manipulated to achieve the result the investigator desires. Peer reviewers can be fooled by statistical methods, even if they are careful to check everything, owing to a lack of knowledge of probability theory and a lack of standardization. Meta-analyses can be manipulated by not performing the most appropriate analysis.
  5. Lastly, the process of developing a set of evidence-based guidelines is not an entirely impartial and objective one. Insurance companies may influence the process to keep a check on costs. Drug companies may try to ensure that their products are included in the standard protocol. Those who evaluate the scientific evidence may also have their own biases affecting the outcome.


EBM has also received flak for promoting stagnation and bland uniformity, and has been derogatorily referred to as "cookbook medicine." [22],[23] Instead of using clinical judgment, practitioners are encouraged to follow protocols that treat all patients as essentially interchangeable. By discouraging idiosyncrasies in clinical technique, standards introduce disincentives for individual innovations in care. Protocol-based medical practice may result in replacement of physicians by less-expensive and less-skilled technicians who may be incapable of operating effectively in diverse situations. [22],[23]

EBM can, at best, promote evidence brought forward in RCTs and meta-analyses. It can in no way promulgate evidence not reflected in RCTs. This state of affairs is akin to the laying down of arms by Dronacharya (a great teacher and warrior), in the Hindu epic Mahabharata, on being informed that Aswathama had been killed in the war. He was tricked into believing that it was his son Aswathama, while in actuality, it was an elephant named Aswathama that had been slain!

 
   References

1.Ioannidis JP. Lies, damned lies, and medical science. The Atlantic November 2010. Available from: http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/ [Last accessed on 2010 Oct 28].  Back to cited text no. 1
    
2.Timmermans S, Mauck A. The promises and pitfalls of evidence-based medicine. Health Aff 2005;24:18-28.  Back to cited text no. 2
    
3.Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence-based medicine: What it is and what it isn't. BMJ 1996;312:71-2.  Back to cited text no. 3
    
4.Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Washington, DC: National Academy Press; 1990.  Back to cited text no. 4
    
5.Horlocker TT, Brown DR. Evidence-based medicine: Haute couture or the Emperor's new clothes? Anesth Analg 2005;100:1807-10.  Back to cited text no. 5
    
6.Dellinger RP, Carlet JM, Masur H, Gerlach H, Calandra T, Cohen J, et al. Surviving Sepsis Campaign Management Guidelines Committee. Surviving Sepsis Campaign guidelines for management of severe sepsis and septic shock. Crit Care Med 2004;32:858-73.  Back to cited text no. 6
    
7.Van Den Berghe G, Wouters P, Weekers F, Verwaest C, Bruyninckx F, Schetz M, et al. Intensive insulin therapy in critically ill patients. N Engl J Med 2001;345:1359-67.  Back to cited text no. 7
    
8.Mangano DT, Tudor IC, Dietzel C for the Multicenter Study of Perioperative Ischemia Research Group and the Ischemia Research and Education Foundation. The Risk associated with aprotinin in cardiac surgery. N Engl J Med 2006;354:353-65.  Back to cited text no. 8
    
9.Ioannidis JP. Why most published research findings are false. PLoS Med 2005;2:e124.  Back to cited text no. 9
    
10.Vandenbroucke JP. When are observational studies as credible as randomised trials? Lancet 2004;363:1728-31.  Back to cited text no. 10
    
11.Ioannidis JP. Contradicted and Initially Stronger Effects in Highly Cited Clinical Research. JAMA 2005;294:218-28.  Back to cited text no. 11
    
12.Colhoun HM, McKeigue PM, Davey Smith G. Problems of reporting genetic associations with complex outcomes. Lancet 2003;361:865-7.  Back to cited text no. 12
    
13.Jackson R, Feder G. Guidelines for Clinical Guidelines. BMJ 1998;317:427-8.  Back to cited text no. 13
    
14.Rosser WW, Davis D, Gilbart E. Assessing Guidelines for Use in Family Practice. J Fam Pract 2001;50:969-73.  Back to cited text no. 14
    
15.Feinstein AR, Horwitz RI. Problems in the "evidence" of "evidence-based medicine." Am J Med 1997;103:529-35.  Back to cited text no. 15
    
16.Sacks HS, Berrier J, Reitman D, Ancona-Berk VA, Chalmers TC. Meta-analysis of randomized controlled trials. N Engl J Med 1987;316:450-5.  Back to cited text no. 16
    
17.Detsky AS, Naylor CD, O'Rourke K, McGeer AJ, L'Abbé KA. Incorporating variations in the quality of individual randomized trials into meta-analysis. J Clin Epidemiol 1992;45:255-65.  Back to cited text no. 17
    
18.Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S. Assessing the quality of randomized controlled trials: An annotated bibliography of scales and checklists. Control Clin Trials 1995;16:62-7.  Back to cited text no. 18
    
19.Elmore JG, Feinstein AR. A bibliography of publications on observer variability (final instalment). J Clin Epidemiol 1992;45:567-80.  Back to cited text no. 19
    
20.Handoll HH, Farrar MJ, McBirnie J, Tytherleigh-Strong G, Milne AA, Gillespie WJ. Heparin, low molecular weight heparin and physical methods for preventing deep vein thrombosis and pulmonary embolism following surgery for hip fractures. Cochrane Database Syst Rev 2002;4:CD000305.  Back to cited text no. 20
    
21.Annis D. The limits of evidence-based medicine. Digital Bits Skeptics. 2008; Article ID: 1244. Available from: http://www.dbskeptic.com/2008/08/17/the-limits-of-evidence-based-medicine/ [Last accessed on 2010 Oct 29].  Back to cited text no. 21
    
22.Costantini O, Papp KK, Como J, Aucott J, Carlson MD, Aron DC. Attitudes of faculty, housestaff, and medical students toward clinical practice guidelines. Acad Med 1999;74:1138-43.  Back to cited text no. 22
    
23.Charlton BG. Restoring the balance: Evidence-based medicine put in its place. J Eval Clin Pract 1997;3:87-98.  Back to cited text no. 23
    

Correspondence Address:
Mukul C Kapoor
16 Church Road, Delhi Cantt - 110 010
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/0971-9784.74392




This article has been cited by:
1. Mahboobi H, Mansoori F, Jahanshahi KA, Khorgoei T. Have we something to replace evidence based medicine? Ann Card Anaesth 2011;14(3):246.



 
