ISSN: 2471-9919
Evidence based Medicine and Practice
Editorial
Evidence Based Medicine and Practice, Vol 2(1)
DOI: 10.4172/2471-9919.1000e112

Critical Appraisal of Research Designs

Roever L

Department of Clinical Research, Federal University of Uberlandia, Uberlândia, Brazil

Received: 11-Dec-2015 / Accepted: 18-Dec-2015 / Published: 25-Dec-2015


Introduction

For each clinical question there is a research design best suited to answer it (Table 1). It is necessary to identify the advantages and disadvantages of each type of study, and to assess its level of evidence and grade of recommendation [1-4].

Clinical question | Research designs
Diagnosis | Cross-sectional study, controlled trial
Therapy | Double-blind RCT; systematic review, meta-analysis, network meta-analysis
Prognosis | Cohort study, case-control study, case series
Harm/etiology | Cohort study, case-control study, case series
Prevention | Randomized controlled trial, cohort study
Quality improvement | Randomized controlled trial
Incidence | Cohort study
Prevalence | Cross-sectional study

Table 1: Clinical question and research designs.
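Table 1 amounts to a simple lookup from clinical question to preferred designs, and it can be expressed as a small data structure. The sketch below is a hypothetical illustration in Python; the identifiers PREFERRED_DESIGNS and designs_for are ours, not from the source:

```python
# Lookup of preferred research designs per clinical question, transcribed
# from Table 1. All identifiers are illustrative, not from the source.
PREFERRED_DESIGNS = {
    "diagnosis": ["cross-sectional study", "controlled trial"],
    "therapy": ["double-blind RCT", "systematic review", "meta-analysis",
                "network meta-analysis"],
    "prognosis": ["cohort study", "case-control study", "case series"],
    "harm/etiology": ["cohort study", "case-control study", "case series"],
    "prevention": ["randomized controlled trial", "cohort study"],
    "quality improvement": ["randomized controlled trial"],
    "incidence": ["cohort study"],
    "prevalence": ["cross-sectional study"],
}

def designs_for(question: str) -> list[str]:
    """Return the research designs best suited to a clinical question (Table 1)."""
    return PREFERRED_DESIGNS.get(question.strip().lower(), [])

print(designs_for("Therapy"))
# ['double-blind RCT', 'systematic review', 'meta-analysis', 'network meta-analysis']
```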

Table 2 presents the checklist needed for a critical appraisal of these research designs [1-4].

Appraisal questions
Is there a clear statement of the aims and objectives of each stage of the research, and was there an innovation? Did the authors clearly define the aims and objectives of the project? Were the aims and objectives appropriate?
Was an innovation being considered at the outset, or did one arise during the course of the project?
Was the action research relevant to practitioners and users? Did it address local issues? Does it contribute something new to the understanding of the issues?
Was it relevant to the experience of those participating? Is further research suggested? Is it stated how the action research will influence policy and practice in general?
Were the phases of the project clearly outlined? Was a logical process in evidence, including problem identification, planning, action (the change or intervention that was implemented), and evaluation? Did these influence the process and progress of the project?
Were the participants and stakeholders clearly described and justified? Did the project focus on health professionals, health care administrators, or health care teams? Is it stated who was selected and by whom for each phase of the project?
Is it discussed how participants were selected for each phase of the project? Was consideration given to the local context while implementing change? Is it clear which context was selected, and why, for each phase of the project?
Is there a critical examination of values, beliefs and power relationships? Is there a discussion of who would be affected by the change and in what way? Was the context appropriate for this type of study?
Was the relationship between researchers and participants adequately considered? Is the level and extent of participation clearly defined for each stage? Are the types of relationships that evolved over the course of the project acknowledged? Did the researchers and participants critically examine their own roles, potential biases and influences, i.e., were they reflexive?
Was the project managed appropriately? Were key individuals approached and involved where appropriate? Did those involved appear to have the requisite skills for carrying out the various tasks required to implement change and research?
Was there a feasible implementation plan that was consistent with the skills, resources and time available? Was this adjusted in response to local events and participants? Is there a clear discussion of the actions taken (the change or the intervention) and the methods used to evaluate them?
Were ethical issues encountered and how were they dealt with? Was consideration given to participants, researchers and those affected by the action research process? Was consideration given to underlying professional values? How were these explored and realized in practice? Were confidentiality and informed consent addressed?
Was the study adequately funded/supported? Were the assessments of cost and resources realistic? Were there any conflicts of interest?
Were data collected in a way that addressed the research issue? Were appropriate methods and techniques used to answer research questions? Is it clear how data were collected, and why, for each phase of the project? Were data collection and record-keeping systematic? If methods were modified during data collection, is an explanation provided?
Were steps taken to promote the rigour of the findings? Were differing perspectives on issues sought? Did the researchers undertake method and theoretical triangulation? Were the key findings of the project fed back to participants at key stages? How was their feedback used? Do the researchers offer a reflexive account?
Were data analyses sufficiently rigorous? Were procedures for analysis described? Were the analyses systematic? What steps were made to guard against selectivity? Do the researchers explain how the data presented were selected from the original sample? Are arguments, themes, concepts and categories derived from the data? Are points of tension, contrast or contradiction identified? Are competing arguments presented?
Was the study design flexible and responsive? Were findings used to generate plans and ideas for change? Was the approach adapted to the circumstances and issues of real-life settings, i.e., are justifications offered for changes in plan?
Are there clear statements of the findings and outcomes of each phase of the study? Are the findings and outcomes presented logically for each phase of the study? Are they explicit and easy to understand? Are they presented systematically and critically – can the reader judge the range of evidence/research being used? Are there discussions of personal and practical developments?
Do the researchers link the data that are presented to their own commentary and interpretation? Are justifications for methods of reflection provided? Is there a discussion of how participants were engaged in reflection? Is there a clear distinction made between the data and their interpretation? Have researchers critically examined their own and others’ roles in the interpretation of data? Is sufficient evidence presented to satisfy the reader about the evidence and the conclusions?
Is the connection with an existing body of knowledge made clear? Is there a range of sources of ideas, categories and interpretations? Are theoretical and ideological insights offered?
Is there discussion of the extent to which aims and objectives were achieved at each stage? Have action research objectives been met? Are the reasons for successes and failures analysed?
Are the findings of the study transferable? Could the findings be transferred to other settings? Is the context of the study clearly described?
Have the authors articulated the criteria upon which their own work is to be read/judged? Have the authors justified the perspective from which the proposal or report should be interpreted?
Are conflicts of interest declared?
Rate the overall methodological quality of the study, using the following as a guide:
High quality (++): Majority of criteria met. Little or no risk of bias.
Acceptable (+): Most criteria met. Some flaws in the study with an associated risk of bias.
Low quality (-): Either most criteria not met, or significant flaws relating to key aspects of study design.
Reject (0): Poor quality study with significant flaws. Wrong study type. Not relevant to guideline.

Table 2: Critical appraisal of research designs.
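The rating scale at the foot of Table 2 can be applied mechanically once each appraisal question has been answered. The sketch below is a minimal Python illustration, assuming a count of criteria met plus reviewer flags for critical flaws; the numeric cut-offs for "majority" and "most" are our assumptions, since the scale does not quantify them:

```python
def overall_rating(criteria_met: int, total_criteria: int,
                   significant_flaws: bool = False,
                   wrong_or_irrelevant: bool = False) -> str:
    """Apply the Table 2 rating scale (++ / + / - / 0).

    The 0.8 and 0.5 cut-offs for 'majority' and 'most' of criteria met
    are assumptions; the source does not define them numerically.
    """
    if wrong_or_irrelevant:
        return "0 (Reject): wrong study type or not relevant to the guideline"
    if significant_flaws:
        return "- (Low quality): significant flaws in key aspects of study design"
    fraction = criteria_met / total_criteria
    if fraction >= 0.8:
        return "++ (High quality): little or no risk of bias"
    if fraction >= 0.5:
        return "+ (Acceptable): some flaws with an associated risk of bias"
    return "- (Low quality): most criteria not met"

print(overall_rating(18, 20))  # ++ (High quality): little or no risk of bias
```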


References

  1. Guyatt G, Meade MO, Cook DJ, Rennie D (2014) Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice (3rd edn). McGraw-Hill Education, New York.
  2. Sackett DL, Richardson WS, Rosenberg W, Haynes RB (2010) Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone, Edinburgh.
  3. Critical Appraisal Skills Programme (CASP). Public Health Resource Unit, Institute of Health Sciences, Oxford.
  4. Greenhalgh T, Robert G, Bate P, Macfarlane F, Kyriakidou O (2005) Diffusion of Innovations in Health Service Organisations: A Systematic Literature Review. Blackwell Publishing Ltd, Oxford.

Citation: Roever L (2016) Critical Appraisal of Research Designs. Evidence Based Medicine and Practice 1: e112. DOI: 10.4172/2471-9919.1000e112

Copyright: © 2016 Roever L. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
