
J Korean Med Sci. 2022 Apr 25; 37(16).


A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

ABSTRACT

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked or, if not overlooked, framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought out, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article therefore aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) supported by evidence-based logical reasoning 10 ; and 6) predictive. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory on which to base the hypotheses, inductive reasoning based on specific observations or findings forms more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Table 1. Types of research questions and hypotheses

Quantitative research questions:
  • Descriptive research questions
  • Comparative research questions
  • Relationship research questions

Quantitative research hypotheses:
  • Simple hypothesis
  • Complex hypothesis
  • Directional hypothesis
  • Non-directional hypothesis
  • Associative hypothesis
  • Causal hypothesis
  • Null hypothesis
  • Alternative hypothesis
  • Working hypothesis
  • Statistical hypothesis
  • Logical hypothesis
  • Hypothesis-testing

Qualitative research questions:
  • Contextual research questions
  • Descriptive research questions
  • Evaluation research questions
  • Explanatory research questions
  • Exploratory research questions
  • Generative research questions
  • Ideological research questions
  • Ethnographic research questions
  • Phenomenological research questions
  • Grounded theory questions
  • Qualitative case study questions

Qualitative research hypotheses:
  • Hypothesis-generating

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured (descriptive research questions). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable (comparative research questions), 1 , 5 , 14 or elucidate trends and interactions among variables (relationship research questions). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2.

Table 2. Quantitative research questions

Descriptive research question
- Measures responses of subjects to variables
- Presents variables to measure, analyze, or assess
Example: What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?

Comparative research question
- Clarifies the difference between one group with an outcome variable and another group without the outcome variable
Example: Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
- Compares the effects of variables
Example: How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?

Relationship research question
- Defines trends, associations, relationships, or interactions between a dependent variable and an independent variable
Example: Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable (simple hypothesis) or 2) between two or more independent and dependent variables (complex hypothesis). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome (directional hypothesis). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies (non-directional hypothesis). 4 In addition, hypotheses can 1) define interdependency between variables (associative hypothesis), 4 2) propose an effect on the dependent variable from manipulation of the independent variable (causal hypothesis), 4 3) state a negative relationship between two variables (null hypothesis), 4 , 11 , 15 4) replace the null hypothesis if it is rejected (alternative hypothesis), 15 5) explain the relationship of phenomena to possibly generate a theory (working hypothesis), 11 6) involve quantifiable variables that can be tested statistically (statistical hypothesis), 11 or 7) express a relationship whose interlinks can be verified logically (logical hypothesis). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3.

Table 3. Quantitative research hypotheses

Simple hypothesis
- Predicts a relationship between a single dependent variable and a single independent variable
Example: If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered.

Complex hypothesis
- Foretells a relationship between two or more independent and dependent variables
Example: The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher the survival rate (1 dependent variable).

Directional hypothesis
- Identifies the study direction based on theory towards a particular outcome to clarify the relationship between variables
Example: Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects.

Non-directional hypothesis
- The nature of the relationship between two variables or the exact study direction is not identified
- Does not involve a theory
Example: Women and men are different in terms of helpfulness. (Exact study direction is not identified.)

Associative hypothesis
- Describes variable interdependency
- Change in one variable causes change in another variable
Example: A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable).

Causal hypothesis
- An effect on the dependent variable is predicted from manipulation of the independent variable
Example: A change to a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient.

Null hypothesis
- A negative statement indicating no relationship or difference between 2 variables
Example: There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2).

Alternative hypothesis
- Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables
Example: The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2).

Working hypothesis
- A hypothesis that is initially accepted for further research to produce a feasible theory
Example: Dairy cows fed with concentrates of different formulations will produce different amounts of milk.

Statistical hypothesis
- An assumption about the value of a population parameter or the relationship among several population characteristics
- Validity is tested by a statistical experiment or analysis
Example: The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2.
Example: There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan.

Logical hypothesis
- Offers or proposes an explanation with limited or no extensive evidence
Example: If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be lower.

Hypothesis-testing (quantitative hypothesis-testing research)
- Quantitative research uses deductive reasoning.
- This involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses.
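To make the null/alternative distinction concrete, the sketch below tests a null hypothesis like the pain-score example in Table 3 on simulated data. The drug effect sizes and sample sizes are our own illustrative assumptions, and a normal approximation stands in for a full t-test:

```python
import random
from statistics import NormalDist, mean, stdev

# Hypothetical example (ours, not from the article): pain scores under a
# current drug vs. a new drug.
# H0 (null): mean pain score does not differ between the two drugs.
# H1 (alternative): the new drug yields lower mean pain scores.

random.seed(42)
current = [random.gauss(5.0, 1.0) for _ in range(200)]  # current drug
new = [random.gauss(4.5, 1.0) for _ in range(200)]      # new drug

# Welch-type test statistic; with 200 subjects per group, the normal
# approximation to the t distribution is adequate for a sketch.
se = (stdev(current) ** 2 / len(current) + stdev(new) ** 2 / len(new)) ** 0.5
z = (mean(current) - mean(new)) / se

# One-sided p-value for H1 (current mean > new mean)
p = 1 - NormalDist().cdf(z)
print(f"z = {z:.2f}, one-sided p = {p:.4g}")
if p < 0.05:
    print("Reject H0 in favor of the alternative hypothesis.")
else:
    print("Fail to reject H0.")
```

This is the deductive, hypothesis-testing loop described above: state H0 and H1, collect data, analyze, and draw a conclusion that validates or nullifies the hypothesis.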

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated rather than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4, and the definition of qualitative hypothesis-generating research in Table 5.

Table 4. Qualitative research questions

Contextual research question
- Asks about the nature of what already exists
- Individuals or groups function to further clarify and understand the natural context of real-world problems
Example: What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems)

Descriptive research question
- Aims to describe a phenomenon
Example: What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities?

Evaluation research question
- Examines the effectiveness of existing practice or accepted frameworks
Example: How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility?

Explanatory research question
- Clarifies a previously studied phenomenon and explains why it occurs
Example: Why is there an increase in teenage pregnancy (phenomenon) in Tanzania?

Exploratory research question
- Explores areas that have not been fully investigated to gain a deeper understanding of the research problem
Example: What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic?

Generative research question
- Develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions
Example: How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative?

Ideological research question
- Aims to advance specific ideas or ideologies of a position
Example: Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care?

Ethnographic research question
- Clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings
Example: What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis?

Phenomenological research question
- Seeks to know more about the phenomena that have impacted an individual
Example: What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual)

Grounded theory question
- Focuses on social processes, asking about what happens and how people interact, or uncovering social relationships and behaviors of groups
Example: What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed?

Qualitative case study question
- Assesses a phenomenon using different sources of data to answer “why” and “how” questions
- Considers how the phenomenon is influenced by its contextual situation
Example: How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan?
Table 5. Qualitative research hypotheses

Hypothesis-generating (qualitative hypothesis-generating research)
- Qualitative research uses inductive reasoning.
- This involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis.
- Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach.

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What. These research questions use exploratory verbs such as explore or describe. These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are expressed as a clear statement concerning the problem to be investigated. Unlike in quantitative research, where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 These frameworks address the following elements. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
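As a minimal sketch of how the five PICOT elements fit together (our own illustration; the class, wording, and example question are assumptions, not a published template):

```python
from dataclasses import dataclass

@dataclass
class PICOT:
    population: str    # P: population/patients/problem
    intervention: str  # I: intervention or indicator being studied
    comparison: str    # C: comparison group
    outcome: str       # O: outcome of interest
    timeframe: str     # T: timeframe of the study

    def draft_question(self) -> str:
        """Assemble the five elements into one draft research question."""
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")

# Hypothetical worked example
q = PICOT(
    population="adults with stage 1 hypertension",
    intervention="a low-sodium diet",
    comparison="a usual diet",
    outcome="systolic blood pressure",
    timeframe="12 weeks",
)
print(q.draft_question())
```

The point of the framework is visible in the structure: if any field is missing or vague, the assembled question is incomplete, which is exactly the failure mode the FINER and FINERMAPS criteria are meant to catch.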

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6) 16 and qualitative research (Table 7), 17 and show how to transform these ambiguous research questions and hypotheses into clear and good statements.

Table 6. Unclear and weak versus clear and good statements in quantitative research

Research question
- Unclear and weak statement (Statement 1): Which is more effective between smoke moxibustion and smokeless moxibustion?
- Clear and good statement (Statement 2): “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.”
- Points to avoid: 1) vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice

Hypothesis
- Unclear and weak statement (Statement 1): The smoke moxibustion group will have higher cephalic presentation.
- Clear and good statement (Statement 2): “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.”
- Points to avoid: 1) unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes

Research objective
- Unclear and weak statement (Statement 1): To determine which is more effective between smoke moxibustion and smokeless moxibustion.
- Clear and good statement (Statement 2): “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.”
- Points to avoid: 1) poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

Table 7. Unclear and weak versus clear and good statements in qualitative research

Research question
- Unclear and weak statement (Statement 1): Does disrespect and abuse (D&A) occur in childbirth in Tanzania?
- Clear and good statement (Statement 2): How does disrespect and abuse (D&A) occur, and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania?
- Points to avoid: 1) ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis

Hypothesis
- Unclear and weak statement (Statement 1): Disrespect and abuse (D&A) occur in childbirth in Tanzania.
- Clear and good statement (Statement 2): Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania.
- Points to avoid: 1) statements simply expressing facts; 2) insufficiently described concepts or variables

Research objective
- Unclear and weak statement (Statement 1): To describe disrespect and abuse (D&A) in childbirth in Tanzania.
- Clear and good statement (Statement 2): “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.”
- Points to avoid: 1) statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives

a This statement is a direct quote from Shimoda et al. 17

b The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1.

[Fig. 1: jkms-37-e121-g001.jpg]

Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, research questions that compare variables and their relationships are used more frequently in survey projects, whereas hypotheses are more common in experiments.

Hypotheses are constructed based on the variables identified, and take the form of an if-then statement following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition to be deduced from theory, with independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2 (jkms-37-e121-g002.jpg): algorithm for building research questions and hypotheses]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by statistical experiment or analysis (chi-square test, Student's t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
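The between-gender comparison in this example can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: the contingency counts are invented, and only the chi-squared test of independence is shown (the t-test and logistic regression would follow the same pattern with their own statistics).

```python
# Hypothetical 2x2 contingency table for a between-gender comparison of a
# categorical variable (e.g., full-time employment: yes/no). Counts are
# invented purely for this sketch.
observed = [[30, 20],   # women: employed, not employed
            [15, 35]]   # men:   employed, not employed

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected counts under the null hypothesis of no gender difference
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Chi-squared statistic: sum of (observed - expected)^2 / expected
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))

# A 2x2 table has (2-1)*(2-1) = 1 degree of freedom; the 0.05 critical
# value of the chi-squared distribution with 1 df is 3.841
df = (2 - 1) * (2 - 1)
reject_null = chi2 > 3.841
print(round(chi2, 2), reject_null)
```

A statistic above the critical value leads to rejecting the null hypothesis of no association, which is the logic behind the published comparison quoted above.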

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Carefully constructed research questions and hypotheses help avoid unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.


A guide to critical appraisal of evidence

Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN

Ellen Fineout-Overholt is the Mary Coulter Dowdy Distinguished Professor of Nursing at the University of Texas at Tyler School of Nursing, Tyler, Tex.

The author has disclosed no financial relationships related to this article.

Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers successfully determine what is known about a clinical issue. Patient outcomes are improved when clinicians apply a body of evidence to daily practice.

How do nurses assess the quality of clinical research? This article outlines a stepwise approach to critical appraisal of research studies' worth to clinical practice: rapid critical appraisal, evaluation, synthesis, and recommendation. When critical care nurses apply a body of valid, reliable, and applicable evidence to daily practice, patient outcomes are improved.


Critical care nurses can best explain the reasoning for their clinical actions when they understand the worth of the research supporting their practices. In critical appraisal, clinicians assess the worth of research studies to clinical practice. Given that achieving improved patient outcomes is the reason patients enter the healthcare system, nurses must be confident their care techniques will reliably achieve best outcomes.

Nurses must verify that the information supporting their clinical care is valid, reliable, and applicable. Validity of research refers to the quality of the research methods used, or how well the researchers conducted the study. Reliability of research means similar outcomes can be achieved when the care techniques of a study are replicated by clinicians. Applicability of research means the study was conducted in a sample similar to the patients to whom the findings will be applied. These three criteria determine a study's worth in clinical practice.

Appraising the worth of research requires a standardized approach. This approach applies to both quantitative research (research that deals with counting things and comparing those counts) and qualitative research (research that describes experiences and perceptions). The word critique has a negative connotation. In the past, some clinicians were taught that studies with flaws should be discarded. Today, all valid and reliable research is considered informative to what we understand as best practice. Therefore, the author developed the critical appraisal methodology described here, which enables clinicians to determine quickly which evidence is worth keeping and which must be discarded because of poor validity, reliability, or applicability.

Evidence-based practice process

The evidence-based practice (EBP) process is a seven-step problem-solving approach that begins with data gathering (see Seven steps to EBP). During daily practice, clinicians gather data supporting inquiry into a particular clinical issue (Step 0). The description is then framed as an answerable question (Step 1) using the PICOT question format (Population of interest; Issue of interest or intervention; Comparison to the intervention; desired Outcome; and Time for the outcome to be achieved).1 Consistently using the PICOT format helps ensure that all elements of the clinical issue are covered. Next, clinicians conduct a systematic search to gather data answering the PICOT question (Step 2). Using the PICOT framework, clinicians can systematically search multiple databases to find available studies to help determine the best practice to achieve the desired outcome for their patients. When the systematic search is completed, the work of critical appraisal begins (Step 3). The known group of valid and reliable studies that answers the PICOT question is called the body of evidence and is the foundation for the best practice implementation (Step 4). Next, clinicians evaluate integration of best evidence with clinical expertise and patient preferences and values to determine if the outcomes in the studies are realized in practice (Step 5). Because healthcare is a community of practice, it is important that experiences with evidence implementation be shared, whether the outcome is what was expected or not. This enables critical care nurses concerned with similar care issues to better understand what has been successful and what has not (Step 6).

Critical appraisal of evidence

The first phase of critical appraisal, rapid critical appraisal, begins with determining which studies will be kept in the body of evidence. All valid, reliable, and applicable studies on the topic should be included. This is accomplished using design-specific checklists with key markers of good research. When clinicians determine a study is one they want to keep (a “keeper” study) and that it belongs in the body of evidence, they move on to phase 2, evaluation. 2

In the evaluation phase, the keeper studies are put together in a table so that they can be compared as a body of evidence, rather than individual studies. This phase of critical appraisal helps clinicians identify what is already known about a clinical issue. In the third phase, synthesis, certain data that provide a snapshot of a particular aspect of the clinical issue are pulled out of the evaluation table to showcase what is known. These snapshots of information underpin clinicians' decision-making and lead to phase 4, recommendation. A recommendation is a specific statement based on the body of evidence indicating what should be done—best practice. Critical appraisal is not complete without a specific recommendation. Each of the phases is explained in more detail below.

Phase 1: Rapid critical appraisal. Rapid critical appraisal involves using two tools that help clinicians determine if a research study is worthy of keeping in the body of evidence. The first tool, General Appraisal Overview for All Studies (GAO), covers the basics of all research studies (see Elements of the General Appraisal Overview for All Studies). Sometimes, clinicians find gaps in knowledge about certain elements of research studies (for example, sampling or statistics) and need to review some content. Conducting an internet search for resources that explain how to read a research paper, such as an instructional video or step-by-step guide, can be helpful. Finding basic definitions of research methods often helps resolve identified gaps.

To accomplish the GAO, it is best to begin by finding out why the study was conducted and how it answers the PICOT question (for example, does it provide information critical care nurses want to know from the literature). If the study purpose helps answer the PICOT question, then the type of study design is evaluated. The study design is compared with the hierarchy of evidence for the type of PICOT question. The higher the design falls within the hierarchy or levels of evidence, the more confidence nurses can have in its findings, if the study was conducted well.3,4 Next, find out what the researchers wanted to learn from their study. These are called the research questions or hypotheses. Research questions are just what they imply: insufficient information from theories or the literature is available to guide an educated guess, so a question is asked. Hypotheses are reasonable expectations, guided by understanding from theory and other research, that predict what will be found when the research is conducted. The research questions or hypotheses provide the purpose of the study.

Next, the sample size is evaluated. Expectations of sample size exist for every study design. As a rule, quantitative study designs operate best when the sample size is large enough to establish that relationships do not exist by chance. In general, the more participants in a study, the more confidence in the findings. Qualitative designs operate best with fewer people in the sample because these designs represent a deeper dive into the understanding or experience of each person in the study.5 It is always important to describe the sample, as clinicians need to know whether the study sample resembles their patients. It is equally important to identify the major variables in the study and how they are defined, because this helps clinicians best understand what the study is about.
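The intuition that more participants yield more confidence in quantitative findings can be made concrete with the standard error of the mean, which shrinks with the square root of the sample size. A small sketch (the standard deviation of 10 is an arbitrary illustrative value):

```python
import math

def standard_error(sd, n):
    """Standard error of the mean: the uncertainty in a sample mean
    shrinks with the square root of the sample size."""
    return sd / math.sqrt(n)

# Assume a measurement with standard deviation 10 (illustrative only)
sd = 10.0
for n in (25, 100, 400):
    print(n, standard_error(sd, n))
# Quadrupling the sample size halves the standard error:
# n=25 -> 2.0, n=100 -> 1.0, n=400 -> 0.5
```

This is one statistical reason larger quantitative samples support firmer conclusions, while qualitative depth does not depend on this kind of precision.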

The final step in the GAO is to consider the analyses that answer the study research questions or confirm the study hypothesis. This is another opportunity for clinicians to learn, as learning about statistics in healthcare education has traditionally focused on conducting statistical tests as opposed to interpreting statistical tests. Understanding what the statistics indicate about the study findings is an imperative of critical appraisal of quantitative evidence.

The second tool is one of the variety of rapid critical appraisal checklists that speak to validity, reliability, and applicability of specific study designs, which are available at varying locations (see Critical appraisal resources ). When choosing a checklist to implement with a group of critical care nurses, it is important to verify that the checklist is complete and simple to use. Be sure to check that the checklist has answers to three key questions. The first question is: Are the results of the study valid? Related subquestions should help nurses discern if certain markers of good research design are present within the study. For example, identifying that study participants were randomly assigned to study groups is an essential marker of good research for a randomized controlled trial. Checking these essential markers helps clinicians quickly review a study to check off these important requirements. Clinical judgment is required when the study lacks any of the identified quality markers. Clinicians must discern whether the absence of any of the essential markers negates the usefulness of the study findings. 6-9


The second question is: What are the study results? This is answered by reviewing whether the study found what it was expecting to and if those findings were meaningful to clinical practice. Basic knowledge of how to interpret statistics is important for understanding quantitative studies, and basic knowledge of qualitative analysis greatly facilitates understanding those results. 6-9

The third question is: Are the results applicable to my patients? Answering this question involves consideration of the feasibility of implementing the study findings into the clinicians' environment as well as any contraindication within the clinicians' patient populations. Consider issues such as organizational politics, financial feasibility, and patient preferences. 6-9

When these questions have been answered, clinicians must decide whether to keep the particular study in the body of evidence. Once the final group of keeper studies is identified, clinicians are ready to move into the evaluation phase of critical appraisal.6-9

Phase 2: Evaluation. The goal of evaluation is to determine how studies within the body of evidence agree or disagree by identifying common patterns of information across studies. For example, an evaluator may compare whether the same intervention is used or if the outcomes are measured in the same way across all studies. A useful tool to help clinicians accomplish this is an evaluation table. This table serves two purposes: first, it enables clinicians to extract data from the studies and place the information in one table for easy comparison with other studies; and second, it eliminates the need for further searching through piles of periodicals for the information. (See Bonus Content: Evaluation table headings.) Although the information for each of the columns may not be what clinicians consider as part of their daily work, the information is important for them to understand about the body of evidence so that they can explain the patterns of agreement or disagreement they identify across studies. Further, the in-depth understanding of the body of evidence from the evaluation table helps with discussing the relevant clinical issue to facilitate best practice. Their discussion comes from a place of knowledge and experience, which affords the most confidence. The patterns and in-depth understanding are what lead to the synthesis phase of critical appraisal.

The key to a successful evaluation table is simplicity. Entering data into the table in a simple, consistent manner offers more opportunity for comparing studies.6-9 For example, using abbreviations rather than complete sentences in all columns except the final one allows for ease of comparison. An example might be the dependent variable of depression defined as “feelings of severe despondency and dejection” in one study and as “feeling sad and lonely” in another study.10 Because these are two different definitions, they need to be treated as different dependent variables. Clinicians must use their clinical judgment to discern that these dependent variables require different names and abbreviations, and to decide how this distinction furthers their comparison across studies.


Sample and theoretical or conceptual underpinnings are important to understanding how studies compare. Similar samples and settings across studies increase agreement. Several studies with the same conceptual framework increase the likelihood of common independent and dependent variables. The findings of a study depend on the analyses conducted, which is why an analysis column is dedicated to recording the kind of analysis used (for example, the name of the statistical analyses for quantitative studies). Only statistics that help answer the clinical question belong in this column. The findings column must have a result for each of the analyses listed, reported as actual values, not in words. For example, if a clinician lists a t-test as a statistic in the analysis column, then a t-value should reflect whether the groups are different, along with a probability (P-value or confidence interval) that reflects statistical significance. The explanation for these results goes in the last column, which describes the worth of the research to practice. This column is much more flexible and contains other information such as the level of evidence, the study's strengths and limitations, any caveats about the methodology, or other aspects of the study that would be helpful to its use in practice. The final piece of information in this column is a recommendation for how the study would be used in practice. Each of the studies in the body of evidence that addresses the clinical question is placed in one evaluation table to facilitate the ease of comparing across the studies. This comparison sets the stage for synthesis.

Phase 3: Synthesis. In the synthesis phase, clinicians pull out key information from the evaluation table to produce a snapshot of the body of evidence. A table also is used here to feature what is known and help all those viewing the synthesis table come to the same conclusion. A hypothetical example table included here demonstrates that a music therapy intervention is effective in improving the outcome of oxygen saturation (SaO2) in six of the eight studies in the body of evidence that evaluated that outcome (see Sample synthesis table: Impact on outcomes). Simply using arrows to indicate effect offers readers a collective view of the agreement across studies that prompts action. Action may be to change practice, affirm current practice, or conduct research to strengthen the body of evidence by collaborating with nurse scientists.

When synthesizing evidence, there are at least two recommended synthesis tables: the level-of-evidence table and, for quantitative questions such as therapy questions, the impact-on-outcomes table, or, for “meaning” questions about human experience, a relevant-themes table. (See Bonus Content: Level of evidence for intervention studies: Synthesis of type.) The sample synthesis table also demonstrates a final column labeled synthesis that indicates agreement across the studies. Of the three outcomes, the most reliable for clinicians to see with music therapy is SaO2, with positive results in six out of eight studies. The second most reliable outcome would be reducing an increased respiratory rate (RR). Parental engagement has the least support as a reliable outcome, with only two of five studies showing positive results. Synthesis tables make the recommendation clear to all those who are involved in caring for that patient population. Although the two synthesis tables mentioned are a great start, the evidence may require more synthesis tables to adequately explain what is known. These tables are the foundation that supports clinically meaningful recommendations.
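The tallying behind a synthesis table like the one described can be sketched as follows. The per-study results are hypothetical, invented to mirror the counts in the text (SaO2 positive in six of eight studies, parental engagement in two of five):

```python
# Hypothetical synthesis of eight studies of a music therapy intervention.
# "+" = outcome improved, "-" = no effect, None = outcome not measured.
studies = [
    {"SaO2": "+", "RR": "+",  "Parental engagement": None},
    {"SaO2": "+", "RR": "+",  "Parental engagement": "-"},
    {"SaO2": "-", "RR": "+",  "Parental engagement": "+"},
    {"SaO2": "+", "RR": "-",  "Parental engagement": None},
    {"SaO2": "+", "RR": "+",  "Parental engagement": "-"},
    {"SaO2": "+", "RR": None, "Parental engagement": "+"},
    {"SaO2": "-", "RR": "-",  "Parental engagement": None},
    {"SaO2": "+", "RR": "+",  "Parental engagement": "-"},
]

def synthesize(studies):
    """Tally (positive results, studies measuring) per outcome."""
    tally = {}
    for study in studies:
        for outcome, result in study.items():
            if result is None:
                continue  # outcome not measured in this study
            pos, measured = tally.get(outcome, (0, 0))
            tally[outcome] = (pos + (result == "+"), measured + 1)
    return tally

tally = synthesize(studies)
# Rank outcomes by the proportion of studies reporting a positive result
ranked = sorted(tally, key=lambda o: tally[o][0] / tally[o][1], reverse=True)
print(tally)
print(ranked)
```

Ranking outcomes by the proportion of positive results reproduces the ordering described above: SaO2, then RR, then parental engagement, which is the agreement pattern a synthesis table is meant to make visible at a glance.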

Phase 4: Recommendation. Recommendations are definitive statements based on what is known from the body of evidence. For example, with an intervention question, clinicians should be able to discern from the evidence whether they will reliably get the desired outcome when they deliver the intervention as it was delivered in the studies. In the sample synthesis table, the recommendation would be to implement the music therapy intervention across all settings with the population and measure SaO2 and RR, with the expectation that both would be optimally improved with the intervention. When the synthesis demonstrates that studies consistently verify that an outcome occurs as a result of an intervention, but that intervention is not currently practiced, care is not best practice. Therefore, a firm recommendation to deliver the intervention and measure the appropriate outcomes must be made, which concludes critical appraisal of the evidence.

A recommendation that is off limits is to conduct more research, as this is not the focus of clinicians' critical appraisal. In the case of insufficient evidence to make a recommendation for practice change, the recommendation would be to continue current practice and monitor outcomes and processes until more reliable studies can be added to the body of evidence. Researchers who use the critical appraisal process may indeed identify gaps in knowledge, research methods, or analyses, for example, and may then recommend studies to fill those gaps. In this way, clinicians and nurse scientists work together to build relevant, efficient bodies of evidence that guide clinical practice.

Evidence into action

Critical appraisal helps clinicians understand the literature so they can implement it. Critical care nurses have a professional and ethical responsibility to make sure their care is based on a solid foundation of available evidence that is carefully appraised using the phases outlined here. Critical appraisal allows for decision-making based on evidence that demonstrates reliable outcomes. Any other approach to the literature is likely haphazard and may lead to misguided care and unreliable outcomes. 11 Evidence translated into practice should have the desired outcomes and their measurement defined from the body of evidence. It is also imperative that all critical care nurses carefully monitor care delivery outcomes to establish that best outcomes are sustained. With the EBP paradigm as the basis for decision-making and the EBP process as the basis for addressing clinical issues, critical care nurses can improve patient, provider, and system outcomes by providing best care.

Seven steps to EBP

Step 0–A spirit of inquiry to notice internal data that indicate an opportunity for positive change.

Step 1– Ask a clinical question using the PICOT question format.

Step 2–Conduct a systematic search to find out what is already known about a clinical issue.

Step 3–Conduct a critical appraisal (rapid critical appraisal, evaluation, synthesis, and recommendation).

Step 4–Implement best practices by blending external evidence with clinician expertise and patient preferences and values.

Step 5–Evaluate evidence implementation to see if study outcomes happened in practice and if the implementation went well.

Step 6–Share project results, good or bad, with others in healthcare.

Adapted from: Steps of the evidence-based practice (EBP) process leading to high-quality healthcare and best patient outcomes. © Melnyk & Fineout-Overholt, 2017. Used with permission.

Critical appraisal resources

  • The Joanna Briggs Institute http://joannabriggs.org/research/critical-appraisal-tools.html
  • Critical Appraisal Skills Programme (CASP) www.casp-uk.net/casp-tools-checklists
  • Center for Evidence-Based Medicine www.cebm.net/critical-appraisal
  • Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice. 3rd ed. Philadelphia, PA: Wolters Kluwer; 2015.

A full set of critical appraisal checklists is available in the appendices.

Bonus content!

This article includes supplementary online-exclusive material. Visit the online version of this article at www.nursingcriticalcare.com to access this content.

critical appraisal; decision-making; evaluation of research; evidence-based practice; synthesis



Back to Journals » Nursing: Research and Reviews » Volume 3


Conducting an article critique for a quantitative research study: perspectives for doctoral students and other novice readers


Authors: Vance DE, Talley M, Azuero A, Pearce PF, Christian BJ

Received 29 January 2013

Accepted for publication 12 March 2013

Published 22 April 2013. Volume 2013:3, Pages 67–75

DOI https://doi.org/10.2147/NRR.S43374

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

David E Vance,1 Michele Talley,1 Andres Azuero,1 Patricia F Pearce,2 Becky J Christian1

1School of Nursing, University of Alabama at Birmingham, Birmingham, AL, USA; 2Loyola University School of Nursing, New Orleans, LA, USA

Abstract: The ability to critically evaluate the merits of a quantitative design research article is a necessary skill for practitioners and researchers of all disciplines, including nursing, in order to judge the integrity and usefulness of the evidence and conclusions made in an article. In general, this skill is automatic for many practitioners and researchers who already possess a good working knowledge of research methodology, including: hypothesis development, sampling techniques, study design, testing procedures and instrumentation, data collection and data management, statistics, and interpretation of findings. For graduate students and junior faculty who have yet to master these skills, completing a formally written article critique can be a useful process to hone such skills. However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published critique examples, this article provides the practical points of conducting a formally written quantitative research article critique, while providing a brief example to demonstrate the principles and form.

Keywords: quantitative article critique, statistics, methodology, graduate students


  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students

  • Critical Appraisal of a quantitative paper
  • Critical Appraisal: Help
  • Critical Appraisal of a qualitative paper
  • Useful resources

Appraisal of a Quantitative paper: Top tips


  • Introduction

Critical appraisal of a quantitative paper (RCT)

This guide, aimed at health students, provides basic level support for appraising quantitative research papers. It's designed for students who have already attended lectures on critical appraisal. One framework for appraising quantitative research (based on reliability, internal and external validity) is provided and there is an opportunity to practise the technique on a sample article.

Please note this framework is for appraising one particular type of quantitative research, a Randomised Controlled Trial (RCT), which is defined as:

a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention. (CASP, 2020)
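The random-allocation step in that definition can be illustrated with a toy sketch; the participant names, group sizes, and random seed below are all invented:

```python
import random

# Ten invented participants randomly allocated to two trial arms
participants = [f"participant_{i}" for i in range(1, 11)]
rng = random.Random(42)  # fixed seed so the allocation is reproducible
rng.shuffle(participants)

# Split the shuffled list evenly into intervention and control arms
mid = len(participants) // 2
groups = {"intervention": participants[:mid], "control": participants[mid:]}

print({arm: len(members) for arm, members in groups.items()})  # 5 and 5
```

Real trials use more sophisticated schemes (block or stratified randomisation), but the principle is the same: chance, not the researcher, decides who receives the intervention.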

Support materials

  • Framework for reading quantitative papers (RCTs)
  • Critical appraisal of a quantitative paper PowerPoint

To practise following this framework for critically appraising a quantitative article, please look at the following article:

Marrero, D.G.  et al.  (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial',  AJPH Research , 106(5), pp. 949-956.

Critical Appraisal of a quantitative paper (RCT): practical example

  • Internal Validity
  • External Validity
  • Reliability Measurement Tool

How to use this practical example 

Using the framework, you can have a go at appraising a quantitative paper - we are going to look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Step 1. Take a quick look at the article.

Step 2. Click on the Internal Validity tab above - there are questions to help you appraise the article. Read the questions and look for the answers in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the other aspects of external validity and reliability.

Questioning the internal validity:

  • Randomisation: How were participants allocated to each group? Did a randomisation process take place?
  • Comparability of groups: How similar were the groups (e.g. age, sex, ethnicity)? Is this made clear?
  • Blinding (none, single, double or triple): Who was not aware of which group a patient was in (e.g. nobody; only the patient; patient and clinician; patient, clinician and researcher)? Was it feasible for more blinding to have taken place?
  • Equal treatment of groups: Were both groups treated in the same way?
  • Attrition: What percentage of participants dropped out? Did this adversely affect one group? Has this been evaluated?
  • Overall internal validity: Does the research measure what it is supposed to be measuring?

Questioning the external validity:

  • Attrition: Was everyone accounted for at the end of the study? Was any attempt made to contact drop-outs?
  • Sampling approach: How was the sample selected? Was it based on probability or non-probability? What was the approach (e.g. simple random, convenience)? Was this an appropriate approach?
  • Sample size (power calculation): How many participants? Was a sample size calculation performed? Did the study pass?
  • Exclusion/inclusion criteria: Were the criteria set out clearly? Were they based on recognised diagnostic criteria?
  • Overall external validity: Can the results be applied to the wider population?

Questioning the reliability (measurement tool):

  • Internal consistency reliability (Cronbach's alpha): Has a Cronbach's alpha score of 0.7 or above been included?
  • Test re-test reliability correlation: Was the test repeated more than once? Were the same results received? Has a correlation coefficient been reported? Is it above 0.7?
  • Validity of measurement tool: Is it an established tool? If not, what has been done to check that it is reliable (pilot study, expert panel, literature review)? Criterion validity (test against other tools): has a criterion validity comparison been carried out? Was the score above 0.7?
  • Overall reliability: How consistent are the measurements?

Overall validity and reliability: Overall, how valid and reliable is the paper?
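The Cronbach's alpha check mentioned in the reliability questions can be computed directly from item scores. A minimal sketch, using invented Likert responses (not data from the practice article):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: list of items, each a list of respondents' scores."""
    k = len(item_scores)
    sum_item_vars = sum(pvariance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]  # per-respondent totals
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Three Likert items answered by five respondents (invented scores)
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")  # 0.86 here, above the 0.7 rule of thumb
```

An alpha of 0.7 or above is the conventional threshold for acceptable internal consistency, which is the figure the checklist above asks you to look for.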

  • << Previous: Critical Appraisal of a qualitative paper
  • Next: Useful resources >>
  • Last Updated: Jul 23, 2024 3:37 PM
  • URL: https://libguides.tees.ac.uk/critical_appraisal


Research essentials. How to critique quantitative research

Affiliation.

  • 1 Anglia Ruskin University.
  • PMID: 26558974
  • DOI: 10.7748/ncyp.27.9.12.s14

QUANTITATIVE RESEARCH is a systematic approach to investigating numerical data and involves measuring or counting attributes, that is, quantities. Through a process of transforming information that is collected or observed, the researcher can often describe a situation or event, answering the 'what' and 'how many' questions about a situation (Parahoo 2014).


Quantitative Research Questionnaire – Types & Examples

Published by Alvin Nicolas at August 20th, 2024 , Revised On August 21, 2024

Research is usually done to provide solutions to an ongoing problem. Wherever researchers see a gap, they launch research to enhance their knowledge and to meet the needs of others. When they want to study a problem from a subjective point of view, they choose qualitative research; when they study it from an objective point of view, they choose quantitative research.

There’s a fine line between subjectivity and objectivity. Qualitative research, related to subjectivity, assesses individuals’ personal opinions and experiences, while quantitative research, associated with objectivity, collects numerical data to derive results. The questionnaire is the most common instrument for collecting data in quantitative research.

Let’s discuss what a quantitative research questionnaire is, its types, methods of writing questions, and types of survey questions. By thoroughly understanding these key essential terms, you can efficiently create a professional and well-organised quantitative research questionnaire.

What is a Quantitative Research Questionnaire?

Quantitative research questionnaires are used during quantitative research. They are a well-structured set of questions designed to gather specific, closed-ended responses from participants. This allows researchers to gather numerical data and obtain a deeper understanding of a particular event or problem.

As you know, qualitative research questionnaires contain open-ended questions that allow the participants to express themselves freely, while quantitative research questionnaires contain closed-ended, specific questions, such as multiple-choice and Likert-scale items, to assess individuals’ behaviour.

Quantitative research questionnaires are usually used in research in various fields, such as psychology, medicine, chemistry, and economics.

Let’s see how you can write quantitative research questions by going through some examples:

  • How much fast food do British people consume per week?
  • What percentage of students in London live in hostels?

Types of Quantitative Research Questions With Examples

After learning what a quantitative research questionnaire is and what quantitative research questions look like, it’s time to thoroughly discuss the different types of quantitative research questions to explore this topic more.

Dichotomous Questions

Dichotomous questions allow only two possible answers. They are usually used when the answers are “Yes/No” or “True/False.” These questions significantly simplify the research process and help collect simple responses.

Example: Have you ever visited Istanbul?

Multiple Choice Questions

Multiple-choice questions have a list of possible answers for the participants to choose from. They help assess people’s general knowledge, and the data gathered by multiple-choice questions can be easily analysed.

Example: Which of the following is the capital of France?

Multiple Answer Questions

Multiple-answer questions are similar to multiple-choice questions; however, participants may select more than one answer. They are used when a question doesn’t have a single, specific answer.

Example: Which of the following movie genres are your favourite?

Likert Scale Questions

Likert scale questions are used when the preferences and emotions of the participants are measured from one extreme to another. The scales are usually applied to measure likelihood, frequency, satisfaction, and agreement. A Likert scale typically offers five response options.

Example: How satisfied are you with your job?

Semantic Differential Questions

Similar to Likert scales, semantic differential questions are also used to measure the emotions and attitudes of participants. The difference is that instead of labelled extremes such as “strongly agree” and “strongly disagree,” a pair of opposite terms is given as the endpoints, which can reduce bias.

Example: Please rate the services of our company.

Rank Order Questions

Rank-order questions are usually used to measure the preferences of participants efficiently. Multiple choices are given, and participants are asked to rank them from their own perspective. This helps to build a good participant profile.

Example: Rank the given books according to your interest.

Matrix Questions

Matrix questions are similar to Likert scales. With Likert scales, participants’ responses are measured through separate questions, while matrix questions compile multiple items into a single grid, which streamlines data collection.

Example: Rate the following activities that you do in daily life.
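Purely as an illustration, the closed-ended question types above could be modelled in code. The class and example options below are our own invention, not part of any survey library:

```python
from dataclasses import dataclass

@dataclass
class ClosedQuestion:
    """Toy model of a closed-ended survey question (illustrative only)."""
    text: str
    options: list
    multiple: bool = False  # True for multiple-answer questions

    def validate(self, answer):
        # Accept a single option, or a list of options when multiple=True
        chosen = answer if self.multiple else [answer]
        return all(a in self.options for a in chosen)

dichotomous = ClosedQuestion("Have you ever visited Istanbul?", ["Yes", "No"])
likert = ClosedQuestion(
    "How satisfied are you with your job?",
    ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"])
multi = ClosedQuestion(
    "Which of the following movie genres are your favourite?",
    ["Action", "Comedy", "Drama", "Horror"], multiple=True)

print(dichotomous.validate("Yes"))           # True
print(likert.validate("Somewhat agree"))     # False: not an offered option
print(multi.validate(["Action", "Drama"]))   # True
```

What makes all of these questions "closed-ended" is visible in the code: every valid response must come from the fixed option list, which is what makes the data straightforward to count and analyse.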

How To Write Quantitative Research Questions?

Quantitative research questions allow researchers to gather empirical data to answer their research problems. As we have discussed the different types of quantitative research questions above, it’s time to learn how to write the perfect quantitative research questions for a questionnaire and streamline your research process.

Here are the steps to follow to write quantitative research questions efficiently.

Step 1: Determine the Research Goals

The first step in writing quantitative research questions is to determine your research goals. Confirming your research goals helps you understand what kind of questions you need to create and for which audience. Determining the research goals early also reduces the need for later modifications to the questionnaire.

Step 2: Be Mindful About the Variables

There are two kinds of variables in research questions: independent and dependent. It is essential to decide which variable in your questions is dependent and which is independent. This helps you understand where to place emphasis and reduces the likelihood of redundant or vague questions.

Step 3: Choose the Right Type of Question

It is also important to determine the right type of questions to add to your questionnaire. Whether you want Likert scales, rank-order questions, or multiple-answer questions, choosing the right type of questions will help you measure individuals’ responses efficiently and accurately.

Step 4: Use Easy and Clear Language

Another thing to keep in mind while writing questions for a quantitative research questionnaire is to use easy and clear language. As you know, quantitative research is done to measure specific and simple responses in empirical form, and using easy and understandable language in questions makes a huge difference.

Step 5: Be Specific About The Topic

Always be mindful and specific about your topic. Avoid writing questions that divert from your topic because they can cause participants to lose interest. Use the basic terms of your selected topic and gradually go deep. Also, remember to align your topic and questions with your research objectives and goals.

Step 6: Appropriately Write Your Questions

When you have considered all of the above, it’s time to write your questions. Don’t rush the writing. Think about what each question will yield before adding it to the questionnaire. Be precise, and avoid overwriting.

Step 7: Gather Feedback From Peers

When you have finished writing questions, gather feedback from your researcher peers. Write down all the suggestions and feedback given by your peers. Don’t panic over the criticism of your questions. Remember that it’s still time to make necessary changes to the questionnaire before launching your campaign.

Step 8: Refine and Finalise the Questions

After gathering peer feedback, make necessary and appropriate changes to your questions. Be mindful of your research goals and topic. Try to modify your questions according to them. Also, be mindful of the theme and colour scheme of the questionnaire that you decided on. After refining the questions, finalise your questionnaire.

Types of Survey Questionnaires in Quantitative Research

Quantitative research questionnaires have closed-ended questions that allow researchers to measure accurate and specific responses from participants. They don’t contain open-ended questions as in qualitative research, where responses are gathered through interviews and focus groups. A good combination of question types is used in the quantitative research survey .

However, here are the types of surveys in quantitative research:

Descriptive Survey

The descriptive survey is used to obtain information about, and quantify, a particular research variable. The questions associated with descriptive surveys mostly start with “What is” and “How much”.

Example: A descriptive survey to measure how much money children spend to buy toys.

Comparative Survey

A comparative survey is used to establish a comparison between one or more dependent variables and two or more comparison groups. This survey aims to form a comparative relation between the variables under study. The structure of the question in a comparative survey is, “What is the difference in [dependent variable] between [two or more groups]?”.

Example: A comparative survey on the difference in political awareness between Eastern and Western citizens.

Relationship-Based Survey

A relationship-based survey is used to understand the relationship or association between two or more independent and dependent variables. The association between two or more variables is measured in the relationship-based survey. The structure of questions in a relationship-based survey is, “What is the relation [between or among] [independent variable] and [dependent variable]?”.

Example: What is the relationship between education and lifestyle in America?

Advantages & Disadvantages of Questionnaires in Quantitative Research

Quantitative research questionnaires are an excellent tool to collect data and information about the responses of individuals. Quantitative research comes with various advantages, but along with advantages, it also has its disadvantages. Check the table below to learn about the advantages and disadvantages of a quantitative research questionnaire.

Advantages:

  • It is an efficient source for quickly collecting data.
  • There is less risk of subjectivity and research bias.
  • It significantly helps to collect extensive insights into the population.
  • It focuses on simplicity and particularity.
  • There are clear and achievable research objectives.

Disadvantages:

  • It restricts the depth of the topic during collection.
  • There is a high risk of artificial and unreal expectations of research questions.
  • It overemphasises empirical data, avoiding personal opinions.
  • There is a risk of over-simplicity.
  • There is a risk of additional amendments and modifications.

Quantitative Research Questionnaire Example

Here is an example of a quantitative research questionnaire to help you get the idea and create an efficient and well-developed questionnaire for your research:

Warm welcome, and thank you for participating in our survey. Please provide your response to the questions below. Your esteemed response will significantly help us to achieve our research goals and provide effective solutions to society.

i) What is your age?

17-20

21-24

25-28

29-32

ii) What is your gender?

Male

Female

Other

Prefer not to say

iii) Have you graduated?

Yes

No

iv) Are you employed?

Yes

No

v) Are you married?

Yes

No

 

Part 2: Provide your honest response. 

Question 1: I have tried online shopping.

Strongly Disagree

Disagree

Neutral 

Agree

Strongly Agree

Question 2: I have had a good experience with online shopping.

Strongly Disagree

Disagree

Neutral

Agree

Strongly Agree

Question 3: I have had a bad experience with online shopping.

Question 4: I received my order on time. 

Question 5: I like physical shopping more. 

Frequently Asked Questions

What is a quantitative research questionnaire?

A quantitative research questionnaire is a well-structured set of questions designed to gather specific, closed-ended responses from participants.

What is the difference between qualitative and quantitative research?

The difference between qualitative and quantitative research lies in subjectivity versus objectivity: subjectivity is associated with qualitative research, while objectivity is associated with quantitative research.

What are the advantages of a quantitative research questionnaire?

  • It is quick and efficient.
  • There is less risk of research bias and subjectivity.
  • It is particular and simple.

  • Open access
  • Published: 19 August 2024

Patient reported measures of continuity of care and health outcomes: a systematic review

  • Patrick Burch 1 ,
  • Alex Walter 1 ,
  • Stuart Stewart 1 &
  • Peter Bower 1  

BMC Primary Care volume 25, Article number: 309 (2024)


Background

There is a considerable amount of research showing an association between continuity of care and improved health outcomes. However, the methods used in most studies examine only the pattern of interactions between patients and clinicians through administrative measures of continuity. The patient experience of continuity can also be measured by using patient reported experience measures. Unlike administrative measures, these can allow elements of continuity such as the presence of information or how joined up care is between providers to be measured. Patient experienced continuity is a marker of healthcare quality in its own right. However, it is unclear if, like administrative measures, patient reported continuity is also linked to positive health outcomes.

Methods

Cohort and interventional studies that examined the relationship between patient reported continuity of care and a health outcome were eligible for inclusion. Medline, EMBASE, CINAHL and the Cochrane Library were searched in April 2021. Citation searching of published continuity measures was also performed. QUIP and Cochrane risk of bias tools were used to assess study quality. A box-score method was used for study synthesis.

Results

Nineteen studies were eligible for inclusion. Fifteen studies measured continuity using a validated, multifactorial questionnaire or the continuity/co-ordination subscale of another instrument. Two studies placed patients into discrete groups of continuity based on pre-defined questions, one used a bespoke questionnaire, and one calculated an administrative measure of continuity using patient reported data. Outcome measures examined were quality of life ( n  = 11), self-reported health status ( n  = 8), emergency department use or hospitalisation ( n  = 7), indicators of function or wellbeing ( n  = 6), mortality ( n  = 4) and physiological measures ( n  = 2). Analysis was limited by the relatively small number of heterogeneous studies. The majority of studies showed a link between at least one measure of continuity and one health outcome.

Conclusions

Whilst there is emerging evidence of a link between patient reported continuity and several outcomes, the evidence is not as strong as that for administrative measures of continuity. This may be because administrative measures record something different to patient reported measures, or that studies using patient reported measures are smaller and less able to detect smaller effects. Future research should use larger sample sizes to clarify if a link does exist and what the potential mechanisms underlying such a link could be. When measuring continuity, researchers and health system administrators should carefully consider what type of continuity measure is most appropriate.


Introduction

Continuity of primary care is associated with multiple positive outcomes including reduced hospital admissions, lower costs and a reduction in mortality [ 1 , 2 , 3 ]. Providing continuity is often seen as opposed to providing rapid access to appointments [ 4 ] and many health systems have chosen to focus primary care policy on access rather than continuity [ 5 , 6 , 7 ]. Continuity has fallen in several primary care systems and this has led to calls to improve it [ 8 , 9 ]. However, it is sometimes unclear exactly what continuity is and what should be improved.

In its most basic form, continuity of care can be defined as a continuous relationship between a patient and a healthcare professional [ 10 ]. However, from the patient perspective, continuity of care can also be experienced as joined up seamless care from multiple providers [ 11 ].

One of the most commonly cited models of continuity, by Haggerty et al., defines continuity as

“ …the degree to which a series of discrete healthcare events is experienced as coherent and connected and consistent with the patient’s medical needs and personal context. Continuity of care is distinguished from other attributes of care by two core elements—care over time and the focus on individual patients” [ 11 ].

It then breaks continuity down into three parts (see Table  1 ) [ 11 ]. Other academic models of patient continuity exist, but they contain elements which are broadly analogous [ 10 , 12 , 13 , 14 ].

Continuity can be measured through administrative measures or by asking patients about their experience of continuity [16]. Administrative measures are commonly used as they allow continuity to be calculated easily for large numbers of patient consultations. Administrative measures capture one element of continuity: the frequency or pattern of professionals seen by a patient [16, 17]. There are multiple studies and several systematic reviews showing that better health outcomes are associated with administrative measures of continuity of care [1, 2, 18, 19]. One of the most recent of these reviews used a box-score method (i.e., counting the numbers of studies reporting significant and non-significant relationships) to assess the relationship between continuity and reduced mortality [18]. The review examined thirteen studies and found a positive association in nine. Administrative measures cannot capture aspects of continuity such as informational or management continuity, or the nature of the relationship between the patient and clinicians. To address this, several patient-reported experience measures (PREMs) of continuity have been developed that attempt to capture the patient experience of continuity beyond the pattern in which patients see particular clinicians [14, 17, 20, 21]. Studies have shown a variable correlation between administrative and patient reported measures of continuity and in their relationships to health outcomes [22]. Pearson correlation coefficients vary between 0.11 and 0.87, depending on what is measured and how [23, 24]. This suggests that the two approaches capture different things and that both have their uses and drawbacks [23, 25]. Patients may have good administrative measures of continuity but report a poor experience. Conversely, administrative measures of continuity may be poor, but a patient may report a high level of experienced continuity.
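As an illustrative aside, the degree of agreement between the two families of measures is what the Pearson coefficients above quantify. The sketch below computes a Pearson correlation between an administrative continuity index and a rescaled PREM score; the per-patient scores are invented for illustration, not drawn from any cited study.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example scores, both rescaled to 0-1 for comparability.
admin_index = [0.90, 0.75, 0.40, 0.55, 0.95, 0.30]  # e.g. a UPC-style index
prem_score = [0.80, 0.85, 0.50, 0.45, 0.90, 0.60]   # e.g. a continuity PREM
print(round(pearson_r(admin_index, prem_score), 2))
```

A coefficient near 1 would indicate that the two measures rank patients similarly; values in the middle of the 0.11–0.87 range reported above suggest they capture partly distinct constructs.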
Patient experienced continuity and patient satisfaction with healthcare are aims in their own right in many healthcare systems [26]. Whilst this is laudable, it may be unclear to policy makers whether prioritising patient-experienced continuity will improve health outcomes.

This review seeks to answer two questions:

1. Is patient reported continuity of care associated with positive health outcomes?

2. Are particular types of patient reported continuity (relational, informational or management) associated with positive health outcomes?

A review protocol was registered with PROSPERO in June 2021 (ID: CRD42021246606).

Search strategy

A structured search was undertaken using appropriate search terms on Medline, EMBASE, CINAHL and the Cochrane Library in April 2021 (see Appendix ). The searches were limited to the last 20 years. This age limitation reflects the period in which the more holistic description of continuity (as exemplified by Haggerty et al. 2003) became more prominent. In addition to database searches, existing reviews of PREMs of continuity and co-ordination were searched for appropriate measures. Citation searching of these measures was then undertaken to locate studies that used these outcome measures.

Inclusion criteria

Full text papers were reviewed if the title or abstract suggested that the paper measured (a) continuity through a PREM and (b) a health outcome. Health outcomes were defined as outcomes that measured a direct effect on patient health (e.g., health status) or patient use of emergency or inpatient care. Papers with outcomes relating to patient satisfaction or satisfaction with a particular service were excluded, as were process measures (such as quality of documentation or cost to the healthcare provider). Cohort and interventional studies were eligible for inclusion if they reported data on the relationship between continuity and a relevant health outcome. Cross-sectional studies were excluded because of the risk of recall bias [27].

The majority of participants in a study had to be aged over 16, based in a healthcare setting and receiving healthcare from healthcare professionals (medical or non-medical). We felt that patients under 16 were unlikely to be asked to fill out continuity PREMs. Studies that used PREMs to quantitatively measure one or more elements of experienced continuity of care or coordination were eligible for inclusion [11]. Any PREM that could map to one or more of the three key elements of Haggerty's definition (Table 1) was eligible for inclusion. The types of continuity measured by each study were mapped to the Haggerty concepts of continuity by at least two reviewers independently. Our search also included patient reported measures of co-ordination, as a previous review of continuity PREMs highlighted the conceptual overlap between patient experienced continuity and some measures of patient experienced co-ordination [17]. Whilst there are different definitions of co-ordination, the concept of patient perceived co-ordination is arguably the same as management continuity [13, 14, 28]. Patient reported measures of care co-ordination were reviewed by two reviewers to see whether they measured the concept of management continuity. Because of the overlap between concepts of continuity and other theories (e.g., patient-centred care, quality of care), where it was not clear that a study measured continuity, decisions about inclusion/exclusion were made, with documented reasons, after discussion between three of the reviewers (PB, SS and AW). Disagreements were resolved by documented group discussion. Some PREMs measured concepts of continuity alongside other concepts such as access; these studies were eligible for inclusion only if measurements of continuity were reported and analysed separately.

Data abstraction

All titles/abstracts were initially screened by one reviewer (PB). A 20% sample of the abstracts was independently reviewed by two other reviewers (SS and AW), blinded to the results of the initial screening. All full text reviews were done by two blinded reviewers independently. Disagreements were resolved by group discussion between PB, SS, AW and PBo. Excel was used for collation of search results, titles, and abstracts. Rayyan was used in the full text review process.

Data extraction was performed independently by two reviewers. The following data were extracted to an Excel spreadsheet: study design, setting, participant inclusion criteria, method of measurement of continuity, type of continuity measured, outcomes analysed, temporal relationship of continuity to outcomes in the study, co-variates, and quantitative data for continuity measures and outcomes. Disagreements were resolved by documented discussion or involvement of a third reviewer.

Study risk of bias assessment

Cohort studies were assessed for risk of bias at study level by two reviewers acting independently, using the QUIP tool [29]. Trials were assessed using the Cochrane risk of bias tool. The use of the QUIP tool was a deviation from the review protocol, as the Newcastle-Ottawa tool specified in the protocol was less suitable for the type of cohort studies returned in the search. Any disagreements in rating were resolved by documented discussion.

As outlined in our original protocol, our preferred analysis strategy was to perform meta-analysis. However, we were unable to do this as insufficient numbers of studies reported data amenable to the calculation of an effect size. Instead, we used a box-score method [ 30 ]. This involved assessing and tabulating the relationship between each continuity measure and each outcome in each study. These relationships were recorded as either positive, negative or non-significant (using a conventional p  value of < 0.05 as our cut off for significance). Advantages and disadvantages of this method are explored in the discussion section. Where a study used both bivariate analysis and multivariate analysis, the results from the multivariate analysis were extracted. Results were marked as “mixed” where more than one measure for an outcome was used and the significance/direction differed between outcome measures. Sensitivity analysis of study quality and size was carried out.
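The box-score tally described above can be sketched as follows. The study names, effect directions and p values here are hypothetical placeholders for illustration, not data extracted from the included studies.

```python
from collections import Counter

def classify(direction, p, alpha=0.05):
    """Box-score classification of one continuity-outcome association:
    non-significant if p >= alpha, otherwise positive or negative by
    the direction of the effect estimate."""
    if p >= alpha:
        return "non-significant"
    return "positive" if direction > 0 else "negative"

# Hypothetical extracted results: (study, effect direction, p value).
results = [
    ("study A", +0.40, 0.01),
    ("study B", -0.20, 0.03),
    ("study C", +0.10, 0.40),
    ("study D", +0.25, 0.04),
]
tally = Counter(classify(d, p) for _, d, p in results)
print(dict(tally))  # {'positive': 2, 'negative': 1, 'non-significant': 1}
```

Note that, as discussed later, a tally of this kind weights every study equally regardless of sample size.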

Results

Figure 1 shows the search results and the number of inclusions/exclusions. Studies were excluded for a number of reasons, including having inappropriate outcome measures [31], focusing on non-adult patient populations [32] and reporting insufficient data to examine the relationship between continuity and outcomes [33]. All studies are described in Table 2.

Fig. 1 Results of search strategy (NB: 18 studies provided 19 assessments)

Study settings

Studies took place in 9 different, mostly economically developed, countries. Studies were set in primary care (n = 5), hospital/specialist outpatient (n = 7), hospital in-patient (n = 5), or the general population (n = 2).

Study design and assessment of bias

All included studies, apart from one trial [34], were cohort studies. Study duration varied from 2 months to 5 years. Most studies were rated as being at low-moderate or moderate risk of bias, due to outcomes being patient reported, issues with recruitment, inadequate description of cohort populations, significant rates of attrition and/or failure to account for patients lost to follow-up.

Measurement of continuity

The majority of the studies (15/19) measured continuity using a validated, multifactorial patient reported measure of continuity or the continuity/co-ordination subscale of another validated instrument. Two studies placed patients into discrete groups of continuity based on answers to pre-defined questions (e.g., "Do you have a regular GP that you see?") [35, 36], one used a bespoke questionnaire [34], and one calculated an administrative measure of continuity (UPC – Usual Provider of Care index) using patient reported visit data collected from patient interviews [37]. Ten studies reported more than one type of patient reported continuity; four reported relational continuity, three overall continuity, one informational continuity and one management continuity.
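For reference, the UPC index mentioned above is the share of a patient's visits made to the provider they saw most often. A minimal sketch, using an invented visit history of the kind that might be reported at interview:

```python
from collections import Counter

def upc_index(visits):
    """Usual Provider of Care (UPC) index: the proportion of a patient's
    visits made to their most frequently seen provider."""
    counts = Counter(visits)
    return max(counts.values()) / len(visits)

# Invented visit history: provider seen at each consultation.
visits = ["Dr A", "Dr A", "Dr B", "Dr A", "Dr C", "Dr A"]
print(upc_index(visits))  # 4 of 6 visits to Dr A -> 0.666...
```

A UPC of 1.0 means every visit was to the same provider; values near 1/number-of-providers indicate visits spread evenly across clinicians.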

Study outcomes

Most of the studies reported more than one outcome measure. To enable comparison across studies we grouped the most common outcome measures together. These were quality of life ( n  = 11), self-reported health status ( n  = 8), emergency department use or hospitalisation ( n  = 7), and mortality ( n  = 4). Other outcomes reported included physiological parameters e.g., blood pressure or blood test parameters ( n  = 2) [ 36 , 38 ] and other indicators of functioning or well-being ( n  = 6).

Association between outcomes and continuity measures

Twelve of the nineteen studies demonstrated at least one statistically significant association between at least one patient reported measure of continuity and at least one outcome. However, ten of these studies examined more than one outcome measure. Two of the significant studies showed negative findings: better informational continuity was associated with worse self-reported disease status [35], and improved continuity was related to increased admissions and ED use [39]. Four studies demonstrated no association between measures of continuity and any health outcome.

The four most commonly reported types of outcomes were analysed separately (Table 3). For all of these outcomes, a majority of studies showed no significant association with continuity or a mixed/unclear association. Sensitivity analysis of the results in Table 3, excluding high and moderate-high risk studies, did not change this finding. Each of these outcomes was also examined in relation to the type of continuity measured (Table 4). Apart from the relationship between informational continuity and quality of life, all other combinations of continuity type/outcome had a majority of studies showing no significant association with continuity or a mixed/unclear association. However, the relationship between informational continuity and quality of life was only examined in two studies [40, 41]. One of these contained fewer than 100 patients and was removed in the sensitivity analysis of study size [40]. Sensitivity analysis of the results in Table 4, excluding high and moderate-high risk studies, did not change the findings.

Two sensitivity analyses were carried out: (a) removing all studies with fewer than 100 participants and (b) removing those with fewer than 1000 participants. There were only five studies with at least 1000 participants. These all showed at least one positive association between continuity and a health outcome. Of note, three of these five studies examined emergency department use/readmissions, and all three found a significant positive association.

Discussion

Continuity of care is a multi-dimensional concept that is often linked to positive health outcomes. There is strong evidence that administrative measures of continuity are associated with improved health outcomes, including reductions in mortality, healthcare costs and utilisation of healthcare [3, 18, 19]. Our interpretation of the evidence in this review is that there is an emerging link between patient reported continuity and health outcomes. Most studies in the review contained at least one significant association between continuity and a health outcome. However, when outcome measures were examined individually, the findings were less consistent.

The evidence for a link between patient reported continuity and outcomes is not as strong as that for administrative measures. There are several possible explanations for this. The review retrieved a relatively small number of studies that examined a range of different outcomes, in different patient populations and settings, using different measures of continuity. This resulted in small numbers of studies examining the relationship of a particular measure of continuity with a particular outcome (Table 4). The studies in the review took place in a wide variety of country and healthcare settings, and it may be that the effects of continuity vary in different contexts. Finally, in comparison to studies of administrative measures of continuity, the studies in this review were small: the median number of participants was 486, compared to 39,249 in a recent systematic review examining administrative measures of continuity [18]. Smaller studies are less able to detect small effect sizes, and this may be the principal reason for the difference between the results of this review and previous reviews of administrative measures of continuity. When studies with fewer than 1000 participants were excluded, all remaining studies showed at least one positive finding, and there was a consistent association between continuity and reduced emergency department use/re-admissions. This suggests that a modest association between certain outcomes and patient reported continuity may be present but that, given the likely effect size, larger studies are needed to demonstrate it. The box-score method does not take account of the differential size of studies.

Continuity is not a concept that is universally agreed upon. We mapped concepts of continuity onto the commonly used Haggerty framework [11]. Apart from the use of the Nijmegen Continuity Questionnaire in three studies [42], all studies measured continuity using different methods and concepts of continuity. We could have used other theoretical constructs of continuity for the mapping of measures. It was not possible to find the exact questions asked of patients in every study; we therefore mapped several of the continuity measures based on higher-level descriptions given by the authors. The diversity of patient measures may account for some of the variability in findings between studies. However, it may be that the nature of continuity captured by patient reported measures is less closely linked to health outcomes than that captured by administrative measures. Administrative measures capture the pattern of interactions between patients and clinicians. All studies in this review (apart from Study 18) used PREMs that attempt to capture something different from the pattern in which a patient sees a clinician. Depending on the specific measure used, this includes aspects of information transfer between services, how joined up care was between different providers, and the nature of the patient-clinician relationship. PREMs can only capture what the patient perceives and remembers. The experience of continuity for the patient is important in its own right. However, it may be that the aspects of continuity most linked to positive health outcomes are best reflected by administrative measures. Sidaway-Lee et al. have hypothesised why relational continuity may be linked to health outcomes [43], including a clinician's ability to think more holistically and their motivation to "go the extra mile" for a patient. Whilst these are difficult to measure directly, it may be that administrative measures are a better proxy marker than PREMs for these aspects of continuity.

Conclusions/future work

This review shows a potential emerging relationship between patient reported continuity and health outcomes. However, the evidence for this association is currently weaker than that demonstrated in previous reviews of administrative measures of continuity.

If continuity is to be measured and improved, as is being proposed in some health systems [44], these findings have potential implications for what type of measure we should use. Measurement of health system performance often drives change [45]. Health systems may respond to calls to improve continuity differently, depending on how continuity is measured. Continuity PREMs are important, and patient experienced continuity should be a goal in its own right. However, it is the fact that continuity is linked to multiple positive health care and health system outcomes that is often given as the reason for pursuing it as a goal [8, 44, 46]. Whilst this review shows there is emerging evidence of a link, it is not as strong as that found in studies of administrative measures. If, as has been shown in other work, PREMs and administrative measures are measuring different things [23, 24], we need to choose our measures of continuity carefully.

Larger studies are required to confirm the emerging link between patient experienced continuity and outcomes shown in this paper. Future studies, where possible, should collect both administrative and patient reported measures of continuity and seek to understand the relative importance of the three different aspects of continuity (relational, informational, managerial). The relationship between patient experienced continuity and outcomes is likely to vary between different groups, and future work should examine differential effects in different patient populations. There are now several validated measures of patient experienced continuity [17, 20, 21, 42]. Whilst there may be an argument that more should be developed, the use of a standardised questionnaire (such as the Nijmegen questionnaire), where possible, would enable closer comparison between patient experiences in different healthcare settings.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

Gray DJP, Sidaway-Lee K, White E, Thorne A, Evans PH. Continuity of care with doctors - a matter of life and death? A systematic review of continuity of care and mortality. BMJ Open. 2018;8(6):1–12.


Barker I, Steventon A, Deeny SR. Association between continuity of care in general practice and hospital admissions for ambulatory care sensitive conditions: cross sectional study of routinely collected, person level data. BMJ Online. 2017;356.

Bazemore A, Merenstein Z, Handler L, Saultz JW. The impact of interpersonal continuity of primary care on health care costs and use: a critical review. Ann Fam Med. 2023;21(3):274–9.


Palmer W, Hemmings N, Rosen R, Keeble E, Williams S, Imison C. Improving access and continuity in general practice. The Nuffield Trust; 2018 [cited 2022 Jan 15]. https://www.nuffieldtrust.org.uk/research/improving-access-and-continuity-in-general-practice

Pettigrew LM, Kumpunen S, Rosen R, Posaner R, Mays N. Lessons for ‘large-scale’ general practice provider organisations in England from other inter-organisational healthcare collaborations. Health Policy. 2019;123(1):51–61.


Glenister KM, Guymer J, Bourke L, Simmons D. Characteristics of patients who access zero, one or multiple general practices and reasons for their choices: a study in regional Australia. BMC Fam Pract. 2021;22(1):2.

Kringos D, Boerma W, Bourgueil Y, Cartier T, Dedeu T, Hasvold T, et al. The strength of primary care in Europe: an international comparative study. Br J Gen Pract. 2013;63(616):e742–50.

Salisbury H. Helen Salisbury: everyone benefits from continuity of care. BMJ. 2023;382:p1870.


Gray DP, Sidaway-Lee K, Johns C, Rickenbach M, Evans PH. Can general practice still provide meaningful continuity of care? BMJ. 2023;383:e074584.

Ladds E, Greenhalgh T. Modernising continuity: a new conceptual framework. Br J Gen Pr. 2023;73(731):246–8.

Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R. Continuity of care: a multidisciplinary review. BMJ. 2003;327(7425):1219–21.

Freeman G, Shepperd S, Robinson I, Ehrich K, Richards S, Pitman P, et al. Continuity of care: report of a scoping exercise for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D. 2001 [cited 2020 Oct 15]. https://njl-admin.nihr.ac.uk/document/download/2027166

Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134–43.

Uijen AA, Schers HJ, Schellevis FG, van den Bosch WJHM. How unique is continuity of care? A review of continuity and related concepts. Fam Pract. 2012;29(3):264–71.

Murphy M, Salisbury C. Relational continuity and patients’ perception of GP trust and respect: a qualitative study. Br J Gen Pr. 2020;70(698):e676–83.

Gray DP, Sidaway-Lee K, Whitaker P, Evans P. Which methods are most practicable for measuring continuity within general practices? Br J Gen Pract. 2023;73(731):279–82.

Uijen AA, Schers HJ. Which questionnaire to use when measuring continuity of care. J Clin Epidemiol. 2012;65(5):577–8.

Baker R, Bankart MJ, Freeman GK, Haggerty JL, Nockels KH. Primary medical care continuity and patient mortality. Br J Gen Pr. 2020;70(698):E600–11.

Van Walraven C, Oake N, Jennings A, Forster AJ. The association between continuity of care and outcomes: a systematic and critical review. J Eval Clin Pr. 2010;16(5):947–56.

Aller MB, Vargas I, Garcia-Subirats I, Coderch J, Colomés L, Llopart JR, et al. A tool for assessing continuity of care across care levels: an extended psychometric validation of the CCAENA questionnaire. Int J Integr Care. 2013;13(OCT/DEC):1–11.

Haggerty JL, Roberge D, Freeman GK, Beaulieu C, Bréton M. Validation of a generic measure of continuity of care: when patients encounter several clinicians. Ann Fam Med. 2012;10(5):443–51.

Bentler SE, Morgan RO, Virnig BA, Wolinsky FD, Hernandez-Boussard T. The association of longitudinal and interpersonal continuity of care with emergency department use, hospitalization, and mortality among medicare beneficiaries. PLoS ONE. 2014;9(12):1–18.

Bentler SE, Morgan RO, Virnig BA, Wolinsky FD. Do claims-based continuity of care measures reflect the patient perspective? Med Care Res Rev. 2014;71(2):156–73.

Rodriguez HP, Marshall RE, Rogers WH, Safran DG. Primary care physician visit continuity: a comparison of patient-reported and administratively derived measures. J Gen Intern Med. 2008;23(9):1499–502.

Adler R, Vasiliadis A, Bickell N. The relationship between continuity and patient satisfaction: a systematic review. Fam Pr. 2010;27(2):171–8.

Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573–6.

Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc. 2016;9:211–7.

Schultz EM, McDonald KM. What is care coordination? Int J Care Coord. 2014;17(1–2):5–24.

Hayden JA, van der Windt DA, Cartwright JL, Côté P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.

Green BF, Hall JA. Quantitative methods for literature reviews. Annu Rev Psychol. 1984;35(1):37–54.


Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH. Switching doctors: predictors of voluntary disenrollment from a primary physician’s practice. J Fam Pract. 2001;50(2):130–6.


Burns T, Catty J, Harvey K, White S, Jones IR, McLaren S, et al. Continuity of care for carers of people with severe mental illness: results of a longitudinal study. Int J Soc Psychiatry. 2013;59(7):663–70.

Engelhardt JB, Rizzo VM, Della Penna RD, Feigenbaum PA, Kirkland KA, Nicholson JS, et al. Effectiveness of care coordination and health counseling in advancing illness. Am J Manag Care. 2009;15(11):817–25.


Uijen AA, Bischoff EWMA, Schellevis FG, Bor HHJ, Van Den Bosch WJHM, Schers HJ. Continuity in different care modes and its relationship to quality of life: a randomised controlled trial in patients with COPD. Br J Gen Pr. 2012;62(599):422–8.

Humphries C, Jaganathan S, Panniyammakal J, Singh S, Dorairaj P, Price M, et al. Investigating discharge communication for chronic disease patients in three hospitals in India. PLoS ONE. 2020;15(4):1–20.

Konrad TR, Howard DL, Edwards LJ, Ivanova A, Carey TS. Physician-patient racial concordance, continuity of care, and patterns of care for hypertension. Am J Public Health. 2005;95(12):2186–90.

Van Walraven C, Taljaard M, Etchells E, Bell CM, Stiell IG, Zarnke K, et al. The independent association of provider and information continuity on outcomes after hospital discharge: implications for hospitalists. J Hosp Med. 2010;5(7):398–405.

Gulliford MC, Naithani S, Morgan M. Continuity of care and intermediate outcomes of type 2 diabetes mellitus. Fam Pr. 2007;24(3):245–51.

Kaneko M, Aoki T, Mori H, Ohta R, Matsuzawa H, Shimabukuro A, et al. Associations of patient experience in primary care with hospitalizations and emergency department visits on isolated islands: a prospective cohort study. J Rural Health. 2019;35(4):498–505.

Beesley VL, Janda M, Burmeister EA, Goldstein D, Gooden H, Merrett ND, et al. Association between pancreatic cancer patients’ perception of their care coordination and patient-reported and survival outcomes. Palliat Support Care. 2018;16(5):534–43.

Valaker I, Fridlund B, Wentzel-Larsen T, Nordrehaug JE, Rotevatn S, Råholm MB, et al. Continuity of care and its associations with self-reported health, clinical characteristics and follow-up services after percutaneous coronary intervention. BMC Health Serv Res. 2020;20(1):1–15.

Uijen AA, Schellevis FG, Van Den Bosch WJHM, Mokkink HGA, Van Weel C, Schers HJ. Nijmegen continuity questionnaire: development and testing of a questionnaire that measures continuity of care. J Clin Epidemiol. 2011;64(12):1391–9.

Sidaway-Lee K, Gray DP, Evans P, Harding A. What mechanisms could link GP relational continuity to patient outcomes? Br J Gen Pr. 2021;(June):278–81.

House of Commons Health and Social Care Committee. The future of general practice. 2022. https://publications.parliament.uk/pa/cm5803/cmselect/cmhealth/113/report.html

Close J, Byng R, Valderas JM, Britten N, Lloyd H. Quality after the QOF? Before dismantling it, we need a redefined measure of ‘quality’. Br J Gen Pract. 2018;68(672):314–5.

Gray DJP. Continuity of care in general practice. BMJ. 2017;356:j84.


Acknowledgements

Not applicable.

Patrick Burch carried this work out as part of a PhD Fellowship funded by THIS Institute.

Author information

Authors and Affiliations

Centre for Primary Care and Health Services Research, Institute of Population Health, University of Manchester, Manchester, England

Patrick Burch, Alex Walter, Stuart Stewart & Peter Bower


Contributions

PBu conceived the review and performed the searches. PBu, AW and SS performed the paper selection, reviews and data abstraction. PBo helped with the design of the review and was involved in resolving reviewer disputes. All authors contributed towards the drafting of the final manuscript.

Corresponding author

Correspondence to Patrick Burch .

Ethics declarations

Ethics approval, consent for publication, competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Burch, P., Walter, A., Stewart, S. et al. Patient reported measures of continuity of care and health outcomes: a systematic review. BMC Prim. Care 25 , 309 (2024). https://doi.org/10.1186/s12875-024-02545-8


Received : 27 March 2023

Accepted : 29 July 2024

Published : 19 August 2024

DOI : https://doi.org/10.1186/s12875-024-02545-8


BMC Primary Care

ISSN: 2731-4553


To Ban or Not to Ban? A Rapid Review on the Impact of Smartphone Bans in Schools on Social Well-Being and Academic Performance


1. Introduction
2.1. Research Question and Hypotheses
2.2. Literature Search and Study Selection
2.3. Study Selection
2.4. Title, Abstract, and Full Text Review
2.5. Data Extraction
2.6. Risk of Bias Assessment
4. Discussion
5. Limitations
6. Recommendations for Action
Author Contributions
Data Availability Statement
Conflicts of Interest

  • Twenge, J.M.; Campbell, W.K. Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Prev. Med. Rep. 2018 , 12 , 271–283. [ Google Scholar ] [ CrossRef ]
  • Nagel, A. Erfassung problematischer Smartphonenutzung und der Effekt auf die kognitive Unterrichtsmeidung von Schüler: Innen. Z. Bild. 2024 , 34 , 21–39. [ Google Scholar ]
  • Zierer, K. Putting Learning before Technology!: The Possibilities and Limits of Digitalization ; Routledge: London, UK, 2019. [ Google Scholar ]
  • OECD. Students, Digital Devices and Success. OECD Education Policy Perspectives ; No. 102; OECD Publishing: Paris, France, 2024. [ Google Scholar ] [ CrossRef ]
  • Sanders, T.; Noetel, M.; Parker, P.; Del Pozo Cruz, B.; Biddle, S.; Ronto, R.; Lonsdale, C. An umbrella review of the benefits and risks associated with youths’ interactions with electronic screens. Nat. Hum. Behav. 2024 , 8 , 82–99. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ward, A.F.; Duke, K.; Gneezy, A.; Bos, M.W. Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. J. Assoc. Consum. Res. 2017 , 2 , 140–154. [ Google Scholar ] [ CrossRef ]
  • Böttger, T.; Poschik, M.; Zierer, K. Does the brain drain effect really exist? A meta-analysis. Behav. Sci. 2023 , 13 , 751. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Glass, A.L.; Kang, M. Dividing attention in the classroom reduces exam performance. Educ. Psychol. 2021 , 41 , 948–960. [ Google Scholar ] [ CrossRef ]
  • Wilmer, H.H.; Sherman, L.E.; Chein, J.M. Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Front. Psychol. 2017 , 8 , 605. [ Google Scholar ] [ CrossRef ]
  • Kates, A.W.; Wu, H.; Coryn, C.L. The effects of mobile phone use on academic performance: A meta-analysis. Comput. Educ. 2018 , 127 , 107–112. [ Google Scholar ] [ CrossRef ]
  • Ryff, C.D.; Singer, B.H. Best News Yet on the Six-Factor Model of Well-Being. Soc. Sci. Res. 2006 , 35 , 1103–1119. [ Google Scholar ] [ CrossRef ]
  • Wang, J.; Nansel, T.R.; Iannotti, R.J. Cyber and Traditional Bullying: Differential Association with Depression. J. Adolesc. Health 2011 , 48 , 415–417. [ Google Scholar ] [ CrossRef ]
  • Olweus, D.; Limber, S.P. Some Problems with Cyberbullying Research. Curr. Opin. Psychol. 2018 , 19 , 139–143. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Patchin, J.W.; Hinduja, S. It is time to teach safe digital citizenship. J. Adolesc. Health 2020 , 66 , 140–143. [ Google Scholar ] [ CrossRef ]
  • Garcia-Navarro, L. The Risk of Teen Suicide and Depression Is Linked to Smartphone Use, Study Says. NPR , 2017. Available online: https://laist.com/news/npr-news/the-risk-of-teen-depression-and-suicide-is-linked-to-smartphone-use-study-says (accessed on 18 July 2024).
  • Nesi, J. The impact of social media on youth mental health: Challenges and opportunities. N. C. Med. J. 2020 , 81 , 116–121. [ Google Scholar ] [ CrossRef ]
  • Stone, T.E. UNESCO. Technology in Education: A Tool on Whose Terms. Glob. Educ. Monit. Rep. 2023 , 18 , 1–433. [ Google Scholar ]
  • Selwyn, N.; Aagaard, J. Banning mobile phones from classrooms-An opportunity to advance understandings of technology addiction, distraction and cyberbullying. Br. J. Educ. Technol. 2021 , 52 , 8–19. [ Google Scholar ] [ CrossRef ]
  • Garritty, C.; Gartlehner, G.; Kamel, C. Cochrane Rapid Reviews: Interim Guidance from the Cochrane Rapid Reviews Methods Group ; Cochrane: London, UK, 2020. [ Google Scholar ]
  • Seidler, A.; Nußbaumer-Streit, B.; Apfelbacher, C.; Zeeb, H. Rapid Reviews in Zeiten von COVID-19–Erfahrungen im Zuge des Kompetenznetzes Public Health zu COVID-19 und Vorschlag eines standardisierten Vorgehens. Das Gesundheitswesen 2021 , 83 , 173–179. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Tricco, A.C.; Lillie, E.; Zarin, W.; O’Brien, K.K.; Colquhoun, H.; Levac, D.; Moher, D.; Peters, M.D.; Horsley, T.; Weeks, L.; et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann. Intern. Med. 2018 , 169 , 467–473. [ Google Scholar ] [ CrossRef ]
  • Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005 , 8 , 19–32. [ Google Scholar ] [ CrossRef ]
  • Tran, A. Perceptions of the Influence of Cell Phones and Social Media Usage on Students’ Academic Performance. Doctoral Dissertation, San Jose State University, San Jose, CA, USA, 2021. [ Google Scholar ]
  • Guldvik, M.K.; Kvinnsland, I. Smarter without Smartphones? Effects of Mobile Phone Bans in Schools on Academic Performance, Well-Being, and Bullying. Master’s Thesis, NHH Norwegian School of Economics, Bergen, Norway, 2018. [ Google Scholar ]
  • Abrahamsson, S. Distraction or Teaching Tool: Do Smartphone Bans in Schools Help Students? Norwegian Institute of Public Health: Trondheim, Norway, 2020; Ikke-Publiceret Manuscript; Available online: https://sites.google.com/view/saraabrahamsson/research (accessed on 11 June 2024).
  • Beneito, P.; Vicente-Chirivella, Ó. Banning mobile phones in schools: Evidence from regional-level policies in Spain. Appl. Econ. Anal. 2022 , 30 , 153–175. [ Google Scholar ] [ CrossRef ]
  • Cakirpaloglu, S.D.; Čech, T.; Maléřová, M.; Adámková, H. The effect of mobile phone ban in schools on the evaluation of classroom climate. In EDULEARN20 Proceedings ; IATED: Valencia, Spain, 2020; pp. 3204–3212. [ Google Scholar ]
  • Beland, L.; Murphy, R. Ill communication: Technology, distraction & student performance. Labour Econ. 2016 , 41 , 61–76. [ Google Scholar ] [ CrossRef ]
  • Kessel, D.; Lif Hardardottir, H.; Tyrefors, B. The impact of banning mobile phones in Swedish secondary schools. Econ. Educ. Rev. 2020 , 77 , 102009. [ Google Scholar ] [ CrossRef ]
  • Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.J.H.W. Cochrane Handbook for Systematic Reviews of Interventions ; Wiley: Hoboken, NJ, USA, 2019. [ Google Scholar ]
  • Froese, A.D.; Carpenter, C.N.; Inman, D.A.; Schooley, J.R.; Barnes, R.B.; Brecht, P.W. Effects of classroom cell phone use on expected and actual learning. Coll. Stud. J. 2012 , 46 , 323–332. [ Google Scholar ]
  • Kuznekoff, J.H.; Munz, S.; Titsworth, S. Mobile phones in the classroom: Examining the effects of texting, Twitter, and message content on student learning. Commun. Educ. 2015 , 64 , 344–365. [ Google Scholar ] [ CrossRef ]
  • Campbell, M.A. Cyber bullying: An old problem in a new guise? Aust. J. Guid. Couns. 2005 , 15 , 68–76. [ Google Scholar ] [ CrossRef ]
  • Turkle, S. Reclaiming Conversation: The Power of Talk in a Digital Age ; Penguin Books: London, UK, 2015. [ Google Scholar ]
  • Davis, K.; Koepke, L. Risk and protective factors associated with cyberbullying: Are relationships or rules more protective? Learn. Media Technol. 2016 , 41 , 521–545. [ Google Scholar ] [ CrossRef ]
  • European Commission. Survey of Schools: ICT in Education ; European Commission: Brussels, Belgium, 2013. [ CrossRef ]
  • Gikas, J.; Grant, M.M. Mobile computing devices in higher education: Student perspectives on learning with cellphones, smartphones & social media. Internet High. Educ. 2013 , 19 , 18–26. [ Google Scholar ] [ CrossRef ]
  • Hattie, J. Visible Learning: The Sequel: A Synthesis of over 2,100 Meta-Analyses Relating to Achievement ; Routledge: London, UK, 2023. [ Google Scholar ]
  • Fage, C.; Consel, C.; Etchegoyhen, K.; Amestoy, A.; Bouvard, M.; Mazon, C.; Sauzéon, H. An emotion regulation app for school inclusion of children with ASD: Design principles and evaluation. Comput. Educ. 2019 , 131 , 1–21. [ Google Scholar ] [ CrossRef ]


| Abbreviated Title (Year) | Authors | Research Area | Outcome | n | d | SE | p | Weighting (%) | 95% CI |
|---|---|---|---|---|---|---|---|---|---|
| Distraction or teaching tool (2020) [ ] | Abrahamsson | Social well-being | Bullying | 151,925 | 0.36 | 0.28 | 0.20 | 2.79 | [−0.19; 0.91] |
| Banning mobile phones in schools (2021) [ ] | Beneito and Vicente-Chirivella | Social well-being | Bullying age 12–14 | 84 | 0.36 | 0.20 | 0.07 | 5.12 | [−0.03; 0.75] |
| | | | Bullying age 15–17 | 84 | 0.47 | 0.18 | <0.05 | 5.73 | [0.11; 0.83] |
| Effect of mobile phone ban (2020) [ ] | Cakirpaloglu, Čech, Maléřová, and Adámková | Social well-being | Satisfaction | 832 | 0.12 | 0.10 | 0.23 | 13.82 | [−0.08; 0.31] |
| | | | Conflicts | 832 | 0.20 | 0.12 | 0.09 | 11.20 | [−0.03; 0.43] |
| | | | Competition | 832 | 0.20 | 0.11 | 0.07 | 12.36 | [−0.01; 0.41] |
| Overall effect ‘social well-being’ | | | | – | 0.22 | 0.06 | <0.01 | – | [0.11; 0.32] |
| Distraction or teaching tool (2020) [ ] | Abrahamsson | Performance | Grade point average | 151,925 | 0.11 | 0.06 | 0.07 | 21.10 | [−0.01; 0.23] |
| Ill Communication (2016) [ ] | Beland and Murphy | Performance | Test performance | 130,482 | 0.06 | 1.00 | 0.95 | 0.24 | [−1.90; 2.02] |
| Impact of banning mobile phones (2020) [ ] | Kessel et al. | Performance | Test performance | 9371 | 0.01 | 0.03 | 0.66 | 27.64 | [−0.05; 0.07] |
| Overall effect ‘performance’ | | | | – | 0.05 | 0.05 | 0.30 | – | [−0.04; 0.14] |
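The confidence intervals and p-values reported for each study follow directly from the effect size d and its standard error SE under a normal approximation. A minimal sketch, assuming 95% CIs are computed as d ± 1.96·SE and p-values come from a two-sided z-test; the `pool_fixed` helper illustrates standard fixed-effect inverse-variance pooling only, since the review's own weighting scheme for the "overall effect" rows is not specified here:

```python
import math

def ci95(d: float, se: float) -> tuple[float, float]:
    """95% confidence interval for effect size d with standard error se."""
    half = 1.96 * se
    return (round(d - half, 2), round(d + half, 2))

def p_two_sided(d: float, se: float) -> float:
    """Two-sided p-value for the z-statistic z = d / se (normal approximation)."""
    z = abs(d / se)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def pool_fixed(ds: list[float], ses: list[float]) -> float:
    """Fixed-effect inverse-variance pooled estimate: each study weighted by 1/SE^2."""
    weights = [1.0 / se ** 2 for se in ses]
    return sum(w * d for w, d in zip(weights, ds)) / sum(weights)

# Abrahamsson, bullying: d = 0.36, SE = 0.28
print(ci95(0.36, 0.28))                   # → (-0.19, 0.91), as reported
print(round(p_two_sided(0.36, 0.28), 2))  # → 0.2, as reported
```

Note that `pool_fixed` will not necessarily reproduce the table's pooled rows, since meta-analyses often use random-effects rather than fixed-effect weights; it is included only to make the inverse-variance weighting logic concrete.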

Share and Cite

Böttger, T.; Zierer, K. To Ban or Not to Ban? A Rapid Review on the Impact of Smartphone Bans in Schools on Social Well-Being and Academic Performance. Educ. Sci. 2024, 14, 906. https://doi.org/10.3390/educsci14080906




COMMENTS

  1. PDF Step-by-step guide to critiquing research. Part 1: quantitative research

    to identify what is best practice. This article is a step-by-step approach to critiquing quantitative research to help nurses demystify the process and decode the terminology. Key words: Quantitative research • Research methodologies • Review process. For many qualified nurses and nursing students research is research, and it is often quite difficult

  2. Critiquing Quantitative Research Reports: Key Points for the Beginner

    The first step in the critique process is for the reader to browse the abstract and article for an overview. During this initial review a great deal of information can be obtained. The abstract should provide a clear, concise overview of the study. During this review it should be noted if the title, problem statement, and research question (or ...

  3. Conducting an article critique for a quantitative research study

    Because there are few published examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article critique while providing a ...

  4. How to appraise quantitative research

    Title, keywords and the authors. The title of a paper should be clear and give a good idea of the subject area. The title should not normally exceed 15 words 2 and should attract the attention of the reader. 3 The next step is to review the key words. These should provide information on both the ideas or concepts discussed in the paper and the ...

  5. PDF Step-by-step guide to critiquing research. Part 2: qualitative research

    research. Part 2: qualitative research. Frances Ryan, Michael Coughlan, Patricia Cronin. Abstract: As with a quantitative study, critical analysis of a qualitative study involves an in-depth review of how each step of the research was undertaken. Qualitative and quantitative studies are, however, fundamentally different approaches to research ...

  6. Step-by-step guide to critiquing research. Part 1: quantitative

    Abstract. When caring for patients, it is essential that nurses are using the current best practice. To determine what this is, nurses must be able to read research critically. But for many qualified and student nurses, the terminology used in research can be difficult to understand, thus making critical reading even more daunting.

  7. Conducting an article critique for a quantitative research study

    Because there are few published examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form. Keywords: quantitative article critique, statistics, methodology, graduate students. Introduction

  8. PDF Framework for How to Read and Critique a Research Study

    1. Critiquing the research article
       a. Title - Does it accurately describe the article?
       b. Abstract - Is it representative of the article?
       c. Introduction - Does it make the purpose of the article clear?
       d. Statement of the problem - Is the problem properly introduced?
       e. Purpose of the study - Has the reason for conducting the ...

  9. Critiquing Research Articles

    Conducting an article critique for a quantitative research study: Perspectives for doctoral students and other novice readers (Vance et al.) ... Writing a Critique or Review of a Research Article (University of Calgary) Presentations: The Critique Process: Reviewing and Critiquing Research.

  10. PDF Writing a Critique or Review of a Research Article

    • Agreeing with, defending or confirming a particular point of view.
    • Proposing a new point of view.
    • Conceding to an existing point of view, but qualifying certain points.
    • Reformulating an existing idea for a better explanation.
    • Dismissing a point of view through an evaluation of its criteria.
    • Reconciling two seemingly different points of view.

  11. Quantitative Critique: Step-by-Step Analysis

    This video discusses how to analyze and critique a Quantitative research study. It is broken down for both the survey and the experimental design and covers ...

  12. Writing an Article Critique

    A summary of a research article requires you to share the key points of the article so your reader can get a clear picture of what the article is about. A critique may include a brief summary, but the main focus should be on your evaluation and analysis of the research itself. What steps need to be taken to write an article critique? Before you ...

  13. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidenced-based research questions which are then explicitly restated as hypotheses.1,2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results.3,4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  14. A guide to critical appraisal of evidence : Nursing2020 Critical Care

    Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers ...

  15. Conducting an article critique for a quantitative research study

    However, a fundamental knowledge of research methods is still needed in order to be successful. Because there are few published examples of critique examples, this article provides the practical points of conducting a formally written quantitative research article critique while providing a brief example to demonstrate the principles and form.

  16. Critical Appraisal of a quantitative paper

    How to use this practical example. Using the framework, you can have a go at appraising a quantitative paper - we are going to look at the following article: Marrero, D.G. et al (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

  17. Making sense of research: A guide for critiquing a paper

    Learning how to critique research articles is one of the fundamental skills of scholarship in any discipline. The range, quantity and quality of publications available today via print, electronic and Internet databases means it has become essential to equip students and practitioners with the prerequisites to judge the integrity and usefulness of published research.

  18. (PDF) Critiquing A Research Paper A Practical Example

    Step 4: Assess the validity and reliability of the study (reading the whole paper) • Validity • The results produced are a true representative of reality. • Absolute weight is not the true ...

  19. PDF CRITIQUING LITERATURE

    CRITIQUING LITERATURE. WHY DO WE CRITIQUE LITERATURE? Evaluating literature is a process of analysing research to determine its strengths and weaknesses. This is an important process as not all published research is reliable or scientifically sound. Arguments and the interpretation of data can be biased.

  20. Critiquing Research Evidence for Use in Practice: Revisited

    The first step is to critique and appraise the research evidence. Through critiquing and appraising the research evidence, dialog with colleagues, and changing practice based on evidence, NPs can improve patient outcomes (Dale, 2005) and successfully translate research into evidence-based practice in today's ever-changing health care ...

  21. PDF How to appraise quantitative research

    quantitative research, which often contains the results of statistical testing. However, nurses have a professional responsibility to critique research to improve their practice, care and patient safety.1 This article provides a step-by-step guide on how to critically appraise a quantitative paper. Title, keywords and the authors

  22. Review A guide to critiquing a research paper. Methodological appraisal

    Introduction. Developing and maintaining proficiency in critiquing research has become a core skill in today's evidence-based nursing. In addition, understanding, synthesising and critiquing research are fundamental parts of all nursing curricula at both pre- and post-registration levels (NMC, 2011). This paper presents a guide, which has potential utility in both practice and when undertaking ...

  23. Research essentials. How to critique quantitative research

    Abstract. QUANTITATIVE RESEARCH is a systematic approach to investigating numerical data and involves measuring or counting attributes, that is quantities. Through a process of transforming information that is collected or observed, the researcher can often describe a situation or event, answering the 'what' and 'how many' questions about a ...

  24. Quantitative Research Questionnaire

    Types of Survey Questionnaires in Quantitative Research. Quantitative research questionnaires have close-ended questions that allow the researchers to measure accurate and specific responses from the participants. They don't contain open-ended questions like qualitative research, where the response is measured by interviews and focus groups.

  25. Patient reported measures of continuity of care and health outcomes: a

    There is a considerable amount of research showing an association between continuity of care and improved health outcomes. However, the methods used in most studies examine only the pattern of interactions between patients and clinicians through administrative measures of continuity. The patient experience of continuity can also be measured by using patient reported experience measures.

  26. To Ban or Not to Ban? A Rapid Review on the Impact of Smartphone Bans

    After a comprehensive database search, five research studies with quantitative results were selected and analyzed, and the effect sizes were calculated in the areas of academic performance and social behavior. The meta-analysis yielded an overall effect size of d = 0.162 (p < 0.05). Smartphone bans have a significant, but modest, effect.