Statistical Methods: What They Are, Process, Analysis & Presentation


Statistical methods are vital in transforming raw data into actionable insights across various fields. Researchers, analysts, and decision-makers can collect, organize, analyze, interpret, and present data effectively using these mathematical techniques. 

These methods facilitate understanding complex data sets, uncovering patterns, and making informed decisions in business, healthcare, social sciences, and engineering.

Statistical methods provide a systematic approach to data analysis, from summarizing data with descriptive statistics to making predictions and testing hypotheses with inferential techniques.  

This blog explores key components of statistical methods, including data collection, organization, analysis, interpretation, and presentation. It also discusses best practices, common challenges, and how QuestionPro Research enhances statistical analysis to support exceptional decision-making.

What are Statistical Methods?

Statistical Methods are mathematical techniques and processes used to collect, organize, analyze, interpret, and present data. These methods are helpful for: 

  • Researchers
  • Analysts 
  • Decision-makers

They are used to make sense of large data sets, identify patterns, and draw meaningful conclusions. Statistical methods are essential in transforming raw data into actionable insights, making them a cornerstone of business, healthcare, social sciences, engineering, and more.

Key Components of Statistical Methods:

  • Data Collection: Gathering data through various means such as surveys, experiments, or observational studies.
  • Data Organization: Structuring and summarizing the collected data meaningfully using tables, graphs, and summary statistics.
  • Data Analysis: Applying statistical techniques to explore relationships, test hypotheses, and make predictions based on the data.
  • Data Interpretation: Drawing conclusions from the analysis, understanding the implications of the findings, and making decisions based on the results.
  • Presentation: Communicating the findings effectively through reports, charts, and presentations to make the information accessible to others.

Statistical methods provide a systematic approach to understanding and interpreting data, allowing for informed decision-making in various disciplines.

Types of Statistical Methods

Statistical methods can be broadly categorized into several types based on their purpose and the nature of the data they analyze. Here are the main types:

01. Descriptive Statistics

Descriptive Statistics are used to summarize and describe the main features of a data set. They provide simple summaries of the sample and the measures, offering a way to understand the basic aspects of the data.

  • Mean: The arithmetic average of a data set, calculated by adding all the values and dividing by the number of observations. It is a measure of central tendency that provides insight into the general magnitude of the data.
  • Median: The middle value of a data set when ordered from least to greatest. If the data set has an even number of observations, the median is the average of the two middle numbers. The median helps us understand the central tendency, especially in skewed distributions.
  • Mode: The value that appears most frequently in a data set. A data set may have one mode, more than one, or no mode. The mode is particularly useful in categorical data analysis.
  • Standard Deviation: A measure of the dispersion or spread of data around the mean. It indicates how much the values in a data set deviate from the mean, with a higher standard deviation signifying greater variability.
  • Range: The difference between a data set’s maximum and minimum values. The range provides a measure of the spread of the data, but it is sensitive to outliers.
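The measures above can be computed directly with Python's standard `statistics` module. A minimal sketch, using a small made-up data set:

```python
import statistics

data = [4, 8, 6, 5, 3, 8, 9, 4, 8]

mean = statistics.mean(data)        # arithmetic average: sum / count
median = statistics.median(data)    # middle value of the sorted data
mode = statistics.mode(data)        # most frequent value
stdev = statistics.stdev(data)      # sample standard deviation
data_range = max(data) - min(data)  # spread from minimum to maximum

print(mean, median, mode, stdev, data_range)
```

For this data set the median is 6, the mode is 8 (it occurs three times), and the range is 6.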

02. Inferential Statistics

Inferential Statistics allow researchers to make predictions or inferences about a population based on a sample of data. These methods test hypotheses, estimate population parameters, and explore relationships between variables.

  • T-Test: A hypothesis test used to compare the means of two groups. It assesses whether the difference between the means is statistically significant. The t-test is commonly used with small sample sizes.
  • Chi-Square Test: A statistical test used to examine the association between categorical variables. It compares the observed frequencies of categories with the expected frequencies to determine if there is a significant relationship.
  • ANOVA (Analysis of Variance): A technique used to compare the means of three or more groups. ANOVA tests whether the differences among group means are statistically significant and is often used in experimental research. 
  • Confidence Intervals: A range of values derived from sample data that is likely to contain the true population parameter. For example, a 95% confidence interval means that if the sampling procedure were repeated many times, about 95% of the resulting intervals would contain the true parameter value. Confidence intervals provide a measure of the precision of an estimate.
  • Linear Regression: A type of regression analysis where the relationship between the dependent variable and one independent variable is modeled as a straight line. Linear regression is used to predict outcomes and understand the strength of the relationship between variables.
  • Multiple Regression: An extension of linear regression that involves two or more independent variables. It allows for a more comprehensive analysis of how various factors contribute to the outcome of the dependent variable.
  • Correlation: A measure of the strength and direction of the relationship between two variables. The correlation coefficient ranges from -1 to 1, where -1 indicates a perfect negative correlation, 0 means no correlation, and 1 indicates a perfect positive correlation. Correlation is used to identify and quantify relationships between variables.

Applications of Statistical Methods

Statistical methods are indispensable across various industries and fields. They enable data-driven decision-making, optimize processes, and provide insights that drive innovation and improvements. Below are key applications of statistical methods in different sectors:

1. Business

In business, statistical methods are critical for analyzing data to inform strategies, optimize operations, and predict future trends.

  • Marketing Analysis: Statistical methods help businesses understand customer behavior, segment markets, and measure the effectiveness of marketing campaigns. Techniques like regression analysis and hypothesis testing are used to identify which factors drive sales and how to allocate marketing budgets efficiently.
  • Sales Forecasting: Businesses use statistical models to predict future sales based on historical data. Time series analysis and regression models are commonly employed to forecast demand, helping companies manage inventory, plan production, and set sales targets.
  • Product Quality Improvement: Statistical methods such as control charts, Six Sigma, and design of experiments (DOE) are used to monitor and improve product quality. These techniques help identify defects, optimize manufacturing processes, and ensure that products meet customer expectations.

2. Healthcare

In healthcare, statistical methods are vital for research, diagnosis, and treatment planning, contributing to better patient outcomes and advancements in medical science.

  • Clinical Trials: Statistical analysis is essential in designing and evaluating clinical trials. It helps determine the efficacy and safety of new treatments or drugs. Techniques like randomization, hypothesis testing, and survival analysis are used to analyze trial data and draw reliable conclusions.
  • Disease Pattern Analysis: Epidemiologists use statistical methods to study the distribution and determinants of diseases in populations. Logistic regression and survival analysis help identify risk factors, track disease outbreaks, and develop public health interventions.
  • Treatment Effectiveness: Statistical methods are used to assess the effectiveness of medical treatments by comparing patient outcomes before and after treatment. Methods such as paired t-tests, ANOVA, and meta-analysis are commonly used in these evaluations.
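For instance, a paired t statistic for before/after treatment comparisons can be sketched with the standard library alone (the readings below are invented; a real analysis would also compute a p-value from the t distribution):

```python
import math
import statistics

# Invented systolic blood-pressure readings before and after a treatment
before = [150, 142, 138, 155, 147, 149]
after = [144, 139, 136, 148, 143, 145]

# Paired t statistic: mean of the per-patient differences
# divided by the standard error of those differences
diffs = [b - a for b, a in zip(before, after)]
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
```

Because each patient serves as their own control, the paired design removes between-patient variability from the comparison.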

3. Social Sciences

In the social sciences, statistical methods study human behavior, social trends, and relationships between variables. They provide empirical evidence that supports theories and informs policy decisions.

  • Survey Analysis: Surveys are a common data collection method in social sciences, and statistical analysis helps in interpreting the results. Techniques like factor analysis, regression, and correlation are used to analyze survey data, identify trends, and draw conclusions about populations.
  • Behavioral Studies: Researchers use statistical methods to explore underlying patterns in human behavior, such as consumer preferences, social interactions, and decision-making processes. Cluster analysis, ANOVA, and structural equation modeling (SEM) help uncover underlying factors and relationships in behavioral data. 

4. Engineering

In engineering, statistical methods improve the design, production, and reliability of products and processes, ensuring efficiency and quality in manufacturing and operations.

  • Quality Control: Statistical Process Control (SPC) techniques, such as control charts and process capability analysis, monitor production processes and maintain product quality. These methods help detect and correct variations before they lead to defects.
  • Reliability Testing: Engineers use statistical methods to evaluate product reliability and durability. Techniques like life data analysis, Weibull analysis, and failure mode and effects analysis (FMEA) help predict product lifespans and identify potential points of failure.
  • Process Optimization: Statistical methods, such as the design of experiments (DOE) and response surface methodology (RSM), are used to optimize manufacturing processes. These techniques help identify the best combination of factors to achieve desired outcomes, such as maximizing efficiency or minimizing costs.
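The control-chart idea mentioned above can be sketched in a few lines: a basic Shewhart individuals chart places control limits three standard deviations around the process mean (measurements below are invented):

```python
import statistics

# Illustrative part-diameter measurements from a production line (mm)
measurements = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]

center = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Conventional Shewhart 3-sigma control limits
ucl = center + 3 * sigma  # upper control limit
lcl = center - 3 * sigma  # lower control limit

# Points outside the limits would signal the process is out of control
out_of_control = [m for m in measurements if not lcl <= m <= ucl]
```

Industrial SPC implementations typically estimate sigma from moving ranges rather than the raw sample standard deviation, but the 3-sigma logic is the same.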

Best Practices for Using Statistical Methods

Using statistical methods effectively requires adherence to several best practices to ensure the results’ accuracy, reliability, and relevance. Here are some key best practices to consider:

  • Define Clear Objectives: Before selecting any statistical method, clearly define the objectives of your analysis. Understanding your goal will guide your choice of appropriate techniques and tools.
  • Understand Your Data: Conduct thorough exploratory data analysis (EDA) to understand your data’s distribution, patterns, and potential anomalies. This step helps you select the right statistical methods and avoid incorrect assumptions.
  • Choose the Right Method: Select statistical methods that align with your data type and research objectives. For example, use regression analysis to predict outcomes, ANOVA to compare group means, and chi-square tests to test categorical data.
  • Check Assumptions: Most statistical methods have underlying assumptions (e.g., normality, homoscedasticity, independence). Ensure your data meets these assumptions; if not, consider data transformation or alternative methods.
  • Avoid Overfitting: When building predictive models, avoid overly complex models that fit the noise in your data rather than the underlying trend. Cross-validation techniques can help assess model performance.
  • Ensure Data Quality: Your data quality directly impacts the quality of your results. Ensure data is clean, consistent, and error-free before applying statistical methods.
  • Interpret Results in Context: Statistical significance does not always imply practical significance. Interpret your results in the context of your research question and real-world implications.
  • Document Your Process: Keep detailed records of your data analysis process, including the methods used, assumptions made, and rationale behind your choices. This ensures the transparency and reproducibility of your work.
  • Validate Findings: Use multiple methods or datasets to validate your findings. Consistent results across different approaches enhance the credibility of your analysis.

By following these best practices, you can leverage statistical methods to produce meaningful, actionable insights.

Challenges and Limitations

When using statistical methods, several challenges and limitations can impact the quality and reliability of your analysis. Here are some key challenges:

1. Data Quality Issues:

One of the primary challenges in statistical analysis is ensuring data quality. Poor data quality, such as missing values, outliers, and inconsistencies, can lead to biased or inaccurate results. Data collected from various sources might have errors or not be representative of the population, which compromises the reliability of the analysis. Addressing these issues often requires substantial preprocessing, which can be time-consuming and complex.

2. Misinterpretation of Results:

Statistical methods can produce complex results that are sometimes counterintuitive. A common limitation is the misinterpretation of statistical significance as practical significance. For example, a statistically significant result may have little real-world impact.  

Also, misunderstanding the implications of p-values, confidence intervals, and correlation versus causation can lead to incorrect conclusions that misinform decision-making processes.

3. Selection of Appropriate Methods:

Choosing the correct statistical method is crucial, yet it can be challenging, especially for complex data sets or when multiple variables are involved. Inappropriate method selection can lead to invalid results or missed insights. 

This challenge is compounded by the vast array of available statistical techniques, each with its assumptions and applicability. The complexity increases when dealing with non-standard data types, such as time series or categorical data, where specialized methods are required.

These challenges highlight the need for a solid foundational understanding of statistical principles, careful data handling, and a thoughtful approach to method selection and result interpretation. Awareness of these limitations can help mitigate their impact and improve the robustness of statistical analyses.

QuestionPro Research in Statistical Methods

QuestionPro Research offers tools designed to enhance statistical analysis and data interpretation, providing valuable insights for decision-making. Here’s an overview of how QuestionPro integrates statistical methods to support robust research:

01. Advanced Statistical Tools

QuestionPro Research provides a range of advanced statistical tools to help users easily perform complex analyses. Features include descriptive statistics, cross-tabulations, and inferential tests such as t-tests, chi-square tests, and ANOVA. These tools allow researchers to explore data patterns, test hypotheses, and draw meaningful conclusions.

02. Customizable Analysis Options

The platform offers customizable analysis options, enabling users to tailor their statistical approach based on specific research needs. Users can select from various statistical methods and configure parameters to fit their unique data characteristics. This flexibility ensures that the analysis aligns with the research objectives and provides relevant insights.

03. Data Quality Assurance

QuestionPro emphasizes the importance of data quality in statistical analysis. The platform includes data cleaning and validation features, helping users identify and address missing values, outliers, and inconsistencies. By ensuring high-quality data, users can enhance the accuracy and reliability of their statistical results.

04. Visualizations and Reporting

QuestionPro provides robust visualization tools to facilitate the interpretation of statistical results. Users can generate charts, graphs, and dashboards visually representing data and statistical findings. These visualizations make it easier to understand complex results and communicate insights effectively to stakeholders.

05. User-Friendly Interface

Despite offering advanced statistical capabilities, QuestionPro maintains a user-friendly interface that simplifies the process of performing statistical analyses. Intuitive navigation and guided workflows help users efficiently conduct and interpret analyses, regardless of their statistical expertise.

06. Integration and Support

QuestionPro Research integrates with other data sources and analytical tools, enhancing the flexibility of statistical analysis. The platform also offers support and resources to help users apply statistical methods and interpret results accurately.

QuestionPro Research equips users with the tools and support to conduct thorough and accurate statistical analyses, facilitating informed decision-making based on reliable data insights.

Statistical methods are essential for converting raw data into actionable insights across diverse fields. Techniques like descriptive statistics summarize data characteristics, while inferential methods make predictions, test hypotheses, and draw conclusions about broader populations.

Applications span business, healthcare, social sciences, and engineering, helping optimize strategies, assess treatment effectiveness, analyze data behavior, and improve product quality.

Best practices for statistical analysis include defining objectives, understanding data, choosing appropriate methods, checking assumptions, avoiding overfitting, ensuring data quality, and interpreting results contextually. Despite their utility, data quality issues and method selection difficulties can arise.

QuestionPro Research enhances statistical analysis with advanced tools, customizable options, data quality assurance, and user-friendly interfaces, supporting accurate and effective data-driven decision-making.


TRAILS Faculty Launch New Study on Perception Bias and AI Systems


Perception bias is a cognitive bias that occurs when we subconsciously draw conclusions based on what we expect to see or experience. It has been studied extensively, particularly as it relates to health information, the workplace environment, and even social gatherings.

But what is the relationship between human perception bias and information that is generated by artificial intelligence (AI) algorithms?

Researchers from the Institute for Trustworthy AI in Law & Society (TRAILS) are exploring this topic, conducting a series of studies to determine the level of bias that users expect from AI systems, and how AI providers explain to users that their systems may include biased data.

The project, led by Adam Aviv, an associate professor of computer science at George Washington University, and Michelle Mazurek, an associate professor of computer science at the University of Maryland, is supported by a $150K seed grant from TRAILS.

It is one of eight projects that received funding in January when TRAILS unveiled its inaugural round of seed grants.

Mazurek and Aviv have a long track record of successful collaborations on security-related topics. Mazurek, who is the director of the Maryland Cybersecurity Center at UMD, says they’re both interested in how people make decisions related to their online security, privacy and safety.

She believes that decision-making based on AI-generated content—particularly how much trust is placed in that content—is a natural extension of the duo’s previous work.


Online Guide to Writing and Research


Planning and Writing a Research Paper

Draw Conclusions

As a writer, you are presenting your viewpoint, opinions, evidence, etc. for others to review, so you must take on this task with maturity, courage and thoughtfulness.  Remember, you are adding to the discourse community with every research paper that you write.  This is a privilege and an opportunity to share your point of view with the world at large in an academic setting.

Because research generates further research, the conclusions you draw from your research are important. As a researcher, you depend on the integrity of the research that precedes your own efforts, and researchers depend on each other to draw valid conclusions. 


To test the validity of your conclusions, you will have to review both the content of your paper and the way in which you arrived at the content. You may ask yourself questions, such as the ones presented below, to detect any weak areas in your paper, so you can then make those areas stronger.  Notice that some of the questions relate to your process, others to your sources, and others to how you arrived at your conclusions.

Checklist for Evaluating Your Conclusions

  • Does the evidence in my paper evolve from a stated thesis or topic statement?
  • Do all of my resources for evidence agree with each other? Are there conflicts, and have I identified them as conflicts?
  • Have I offered enough evidence for every conclusion I have drawn? Are my conclusions based on empirical studies, expert testimony, or data, or all of these?
  • Are all of my sources credible? Is anyone in my audience likely to challenge them?
  • Have I presented circular reasoning or illogical conclusions?
  • Am I confident that I have covered most of the major sources of information on my topic? If not, have I stated this as a limitation of my research?
  • Have I discovered further areas for research and identified them in my paper?
  • Have others to whom I have shown my paper perceived the validity of my conclusions?
  • Are my conclusions strong? If not, what causes them to be weak?

Key Takeaways

  • Because research generates further research, the conclusions you draw from your research are important.
  • To test the validity of your conclusions, you will have to review both the content of your paper and the way in which you arrived at the content.

Mailing Address: 3501 University Blvd. East, Adelphi, MD 20783 This work is licensed under a  Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License . © 2022 UMGC. All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity of information located at external sites.




Drawing Conclusions

For any research project and any scientific discipline, drawing conclusions is the final, and most important, part of the process.

Whichever reasoning processes and research methods were used, the final conclusion is critical, determining success or failure. If an otherwise excellent experiment is summarized by a weak conclusion, the results will not be taken seriously.

Success or failure is not a measure of whether a hypothesis is accepted or refuted, because both results still advance scientific knowledge.

Failure lies in poor experimental design, or flaws in the reasoning processes, which invalidate the results. As long as the research process is robust and well designed, then the findings are sound, and the process of drawing conclusions begins.

The key is to establish what the results mean. How are they applied to the world?

What Has Been Learned?

Generally, a researcher will summarize what they believe has been learned from the research, and will try to assess the strength of the hypothesis.

Even if the null hypothesis is accepted, a strong conclusion will analyze why the results were not as predicted. 

Theoretical physicist Wolfgang Pauli was known to have criticized another physicist’s work by saying, “it’s not only not right; it is not even wrong.”

While this is certainly a humorous put-down, it also points to the value of the null hypothesis in science, i.e. the value of being “wrong.” Both accepting or rejecting the null hypothesis provides useful information – it is only when the research provides no illumination on the phenomenon at all that it is truly a failure.

In observational research, with no hypothesis, the researcher will analyze the findings, and establish if any valuable new information has been uncovered. The conclusions from this type of research may well inspire the development of a new hypothesis for further experiments.

Generating Leads for Future Research

However, very few experiments give clear-cut results, and most research uncovers more questions than answers.

The researcher can use these to suggest interesting directions for further study. If, for example, the null hypothesis was accepted, there may still have been trends apparent within the results. These could form the basis of further study, or experimental refinement and redesign.

Question: Let’s say a researcher is interested in whether people who are ambidextrous (can write with either hand) are more likely to have ADHD. She may have three groups – left-handed, right-handed and ambidextrous, and ask each of them to complete an ADHD screening.

She hypothesizes that the ambidextrous people will in fact be more prone to symptoms of ADHD. While she doesn’t find a significant difference when she compares the mean scores of the groups, she does notice another trend: the ambidextrous people seem to score lower overall on tests of verbal acuity. She accepts the null hypothesis, but wishes to continue with her research. Can you think of a direction her research could take, given what she has already learnt?

Answer: She may decide to look more closely at that trend. She may design another experiment to isolate the variable of verbal acuity, by controlling for everything else. This may eventually help her arrive at a new hypothesis: ambidextrous people have lower verbal acuity.
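Comparing the mean scores of three groups, as in the example above, is in effect a one-way analysis of variance. The sketch below (illustrative only, with made-up scores; a real analysis would use a statistics package and report an exact p-value) shows how the F statistic behind that comparison is computed:

```python
from statistics import mean

def f_statistic(*groups):
    """One-way ANOVA F statistic: the ratio of between-group to
    within-group variance. A larger F means the group means differ
    more than chance variation alone would suggest."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = mean(x for g in groups for x in g)   # grand mean of all scores
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ADHD screening scores (invented numbers for illustration)
left_handed = [12, 14, 11, 13, 15, 12]
right_handed = [13, 12, 14, 11, 13, 14]
ambidextrous = [14, 13, 12, 15, 13, 12]

print(f_statistic(left_handed, right_handed, ambidextrous))
```

If the resulting F falls below the critical value for the chosen significance level, the researcher fails to reject the null hypothesis, exactly the situation described above, and turns to secondary trends in the data for future research directions.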

Evaluating Flaws in the Research Process

The researcher will then evaluate any apparent problems with the experiment. This involves critically evaluating any weaknesses and errors in the design, which may have influenced the results.

Even strict 'true experimental' designs have to make compromises, and the researcher must be thorough in pointing these out, justifying the methodology and reasoning.

For example, when drawing conclusions, the researcher may think that another causal effect influenced the results, and that this variable was not eliminated during the experimental process. A refined version of the experiment may help to achieve better results, if the new effect is included in the design process.

In the global warming example, the researcher might establish that carbon dioxide emission alone cannot be responsible for global warming. They may decide that another effect is contributing, so propose that methane may also be a factor in global warming. A new study would incorporate methane into the model.

What are the Benefits of the Research?

The next stage is to evaluate the advantages and benefits of the research.

In medicine and psychology, for example, the results may throw out a new way of treating a medical problem, so the advantages are obvious.

In some fields, certain kinds of research may not typically be seen as beneficial, regardless of the results obtained. Ideally, researchers will consider the implications of their research beforehand, as well as any ethical considerations. In fields such as psychology, social sciences or sociology, it’s important to think about who the research serves and what will ultimately be done with the results.

For example, the study regarding ambidexterity and verbal acuity may be interesting, but what would be the effect of accepting that hypothesis? Would it really benefit anyone to know that the ambidextrous are less likely to have a high verbal acuity?

However, all well-constructed research is useful, even if it only strengthens or supports a more tentative conclusion made by prior research.

Suggestions Based Upon the Conclusions

The final stage is the researcher's recommendations based on the results, depending on the field of study. This area of the research process is informed by the researcher's judgement, and will integrate previous studies.

For example, a researcher interested in schizophrenia may recommend a more effective treatment based on what has been learnt from a study. A physicist might propose that our picture of the structure of the atom should be changed. A researcher could make suggestions for refinement of the experimental design, or highlight interesting areas for further study. This final piece of the paper is the most critical, and pulls together all of the findings into a coherent argument.

Drawing conclusions is often the part of a research paper that provokes the most intense and heated debate amongst scientists.

Sharing and presenting findings to the scientific community is a vital part of the scientific process. It is here that the researcher justifies the research, synthesizes the results and offers them up for scrutiny by their peers.

As the store of scientific knowledge increases and deepens, it is incumbent on researchers to work together. Long ago, a single scientist could discover and publish work that alone could have a profound impact on the course of history. Today, however, such impact can only be achieved in concert with fellow scientists.

Summary - The Strength of the Results

The key to drawing a valid conclusion is to ensure that the deductive and inductive processes are correctly used, and that all steps of the scientific method were followed.

Even the best-planned research can go awry, however. Part of interpreting results also includes the researchers putting aside their egos to appraise what, if anything, went wrong. Has anything occurred to warrant a more cautious interpretation of the results?

If your research had a robust design, questioning and scrutiny will be devoted to the conclusions of the experiment rather than to the methods.

Question: Researchers are interested in identifying new microbial species that are capable of breaking down cellulose for possible application in biofuel production. They collect soil samples from a particular forest and create laboratory cultures of every microbial species they discover there. They then “feed” each species a cellulose compound and observe that in all the species tested, there was no decrease in cellulose after 24 hours.

Read the following conclusions below and decide which of them is the most sound:

They conclude that there are no microbes that can break down cellulose.

They conclude that the sampled microbes are not capable of breaking down cellulose in a lab environment within 24 hours.

They conclude that all the species are related somehow.

They conclude that these microbes are not useful in the biofuel industry.

They conclude that microbes from forests don’t break down cellulose.

Answer: The most appropriate conclusion is number 2. As you can see, sound conclusions are often a question of not extrapolating too widely, or making assumptions that are not supported by the data obtained. Even conclusion number 2 will likely be presented as tentative, and only provides evidence given the limits of the methods used.

Martyn Shuttleworth , Lyndsay T Wilson (Jul 22, 2008). Drawing Conclusions. Retrieved Sep 03, 2024 from Explorable.com: https://explorable.com/drawing-conclusions

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.

That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting in publications like books, blogs, newsletters, course-material, papers, wikipedia and presentations (with clear attribution).



Overview of the Scientific Method

13 Drawing Conclusions and Reporting the Results

Learning Objectives

  • Identify the conclusions researchers can make based on the outcome of their studies.
  • Describe why scientists avoid the term “scientific proof.”
  • Explain the different ways that scientists share their findings.

Drawing Conclusions

Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather theories are supported, refuted, or modified based on the results of research.

If the results are statistically significant and consistent with the hypothesis and the theory that was used to generate the hypothesis, then researchers can conclude that the theory is supported. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this avoidance is that the result may reflect a type I error.  Another reason for this  avoidance  is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A third reason is that it is always possible that another test of the hypothesis or a test of a new hypothesis derived from the theory will be disconfirmed. This  difficulty  is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if  A  then  B ” and “not  B ” necessarily lead to the conclusion “not  A .” If  A  is the theory and  B  is the hypothesis (“if  A  then  B ”), then disconfirming the hypothesis (“not  B ”) must mean that the theory is incorrect (“not  A ”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a missed opportunity (the result of a type II error) or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable.

A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems. That is, the evidence from a study can be used to modify a theory.  This practice does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually must abandon their theories and replace them with ones that are more successful.

The bottom line here is that because statistics are probabilistic in nature, and because all research studies have flaws, there is no such thing as scientific proof; there is only scientific evidence.
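The probabilistic nature of statistical findings can be illustrated with a small simulation (a sketch with invented parameters, not drawn from the chapter): even when the null hypothesis is true, a 5% significance threshold will by design flag roughly 1 in 20 experiments as "significant", which is precisely a type I error.

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def fair_coin_experiment(n_flips=100):
    """Simulate one experiment on a truly fair coin (the null is true)."""
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # Crude two-sided test: declare the result "significant" if the
    # head count falls outside 41..59 (roughly p < 0.06 for n = 100).
    return heads <= 40 or heads >= 60

trials = 2000
false_positives = sum(fair_coin_experiment() for _ in range(trials))
print(f"False-positive rate: {false_positives / trials:.3f}")
```

The observed rate hovers near 0.05: none of these "discoveries" reflect a real effect, which is why a single significant result supports a theory but can never prove it.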

Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a book chapter that is published in an edited book. Preferably, the editor of the book puts the chapter through peer review, but this is not always the case; some scientists are simply invited by editors to write book chapters.

A fun way to disseminate findings is to give a presentation at a conference. This can either be done as an oral presentation or a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Alternatively, poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by their poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting to undergo the more rigorous peer-review process involved in publishing a journal article.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Cochrane Training

Chapter 15: Interpreting results and drawing conclusions

Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie A Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Key Points:

  • This chapter provides guidance on interpreting the results of synthesis in order to communicate the conclusions of the review effectively.
  • Methods are presented for computing, presenting and interpreting relative and absolute effects for dichotomous outcome data, including the number needed to treat (NNT).
  • For continuous outcome measures, review authors can present summary results for studies using natural units of measurement or as minimal important differences when all studies use the same scale. When studies measure the same construct but with different scales, review authors will need to find a way to interpret the standardized mean difference, or to use an alternative effect measure for the meta-analysis such as the ratio of means.
  • Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values, but report the confidence interval together with the exact P value.
  • Review authors should not make recommendations about healthcare decisions, but they can – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences and other factors that determine a decision such as cost.
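The dichotomous-outcome measures mentioned in the key points can be sketched concretely (the counts below are invented for illustration, not drawn from the Handbook):

```python
import math

def dichotomous_effects(events_int, n_int, events_comp, n_comp):
    """Relative and absolute effect measures for a dichotomous outcome."""
    risk_int = events_int / n_int        # risk with the intervention
    risk_comp = events_comp / n_comp     # risk with the comparator
    rr = risk_int / risk_comp            # risk ratio (relative effect)
    arr = risk_comp - risk_int           # absolute risk reduction
    nnt = math.inf if arr == 0 else 1 / arr  # number needed to treat
    return rr, arr, nnt

# e.g. 10/100 events with the intervention vs 20/100 with the comparator
rr, arr, nnt = dichotomous_effects(10, 100, 20, 100)
print(rr, arr, nnt)  # risk ratio 0.5, ARR 0.1, NNT 10
```

Here the intervention halves the risk (relative effect), and 10 people would need to be treated to prevent one additional event (absolute effect), two complementary framings that the chapter recommends presenting together.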

Cite this chapter as: Schünemann HJ, Vist GE, Higgins JPT, Santesso N, Deeks JJ, Glasziou P, Akl EA, Guyatt GH. Chapter 15: Interpreting results and drawing conclusions. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

15.1 Introduction

The purpose of Cochrane Reviews is to facilitate healthcare decisions by patients and the general public, clinicians, guideline developers, administrators and policy makers. They also inform future research. A clear statement of findings, a considered discussion and a clear presentation of the authors’ conclusions are, therefore, important parts of the review. In particular, the following issues can help people make better informed decisions and increase the usability of Cochrane Reviews:

  • information on all important outcomes, including adverse outcomes;
  • the certainty of the evidence for each of these outcomes, as it applies to specific populations and specific interventions; and
  • clarification of the manner in which particular values and preferences may bear on the desirable and undesirable consequences of the intervention.

A ‘Summary of findings’ table, described in Chapter 14 , Section 14.1 , provides key pieces of information about health benefits and harms in a quick and accessible format. It is highly desirable that review authors include a ‘Summary of findings’ table in Cochrane Reviews alongside a sufficient description of the studies and meta-analyses to support its contents. This description includes the rating of the certainty of evidence, also called the quality of the evidence or confidence in the estimates of the effects, which is expected in all Cochrane Reviews.

‘Summary of findings’ tables are usually supported by full evidence profiles which include the detailed ratings of the evidence (Guyatt et al 2011a, Guyatt et al 2013a, Guyatt et al 2013b, Santesso et al 2016). The Discussion section of the text of the review provides space to reflect and consider the implications of these aspects of the review’s findings. Cochrane Reviews include five standard subheadings to ensure the Discussion section places the review in an appropriate context: ‘Summary of main results (benefits and harms)’; ‘Potential biases in the review process’; ‘Overall completeness and applicability of evidence’; ‘Certainty of the evidence’; and ‘Agreements and disagreements with other studies or reviews’. Following the Discussion, the Authors’ conclusions section is divided into two standard subsections: ‘Implications for practice’ and ‘Implications for research’. The assessment of the certainty of evidence facilitates a structured description of the implications for practice and research.

Because Cochrane Reviews have an international audience, the Discussion and Authors’ conclusions should, so far as possible, assume a broad international perspective and provide guidance for how the results could be applied in different settings, rather than being restricted to specific national or local circumstances. Cultural differences and economic differences may both play an important role in determining the best course of action based on the results of a Cochrane Review. Furthermore, individuals within societies have widely varying values and preferences regarding health states, and use of societal resources to achieve particular health states. For all these reasons, and because information that goes beyond that included in a Cochrane Review is required to make fully informed decisions, different people will often make different decisions based on the same evidence presented in a review.

Thus, review authors should avoid specific recommendations that inevitably depend on assumptions about available resources, values and preferences, and other factors such as equity considerations, feasibility and acceptability of an intervention. The purpose of the review should be to present information and aid interpretation rather than to offer recommendations. The discussion and conclusions should help people understand the implications of the evidence in relation to practical decisions and apply the results to their specific situation. Review authors can aid this understanding of the implications by laying out different scenarios that describe certain value structures.

In this chapter, we address first one of the key aspects of interpreting findings that is also fundamental in completing a ‘Summary of findings’ table: the certainty of evidence related to each of the outcomes. We then provide a more detailed consideration of issues around applicability and around interpretation of numerical results, and provide suggestions for presenting authors’ conclusions.

15.2 Issues of indirectness and applicability

15.2.1 The role of the review author

“A leap of faith is always required when applying any study findings to the population at large” or to a specific person. “In making that jump, one must always strike a balance between making justifiable broad generalizations and being too conservative in one’s conclusions” (Friedman et al 1985). In addition to issues about risk of bias and other domains determining the certainty of evidence, this leap of faith is related to how well the identified body of evidence matches the posed PICO ( Population, Intervention, Comparator(s) and Outcome ) question. As to the population, no individual can be entirely matched to the population included in research studies. At the time of decision, there will always be differences between the study population and the person or population to whom the evidence is applied; sometimes these differences are slight, sometimes large.

The terms applicability, generalizability, external validity and transferability are related, sometimes used interchangeably and have in common that they lack a clear and consistent definition in the classic epidemiological literature (Schünemann et al 2013). However, all of the terms describe one overarching theme: whether or not available research evidence can be directly used to answer the health and healthcare question at hand, ideally supported by a judgement about the degree of confidence in this use (Schünemann et al 2013). GRADE’s certainty domains include a judgement about ‘indirectness’ to describe all of these aspects including the concept of direct versus indirect comparisons of different interventions (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011b).

To address adequately the extent to which a review is relevant for the purpose to which it is being put, there are certain things the review author must do, and certain things the user of the review must do to assess the degree of indirectness. Cochrane and the GRADE Working Group suggest using a very structured framework to address indirectness. We discuss here and in Chapter 14 what the review author can do to help the user. Cochrane Review authors must be extremely clear on the population, intervention and outcomes that they intend to address. Chapter 14, Section 14.1.2 , also emphasizes a crucial step: the specification of all patient-important outcomes relevant to the intervention strategies under comparison.

In considering whether the effect of an intervention applies equally to all participants, and whether different variations on the intervention have similar effects, review authors need to make a priori hypotheses about possible effect modifiers, and then examine those hypotheses (see Chapter 10, Section 10.10 and Section 10.11 ). If they find apparent subgroup effects, they must ultimately decide whether or not these effects are credible (Sun et al 2012). Differences between subgroups, particularly those that correspond to differences between studies, should be interpreted cautiously. Some chance variation between subgroups is inevitable so, unless there is good reason to believe that there is an interaction, review authors should not assume that the subgroup effect exists. If, despite due caution, review authors judge subgroup effects in terms of relative effect estimates as credible (i.e. the effects differ credibly), they should conduct separate meta-analyses for the relevant subgroups, and produce separate ‘Summary of findings’ tables for those subgroups.

The user of the review will be challenged with ‘individualization’ of the findings, whether they seek to apply the findings to an individual patient or a policy decision in a specific context. For example, even if relative effects are similar across subgroups, absolute effects will differ according to baseline risk. Review authors can help provide this information by identifying identifiable groups of people with varying baseline risks in the ‘Summary of findings’ tables, as discussed in Chapter 14, Section 14.1.3 . Users can then identify their specific case or population as belonging to a particular risk group, if relevant, and assess their likely magnitude of benefit or harm accordingly. A description of the identifying prognostic or baseline risk factors in a brief scenario (e.g. age or gender) will help users of a review further.
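The point about baseline risk can be made concrete with a short sketch (hypothetical numbers): applying the same relative effect to two different baseline risks yields very different absolute effects.

```python
def absolute_risk_reduction(baseline_risk, risk_ratio):
    """Absolute effect implied by a relative effect at a given baseline risk."""
    return baseline_risk - baseline_risk * risk_ratio

# The same risk ratio (0.8) applied to a high-risk and a low-risk group
high = absolute_risk_reduction(0.30, 0.8)  # 6 fewer events per 100 people
low = absolute_risk_reduction(0.05, 0.8)   # 1 fewer event per 100 people
print(high, low)
```

This is why 'Summary of findings' tables present absolute effects for identifiable baseline-risk groups: the same intervention may be clearly worthwhile for a high-risk patient and marginal for a low-risk one.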

Another decision users must make is whether their individual case or population of interest is so different from those included in the studies that they cannot use the results of the systematic review and meta-analysis at all. Rather than rigidly applying the inclusion and exclusion criteria of studies, it is better to ask whether or not there are compelling reasons why the evidence should not be applied to a particular patient. Review authors can sometimes help decision makers by identifying important variation where divergence might limit the applicability of results (Rothwell 2005, Schünemann et al 2006, Guyatt et al 2011b, Schünemann et al 2013), including biologic and cultural variation, and variation in adherence to an intervention.

In addressing these issues, review authors cannot be aware of, or address, the myriad of differences in circumstances around the world. They can, however, address differences of known importance to many people and, importantly, they should avoid assuming that other people’s circumstances are the same as their own in discussing the results and drawing conclusions.

15.2.2 Biological variation

Issues of biological variation that may affect the applicability of a result to a reader or population include divergence in pathophysiology (e.g. biological differences between women and men that may affect responsiveness to an intervention) and divergence in a causative agent (e.g. for infectious diseases such as malaria, which may be caused by several different parasites). The discussion of the results in the review should make clear whether the included studies addressed all or only some of these groups, and whether any important subgroup effects were found.

15.2.3 Variation in context

Some interventions, particularly non-pharmacological interventions, may work in some contexts but not in others; the situation has been described as program by context interaction (Hawe et al 2004). Contextual factors might pertain to the host organization in which an intervention is offered, such as the expertise, experience and morale of the staff expected to carry out the intervention, the competing priorities for the clinician’s or staff’s attention, the local resources such as service and facilities made available to the program and the status or importance given to the program by the host organization. Broader context issues might include aspects of the system within which the host organization operates, such as the fee or payment structure for healthcare providers and the local insurance system. Some interventions, in particular complex interventions (see Chapter 17 ), can be only partially implemented in some contexts, and this requires judgements about indirectness of the intervention and its components for readers in that context (Schünemann 2013).

Contextual factors may also pertain to the characteristics of the target group or population, such as cultural and linguistic diversity, socio-economic position, rural/urban setting. These factors may mean that a particular style of care or relationship evolves between service providers and consumers that may or may not match the values and technology of the program.

For many years these aspects have been acknowledged when decision makers have argued that results of evidence reviews from other countries do not apply in their own country or setting. Whilst some programmes/interventions have been successfully transferred from one context to another, others have not (Resnicow et al 1993, Lumley et al 2004, Coleman et al 2015). Review authors should be cautious when making generalizations from one context to another. They should report on the presence (or otherwise) of context-related information in intervention studies, where this information is available.

15.2.4 Variation in adherence

Variation in the adherence of the recipients and providers of care can limit the certainty in the applicability of results. Predictable differences in adherence can be due to divergence in how recipients of care perceive the intervention (e.g. the importance of side effects), economic conditions or attitudes that make some forms of care inaccessible in some settings, such as in low-income countries (Dans et al 2007). It should not be assumed that high levels of adherence in closely monitored randomized trials will translate into similar levels of adherence in normal practice.

15.2.5 Variation in values and preferences

Decisions about healthcare management strategies and options involve trading off health benefits and harms. The right choice may differ for people with different values and preferences (i.e. the importance people place on the outcomes and interventions), and it is important that decision makers ensure that decisions are consistent with a patient or population’s values and preferences. The importance placed on outcomes, together with other factors, will influence whether the recipients of care will or will not accept an option that is offered (Alonso-Coello et al 2016) and, thus, can be one factor influencing adherence. In Section 15.6 , we describe how the review author can help this process and the limits of supporting decision making based on intervention reviews.

15.3 Interpreting results of statistical analyses

15.3.1 Confidence intervals

Results for both individual studies and meta-analyses are reported with a point estimate together with an associated confidence interval. For example, ‘The odds ratio was 0.75 with a 95% confidence interval of 0.70 to 0.80’. The point estimate (0.75) is the best estimate of the magnitude and direction of the experimental intervention’s effect compared with the comparator intervention. The confidence interval describes the uncertainty inherent in any estimate, and describes a range of values within which we can be reasonably sure that the true effect actually lies. If the confidence interval is relatively narrow (e.g. 0.70 to 0.80), the effect size is known precisely. If the interval is wider (e.g. 0.60 to 0.93) the uncertainty is greater, although there may still be enough precision to make decisions about the utility of the intervention. Intervals that are very wide (e.g. 0.50 to 1.10) indicate that we have little knowledge about the effect and this imprecision affects our certainty in the evidence, and that further information would be needed before we could draw a more certain conclusion.

A 95% confidence interval is often interpreted as indicating a range within which we can be 95% certain that the true effect lies. This statement is a loose interpretation, but is useful as a rough guide. The strictly correct interpretation of a confidence interval is based on the hypothetical notion of considering the results that would be obtained if the study were repeated many times. If a study were repeated infinitely often, and on each occasion a 95% confidence interval calculated, then 95% of these intervals would contain the true effect (see Section 15.3.3 for further explanation).
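The repeated-sampling interpretation can be illustrated with a small simulation (a sketch only; the normal outcome model, sample size, number of repetitions and seed are illustrative choices, not from the text):

```python
import math
import random

def ci_coverage(true_mean=0.0, sd=1.0, n=50, reps=2000, seed=1):
    """Draw many samples; count how often the 95% confidence interval
    for the mean (known-SD case, z = 1.96) contains the true value."""
    rng = random.Random(seed)
    se = sd / math.sqrt(n)  # standard error of the sample mean
    hits = 0
    for _ in range(reps):
        mean = sum(rng.gauss(true_mean, sd) for _ in range(n)) / n
        if mean - 1.96 * se <= true_mean <= mean + 1.96 * se:
            hits += 1
    return hits / reps

print(ci_coverage())  # close to 0.95
```

Across many repetitions, roughly 95% of the computed intervals contain the true effect, which is the strictly correct reading of "95% confidence".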

The width of the confidence interval for an individual study depends to a large extent on the sample size. Larger studies tend to give more precise estimates of effects (and hence have narrower confidence intervals) than smaller studies. For continuous outcomes, precision depends also on the variability in the outcome measurements (i.e. how widely individual results vary between people in the study, measured as the standard deviation); for dichotomous outcomes it depends on the risk of the event (more frequent events allow more precision, and narrower confidence intervals), and for time-to-event outcomes it also depends on the number of events observed. All these quantities are used in computation of the standard errors of effect estimates from which the confidence interval is derived.

The width of a confidence interval for a meta-analysis depends on the precision of the individual study estimates and on the number of studies combined. In addition, for random-effects models, precision will decrease with increasing heterogeneity and confidence intervals will widen correspondingly (see Chapter 10, Section 10.10.4 ). As more studies are added to a meta-analysis the width of the confidence interval usually decreases. However, if the additional studies increase the heterogeneity in the meta-analysis and a random-effects model is used, it is possible that the confidence interval width will increase.

Confidence intervals and point estimates have different interpretations in fixed-effect and random-effects models. While the fixed-effect estimate and its confidence interval address the question ‘what is the best (single) estimate of the effect?’, the random-effects estimate assumes there to be a distribution of effects, and the estimate and its confidence interval address the question ‘what is the best estimate of the average effect?’ A confidence interval may be reported for any level of confidence (although they are most commonly reported for 95%, and sometimes 90% or 99%). For example, the odds ratio of 0.80 could be reported with an 80% confidence interval of 0.73 to 0.88; a 90% interval of 0.72 to 0.89; and a 95% interval of 0.70 to 0.92. As the confidence level increases, the confidence interval widens.

There is logical correspondence between the confidence interval and the P value (see Section 15.3.3 ). The 95% confidence interval for an effect will exclude the null value (such as an odds ratio of 1.0 or a risk difference of 0) if and only if the test of significance yields a P value of less than 0.05. If the P value is exactly 0.05, then either the upper or lower limit of the 95% confidence interval will be at the null value. Similarly, the 99% confidence interval will exclude the null if and only if the test of significance yields a P value of less than 0.01.
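This correspondence can be checked numerically for a ratio measure analysed on the log scale (a sketch; the odds ratios and standard errors below are illustrative values, not results from the text):

```python
import math

Z95 = 1.959964  # standard normal quantile for a 95% confidence interval

def ci_and_p(log_or, se):
    """95% CI for an odds ratio and the two-sided P value from a
    Z-test of the null hypothesis OR = 1 (log OR = 0)."""
    ci = (math.exp(log_or - Z95 * se), math.exp(log_or + Z95 * se))
    p = math.erfc(abs(log_or / se) / math.sqrt(2))  # two-sided P value
    return ci, p

# The 95% CI excludes the null value 1.0 exactly when P < 0.05
for log_or, se in [(math.log(0.80), 0.05), (math.log(0.90), 0.08)]:
    (lo, hi), p = ci_and_p(log_or, se)
    print(round(lo, 2), round(hi, 2), round(p, 3), (hi < 1 or lo > 1) == (p < 0.05))
```

The first example gives an interval entirely below 1 with P well under 0.05; the second gives an interval spanning 1 with P above 0.05, so the final flag is true in both cases.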

Together, the point estimate and confidence interval provide information to assess the effects of the intervention on the outcome. For example, suppose that we are evaluating an intervention that reduces the risk of an event and we decide that it would be useful only if it reduced the risk of an event from 30% by at least 5 percentage points to 25% (these values will depend on the specific clinical scenario and outcomes, including the anticipated harms). If the meta-analysis yielded an effect estimate of a reduction of 10 percentage points with a tight 95% confidence interval, say, from 7% to 13%, we would be able to conclude that the intervention was useful since both the point estimate and the entire range of the interval exceed our criterion of a reduction of 5% for net health benefit. However, if the meta-analysis reported the same risk reduction of 10% but with a wider interval, say, from 2% to 18%, although we would still conclude that our best estimate of the intervention effect is that it provides net benefit, we could not be so confident as we still entertain the possibility that the effect could be between 2% and 5%. If the confidence interval was wider still, and included the null value of a difference of 0%, we would still consider the possibility that the intervention has no effect on the outcome whatsoever, and would need to be even more sceptical in our conclusions.

Review authors may use the same general approach to conclude that an intervention is not useful. Continuing with the above example where the criterion for an important difference that should be achieved to provide more benefit than harm is a 5% risk difference, an effect estimate of 2% with a 95% confidence interval of 1% to 4% suggests that the intervention does not provide net health benefit.

15.3.2 P values and statistical significance

A P value is the standard result of a statistical test, and is the probability of obtaining the observed effect (or larger) under a ‘null hypothesis’. In the context of Cochrane Reviews there are two commonly used statistical tests. The first is a test of overall effect (a Z-test), and its null hypothesis is that there is no overall effect of the experimental intervention compared with the comparator on the outcome of interest. The second is the Chi² test for heterogeneity, and its null hypothesis is that there are no differences in the intervention effects across studies.

A P value that is very small indicates that the observed effect is very unlikely to have arisen purely by chance, and therefore provides evidence against the null hypothesis. It has been common practice to interpret a P value by examining whether it is smaller than particular threshold values. In particular, P values less than 0.05 are often reported as ‘statistically significant’, and interpreted as being small enough to justify rejection of the null hypothesis. However, the 0.05 threshold is an arbitrary one that became commonly used in medical and psychological research largely because P values were determined by comparing the test statistic against tabulations of specific percentage points of statistical distributions. If review authors decide to present a P value with the results of a meta-analysis, they should report a precise P value (as calculated by most statistical software), together with the 95% confidence interval. Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values , but report the confidence interval together with the exact P value (see MECIR Box 15.3.a ).

We discuss interpretation of the test for heterogeneity in Chapter 10, Section 10.10.2 ; the remainder of this section refers mainly to tests for an overall effect. For tests of an overall effect, the computation of P involves both the effect estimate and precision of the effect estimate (driven largely by sample size). As precision increases, the range of plausible effects that could occur by chance is reduced. Correspondingly, the statistical significance of an effect of a particular magnitude will usually be greater (the P value will be smaller) in a larger study than in a smaller study.

P values are commonly misinterpreted in two ways. First, a moderate or large P value (e.g. greater than 0.05) may be misinterpreted as evidence that the intervention has no effect on the outcome. There is an important difference between this statement and the correct interpretation that there is a high probability that the observed effect on the outcome is due to chance alone. To avoid such a misinterpretation, review authors should always examine the effect estimate and its 95% confidence interval.

The second misinterpretation is to assume that a result with a small P value for the summary effect estimate implies that an experimental intervention has an important benefit. Such a misinterpretation is more likely to occur in large studies and meta-analyses that accumulate data over dozens of studies and thousands of participants. The P value addresses the question of whether the experimental intervention effect is precisely nil; it does not examine whether the effect is of a magnitude of importance to potential recipients of the intervention. In a large study, a small P value may represent the detection of a trivial effect that may not lead to net health benefit when compared with the potential harms (i.e. harmful effects on other important outcomes). Again, inspection of the point estimate and confidence interval helps correct interpretations (see Section 15.3.1 ).

MECIR Box 15.3.a Relevant expectations for conduct of intervention reviews

Interpreting results

Authors commonly mistake a lack of evidence of effect as evidence of a lack of effect.

15.3.3 Relation between confidence intervals, statistical significance and certainty of evidence

The confidence interval (and imprecision) is only one domain that influences overall uncertainty about effect estimates. Uncertainty resulting from imprecision (i.e. statistical uncertainty) may be no less important than uncertainty from indirectness, or any other GRADE domain, in the context of decision making (Schünemann 2016). Thus, the extent to which interpretations of the confidence interval described in Sections 15.3.1 and 15.3.2 correspond to conclusions about overall certainty of the evidence for the outcome of interest depends on these other domains. If there are no concerns about other domains that determine the certainty of the evidence (i.e. risk of bias, inconsistency, indirectness or publication bias), then the interpretation in Sections 15.3.1 and 15.3.2 about the relation of the confidence interval to the true effect may be carried forward to the overall certainty. However, if there are concerns about the other domains that affect the certainty of the evidence, the interpretation about the true effect needs to be seen in the context of further uncertainty resulting from those concerns.

For example, nine randomized controlled trials in almost 6000 cancer patients indicated that the administration of heparin reduces the risk of venous thromboembolism (VTE), with a relative risk reduction of 43% (95% CI 19% to 60%) (Akl et al 2011a). For patients with a plausible baseline risk of approximately 4.6% per year, this relative effect suggests that heparin leads to an absolute risk reduction of 20 fewer VTEs (95% CI 9 fewer to 27 fewer) per 1000 people per year (Akl et al 2011a). Now consider that the review authors or those applying the evidence in a guideline have lowered the certainty in the evidence as a result of indirectness. While the confidence intervals would remain unchanged, the certainty in that confidence interval and in the point estimate as reflecting the truth for the question of interest will be lowered. In fact, the certainty range will have unknown width so there will be unknown likelihood of a result within that range because of this indirectness. The lower the certainty in the evidence, the less we know about the width of the certainty range, although methods for quantifying risk of bias and understanding potential direction of bias may offer insight when lowered certainty is due to risk of bias. Nevertheless, decision makers must consider this uncertainty, and must do so in relation to the effect measure that is being evaluated (e.g. a relative or absolute measure). We will describe the impact on interpretations for dichotomous outcomes in Section 15.4 .

15.4 Interpreting results from dichotomous outcomes (including numbers needed to treat)

15.4.1 Relative and absolute risk reductions

Clinicians may be more inclined to prescribe an intervention that reduces the relative risk of death by 25% than one that reduces the risk of death by 1 percentage point, although both presentations of the evidence may relate to the same benefit (i.e. a reduction in risk from 4% to 3%). The former refers to the relative reduction in risk and the latter to the absolute reduction in risk. As described in Chapter 6, Section 6.4.1 , there are several measures for comparing dichotomous outcomes in two groups. Meta-analyses are usually undertaken using risk ratios (RR), odds ratios (OR) or risk differences (RD), but there are several alternative ways of expressing results.

Relative risk reduction (RRR) is a convenient way of re-expressing a risk ratio as a percentage reduction:

RRR = (1 − RR) × 100%

For example, a risk ratio of 0.75 translates to a relative risk reduction of 25%, as in the example above.
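This re-expression can be sketched as a one-line helper (the function name is mine):

```python
def relative_risk_reduction(rr):
    """Re-express a risk ratio as a percentage reduction: RRR = (1 - RR) x 100."""
    return (1 - rr) * 100

print(relative_risk_reduction(0.75))  # 25.0
```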

The risk difference is often referred to as the absolute risk reduction (ARR) or absolute risk increase (ARI), and may be presented as a percentage (e.g. 1%), as a decimal (e.g. 0.01), or as a count (e.g. 10 out of 1000). We consider different choices for presenting absolute effects in Section 15.4.3 . We then describe computations for obtaining these numbers from the results of individual studies and of meta-analyses in Section 15.4.4 .

15.4.2 Number needed to treat (NNT)

The number needed to treat (NNT) is a common alternative way of presenting information on the effect of an intervention. The NNT is defined as the expected number of people who need to receive the experimental rather than the comparator intervention for one additional person to either incur or avoid an event (depending on the direction of the result) in a given time frame. Thus, for example, an NNT of 10 can be interpreted as ‘it is expected that one additional (or one fewer) person will incur an event for every 10 participants receiving the experimental intervention rather than comparator over a given time frame’. It is important to be clear that:

  • since the NNT is derived from the risk difference, it is still a comparative measure of effect (experimental versus a specific comparator) and not a general property of a single intervention; and
  • the NNT gives an ‘expected value’. For example, NNT = 10 does not imply that one additional event will occur in each and every group of 10 people.

NNTs can be computed for both beneficial and detrimental events, and for interventions that cause both improvements and deteriorations in outcomes. In all instances NNTs are expressed as positive whole numbers. Some authors use the term ‘number needed to harm’ (NNH) when an intervention leads to an adverse outcome, or a decrease in a positive outcome, rather than improvement. However, this phrase can be misleading (most notably, it can easily be read to imply the number of people who will experience a harmful outcome if given the intervention), and it is strongly recommended that ‘number needed to harm’ and ‘NNH’ are avoided. The preferred alternative is to use phrases such as ‘number needed to treat for an additional beneficial outcome’ (NNTB) and ‘number needed to treat for an additional harmful outcome’ (NNTH) to indicate direction of effect.

As NNTs refer to events, their interpretation needs to be worded carefully when the binary outcome is a dichotomization of a scale-based outcome. For example, if the outcome is pain measured on a ‘none, mild, moderate or severe’ scale it may have been dichotomized as ‘none or mild’ versus ‘moderate or severe’. It would be inappropriate for an NNT from these data to be referred to as an ‘NNT for pain’. It is an ‘NNT for moderate or severe pain’.

We consider different choices for presenting absolute effects in Section 15.4.3 . We then describe computations for obtaining these numbers from the results of individual studies and of meta-analyses in Section 15.4.4 .

15.4.3 Expressing risk differences

Users of reviews are liable to be influenced by the choice of statistical presentations of the evidence. Hoffrage and colleagues suggest that physicians’ inferences about statistical outcomes are more appropriate when they deal with ‘natural frequencies’ – whole numbers of people, both treated and untreated (e.g. treatment results in a drop from 20 out of 1000 to 10 out of 1000 women having breast cancer) – than when effects are presented as percentages (e.g. 1% absolute reduction in breast cancer risk) (Hoffrage et al 2000). Probabilities may be more difficult to understand than frequencies, particularly when events are rare. While standardization may be important in improving the presentation of research evidence (and participation in healthcare decisions), current evidence suggests that the presentation of natural frequencies for expressing differences in absolute risk is best understood by consumers of healthcare information (Akl et al 2011b). This evidence provides the rationale for presenting absolute risks in ‘Summary of findings’ tables as numbers of people with events per 1000 people receiving the intervention (see Chapter 14 ).

RRs and RRRs remain crucial because relative effects tend to be substantially more stable across risk groups than absolute effects (see Chapter 10, Section 10.4.3 ). Review authors can use their own data to study this consistency (Cates 1999, Smeeth et al 1999). Risk differences from studies are least likely to be consistent across baseline event rates; thus, they are rarely appropriate for computing numbers needed to treat in systematic reviews. If a relative effect measure (OR or RR) is chosen for meta-analysis, then a comparator group risk needs to be specified as part of the calculation of an RD or NNT. In addition, if there are several different groups of participants with different levels of risk, it is crucial to express absolute benefit for each clinically identifiable risk group, clarifying the time period to which this applies. Studies in patients with differing severity of disease, or studies with different lengths of follow-up will almost certainly have different comparator group risks. In these cases, different comparator group risks lead to different RDs and NNTs (except when the intervention has no effect). A recommended approach is to re-express an odds ratio or a risk ratio as a variety of RD or NNTs across a range of assumed comparator risks (ACRs) (McQuay and Moore 1997, Smeeth et al 1999). Review authors should bear these considerations in mind not only when constructing their ‘Summary of findings’ table, but also in the text of their review.

For example, a review of oral anticoagulants to prevent stroke presented information to users by describing absolute benefits for various baseline risks (Aguilar and Hart 2005, Aguilar et al 2007). They presented their principal findings as “The inherent risk of stroke should be considered in the decision to use oral anticoagulants in atrial fibrillation patients, selecting those who stand to benefit most for this therapy” (Aguilar and Hart 2005). Among high-risk atrial fibrillation patients with prior stroke or transient ischaemic attack who have stroke rates of about 12% (120 per 1000) per year, warfarin prevents about 70 strokes yearly per 1000 patients, whereas for low-risk atrial fibrillation patients (with a stroke rate of about 2% per year or 20 per 1000), warfarin prevents only 12 strokes. This presentation helps users to understand the important impact that typical baseline risks have on the absolute benefit that they can expect.

15.4.4 Computations

Direct computation of risk difference (RD) or a number needed to treat (NNT) depends on the summary statistic (odds ratio, risk ratio or risk differences) available from the study or meta-analysis. When expressing results of meta-analyses, review authors should use, in the computations, whatever statistic they determined to be the most appropriate summary for meta-analysis (see Chapter 10, Section 10.4.3 ). Here we present calculations to obtain RD as a reduction in the number of participants per 1000. For example, a risk difference of –0.133 corresponds to 133 fewer participants with the event per 1000.

RDs and NNTs should not be computed from the aggregated total numbers of participants and events across the trials. This approach ignores the randomization within studies, and may produce seriously misleading results if there is unbalanced randomization in any of the studies. Using the pooled result of a meta-analysis is more appropriate. When computing NNTs, the values obtained are by convention always rounded up to the next whole number.

15.4.4.1 Computing NNT from a risk difference (RD)

A NNT may be computed from a risk difference as

NNT = 1 / |RD|

where the vertical bars (‘absolute value of’) in the denominator indicate that any minus sign should be ignored. It is convention to round the NNT up to the nearest whole number. For example, if the risk difference is –0.12 the NNT is 9; if the risk difference is –0.22 the NNT is 5. Cochrane Review authors should qualify the NNT as referring to benefit (improvement) or harm by denoting the NNT as NNTB or NNTH. Note that this approach, although feasible, should be used only for the results of a meta-analysis of risk differences. In most cases meta-analyses will be undertaken using a relative measure of effect (RR or OR), and those statistics should be used to calculate the NNT (see Section 15.4.4.2 and 15.4.4.3 ).
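A minimal sketch of this calculation, including the convention of rounding up (the function name is mine):

```python
import math

def nnt_from_rd(rd):
    """NNT = 1 / |RD|, rounded up to the next whole number."""
    return math.ceil(1 / abs(rd))

print(nnt_from_rd(-0.12))  # 9
print(nnt_from_rd(-0.22))  # 5
```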

15.4.4.2 Computing risk differences or NNT from a risk ratio

To aid interpretation of the results of a meta-analysis of risk ratios, review authors may compute an absolute risk reduction or NNT. In order to do this, an assumed comparator risk (ACR) (otherwise known as a baseline risk, or risk that the outcome of interest would occur with the comparator intervention) is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

RD = ACR × (1 − RR)

NNT = 1 / |ACR × (1 − RR)|

As an example, suppose the risk ratio is RR = 0.92, and an ACR = 0.3 (300 per 1000) is assumed. Then the effect on risk is 24 fewer per 1000:

RD = 0.3 × (1 − 0.92) = 0.024, i.e. 24 fewer per 1000

The NNT is 42:

NNT = 1 / |0.3 × (1 − 0.92)| = 1 / 0.024 = 41.7, rounded up to 42
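The calculation from a risk ratio can be sketched in code (function names are mine; the values reproduce the worked example above):

```python
import math

def rd_from_rr(acr, rr):
    """Risk difference implied by a risk ratio at comparator risk ACR,
    expressed as a risk reduction: RD = ACR x (1 - RR)."""
    return acr * (1 - rr)

def nnt_from_rr(acr, rr):
    """NNT = 1 / |RD|, rounded up to the next whole number."""
    return math.ceil(1 / abs(rd_from_rr(acr, rr)))

print(round(1000 * rd_from_rr(0.3, 0.92)))  # 24 fewer per 1000
print(nnt_from_rr(0.3, 0.92))               # 42
```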

15.4.4.3 Computing risk differences or NNT from an odds ratio

Review authors may wish to compute a risk difference or NNT from the results of a meta-analysis of odds ratios. In order to do this, an ACR is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

RD = ACR − (OR × ACR) / (1 − ACR + OR × ACR)

NNT = 1 / |ACR − (OR × ACR) / (1 − ACR + OR × ACR)|

As an example, suppose the odds ratio is OR = 0.73, and a comparator risk of ACR = 0.3 is assumed. Then the effect on risk is 62 fewer per 1000:

RD = 0.3 − (0.73 × 0.3) / (1 − 0.3 + 0.73 × 0.3) = 0.3 − 0.219 / 0.919 = 0.062, i.e. 62 fewer per 1000

The NNT is 17:

NNT = 1 / 0.062 = 16.2, rounded up to 17
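The corresponding calculation from an odds ratio can be sketched as (function names are mine; the values reproduce the worked example above):

```python
import math

def rd_from_or(acr, or_):
    """Risk difference implied by an odds ratio at comparator risk ACR,
    expressed as a risk reduction:
    RD = ACR - (OR x ACR) / (1 - ACR + OR x ACR)."""
    return acr - (or_ * acr) / (1 - acr + or_ * acr)

def nnt_from_or(acr, or_):
    """NNT = 1 / |RD|, rounded up to the next whole number."""
    return math.ceil(1 / abs(rd_from_or(acr, or_)))

print(round(1000 * rd_from_or(0.3, 0.73)))  # 62 fewer per 1000
print(nnt_from_or(0.3, 0.73))               # 17
```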

15.4.4.4 Computing risk ratio from an odds ratio

Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio (or relative risk reduction). This requires an ACR. Then

RR = OR / (1 − ACR × (1 − OR))

It will often be reasonable to perform this transformation using the median comparator group risk from the studies in the meta-analysis.
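A sketch of this transformation (the function name is mine; the example uses the OR of 0.73 and ACR of 0.3 from the preceding section):

```python
def rr_from_or(or_, acr):
    """Summary risk ratio implied by an odds ratio at comparator risk ACR:
    RR = OR / (1 - ACR x (1 - OR))."""
    return or_ / (1 - acr * (1 - or_))

print(round(rr_from_or(0.73, 0.3), 2))  # 0.79
```

As a sanity check, an odds ratio of 1 (no effect) transforms to a risk ratio of 1 at any assumed comparator risk.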

15.4.4.5 Computing confidence limits

Confidence limits for RDs and NNTs may be calculated by applying the above formulae to the upper and lower confidence limits for the summary statistic (RD, RR or OR) (Altman 1998). Note that this confidence interval does not incorporate uncertainty around the ACR.

If the 95% confidence interval of OR or RR includes the value 1, one of the confidence limits will indicate benefit and the other harm. Thus, appropriate use of the words ‘fewer’ and ‘more’ is required for each limit when presenting results in terms of events. For NNTs, the two confidence limits should be labelled as NNTB and NNTH to indicate the direction of effect in each case. The confidence interval for the NNT will include a ‘discontinuity’, because increasingly smaller risk differences that approach zero will lead to NNTs approaching infinity. Thus, the confidence interval will include both an infinitely large NNTB and an infinitely large NNTH.
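These rules can be sketched for a risk-difference confidence interval that crosses the null (a hypothetical interval; the sign convention below, negative RD meaning benefit, is an assumption of this sketch):

```python
import math

def nnt_limit(rd):
    """Convert one confidence limit of a risk difference into an NNT limit,
    labelled NNTB or NNTH (assuming a negative RD means benefit); a limit
    at the null (RD = 0) corresponds to an infinite NNT."""
    if rd == 0:
        return ("NNT", math.inf)
    label = "NNTB" if rd < 0 else "NNTH"
    return (label, math.ceil(1 / abs(rd)))

# Hypothetical 95% CI for RD: from 0.10 fewer to 0.04 more events
print(nnt_limit(-0.10))  # ('NNTB', 10)
print(nnt_limit(0.04))   # ('NNTH', 25)
```

Because the interval spans the null, the two limits carry opposite labels, and the full NNT interval passes through infinity rather than through zero.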

15.5 Interpreting results from continuous outcomes (including standardized mean differences)

15.5.1 Meta-analyses with continuous outcomes

Review authors should describe in the study protocol how they plan to interpret results for continuous outcomes. When outcomes are continuous, review authors have a number of options to present summary results. These options differ if studies report the same measure that is familiar to the target audiences, studies report the same or very similar measures that are less familiar to the target audiences, or studies report different measures.

15.5.2 Meta-analyses with continuous outcomes using the same measure

If all studies have used the same familiar units, for instance, results are expressed as durations of events, such as symptoms for conditions including diarrhoea, sore throat, otitis media, influenza or duration of hospitalization, a meta-analysis may generate a summary estimate in those units, as a difference in mean response (see, for instance, the row summarizing results for duration of diarrhoea in Chapter 14, Figure 14.1.b and the row summarizing oedema in Chapter 14, Figure 14.1.a ). For such outcomes, the ‘Summary of findings’ table should include a difference of means between the two interventions. However, the units of such outcomes may be difficult to interpret, particularly when they relate to rating scales (again, see the oedema row of Chapter 14, Figure 14.1.a ). In such cases, ‘Summary of findings’ tables should include the minimum and maximum of the scale of measurement, and the direction. Knowledge of the smallest change in instrument score that patients perceive is important – the minimal important difference (MID) – and can greatly facilitate the interpretation of results (Guyatt et al 1998, Schünemann and Guyatt 2005). Knowing the MID allows review authors and users to place results in context. Review authors should state the MID – if known – in the Comments column of their ‘Summary of findings’ table. For example, the chronic respiratory questionnaire has possible scores in health-related quality of life ranging from 1 to 7 and 0.5 represents a well-established MID (Jaeschke et al 1989, Schünemann et al 2005).

15.5.3 Meta-analyses with continuous outcomes using different measures

When studies have used different instruments to measure the same construct, a standardized mean difference (SMD) may be used in meta-analysis for combining continuous data. Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs. Review authors should therefore consider issues of interpretability when planning their analysis at the protocol stage and should consider whether there will be suitable ways to re-express the SMD or whether alternative effect measures, such as a ratio of means, or possibly as minimal important difference units (Guyatt et al 2013b) should be used. Table 15.5.a and the following sections describe these options.

Table 15.5.a Approaches and their implications to presenting results of continuous variables when primary studies have used different instruments to measure the same construct. Adapted from Guyatt et al (2013b)

1a. Generic standard deviation (SD) units and guiding rules

It is widely used, but the interpretation is challenging. It can be misleading depending on whether the population is very homogenous or heterogeneous (i.e. how variable the outcome was in the population of each included study, and therefore how applicable a standard SD is likely to be). See Section .

Use together with other approaches below.

1b. Re-express and present as units of a familiar measure

Presenting data with this approach may be viewed by users as closer to the primary data. However, few instruments are sufficiently used in clinical practice to make many of the presented units easily interpretable. See Section .

When the units and measures are familiar to the decision makers (e.g. healthcare providers and patients), this presentation should be seriously considered.

Conversion to natural units is also an option for expressing results using the MID approach below (row 3).

1c. Re-express as result for a dichotomous outcome

Dichotomous outcomes are very familiar to clinical audiences and may facilitate understanding. However, this approach involves assumptions that may not always be valid (e.g. it assumes that distributions in intervention and comparator group are roughly normally distributed and variances are similar). It allows applying GRADE guidance for large and very large effects. See Section .

Consider this approach if the assumptions appear reasonable.

If the minimal important difference for an instrument is known, describing the probability of individuals achieving this difference may be more intuitive. Review authors should always seriously consider this option.

Re-expressing SMDs is not the only way of expressing results as dichotomous outcomes. For example, the actual outcomes in the studies can be dichotomized, either directly or using assumptions, prior to meta-analysis.

2. Ratio of means

This approach may be easily interpretable to clinical audiences and involves fewer assumptions than some other approaches. It allows applying GRADE guidance for large and very large effects. It cannot be applied when the measure is a change from baseline (and therefore negative values are possible), and its interpretation requires knowledge of the comparator group mean. See Section .

Consider as complementing other approaches, particularly the presentation of relative and absolute effects.

3. Minimal important difference units

This approach may be easily interpretable for audiences but is applicable only when minimal important differences are known. See Section .

Consider as complementing other approaches, particularly the presentation of relative and absolute effects.

15.5.3.1 Presenting and interpreting SMDs using generic effect size estimates

The SMD expresses the intervention effect in standard units rather than the original units of measurement. The SMD is the difference in mean effects between the experimental and comparator groups divided by the pooled standard deviation of participants’ outcomes, or external SDs when studies are very small (see Chapter 6, Section 6.5.1.2 ). The value of a SMD thus depends on both the size of the effect (the difference between means) and the standard deviation of the outcomes (the inherent variability among participants or based on an external SD).

If review authors use the SMD, they might choose to present the results directly as SMDs (row 1a, Table 15.5.a and Table 15.5.b ). However, absolute values of the intervention and comparison groups are typically not useful because studies have used different measurement instruments with different units. Guiding rules for interpreting SMDs (or ‘Cohen’s effect sizes’) exist, and have arisen mainly from researchers in the social sciences (Cohen 1988). One example is as follows: 0.2 represents a small effect, 0.5 a moderate effect and 0.8 a large effect (Cohen 1988). Variations exist (e.g. <0.40=small, 0.40 to 0.70=moderate, >0.70=large). Review authors might consider including such a guiding rule in interpreting the SMD in the text of the review, and in summary versions such as the Comments column of a ‘Summary of findings’ table. However, some methodologists believe that such interpretations are problematic because patient importance of a finding is context-dependent and not amenable to generic statements.
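As a sketch, the SMD and Cohen's guiding rule described above can be computed as follows (function names are illustrative, and the "trivial" label for values below 0.2 is an interpolation of the rule of thumb rather than part of it):

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference: the difference in means divided by
    the pooled SD of participants' outcomes."""
    return (mean1 - mean2) / pooled_sd(sd1, n1, sd2, n2)

def cohen_label(d):
    """Cohen's (1988) guiding rule: 0.2 small, 0.5 moderate, 0.8 large."""
    d = abs(d)
    if d < 0.2:
        return "trivial"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "moderate"
    return "large"
```

For example, `smd(40, 10, 50, 50, 10, 50)` returns an SMD of −1.0, which `cohen_label` classifies as a large effect.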

15.5.3.2 Re-expressing SMDs using a familiar instrument

The second possibility for interpreting the SMD is to express it in the units of one or more of the specific measurement instruments used by the included studies (row 1b, Table 15.5.a and Table 15.5.b ). The approach is to calculate an absolute difference in means by multiplying the SMD by an estimate of the SD associated with the most familiar instrument. To obtain this SD, a reasonable option is to calculate a weighted average across all intervention groups of all studies that used the selected instrument (preferably a pre-intervention or post-intervention SD as discussed in Chapter 10, Section 10.5.2 ). To better reflect among-person variation in practice, or to use an instrument not represented in the meta-analysis, it may be preferable to use a standard deviation from a representative observational study. The summary effect is thus re-expressed in the original units of that particular instrument and the clinical relevance and impact of the intervention effect can be interpreted using that familiar instrument.
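A minimal sketch of this re-expression follows, assuming a size-weighted arithmetic average is an acceptable way to obtain the familiar instrument's SD (the text does not prescribe a single weighting, so this choice is illustrative):

```python
def reexpress_smd(smd_value, familiar_sd):
    """Re-express an SMD as an absolute mean difference in the units of a
    familiar instrument, by multiplying by an SD for that instrument."""
    return smd_value * familiar_sd

def weighted_mean_sd(sds, ns):
    """Average SD across intervention groups, weighted by group size --
    one reasonable way to estimate the SD of the familiar instrument."""
    return sum(sd * n for sd, n in zip(sds, ns)) / sum(ns)
```

For instance, an SMD of 0.79 and a familiar-instrument SD of 25 points corresponds to an absolute difference of about 19.8 points on that instrument.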

The same approach of re-expressing the results for a familiar instrument can also be used for other standardized effect measures such as when standardizing by MIDs (Guyatt et al 2013b): see Section 15.5.3.5 .

Table 15.5.b Application of approaches when studies have used different measures: effects of dexamethasone for pain after laparoscopic cholecystectomy (Karanicolas et al 2008). Reproduced with permission of Wolters Kluwer
1a. Post-operative pain, standard deviation units

Investigators measured pain using different instruments. Lower scores mean less pain.

The pain score in the dexamethasone groups was on average 0.79 standard deviations lower than in the placebo groups.

539 (5 studies)

⊕⊕⊝⊝ Low

As a rule of thumb, 0.2 SD represents a small difference, 0.5 a moderate and 0.8 a large.

1b. Post-operative pain

Measured on a scale from 0, no pain, to 100, worst pain imaginable.

The mean post-operative pain scores with placebo ranged from 43 to 54.

The mean pain score in the intervention groups was on average lower.

539 (5 studies)

⊕⊕⊝⊝ Low

Scores calculated based on an SMD of 0.79 (95% CI –1.41 to –0.17) and rescaled to a 0 to 100 pain scale.

The minimal important difference on the 0 to 100 pain scale is approximately 10.

1c. Substantial post-operative pain, dichotomized

Investigators measured pain using different instruments.

20 per 100

15 more (4 more to 18 more) per 100 patients in the dexamethasone group achieved important improvement in the pain score.

RR = 0.25 (95% CI 0.05 to 0.75)

539 (5 studies)

⊕⊕⊝⊝ Low

Scores estimated based on an SMD of 0.79 (95% CI –1.41 to –0.17).

2. Post-operative pain

Investigators measured pain using different instruments. Lower scores mean less pain.

The mean post-operative pain score with placebo was 28.1.

On average a 3.7 lower pain score

(0.6 to 6.1 lower)

Ratio of means

0.87

(0.78 to 0.98)

539 (5 studies)

⊕⊕⊝⊝ Low

Weighted average of the mean pain score in dexamethasone group divided by mean pain score in placebo.

3. Post-operative pain

Investigators measured pain using different instruments.

The pain score in the dexamethasone groups was on average less than in the control group.

539 (5 studies)

⊕⊕⊝⊝ Low

An effect less than half the minimal important difference suggests a small or very small effect.

1 Certainty rated according to GRADE from very low to high certainty.
2 Substantial unexplained heterogeneity in study results.
3 Imprecision due to wide confidence intervals.
4 The 20% comes from the proportion in the control group requiring rescue analgesia.
5 Crude (arithmetic) means of the post-operative pain mean responses across all five trials when transformed to a 100-point scale.

15.5.3.3 Re-expressing SMDs through dichotomization and transformation to relative and absolute measures

A third approach (row 1c, Table 15.5.a and Table 15.5.b ) relies on converting the continuous measure into a dichotomy and thus allows calculation of relative and absolute effects on a binary scale. A transformation of a SMD to a (log) odds ratio is available, based on the assumption that an underlying continuous variable has a logistic distribution with equal standard deviation in the two intervention groups, as discussed in Chapter 10, Section 10.6  (Furukawa 1999, Guyatt et al 2013b). The assumption is unlikely to hold exactly and the results must be regarded as an approximation. The log odds ratio is estimated as

ln(OR) = (π / √3) × SMD

(or approximately 1.81 × SMD). The resulting odds ratio can then be presented in the usual way and, in a ‘Summary of findings’ table, combined with an assumed comparator group risk to be expressed as an absolute risk difference. The comparator group risk in this case would refer to the proportion of people who have achieved a specific value of the continuous outcome. In randomized trials this can be interpreted as the proportion who have improved by some (specified) amount (responders), for instance by 5 points on a 0 to 100 scale. Table 15.5.c shows some illustrative results from this method. The risk differences can then be converted to NNTs or to people per thousand using methods described in Section 15.4.4 .
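The transformation and its use with an assumed comparator group risk can be sketched as follows (function names are illustrative). The `risk_difference_normal` helper is an alternative derivation that assumes an underlying normal rather than logistic distribution; it closely reproduces the entries of Table 15.5.c, while the odds-ratio route gives similar but not identical values:

```python
import math
from statistics import NormalDist

def smd_to_odds_ratio(smd):
    """ln(OR) = SMD * pi / sqrt(3), i.e. approximately 1.81 * SMD,
    assuming an underlying logistic distribution (Furukawa 1999)."""
    return math.exp(smd * math.pi / math.sqrt(3))

def risk_difference_from_or(odds_ratio, comparator_risk):
    """Combine an odds ratio with an assumed comparator group risk
    (proportion 'improved') to obtain an absolute risk difference."""
    odds = odds_ratio * comparator_risk / (1 - comparator_risk)
    return odds / (1 + odds) - comparator_risk

def risk_difference_normal(smd, comparator_risk):
    """Alternative derivation assuming an underlying normal distribution:
    intervention group proportion = Phi(Phi^-1(p0) + SMD)."""
    nd = NormalDist()
    return nd.cdf(nd.inv_cdf(comparator_risk) + smd) - comparator_risk

def nnt(risk_difference):
    """Number needed to treat (to benefit or harm) from a risk difference."""
    return 1 / abs(risk_difference)
```

For example, with an SMD of 0.5 and 50% improved in the comparator group, the normal-distribution route gives a risk difference of about 19 per 100, matching the corresponding cell of Table 15.5.c.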

Table 15.5.c Risk difference derived for specific SMDs for various given ‘proportions improved’ in the comparator group (Furukawa 1999, Guyatt et al 2013b). Reproduced with permission of Elsevier 

(Columns correspond to the proportion ‘improved’ in the comparator group, from 10% to 90% in steps of 10%; rows to SMDs of 0.2, 0.5, 0.8 and 1.0.)

Situations in which the event is undesirable, reduction (or increase if intervention harmful) in adverse events with the intervention:

SMD 0.2: −3%, −5%, −7%, −8%, −8%, −8%, −7%, −6%, −4%
SMD 0.5: −6%, −11%, −15%, −17%, −19%, −20%, −20%, −17%, −12%
SMD 0.8: −8%, −15%, −21%, −25%, −29%, −31%, −31%, −28%, −22%
SMD 1.0: −9%, −17%, −24%, −23%, −34%, −37%, −38%, −36%, −29%

Situations in which the event is desirable, increase (or decrease if intervention harmful) in positive responses to the intervention:

SMD 0.2: 4%, 6%, 7%, 8%, 8%, 8%, 7%, 5%, 3%
SMD 0.5: 12%, 17%, 19%, 20%, 19%, 17%, 15%, 11%, 6%
SMD 0.8: 22%, 28%, 31%, 31%, 29%, 25%, 21%, 15%, 8%
SMD 1.0: 29%, 36%, 38%, 38%, 34%, 30%, 24%, 17%, 9%

15.5.3.4 Ratio of means

A more frequently used approach is based on calculation of a ratio of means between the intervention and comparator groups (Friedrich et al 2008), as discussed in Chapter 6, Section 6.5.1.3 . Interpretational advantages of this approach include the ability to pool studies with outcomes expressed in different units directly, the avoidance of the vulnerability to varying SDs across heterogeneous populations that limits approaches relying on SD units, and ease of clinical interpretation (row 2, Table 15.5.a and Table 15.5.b ). This method is currently designed for post-intervention scores only. However, it is possible to calculate a ratio of change scores if both intervention and comparator groups change in the same direction in each relevant study, and this ratio may sometimes be informative.

Limitations to this approach include its limited applicability to change scores (since it is unlikely that both intervention and comparator group changes are in the same direction in all studies) and the possibility of misleading results if the comparator group mean is very small, in which case even a modest difference from the intervention group will yield a large and therefore misleading ratio of means. It also requires that separate ratios of means be calculated for each included study, and then entered into a generic inverse variance meta-analysis (see Chapter 10, Section 10.3 ).

The ratio of means approach illustrated in Table 15.5.b suggests a relative reduction in pain of only 13%, meaning that those receiving steroids have a pain severity 87% of those in the comparator group, an effect that might be considered modest.
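The per-study calculation and the generic inverse variance pooling described above can be sketched as follows. The standard error uses the delta-method formula associated with Friedrich et al (2008); fixed-effect weighting is shown for simplicity, and all names are illustrative:

```python
import math

def log_rom_and_se(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Log ratio of means for one study, with its delta-method
    standard error: sqrt(sd_t^2/(n_t*m_t^2) + sd_c^2/(n_c*m_c^2))."""
    log_rom = math.log(mean_t / mean_c)
    se = math.sqrt(sd_t**2 / (n_t * mean_t**2) + sd_c**2 / (n_c * mean_c**2))
    return log_rom, se

def inverse_variance_pool(estimates):
    """Fixed-effect generic inverse variance pooling of (estimate, SE)
    pairs on the log scale; exponentiate the result to obtain the RoM."""
    weights = [1 / se**2 for _, se in estimates]
    pooled = sum(w * est for (est, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se
```

A pooled log ratio of, say, −0.139 exponentiates to a RoM of 0.87, i.e. the 13% relative reduction in pain discussed above.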

15.5.3.5 Presenting continuous results as minimally important difference units

To express results in MID units, review authors have two options. First, study results can be combined across studies in the same way as for the SMD, but instead of dividing the mean difference of each study by its SD, review authors divide by the MID associated with that outcome (Johnston et al 2010, Guyatt et al 2013b). Instead of SD units, the pooled results represent MID units (row 3, Table 15.5.a and Table 15.5.b ), and may be more easily interpretable. This approach avoids the problem of varying SDs across studies that may distort estimates of effect in approaches that rely on the SMD. The approach, however, relies on having well-established MIDs. The approach is also risky in that a difference less than the MID may be interpreted as trivial when a substantial proportion of patients may have achieved an important benefit.

The other approach makes a simple conversion (not shown in Table 15.5.b ), before undertaking the meta-analysis, of the means and SDs from each study to means and SDs on the scale of a particular familiar instrument whose MID is known. For example, one can rescale the mean and SD of other chronic respiratory disease instruments (e.g. a 0 to 100 score) to the 1 to 7 scale of the Chronic Respiratory Disease Questionnaire (CRQ) (by assuming 0 equals 1 and 100 equals 7 on the CRQ). Given the MID of the CRQ of 0.5, a mean difference in change of 0.71 after rescaling of all studies suggests a substantial effect of the intervention (Guyatt et al 2013b). This approach, presenting in units of the most familiar instrument, may be the most desirable when the target audiences have extensive experience with that instrument, particularly if the MID is well established.
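Both options can be sketched as below. The linear rescaling assumes the two instruments’ ranges map onto each other end-to-end, as in the CRQ example; function names are illustrative:

```python
def to_mid_units(mean_difference, mid):
    """Express a mean difference in minimal important difference units."""
    return mean_difference / mid

def rescale(value, old_min, old_max, new_min, new_max):
    """Linearly rescale a score from one instrument's range onto another's,
    e.g. a 0-100 score onto the CRQ's 1-7 range."""
    frac = (value - old_min) / (old_max - old_min)
    return new_min + frac * (new_max - new_min)

def rescale_sd(sd, old_min, old_max, new_min, new_max):
    """SDs rescale by the slope of the linear map only (no offset)."""
    return sd * (new_max - new_min) / (old_max - old_min)
```

For example, a score of 50 on a 0 to 100 instrument maps to 4.0 on the 1 to 7 CRQ scale, and a rescaled mean difference of 0.71 equals 1.42 CRQ MID units (MID = 0.5).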

15.6 Drawing conclusions

15.6.1 Conclusions sections of a Cochrane Review

Authors’ conclusions in a Cochrane Review are divided into implications for practice and implications for research. While Cochrane Reviews about interventions can provide meaningful information and guidance for practice, decisions about the desirable and undesirable consequences of healthcare options require evidence and judgements for criteria that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). In describing the implications for practice and the development of recommendations, however, review authors may consider the certainty of the evidence, the balance of benefits and harms, and assumed values and preferences.

15.6.2 Implications for practice

Drawing conclusions about the practical usefulness of an intervention entails making trade-offs, either implicitly or explicitly, between the estimated benefits, harms and the values and preferences. Making such trade-offs, and thus making specific recommendations for an action in a specific context, goes beyond a Cochrane Review and requires additional evidence and informed judgements that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). Such judgements are typically the domain of clinical practice guideline developers for which Cochrane Reviews will provide crucial information (Graham et al 2011, Schünemann et al 2014, Zhang et al 2018a). Thus, authors of Cochrane Reviews should not make recommendations.

If review authors feel compelled to lay out actions that clinicians and patients could take, they should – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences. Other factors that might influence a decision should also be highlighted, including any known factors that would be expected to modify the effects of the intervention, the baseline risk or status of the patient, costs and who bears those costs, and the availability of resources. Review authors should ensure they consider all patient-important outcomes, including those for which limited data may be available. In the context of public health reviews the focus may be on population-important outcomes as the target may be an entire (non-diseased) population and include outcomes that are not measured in the population receiving an intervention (e.g. a reduction of transmission of infections from those receiving an intervention). This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes and the certainty of the related evidence (Zhang et al 2018b, Zhang et al 2018c); this and a full cost-effectiveness analysis is beyond the scope of most Cochrane Reviews (although they might well be used for such analyses; see Chapter 20 ).

A review on the use of anticoagulation in cancer patients to increase survival (Akl et al 2011a) provides an example for laying out clinical implications for situations where there are important trade-offs between desirable and undesirable effects of the intervention: “The decision for a patient with cancer to start heparin therapy for survival benefit should balance the benefits and downsides and integrate the patient’s values and preferences. Patients with a high preference for a potential survival prolongation, limited aversion to potential bleeding, and who do not consider heparin (both UFH or LMWH) therapy a burden may opt to use heparin, while those with aversion to bleeding may not.”

15.6.3 Implications for research

The second category for authors’ conclusions in a Cochrane Review is implications for research. To help people make well-informed decisions about future healthcare research, the ‘Implications for research’ section should comment on the need for further research, and the nature of the further research that would be most desirable. It is helpful to consider the population, intervention, comparison and outcomes that could be addressed, or addressed more effectively in the future, in the context of the certainty of the evidence in the current review (Brown et al 2006):

  • P (Population): diagnosis, disease stage, comorbidity, risk factor, sex, age, ethnic group, specific inclusion or exclusion criteria, clinical setting;
  • I (Intervention): type, frequency, dose, duration, prognostic factor;
  • C (Comparison): placebo, routine care, alternative treatment/management;
  • O (Outcome): which clinical or patient-related outcomes will the researcher need to measure, improve, influence or accomplish? Which methods of measurement should be used?

While Cochrane Review authors will find the PICO domains helpful, the domains of the GRADE certainty framework further support understanding and describing what additional research will improve the certainty in the available evidence. Note that as the certainty of the evidence is likely to vary by outcome, these implications will be specific to certain outcomes in the review. Table 15.6.a shows how review authors may be aided in their interpretation of the body of evidence and drawing conclusions about future research and practice.

Table 15.6.a Implications for research and practice suggested by individual GRADE domains

Domain

Implications for research

Examples for research statements

Implications for practice

Risk of bias

Need for methodologically better designed and executed studies.

All studies suffered from lack of blinding of outcome assessors. Trials with blinded outcome assessment are required.

The estimates of effect may be biased because of a lack of blinding of the assessors of the outcome.

Inconsistency

Unexplained inconsistency: need for individual participant data meta-analysis; need for studies in relevant subgroups.

Studies in patients with small cell lung cancer are needed to understand if the effects differ from those in patients with pancreatic cancer.

Unexplained inconsistency: consider and interpret overall effect estimates as for the overall certainty of a body of evidence.

Explained inconsistency (if results are not presented in strata): consider and interpret effects estimates by subgroup.

Indirectness

Need for studies that better fit the PICO question of interest.

Studies in patients with early cancer are needed because the evidence is from studies in patients with advanced cancer.

It is uncertain if the results directly apply to the patients or the way that the intervention is applied in a particular setting.

Imprecision

Need for more studies with more participants to reach optimal information size.

Studies with approximately 200 more events in the experimental intervention group and the comparator intervention group are required.

Same uncertainty interpretation as for certainty of a body of evidence: e.g. the true effect may be substantially different.

Publication bias

Need to investigate and identify unpublished data; large studies might help resolve this issue.

Large studies are required.

Same uncertainty interpretation as for certainty of a body of evidence (e.g. the true effect may be substantially different).

Large effects

No direct implications.

Not applicable.

The effect is large in the populations that were included in the studies and the true effect is likely to cross important thresholds.

Dose effects

No direct implications.

Not applicable.

The greater the reduction in the exposure, the larger the expected harm (or benefit).

Opposing bias and confounding

Studies controlling for the residual bias and confounding are needed.

Studies controlling for possible confounders such as smoking and degree of education are required.

The effect could be even larger or smaller (depending on the direction of the results) than the one that is observed in the studies presented here.

The review of compression stockings for prevention of deep vein thrombosis (DVT) in airline passengers described in Chapter 14 provides an example where there is some convincing evidence of a benefit of the intervention: “This review shows that the question of the effects on symptomless DVT of wearing versus not wearing compression stockings in the types of people studied in these trials should now be regarded as answered. Further research may be justified to investigate the relative effects of different strengths of stockings or of stockings compared to other preventative strategies. Further randomised trials to address the remaining uncertainty about the effects of wearing versus not wearing compression stockings on outcomes such as death, pulmonary embolism and symptomatic DVT would need to be large.” (Clarke et al 2016).

A review of therapeutic touch for anxiety disorder provides an example of the implications for research when no eligible studies had been found: “This review highlights the need for randomized controlled trials to evaluate the effectiveness of therapeutic touch in reducing anxiety symptoms in people diagnosed with anxiety disorders. Future trials need to be rigorous in design and delivery, with subsequent reporting to include high quality descriptions of all aspects of methodology to enable appraisal and interpretation of results.” (Robinson et al 2007).

15.6.4 Reaching conclusions

A common mistake is to confuse ‘no evidence of an effect’ with ‘evidence of no effect’. When the confidence intervals are too wide (e.g. including no effect), it is wrong to claim that the experimental intervention has ‘no effect’ or is ‘no different’ from the comparator intervention. Review authors may also incorrectly ‘positively’ frame results for some effects but not others. For example, when the effect estimate is positive for a beneficial outcome but confidence intervals are wide, review authors may describe the effect as promising. However, when the effect estimate is negative for an outcome that is considered harmful but the confidence intervals include no effect, review authors report no effect. Another mistake is to frame the conclusion in wishful terms. For example, review authors might write, “there were too few people in the analysis to detect a reduction in mortality” when the included studies showed a reduction or even increase in mortality that was not ‘statistically significant’. One way of avoiding errors such as these is to consider the results blinded; that is, consider how the results would be presented and framed in the conclusions if the direction of the results was reversed. If the confidence interval for the estimate of the difference in the effects of the interventions overlaps with no effect, the analysis is compatible with both a true beneficial effect and a true harmful effect. If one of the possibilities is mentioned in the conclusion, the other possibility should be mentioned as well. Table 15.6.b suggests narrative statements for drawing conclusions based on the effect estimate from the meta-analysis and the certainty of the evidence.

Table 15.6.b Suggested narrative statements for phrasing conclusions

High certainty of the evidence

Large effect

X results in a large reduction/increase in outcome

Moderate effect

X reduces/increases outcome

X results in a reduction/increase in outcome

Small important effect

X reduces/increases outcome slightly

X results in a slight reduction/increase in outcome

Trivial, small unimportant effect or no effect

X results in little to no difference in outcome

X does not reduce/increase outcome

Moderate certainty of the evidence

Large effect

X likely results in a large reduction/increase in outcome

X probably results in a large reduction/increase in outcome

Moderate effect

X likely reduces/increases outcome

X probably reduces/increases outcome

X likely results in a reduction/increase in outcome

X probably results in a reduction/increase in outcome

Small important effect

X probably reduces/increases outcome slightly

X likely reduces/increases outcome slightly

X probably results in a slight reduction/increase in outcome

X likely results in a slight reduction/increase in outcome

Trivial, small unimportant effect or no effect

X likely results in little to no difference in outcome

X probably results in little to no difference in outcome

X likely does not reduce/increase outcome

X probably does not reduce/increase outcome

Low certainty of the evidence

Large effect

X may result in a large reduction/increase in outcome

The evidence suggests X results in a large reduction/increase in outcome

Moderate effect

X may reduce/increase outcome

The evidence suggests X reduces/increases outcome

X may result in a reduction/increase in outcome

The evidence suggests X results in a reduction/increase in outcome

Small important effect

X may reduce/increase outcome slightly

The evidence suggests X reduces/increases outcome slightly

X may result in a slight reduction/increase in outcome

The evidence suggests X results in a slight reduction/increase in outcome

Trivial, small unimportant effect or no effect

X may result in little to no difference in outcome

The evidence suggests that X results in little to no difference in outcome

X may not reduce/increase outcome

The evidence suggests that X does not reduce/increase outcome

Very low certainty of the evidence

Any effect

The evidence is very uncertain about the effect of X on outcome

X may reduce/increase/have little to no effect on outcome but the evidence is very uncertain
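The pattern of Table 15.6.b can be expressed as a small lookup. The sketch below is illustrative only (names and signature are this example’s own, not from the table) and emits one of the alternative phrasings per cell:

```python
def narrative_statement(certainty, effect, x, outcome, direction="reduction"):
    """Suggested narrative phrasing following Table 15.6.b.
    certainty: 'high', 'moderate', 'low' or 'very low';
    effect: 'large', 'moderate', 'small' or 'trivial'."""
    if certainty == "very low":
        return f"The evidence is very uncertain about the effect of {x} on {outcome}"
    stem = {"high": f"{x} results in",
            "moderate": f"{x} likely results in",
            "low": f"{x} may result in"}[certainty]
    size = {"large": f"a large {direction} in",
            "moderate": f"a {direction} in",
            "small": f"a slight {direction} in",
            "trivial": "little to no difference in"}[effect]
    return f"{stem} {size} {outcome}"
```

For instance, moderate certainty and a large effect yields “X likely results in a large reduction in outcome”, as in the table.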

Another common mistake is to reach conclusions that go beyond the evidence. Often this is done implicitly, without referring to the additional information or judgements that are used in reaching conclusions about the implications of a review for practice. Even when additional information and explicit judgements support conclusions about the implications of a review for practice, review authors rarely conduct systematic reviews of the additional information. Furthermore, implications for practice are often dependent on specific circumstances and values that must be taken into consideration. As we have noted, review authors should always be cautious when drawing conclusions about implications for practice and they should not make recommendations.

15.7 Chapter information

Authors: Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Acknowledgements: Andrew Oxman, Jonathan Sterne, Michael Borenstein and Rob Scholten contributed text to earlier versions of this chapter.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health. JJD receives support from the National Institute for Health Research (NIHR) Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham. JPTH receives support from the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

15.8 References

Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2005; 3 : CD001927.

Aguilar MI, Hart R, Pearce LA. Oral anticoagulants versus antiplatelet therapy for preventing stroke in patients with non-valvular atrial fibrillation and no history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2007; 3 : CD006186.

Akl EA, Gunukula S, Barba M, Yosuico VE, van Doormaal FF, Kuipers S, Middeldorp S, Dickinson HO, Bryant A, Schünemann H. Parenteral anticoagulation in patients with cancer who have no therapeutic or prophylactic indication for anticoagulation. Cochrane Database of Systematic Reviews 2011a; 1 : CD006652.

Akl EA, Oxman AD, Herrin J, Vist GE, Terrenato I, Sperati F, Costiniuk C, Blank D, Schünemann H. Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database of Systematic Reviews 2011b; 3 : CD006776.

Alonso-Coello P, Schünemann HJ, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Rada G, Rosenbaum S, Morelli A, Guyatt GH, Oxman AD, Group GW. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ 2016; 353 : i2016.

Altman DG. Confidence intervals for the number needed to treat. BMJ 1998; 317 : 1309-1312.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.

Brown P, Brunnhuber K, Chalkidou K, Chalmers I, Clarke M, Fenton M, Forbes C, Glanville J, Hicks NJ, Moody J, Twaddle S, Timimi H, Young P. How to formulate research recommendations. BMJ 2006; 333 : 804-806.

Cates C. Confidence intervals for the number needed to treat: Pooling numbers needed to treat may not be reliable. BMJ 1999; 318 : 1764-1765.

Clarke MJ, Broderick C, Hopewell S, Juszczak E, Eisinga A. Compression stockings for preventing deep vein thrombosis in airline passengers. Cochrane Database of Systematic Reviews 2016; 9 : CD004002.

Cohen J. Statistical Power Analysis for the Behavioral Sciences . 2nd edition. Hillsdale (NJ): Lawrence Erlbaum Associates, Inc.; 1988.

Coleman T, Chamberlain C, Davey MA, Cooper SE, Leonardi-Bee J. Pharmacological interventions for promoting smoking cessation during pregnancy. Cochrane Database of Systematic Reviews 2015; 12 : CD010078.

Dans AM, Dans L, Oxman AD, Robinson V, Acuin J, Tugwell P, Dennis R, Kang D. Assessing equity in clinical practice guidelines. Journal of Clinical Epidemiology 2007; 60 : 540-546.

Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials . 2nd edition. Littleton (MA): John Wright PSG, Inc.; 1985.

  • Spencer Greenberg
  • Nov 26, 2018
  • 11 min read

12 Ways To Draw Conclusions From Information

Updated: Sep 25, 2023

There are a LOT of ways to make inferences – that is, ways of drawing conclusions based on information, evidence, or data. In fact, there are many more than most people realize. All of them have strengths and weaknesses that render them more useful in some situations than in others.

Here's a brief key describing the most popular methods of inference, to help you whenever you're trying to draw a conclusion for yourself. Do you rely more on some of these than you should, given their weaknesses? Are there others in this list that you could benefit from using more in your life, given their strengths? And what does drawing conclusions mean, really? As you'll learn in a moment, it encompasses a wide variety of techniques, so there isn't one single definition.

1. Deduction

Common in: philosophy, mathematics

If X, then Y, due to the definitions of X and Y.

X applies to this case.

Therefore Y applies to this case.

Example: “Plato is a mortal, and all mortals are, by definition, able to die; therefore Plato is able to die.”

Example: “For any number that is an integer, there exists another integer greater than that number. 1,000,000 is an integer. So there exists an integer greater than 1,000,000.”

Advantages: When you use deduction properly in an appropriate context, it is an airtight form of inference (e.g. in a mathematical proof with no mistakes).

Flaws: To apply deduction to the world, you need to rely on strong assumptions about how the world works, or else apply other methods of inference on top. So its range of applicability is limited.

2. Frequencies

Common in: applied statistics, data science

95% of the time that X occurred in the past, Y occurred also.

X occurred.

Therefore Y is likely to occur (with high probability).

Example: “95% of the time when we saw a bank transaction identical to this one, it was fraudulent. So this transaction is fraudulent.”

Advantages: This technique allows you to assign probabilities to events. When you have a lot of past data it can be easy to apply.

Flaws: You need to have a moderately large number of examples like the current one to perform calculations on. Also, the method assumes that those past examples were drawn from a process that is (statistically) just like the one that generated this latest example. Moreover, it is unclear sometimes what it means for “X”, the type of event you’re interested in, to have occurred. What if something that’s very similar to but not quite like X occurred? Should that be counted as X occurring? If we broaden our class of what counts as X or change to another class of event that still encompasses all of our prior examples, we’ll potentially get a different answer. Fortunately, there are plenty of opportunities to make inferences from frequencies where the correct class to use is fairly obvious.
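
The frequency calculation itself is just counting co-occurrences. Here is a minimal sketch in Python; the past records are invented purely for illustration:

```python
# Estimate P(Y | X) by counting how often Y accompanied X in past records.
# Each record is (x_occurred, y_occurred); the data below is invented.
past = [(True, True)] * 95 + [(True, False)] * 5 + [(False, True)] * 20

def frequency_estimate(records):
    """Fraction of past X-occurrences in which Y also occurred."""
    with_x = [y for x, y in records if x]
    if not with_x:
        raise ValueError("no past examples of X to count")
    return sum(with_x) / len(with_x)

p = frequency_estimate(past)
print(f"P(Y | X) ≈ {p:.2f}")  # 95 of the 100 past X-events included Y
```

Note that the answer depends entirely on how the class "X" was defined when the records were collected, which is exactly the weakness described above.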

3. Probabilistic Models

Common in: financial engineering, risk modeling, environmental science

Given our probabilistic model of this thing, when X occurs, the probability of Y occurring is 0.95.

Example: “Given our multivariate Gaussian model of loan prices, when this loan defaults there is a 0.95 probability of this other loan defaulting.”

Example: "When we run the weather simulation model many times with randomization of the initial conditions, rain occurs tomorrow in that region 95% of the time."

Advantages: This technique can be used to make predictions in very complex scenarios (e.g. involving more variables than a human mind can take into account at once) as long as the dynamics of the systems underlying those scenarios are sufficiently well understood.

Flaws: This method hinges on the appropriateness of the model chosen; it may require a large amount of past data to estimate free model parameters, and may go haywire if modeling assumptions are unrealistic or suddenly violated by changes in the world. You may have to already understand the system deeply to be able to build the model in the first place (e.g. with weather modeling).
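
The weather example's "run the model many times with randomized initial conditions" loop is easy to sketch as a Monte Carlo simulation. Everything below — the toy rain rule and the variable names — is invented for illustration; a real model would encode actual atmospheric dynamics:

```python
import random

# Monte Carlo sketch: run a (toy, invented) model many times with
# randomized initial conditions and report the fraction of runs with rain.
def toy_weather_model(humidity, pressure):
    """Invented rule: rain iff humidity is high relative to pressure."""
    return humidity - 0.5 * pressure > 0.2

def rain_probability(n_runs=10_000, seed=0):
    rng = random.Random(seed)
    rainy = 0
    for _ in range(n_runs):
        humidity = rng.uniform(0.0, 1.0)   # randomized initial conditions
        pressure = rng.uniform(0.0, 1.0)
        rainy += toy_weather_model(humidity, pressure)
    return rainy / n_runs

print(f"P(rain tomorrow) ≈ {rain_probability():.2f}")
```

The estimate is only as good as the model: change the invented rule and the probability changes with it, which is the "hinges on the appropriateness of the model" flaw in miniature.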

4. Regression

Common in: machine learning, data science

In prior data, as X1 and X2 increased, the likelihood of Y increased.

X1 and X2 are at high levels.

Therefore Y is likely to occur.

Example: “Height for children can be approximately predicted as an (increasing) linear function of age (X1) and weight (X2). This child is older and heavier than the others, so we predict he is likely to be tall.”

Example: "We've trained a neural network to predict whether a particular batch of concrete will be strong based on its constituents, mixture proportion, compaction, etc."

Advantages: This method can often produce accurate predictions for systems that you don't have much understanding of, as long as enough data is available to train the regression algorithm and that data contains sufficiently relevant variables.

Flaws: This method is often applied with simple assumptions (e.g. linearity) that may not capture the complexity of the inference problem, but very large amounts of data may be needed to apply much more complex models (e.g. to use neural networks, which are non-linear). Regression also may produce results that are hard to interpret – you may not really understand why it does a good job of making predictions.
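
As a concrete sketch of the height example, here is ordinary least squares fit by hand in plain Python (solving the normal equations directly). The data points and the child we predict for are invented for illustration:

```python
# Fit height ≈ b0 + b1*age + b2*weight by ordinary least squares,
# using invented (age, weight, height-in-cm) data points.

def solve_linear(a, b):
    """Solve an n x n linear system by Gauss-Jordan elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_ols(rows):
    """rows: (age, weight, height). Returns (b0, b1, b2) minimizing squared error."""
    xs = [(1.0, age, wt) for age, wt, _ in rows]
    ys = [h for _, _, h in rows]
    xtx = [[sum(x[i] * x[j] for x in xs) for j in range(3)] for i in range(3)]
    xty = [sum(x[i] * y for x, y in zip(xs, ys)) for i in range(3)]
    return solve_linear(xtx, xty)

data = [(4, 16, 102), (6, 21, 115), (8, 26, 128), (10, 32, 139), (12, 41, 150)]
b0, b1, b2 = fit_ols(data)
pred = b0 + b1 * 11 + b2 * 36   # an older, heavier child than most of the data
print(f"predicted height ≈ {pred:.0f} cm")
```

Even in this tiny example the interpretability flaw shows up: age and weight are correlated, so the individual coefficients can be hard to read even when the predictions are good.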

5. Bayesianism

Common in: the rationality community

Given my prior odds that Y is true...

And given evidence X...

And given my Bayes factor, which is my estimate of how much more likely X is to occur if Y is true than if Y is not true...

I calculate that Y is far more likely to be true than to not be true (by multiplying the prior odds by the Bayes factor to get the posterior odds).

Therefore Y is likely to be true (with high probability).

Example: “My prior odds that my boss is angry at me were 1 to 4, because he’s angry at me about 20% of the time. But then he came into my office shouting and flipped over my desk, which I estimate is 200 times more likely to occur if he’s angry at me compared to if he’s not. So now the odds of him being angry at me are 200 * (1/4) = 50 to 1 in favor of him being angry.”

Example: "Historically, companies in this situation have 2 to 1 odds of defaulting on their loans. But then evidence came out about this specific company showing that it is 3 times more likely to end up defaulting on its loans than similar companies. Hence now the odds of it defaulting are 6 to 1 since: (2/1) * (3/1) = 6. That means there is an 85% chance that it defaults since 0.85 = 6/(6+1)."

Advantages: If you can do the calculations in a given instance, and have a sensible way to set your prior probabilities, this is probably the mathematically optimal framework to use for probabilistic prediction. For instance, if you have a belief about the probability of something and then gain some new evidence, it can be proven mathematically that Bayes's rule tells you how to calculate the new probability that incorporates that evidence. In that sense, we can think of many of the other approaches on this list as (hopefully pragmatic) approximations of Bayesianism (sometimes good approximations, sometimes bad ones).

Flaws: It's sometimes hard to know how to set your prior odds, and it can be very hard in some cases to perform the Bayesian calculation. In practice, carrying out the calculation might end up relying on subjective estimates of the odds, which can be especially tricky to guess when the evidence is not binary (i.e. not of the form "happened" vs. "didn't happen"), or if you have lots of different pieces of evidence that are partially correlated.
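
The odds arithmetic in the two examples above is a one-line update. A minimal sketch, with the numbers taken from the boss and loan examples:

```python
# Bayes's rule in odds form: posterior_odds = prior_odds * bayes_factor.

def update_odds(prior_odds, bayes_factor):
    """Multiply the prior odds in favor of Y by the likelihood ratio of the evidence."""
    return prior_odds * bayes_factor

def odds_to_probability(odds):
    """Convert odds of o-to-1 into a probability o/(o+1)."""
    return odds / (odds + 1)

# Boss example: prior odds 1 to 4, Bayes factor 200.
posterior = update_odds(prior_odds=1 / 4, bayes_factor=200)
print(posterior)  # 50.0, i.e. "50 to 1" in favor of him being angry

# Loan example: prior odds 2 to 1, Bayes factor 3.
print(round(odds_to_probability(update_odds(2, 3)), 2))  # 6-to-1 odds ≈ 0.86 probability
```

Chaining several pieces of evidence is just repeated multiplication of Bayes factors, which is exactly where the partial-correlation flaw bites: multiplying factors for correlated evidence double-counts it.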

If you’d like to learn more about using Bayesian inference in everyday life, try our mini-course on The Question of Evidence . For a more math-oriented explanation, check out our course on Understanding Bayes’s Theorem .

6. Theories

Common in: psychology, economics

Given our theory, when X occurs, Y occurs.

X occurred.

Therefore Y will occur.

Example: “One theory is that depressed people are most at risk for suicide when they are beginning to come out of a really bad depression. So as depression is remitting, patients should be carefully screened for potentially increasing suicide risk factors.”

Example: “A common theory is that when inflation rises, unemployment falls. Inflation is rising, so we should predict that unemployment will fall.”

Advantages: Theories can make systems far more understandable to the human mind, and can be taught to others. Sometimes even very complex systems can be pretty well approximated with a simple theory. Theories allow us to make predictions about what will happen while only having to focus on a small amount of relevant information, without being bogged down by thousands of details.

Flaws: It can be very challenging to come up with reliable theories, and often you will not know how accurate such a theory is. Even if it has substantial truth to it and is right often, there may be cases where the opposite of what was predicted actually happens, and for reasons the theory can’t explain. Theories usually only capture part of what is going on in a particular situation, ignoring many variables so as to be more understandable. People often get too attached to particular theories, forgetting that theories are only approximations of reality, and so pretty much always have exceptions.

7. Causality

Common in: engineering, biology, physics

We know that X causes Y to occur.

X occurred.

Therefore Y will occur.

Example: “Rusting of gears causes increased friction, leading to greater wear and tear. In this case, the gears were heavily rusted, so we expect to find a lot of wear.”

Example: “This gene produces this phenotype, and we see that this gene is present, so we expect to see the phenotype in the offspring.”

Advantages: If you understand the causal structure of a system, you may be able to make many powerful predictions about it, including predicting what would happen in many hypothetical situations that have never occurred before, and predicting what would happen if you were to intervene on the system in a particular way. This contrasts with (probabilistic) models that may be able to accurately predict what happens in common situations, but perform badly at predicting what will happen in novel situations and in situations where you intervene on the system (e.g. what would happen to the system if I purposely changed X).

Flaws: It’s often extremely hard to figure out causality in a highly complex system, especially in “softer” or "messier" subjects like nutrition and the social sciences. Purely statistical information (even an infinite amount of it) is not enough on its own to fully describe the causality of a system; additional assumptions need to be added. Often in practice we can only answer questions about causality by running randomized experiments (e.g. randomized controlled trials), which are typically expensive and sometimes infeasible, or by attempting to carefully control for all the potential confounding variables, a challenging and error-prone process.

8. Experts

Common in: politics, economics

This expert (or prediction market, or prediction algorithm) X is 90% accurate at predicting things in this general domain of prediction.

X predicts Y.

Therefore Y is likely to be true.

Example: “This prediction market has been right 90% of the time when predicting recent baseball outcomes, and in this case predicts the Yankees will win.”

Advantages: If you can find an expert or algorithm that has been proven to make reliable predictions in a particular domain, you can simply use these predictions yourself without even understanding how they are made.

Flaws: We often don’t have access to the predictions of experts (or of prediction markets, or prediction algorithms), and when we do, we usually don’t have reliable measures of their past accuracy. What's more, many experts whose predictions are publicly available have no clear track record of performance, or even purposely avoid accountability for poor performance (e.g. by hiding past prediction failures and touting past successes).

9. Metaphors

Common in: self-help, ancient philosophy, science education

X, which is what we are dealing with now, is metaphorically a Z.

For Z, when W is true, then obviously Y is true.

Now W (or its metaphorical equivalent) is true for X.

Therefore Y is true for X.

Example: “Your life is but a boat, and you are riding on the waves of your experiences. When a raging storm hits, a boat can’t be under full sail. It can’t continue at its maximum speed. You are experiencing a storm now, and so you too must learn to slow down.”

Example: "To better understand the nature of gasses, imagine tons of ping pong balls all shooting around in straight lines in random directions, and bouncing off of each other whenever they collide. These ping pong balls represent molecules of gas. Assuming the system is not inside a container, ping pong balls at the edges of the system have nothing to collide with, so they just fly outward, expanding the whole system. Similarly, the volume of a gas expands when it is placed in a vacuum."

Advantages: Our brains are good at understanding metaphors, so they can save us mental energy when we try to grasp difficult concepts. If the two items being compared in the metaphor are sufficiently alike in relevant ways, then the metaphor may accurately reveal elements of how its subject works.

Flaws: Z working as a metaphor for X doesn’t mean that all (or even most) predictions that are accurate for situations involving Z are appropriate (or even make any sense) for X. Metaphor-based reasoning can seem profound and persuasive even in cases when it makes little sense.

10. Similarities

Common in: the study of history, machine learning

X occurred, and X is very similar to Z in properties A, B and C.

When things similar to Z in properties A, B, and C occur, Y usually occurs.

Therefore Y is likely to occur.

Example: “This conflict is similar to the Gulf War in various ways, and from what we've learned about wars like the Gulf War, we can expect these sorts of outcomes.”

Example: “This data point (with unknown label) is closest in feature space to this other data point which is labeled ‘cat’, and all the other labeled points around that point are also labeled ‘cat’, so this unlabeled point should also likely get the label ‘cat’.”

Advantages: This approach can be applied at both small scale (with small numbers of examples) and at large scale (with millions of examples, as in machine learning algorithms), though of course large numbers of examples tend to produce more robust results. It can be viewed as a more powerful generalization of "frequencies"-based reasoning.

Flaws: In the history case, it is difficult to know which features are the appropriate ones to use to evaluate the similarity of two cases, and often the conclusions this approach produces are based on a relatively small number of examples. In the machine learning case, a very large amount of data may be needed to train the model (and it still may be unclear how to measure which examples are similar to which other cases, even with a lot of data). The properties you're using to compare cases must be sufficiently relevant to the prediction being made for it to work.
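
The machine-learning flavor of similarity-based inference can be sketched as a k-nearest-neighbour vote: label a new point with the majority label of its closest past examples. The points and labels below are invented for illustration:

```python
import math

def nearest_label(point, examples, k=3):
    """examples: list of (features, label). Return the majority label of the k nearest."""
    by_distance = sorted(examples, key=lambda e: math.dist(point, e[0]))
    labels = [label for _, label in by_distance[:k]]
    return max(set(labels), key=labels.count)

# Invented 2-D feature vectors with labels.
examples = [((0.9, 1.1), "cat"), ((1.0, 0.8), "cat"), ((1.2, 1.0), "cat"),
            ((5.0, 4.8), "dog"), ((5.2, 5.1), "dog")]

print(nearest_label((1.0, 1.0), examples))  # → cat
```

The choice of features and distance function is doing all the work here, which mirrors the flaw above: if the properties used to measure similarity aren't relevant to the prediction, the vote is meaningless.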

11. Anecdotes

Common in: daily life

In this handful of examples (or perhaps even just one example) where X occurred, Y occurred.

Example: “The last time we took that so-called 'shortcut' home, we got stuck in traffic for an extra 45 minutes. Let's not make that mistake again.”

Example: “My friend Bob tried that supplement and said it gave him more energy. So maybe it will give me more energy too."

Advantages: Anecdotes are simple to use, and a few of them are often all we have to work with for inference.

Flaws: Unless we are in a situation with very little noise/variability, a few examples likely will not be enough to accurately generalize. For instance, a few examples is not enough to make a reliable judgement about how often something occurs.

12. Intuition

My intuition (that I may have trouble explaining) predicts that when X occurs, Y is true.

X occurred.

Therefore Y is true.

Example: “The tone of voice he used when he talked about his family gave me a bad vibe. My feeling is that anyone who talks about their family with that tone of voice probably does not really love them.”

Example: "I can't explain why, but I'm pretty sure he's going to win this election."

Advantages: Our intuitions can be very well honed in situations we’ve encountered many times, and that we've received feedback on (i.e. where there was some sort of answer we got about how well our intuition performed). For instance, a surgeon who has conducted thousands of heart surgeries may have very good intuitions about what to do during surgery, or about how the patient will fare, even potentially very accurate intuitions that she can't easily articulate.

Flaws: In novel situations, or in situations where we receive no feedback on how well our instincts are performing, our intuitions may be highly inaccurate (even though we may not feel any less confident about our correctness).

Do you want to learn more about drawing conclusions from data?

If you'd like to know more about when intuition is reliable, try our 7-question guide to determining when you can trust your intuition.

We also have a full podcast episode about Mental models that apply across disciplines that you may like.

How to Write a Conclusion for Research Papers (with Examples)

The conclusion of a research paper is a crucial section that plays a significant role in the overall impact and effectiveness of your research paper. However, this is also the section that typically receives less attention compared to the introduction and the body of the paper. The conclusion serves to provide a concise summary of the key findings, their significance, their implications, and a sense of closure to the study. Discussing how the findings can be applied in real-world scenarios or inform policy, practice, or decision-making is especially valuable to practitioners and policymakers. The research paper conclusion also provides researchers with clear insights and valuable information for their own work, which they can then build on to contribute to the advancement of knowledge in the field.

The research paper conclusion should explain the significance of your findings within the broader context of your field. It restates how your results contribute to the existing body of knowledge and whether they confirm or challenge existing theories or hypotheses. By identifying unanswered questions or areas requiring further investigation, you can also demonstrate your awareness of the broader research landscape.

Remember to tailor the research paper conclusion to the specific needs and interests of your intended audience, which may include researchers, practitioners, policymakers, or a combination of these.

Table of Contents

  • What is a conclusion in a research paper?
  • Summarizing conclusion
  • Editorial conclusion
  • Externalizing conclusion
  • Importance of a good research paper conclusion
  • How to write a conclusion for your research paper
  • Research paper conclusion examples

  • How to write a research paper conclusion with Paperpal? 

Frequently Asked Questions

What is a conclusion in a research paper?

A conclusion in a research paper is the final section where you summarize and wrap up your research, presenting the key findings and insights derived from your study. The research paper conclusion is not the place to introduce new information or data that was not discussed in the main body of the paper. When working on how to conclude a research paper, remember to stick to summarizing and interpreting existing content. The research paper conclusion serves the following purposes: 1

  • Warn readers of the possible consequences of not attending to the problem.
  • Recommend specific course(s) of action.
  • Restate key ideas to drive home the ultimate point of your research paper.
  • Provide a “take-home” message that you want the readers to remember about your study.

Types of conclusions for research papers

In research papers, the conclusion provides closure to the reader. The type of research paper conclusion you choose depends on the nature of your study, your goals, and your target audience. Here are three common types of conclusions:

Summarizing conclusion

A summarizing conclusion is the most common type of conclusion in research papers. It involves summarizing the main points, reiterating the research question, and restating the significance of the findings. This type of research paper conclusion is used across different disciplines.

Editorial conclusion

An editorial conclusion is less common but can be used in research papers that propose or advocate for a particular viewpoint or policy. It involves presenting a strong editorial or opinion based on the research findings and offering recommendations or calls to action.

Externalizing conclusion

An externalizing conclusion extends the research beyond the scope of the paper by suggesting potential future research directions or discussing the broader implications of the findings. This type of conclusion is often used in more theoretical or exploratory research papers.

Importance of a good research paper conclusion

The conclusion in a research paper serves several important purposes:

  • Offers Implications and Recommendations : Your research paper conclusion is an excellent place to discuss the broader implications of your research and suggest potential areas for further study. It’s also an opportunity to offer practical recommendations based on your findings.
  • Provides Closure : A good research paper conclusion provides a sense of closure to your paper. It should leave the reader with a feeling that they have reached the end of a well-structured and thought-provoking research project.
  • Leaves a Lasting Impression : Writing a well-crafted research paper conclusion leaves a lasting impression on your readers. It’s your final opportunity to leave them with a new idea, a call to action, or a memorable quote.

How to write a conclusion for your research paper

Writing a strong conclusion for your research paper is essential to leave a lasting impression on your readers. Here's a step-by-step process to help you decide what to put in the conclusion of a research paper: 2

  • Research Statement : Begin your research paper conclusion by restating your research statement. This reminds the reader of the main point you’ve been trying to prove throughout your paper. Keep it concise and clear.
  • Key Points : Summarize the main arguments and key points you’ve made in your paper. Avoid introducing new information in the research paper conclusion. Instead, provide a concise overview of what you’ve discussed in the body of your paper.
  • Address the Research Questions : If your research paper is based on specific research questions or hypotheses, briefly address whether you’ve answered them or achieved your research goals. Discuss the significance of your findings in this context.
  • Significance : Highlight the importance of your research and its relevance in the broader context. Explain why your findings matter and how they contribute to the existing knowledge in your field.
  • Implications : Explore the practical or theoretical implications of your research. How might your findings impact future research, policy, or real-world applications? Consider the “so what?” question.
  • Future Research : Offer suggestions for future research in your area. What questions or aspects remain unanswered or warrant further investigation? This shows that your work opens the door for future exploration.
  • Closing Thought : Conclude your research paper conclusion with a thought-provoking or memorable statement. This can leave a lasting impression on your readers and wrap up your paper effectively. Avoid introducing new information or arguments here.
  • Proofread and Revise : Carefully proofread your conclusion for grammar, spelling, and clarity. Ensure that your ideas flow smoothly and that your conclusion is coherent and well-structured.

Remember that a well-crafted research paper conclusion is a reflection of the strength of your research and your ability to communicate its significance effectively. It should leave a lasting impression on your readers and tie together all the threads of your paper. Now that you know how to start the conclusion of a research paper and what elements to include to make it impactful, let's look at some research paper conclusion examples.

Research paper conclusion examples

Type: Summarizing Conclusion
Topic: Impact of social media on adolescents' mental health
Example: In conclusion, our study has shown that increased usage of social media is significantly associated with higher levels of anxiety and depression among adolescents. These findings highlight the importance of understanding the complex relationship between social media and mental health to develop effective interventions and support systems for this vulnerable population.

Type: Editorial Conclusion
Topic: Environmental impact of plastic waste
Example: In light of our research findings, it is clear that we are facing a plastic pollution crisis. To mitigate this issue, we strongly recommend a comprehensive ban on single-use plastics, increased recycling initiatives, and public awareness campaigns to change consumer behavior. The responsibility falls on governments, businesses, and individuals to take immediate actions to protect our planet and future generations.

Type: Externalizing Conclusion
Topic: Exploring applications of AI in healthcare
Example: While our study has provided insights into the current applications of AI in healthcare, the field is rapidly evolving. Future research should delve deeper into the ethical, legal, and social implications of AI in healthcare, as well as the long-term outcomes of AI-driven diagnostics and treatments. Furthermore, interdisciplinary collaboration between computer scientists, medical professionals, and policymakers is essential to harness the full potential of AI while addressing its challenges.


How to write a research paper conclusion with Paperpal?

A research paper conclusion is not just a summary of your study, but a synthesis of the key findings that ties the research together and places it in a broader context. A research paper conclusion should be concise, typically around one paragraph in length. However, some complex topics may require a longer conclusion to ensure the reader is left with a clear understanding of the study’s significance. Paperpal, an AI writing assistant trusted by over 800,000 academics globally, can help you write a well-structured conclusion for your research paper. 

  • Sign Up or Log In: Create a new Paperpal account or login with your details.  
  • Navigate to Features : Once logged in, head over to the features’ side navigation pane. Click on Templates and you’ll find a suite of generative AI features to help you write better, faster.  
  • Generate an outline: Under Templates, select ‘Outlines’. Choose ‘Research article’ as your document type.  
  • Select your section: Since you’re focusing on the conclusion, select this section when prompted.  
  • Choose your field of study: Identifying your field of study allows Paperpal to provide more targeted suggestions, ensuring the relevance of your conclusion to your specific area of research. 
  • Provide a brief description of your study: Enter details about your research topic and findings. This information helps Paperpal generate a tailored outline that aligns with your paper’s content. 
  • Generate the conclusion outline: After entering all necessary details, click on ‘generate’. Paperpal will then create a structured outline for your conclusion, to help you start writing and build upon the outline.  
  • Write your conclusion: Use the generated outline to build your conclusion. The outline serves as a guide, ensuring you cover all critical aspects of a strong conclusion, from summarizing key findings to highlighting the research’s implications. 
  • Refine and enhance: Paperpal’s ‘Make Academic’ feature can be particularly useful in the final stages. Select any paragraph of your conclusion and use this feature to elevate the academic tone, ensuring your writing is aligned to the academic journal standards. 

By following these steps, Paperpal not only simplifies the process of writing a research paper conclusion but also ensures it is impactful, concise, and aligned with academic standards. Sign up with Paperpal today and write your research paper conclusion 2x faster.

The research paper conclusion is a crucial part of your paper as it provides the final opportunity to leave a strong impression on your readers. In the research paper conclusion, summarize the main points of your research paper by restating your research statement, highlighting the most important findings, addressing the research questions or objectives, explaining the broader context of the study, discussing the significance of your findings, providing recommendations if applicable, and emphasizing the takeaway message. The main purpose of the conclusion is to remind the reader of the main point or argument of your paper and to provide a clear and concise summary of the key findings and their implications. All these elements should feature on your list of what to put in the conclusion of a research paper to create a strong final statement for your work.

A strong conclusion is a critical component of a research paper, as it provides an opportunity to wrap up your arguments, reiterate your main points, and leave a lasting impression on your readers. Here are the key elements of a strong research paper conclusion:

  • Conciseness: A research paper conclusion should be concise and to the point. It should not introduce new information or ideas that were not discussed in the body of the paper.
  • Summarization: The research paper conclusion should be comprehensive enough to give the reader a clear understanding of the research’s main contributions.
  • Relevance: Ensure that the information included in the research paper conclusion is directly relevant to the research paper’s main topic and objectives; avoid unnecessary details.
  • Connection to the introduction: A well-structured research paper conclusion often revisits the key points made in the introduction and shows how the research has addressed the initial questions or objectives.
  • Emphasis: Highlight the significance and implications of your research. Why is your study important? What are the broader implications or applications of your findings?
  • Call to action: Include a call to action or a recommendation for future research or action based on your findings.

The length of a research paper conclusion can vary depending on several factors, including the overall length of the paper, the complexity of the research, and the specific journal requirements. While there is no strict rule for the length of a conclusion, it’s generally advisable to keep it relatively short. A typical research paper conclusion might be around 5-10% of the paper’s total length. For example, if your paper is 10 pages long, the conclusion might be roughly half a page to one page in length.

In general, you do not need to include citations in the research paper conclusion. Citations are typically reserved for the body of the paper to support your arguments and provide evidence for your claims. However, there may be some exceptions to this rule:

  • If you are drawing a direct quote or paraphrasing a specific source in your research paper conclusion, you should include a citation to give proper credit to the original author.
  • If your conclusion refers to or discusses specific research, data, or sources that are crucial to the overall argument, citations can be included to reinforce your conclusion’s validity.

The conclusion of a research paper serves several important purposes:

  • Summarize the key points
  • Reinforce the main argument
  • Provide closure
  • Offer insights or implications
  • Engage the reader
  • Reflect on limitations

Remember that the primary purpose of the research paper conclusion is to leave a lasting impression on the reader, reinforcing the key points and providing closure to your research. It’s often the last part of the paper that the reader will see, so it should be strong and well-crafted.

  • Makar, G., Foltz, C., Lendner, M., & Vaccaro, A. R. (2018). How to write effective discussion and conclusion sections. Clinical Spine Surgery, 31(8), 345-346.
  • Bunton, D. (2005). The structure of PhD conclusion chapters. Journal of English for Academic Purposes, 4(3), 207-224.



The Process of Writing a Research Paper Guide: The Conclusion


The conclusion is intended to help the reader understand why your research should matter to them after they have finished reading the paper. A conclusion is not merely a summary of the main topics covered or a re-statement of your research problem, but a synthesis of key points and, if applicable, where you recommend new areas for future research. For most college-level research papers, one or two well-developed paragraphs are sufficient for a conclusion, although in some cases, three or more paragraphs may be required.

Conclusions. The Writing Center. University of North Carolina; Conclusions. The Writing Lab and The OWL. Purdue University.

Importance of a Good Conclusion

A well-written conclusion provides you with important opportunities to demonstrate to the reader your understanding of the research problem. These include:

  • Presenting the last word on the issues you raised in your paper . Just as the introduction gives a first impression to your reader, the conclusion offers a chance to leave a lasting impression. Do this, for example, by highlighting key findings in your analysis or result section or by noting important or unexpected implications applied to practice.
  • Summarizing your thoughts and conveying the larger significance of your study . The conclusion is an opportunity to succinctly answer [or in some cases, to re-emphasize]  the "So What?" question by placing the study within the context of how your research advances past research about the topic.
  • Identifying how a gap in the literature has been addressed . The conclusion can be where you describe how a previously identified gap in the literature [described in your literature review section] has been filled by your research.
  • Demonstrating the importance of your ideas . Don't be shy. The conclusion offers you the opportunity to elaborate on the impact and significance of your findings.
  • Introducing possible new or expanded ways of thinking about the research problem . This does not refer to introducing new information [which should be avoided], but to offer new insight and creative approaches for framing or contextualizing the research problem based on the results of your study.

Bunton, David. “The Structure of PhD Conclusion Chapters.” Journal of English for Academic Purposes 4 (July 2005): 207–224; Conclusions. The Writing Center. University of North Carolina; Kretchmer, Paul. Twelve Steps to Writing an Effective Conclusion. San Francisco Edit, 2003-2008; Conclusions. The Writing Lab and The OWL. Purdue University.

Structure and Writing Style

I.  General Rules

The function of your paper's conclusion is to restate the main argument. It reminds the reader of the strengths of your main argument(s) and reiterates the most important evidence supporting those argument(s). Do this by stating clearly the context, background, and necessity of pursuing the research problem you investigated in relation to an issue, controversy, or a gap found in the literature. Make sure, however, that your conclusion is not simply a repetitive summary of the findings. This reduces the impact of the argument(s) you have developed in your essay.

When writing the conclusion to your paper, follow these general rules:

  • State your conclusions in clear, simple language. Re-state the purpose of your study then state how your findings differ or support those of other studies and why [i.e., what were the unique or new contributions your study made to the overall research about your topic?].
  • Do not simply reiterate your results or the discussion of your results. Provide a synthesis of arguments presented in the paper to show how these converge to address the research problem and the overall objectives of your study.
  • Indicate opportunities for future research if you haven't already done so in the discussion section of your paper. Highlighting the need for further research provides the reader with evidence that you have an in-depth awareness of the research problem.

Consider the following points to help ensure your conclusion is presented well:

  • If the argument or purpose of your paper is complex, you may need to summarize the argument for your reader.
  • If, prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the end of your paper to describe your main points and explain their significance.
  • Move from a detailed to a general level of consideration that returns the topic to the context provided by the introduction or within a new context that emerges from the data.

The conclusion also provides a place for you to persuasively and succinctly restate your research problem, given that the reader has now been presented with all the information about the topic. Depending on the discipline you are writing in, the concluding paragraph may contain your reflections on the evidence presented, or on the essay's central research problem. However, the nature of being introspective about the research you have done will depend on the topic and whether your professor wants you to express your observations in this way.

NOTE : If asked to think introspectively about the topics, do not delve into idle speculation. Being introspective means looking within yourself as an author to try and understand an issue more deeply, not to guess at possible outcomes or make up scenarios not supported by evidence.

II.  Developing a Compelling Conclusion

Although an effective conclusion needs to be clear and succinct, it does not need to be written passively or lack a compelling narrative. Strategies to help you move beyond merely summarizing the key points of your research paper may include any of the following:

  • If your essay deals with a contemporary problem, warn readers of the possible consequences of not attending to the problem.
  • Recommend a specific course or courses of action that, if adopted, could address a specific problem in practice or in the development of new knowledge.
  • Cite a relevant quotation or expert opinion already noted in your paper in order to lend authority to the conclusion you have reached [a good place to look is research from your literature review].
  • Explain the consequences of your research in a way that elicits action or demonstrates urgency in seeking change.
  • Restate a key statistic, fact, or visual image to emphasize the ultimate point of your paper.
  • If your discipline encourages personal reflection, illustrate your concluding point with a relevant narrative drawn from your own life experiences.
  • Return to an anecdote, an example, or a quotation that you presented in your introduction, but add further insight derived from the findings of your study; use your interpretation of results to recast it in new or important ways.
  • Provide a "take-home" message in the form of a strong, succinct statement that you want the reader to remember about your study.

III. Problems to Avoid

Failure to be concise

Your conclusion section should be concise and to the point. Conclusions that are too lengthy often have unnecessary information in them. The conclusion is not the place for details about your methodology or results. Although you should give a summary of what was learned from your research, this summary should be relatively brief, since the emphasis in the conclusion is on the implications, evaluations, insights, and other forms of analysis that you make. Strategies for writing concisely can be found here.

Failure to comment on larger, more significant issues

In the introduction, your task was to move from the general [the field of study] to the specific [the research problem]. However, in the conclusion, your task is to move from a specific discussion [your research problem] back to a general discussion [i.e., how your research contributes new understanding or fills an important gap in the literature]. In short, the conclusion is where you should place your research within a larger context [visualize your paper as an hourglass--start with a broad introduction and review of the literature, move to the specific analysis and discussion, conclude with a broad summary of the study's implications and significance].

Failure to reveal problems and negative results

Negative aspects of the research process should never be ignored. Problems, drawbacks, and challenges encountered during your study should be summarized as a way of qualifying your overall conclusions. If you encountered negative or unintended results [i.e., findings that are validated outside the research context in which they were generated], you must report them in the results section and discuss their implications in the discussion section of your paper. In the conclusion, use your summary of the negative results as an opportunity to explain their possible significance and/or how they may form the basis for future research.

Failure to provide a clear summary of what was learned

In order to be able to discuss how your research fits back into your field of study [and possibly the world at large], you need to summarize briefly and succinctly how it contributes to new knowledge or a new understanding about the research problem. This element of your conclusion may be only a few sentences long.

Failure to match the objectives of your research

Often research objectives in the social sciences change while the research is being carried out. This is not a problem unless you forget to go back and refine the original objectives in your introduction. As these changes emerge, they must be documented so that they accurately reflect what you were trying to accomplish in your research [not what you thought you might accomplish when you began].

Resist the urge to apologize

If you've immersed yourself in studying the research problem, you presumably should know a good deal about it, perhaps even more than your professor! Nevertheless, by the time you have finished writing, you may be having some doubts about what you have produced. Repress those doubts! Don't undermine your authority by saying something like, "This is just one approach to examining this problem; there may be other, much better approaches that...." The overall tone of your conclusion should convey confidence to the reader.

Assan, Joseph. Writing the Conclusion Chapter: The Good, the Bad and the Missing. Department of Geography, University of Liverpool; Concluding Paragraphs. College Writing Center at Meramec. St. Louis Community College; Conclusions. The Writing Center. University of North Carolina; Conclusions. The Writing Lab and The OWL. Purdue University; Freedman, Leora and Jerry Plotnick. Introductions and Conclusions. The Lab Report. University College Writing Centre. University of Toronto; Leibensperger, Summer. Draft Your Conclusion. Academic Center, the University of Houston-Victoria, 2003; Make Your Last Words Count. The Writer’s Handbook. Writing Center. University of Wisconsin, Madison; Tips for Writing a Good Conclusion. Writing@CSU. Colorado State University; Kretchmer, Paul. Twelve Steps to Writing an Effective Conclusion. San Francisco Edit, 2003-2008; Writing Conclusions. Writing Tutorial Services, Center for Innovative Teaching and Learning. Indiana University; Writing: Considering Structure and Organization. Institute for Writing Rhetoric. Dartmouth College.

  • Last Updated: Oct 1, 2019 1:26 PM
  • URL: https://midlakes.libguides.com/c.php?g=972151

The Psychology Institute

Drawing Conclusions in Psychological Research: From Data to Insights



Have you ever wondered how a hunch transforms into a scientific understanding? In the realm of psychological research, this transformation hinges on a crucial step: drawing conclusions. This step is not about making wild guesses but about making sense of the data collected through meticulous research and testing hypotheses. It’s the moment where pieces of the puzzle come together, providing answers and sometimes raising even more questions. Let’s embark on a journey to understand how researchers in psychology navigate from raw data to insightful conclusions that can influence theories, practices, and our very understanding of human behavior.

What happens after data analysis in psychological research?

Once the numbers have been crunched, and the analyses are complete, researchers stand at a critical juncture. They must interpret the data in a meaningful way. This process involves looking at the results from various angles, considering alternative explanations, and determining the relevance of the findings to the original research questions. It’s a step that requires not just statistical know-how but also a deep understanding of human psychology and the theories that frame our understanding of it.

How are conclusions synthesized in psychology?

The synthesis of conclusions in psychological research is an art as much as it is a science. Researchers meticulously review their findings, considering the context of the study, the limitations of their methods, and the patterns that have emerged. They may discover that their results support their initial hypothesis, or they may be taken by surprise by what the data reveals. In either case, they must construct a narrative that aligns with the evidence and contributes to the broader conversation in the field of psychology.

The reflective process of drawing conclusions

Drawing conclusions is inherently reflective. It’s a time for researchers to look back at their work and ask crucial questions. Did the study design work as intended? Were the methods appropriate? Is there a need for further research? This reflection is not only about assessing the success of the study but also about understanding its place within the larger body of psychological research. It’s about taking the new knowledge gleaned and fitting it into the existing puzzle—or realizing that perhaps the puzzle itself needs to be redefined.

Relating outcomes to existing theories

Every conclusion drawn from a psychological study has the potential to affirm, challenge, or refine existing theories. This is where research moves beyond data points and becomes part of the ongoing dialogue that shapes our understanding of the mind and behavior. Researchers must consider how their conclusions align with or diverge from the predictions made by current theories and what this means for the field. Does the study reinforce the credibility of a theory, or does it suggest that revisions are necessary?

Potentially modifying theories based on new evidence

When data introduce new perspectives or contradict prevailing theories, it’s a sign that the field may be on the cusp of change. Researchers must be prepared to propose modifications to existing theories or even suggest new ones. This can be a contentious process, as it challenges the status quo and requires a strong foundation of evidence. However, it’s through this process that psychology continues to evolve and refine its understanding of human behavior.

Understanding the implications of research

The conclusions of a psychological study are not confined to academic papers—they have real-world implications. Researchers must consider how their findings can inform clinical practices, educational strategies, or even policy decisions. They need to communicate the relevance of their work to various audiences, from fellow scientists to practitioners and policymakers. The implications can be profound, influencing how we approach mental health, learning, and social interaction.

The role of peer review in drawing conclusions

Peer review acts as a critical checkpoint in the process of drawing conclusions. Other experts in the field scrutinize the study to ensure that the methodology is sound, the analysis is robust, and the conclusions are justified. This collaborative effort helps to maintain the integrity of psychological research and ensures that conclusions are based on a solid foundation of evidence.

Drawing conclusions is a defining moment in psychological research. It’s the culmination of a complex process that starts with a question and, through careful design and analysis, ends with insights that can deepen our understanding of the human mind and behavior. This step is not the end of the journey; it’s a bridge to further inquiry, discussion, and discovery that propels the field forward. As researchers continue to piece together the vast puzzle of psychology, each study adds another piece, slowly bringing into focus the intricate picture of human nature.

How do you think the process of drawing conclusions in research affects the way we understand human behavior? And, what role do you believe new evidence should play in challenging or reinforcing existing psychological theories?




Drawing Conclusions and Reporting the Results

Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton

Learning Objectives

  • Identify the conclusions researchers can make based on the outcome of their studies.
  • Describe why scientists avoid the term “scientific proof.”
  • Explain the different ways that scientists share their findings.

Drawing Conclusions

Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather, theories are supported, refuted, or modified based on the results of research.

If the results are statistically significant and consistent with the hypothesis and the theory that was used to generate the hypothesis, then researchers can conclude that the theory is supported. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this avoidance is that the result may reflect a type I error. Another reason for this avoidance is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A third reason is that it is always possible that another test of the hypothesis or a test of a new hypothesis derived from the theory will be disconfirmed. This difficulty is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if A then B” and “not B” necessarily lead to the conclusion “not A.” If A is the theory and B is the hypothesis (“if A then B”), then disconfirming the hypothesis (“not B”) must mean that the theory is incorrect (“not A”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a missed opportunity (the result of a type II error) or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable.

A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems. That is, the evidence from a study can be used to modify a theory. This practice does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually must abandon their theories and replace them with ones that are more successful.

The bottom line here is that because statistics are probabilistic in nature and because all research studies have flaws, there is no such thing as scientific proof; there is only scientific evidence.

Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a chapter that is published in an edited book. Preferably, the editor of the book puts the chapter through peer review, but this is not always the case; some scientists are simply invited by editors to write book chapters.

A fun way to disseminate findings is to give a presentation at a conference, either as an oral presentation or as a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by their poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting the more rigorous peer-review process involved in publishing a journal article.

Drawing Conclusions and Reporting the Results Copyright © by Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

11 Steps in Research Process

The research process refers to the systematic and organized series of steps taken to investigate a specific topic or problem in order to gain knowledge and find answers to questions. It is a methodical approach followed by researchers to collect, analyze, and interpret data, arrive at meaningful conclusions, and contribute to the existing body of knowledge in a particular field.

[Chart: the research process as a series of interrelated activities, marked I to VII]

The chart shows that the research process consists of several activities marked from I to VII. These activities are closely related and often overlap instead of following a strict order. Sometimes, the first step determines how the last step will be done. If certain important steps are not considered early on, it can cause serious problems and even stop the research from being completed.

It’s essential to understand that the steps involved in the research process are not completely separate from each other. They do not always follow a fixed order, and the researcher needs to be prepared for the requirements of the next steps at each stage of the research process.


The research process typically involves the following key steps:

  • Formulating the Research Problem: Identifying and defining the research question or problem that needs to be addressed.
  • Literature Review: Conducting a thorough review of existing literature and research related to the topic to understand what has already been studied and discovered.
  • Developing the Hypothesis: Creating a clear and testable statement that predicts the relationship between variables in the research.
  • Research Design: Planning the overall structure and approach of the study, including selecting the research methods and data collection techniques.
  • Sample Design: Determining the sample size and selecting the participants or subjects that will be part of the study.
  • Data Collection: Gathering relevant data through various methods, such as surveys, interviews, experiments, or observations.
  • Execution of the Project: Implementing the research plan and collecting the data as per the designed approach.
  • Data Analysis: Analyzing the collected data using appropriate statistical or qualitative techniques to draw meaningful conclusions.
  • Hypothesis Testing: Evaluating the hypothesis based on the analysis to determine whether it is supported or rejected.
  • Generalizations and Interpretation: Making broader connections and interpretations of the findings in the context of the research problem.
  • Conclusion and Recommendations: Summarizing the research results, drawing conclusions, and suggesting potential future research or practical implications.

Throughout the research process, researchers must maintain objectivity, rigor, and ethical considerations to ensure the validity and reliability of the results. Each step contributes to a comprehensive understanding of the research topic and the generation of new knowledge in the field.

Inductive Reasoning | Types, Examples, Explanation

Published on January 12, 2022 by Pritha Bhandari . Revised on June 22, 2023.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning , where you go from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

Note: Inductive reasoning is often confused with deductive reasoning. However, in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Inductive reasoning is a logical approach to making inferences, or conclusions. People often use inductive reasoning informally in everyday situations.

Inductive Reasoning

You may have come across inductive logic examples that come in a set of three statements. These start with one specific observation, add a general pattern, and end with a conclusion.

Examples: Inductive reasoning

Example 1:

  • Specific observation: Nala is an orange cat and she purrs loudly.
  • Pattern recognition: Every orange cat I’ve met purrs loudly.
  • General conclusion: All orange cats purr loudly.

Example 2:

  • Specific observation: Baby Jack said his first word at the age of 12 months.
  • Pattern recognition: All babies say their first word at the age of 12 months.
  • General conclusion: All babies say their first word at the age of 12 months.

In inductive research, you start by making observations or gathering data. Then, you take a broad view of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Example: You distribute a survey to pet owners. You ask about the type of animal they have and any behavioral changes they’ve noticed in their pets since they started working from home. These data make up your observations.

To analyze your data, you create a procedure to categorize the survey responses so you can pick up on repeated themes. You notice a pattern: most pets became more needy and clingy or agitated and aggressive.

Inductive reasoning is commonly linked to qualitative research , but both quantitative and qualitative research use a mix of different types of reasoning.

There are many different types of inductive reasoning that people use formally or informally, so we’ll cover just a few in this article:

Inductive reasoning generalizations can vary from weak to strong, depending on the number and quality of observations and arguments used.

Inductive generalizations use observations about a sample to come to a conclusion about the population it came from.

Inductive generalizations are also called induction by enumeration.

Example:

  • The flamingos here are all pink.
  • All flamingos I’ve ever seen are pink.
  • All flamingos must be pink.

Inductive generalizations are evaluated using several criteria:

  • Large sample: Your sample should be large for a solid set of observations.
  • Random sampling: Probability sampling methods let you generalize your findings.
  • Variety: Your observations should be externally valid .
  • Counterevidence: Any observations that refute yours falsify your generalization.

Statistical generalizations use specific numbers to make statements about populations, while non-statistical generalizations aren’t as specific.

These generalizations are a subtype of inductive generalizations, and they’re also called statistical syllogisms.

Here’s an example of a statistical generalization contrasted with a non-statistical generalization.

Example: Statistical vs. non-statistical generalization

Statistical:

  • Specific observation: 73% of students from a sample in a local university prefer hybrid learning environments.
  • Inductive generalization: 73% of all students in the university prefer hybrid learning environments.

Non-statistical:

  • Specific observation: Most students from a sample in a local university prefer hybrid learning environments.
  • Inductive generalization: Most students in the university prefer hybrid learning environments.
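
To see how far a sample figure like the 73% above might plausibly stray from the true population value, you can attach a margin of error to it. The sketch below is illustrative only: the sample size of 200 students is invented, and it uses the standard normal approximation for a proportion's 95% confidence interval.

```python
import math

def proportion_confidence_interval(successes, n, z=1.96):
    """Approximate 95% confidence interval for a population proportion
    using the normal approximation (reasonable for large-ish n)."""
    p_hat = successes / n
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical: 146 of 200 sampled students (73%) prefer hybrid learning.
low, high = proportion_confidence_interval(146, 200)
print(f"Estimated population proportion: 73% (95% CI: {low:.1%} to {high:.1%})")
```

The wider the interval, the weaker the generalization; a larger random sample tightens it.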

Causal reasoning means making cause-and-effect links between different things.

A causal reasoning statement often follows a standard setup:

  • You start with a premise about a correlation (two events that co-occur).
  • You put forward the specific direction of causality or refute any other direction.
  • You conclude with a causal statement about the relationship between two things.

Example:

  • All of my white clothes turn pink when I put a red cloth in the washing machine with them.
  • My white clothes don’t turn pink when I wash them on their own.
  • Putting colorful clothes in the wash with light colors causes the colors to run and stain the light-colored clothes.

Good causal inferences meet a couple of criteria:

  • Direction: The direction of causality should be clear and unambiguous based on your observations.
  • Strength: There’s ideally a strong relationship between the cause and the effect.

Sign reasoning involves making correlational connections between different things.

Using inductive reasoning, you infer a purely correlational relationship where nothing causes the other thing to occur. Instead, one event may act as a “sign” that another event will occur or is currently occurring.

Example:

  • Every time Punxsutawney Phil casts a shadow on Groundhog Day, winter lasts six more weeks.
  • Punxsutawney Phil doesn’t cause winter to be extended six more weeks.
  • His shadow is a sign that we’ll have six more weeks of wintery weather.

It’s best to be careful when making correlational links between variables . Build your argument on strong evidence, and eliminate any confounding variables , or you may be on shaky ground.

Analogical reasoning means drawing conclusions about something based on its similarities to another thing. You first link two things together and then conclude that some attribute of one thing must also hold true for the other thing.

Analogical reasoning can be literal (closely similar) or figurative (abstract), but you’ll have a much stronger case when you use a literal comparison.

Analogical reasoning is also called comparison reasoning.

Example:

  • Humans and laboratory rats are extremely similar biologically, sharing over 90% of their DNA.
  • Lab rats show promising results when treated with a new drug for managing Parkinson’s disease.
  • Therefore, humans will also show promising results when treated with the drug.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

In deductive reasoning, you make inferences by going from general premises to specific conclusions. You start with a theory, and you might develop a hypothesis that you test empirically. You collect data from many observations and use a statistical test to come to a conclusion about your hypothesis.

Inductive research is usually exploratory in nature, because your generalizations help you develop theories. In contrast, deductive research is generally confirmatory.

Sometimes, both inductive and deductive approaches are combined within a single research study.

Inductive reasoning approach

You begin by using qualitative methods to explore the research topic, taking an inductive reasoning approach. You collect observations by interviewing workers on the subject and analyze the data to spot any patterns. Then, you develop a theory to test in a follow-up study.

Deductive reasoning approach

In the follow-up study, you take a deductive reasoning approach: you derive a testable hypothesis from your new theory, collect data from many observations, and use a statistical test to decide whether the hypothesis is supported.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about inductive reasoning

What is inductive reasoning?

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

How is inductive reasoning used in research?

In inductive research, you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

What is the difference between inductive and deductive reasoning?

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

What are the types of inductive reasoning?

There are many different types of inductive reasoning that people use formally or informally. Here are a few common types:

  • Inductive generalization: You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Bhandari, P. (2023, June 22). Inductive Reasoning | Types, Examples, Explanation. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/methodology/inductive-reasoning/


Data Interpretation – Process, Methods and Questions

Data Interpretation

Definition:

Data interpretation refers to the process of making sense of data by analyzing and drawing conclusions from it. It involves examining data in order to identify patterns, relationships, and trends that can help explain the underlying phenomena being studied. Data interpretation can be used to make informed decisions and solve problems across a wide range of fields, including business, science, and social sciences.

Data Interpretation Process

Here are the steps involved in the data interpretation process:

  • Define the research question: The first step in data interpretation is to clearly define the research question. This will help you to focus your analysis and ensure that you are interpreting the data in a way that is relevant to your research objectives.
  • Collect the data: The next step is to collect the data. This can be done through a variety of methods such as surveys, interviews, observation, or secondary data sources.
  • Clean and organize the data: Once the data has been collected, it is important to clean and organize it. This involves checking for errors, inconsistencies, and missing data. Data cleaning can be a time-consuming process, but it is essential to ensure that the data is accurate and reliable.
  • Analyze the data: The next step is to analyze the data. This can involve using statistical software or other tools to calculate summary statistics, create graphs and charts, and identify patterns in the data.
  • Interpret the results: Once the data has been analyzed, it is important to interpret the results. This involves looking for patterns, trends, and relationships in the data. It also involves drawing conclusions based on the results of the analysis.
  • Communicate the findings: The final step is to communicate the findings. This can involve creating reports, presentations, or visualizations that summarize the key findings of the analysis. It is important to communicate the findings in a way that is clear and concise, and that is tailored to the audience’s needs.
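
The clean → analyze → communicate steps above can be sketched in a few lines of Python. The survey scores and the choice of summary statistics are hypothetical; real projects would typically use dedicated tools, but the flow is the same.

```python
import statistics

# Hypothetical survey data: satisfaction scores (None marks a missing response).
raw_responses = [4, 5, None, 3, 4, 5, 2, None, 4, 5]

# Clean and organize: drop missing values.
cleaned = [r for r in raw_responses if r is not None]

# Analyze: compute summary statistics.
summary = {
    "n": len(cleaned),
    "mean": statistics.mean(cleaned),
    "median": statistics.median(cleaned),
    "stdev": statistics.stdev(cleaned),
}

# Interpret and communicate: a simple textual finding.
print(f"{summary['n']} valid responses; mean satisfaction {summary['mean']:.2f}/5")
```

In practice the cleaning step also covers error and consistency checks, not just missing values.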

Types of Data Interpretation

There are various types of data interpretation techniques used for analyzing and making sense of data. Here are some of the most common types:

Descriptive Interpretation

This type of interpretation involves summarizing and describing the key features of the data. This can involve calculating measures of central tendency (such as mean, median, and mode), measures of dispersion (such as range, variance, and standard deviation), and creating visualizations such as histograms, box plots, and scatterplots.
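The measures named above can be computed directly with Python's standard library; the test scores below are hypothetical and serve only to show the calculations.

```python
import statistics

scores = [70, 85, 78, 92, 85, 60, 75, 88]  # hypothetical test scores

central_tendency = {
    "mean": statistics.mean(scores),
    "median": statistics.median(scores),
    "mode": statistics.mode(scores),
}
dispersion = {
    "range": max(scores) - min(scores),
    "variance": statistics.variance(scores),  # sample variance
    "stdev": statistics.stdev(scores),        # sample standard deviation
}
print(central_tendency, dispersion)
```

Visualizations such as histograms and box plots would then be built on top of these numbers.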

Inferential Interpretation

This type of interpretation involves making inferences about a larger population based on a sample of the data. This can involve hypothesis testing, where you test a hypothesis about a population parameter using sample data, or confidence interval estimation, where you estimate a range of values for a population parameter based on sample data.
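As a small illustration of hypothesis testing, here is a one-sample z-test sketched with the standard library. The reaction-time data, the hypothesized population mean of 300 ms, and the known SD of 50 are all invented; in practice a t-test is more common when the population SD is unknown.

```python
import math
import statistics

def one_sample_z_test(sample, population_mean, population_sd):
    """Two-sided z-test: is the sample mean consistent with the
    hypothesized population mean? Assumes a known population SD."""
    n = len(sample)
    z = (statistics.mean(sample) - population_mean) / (population_sd / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: do these 25 reaction times differ from a known mean of 300 ms?
sample = [310, 325, 290, 340, 315] * 5  # illustrative data only
z, p = one_sample_z_test(sample, population_mean=300, population_sd=50)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value above the chosen significance level (commonly 0.05) means the null hypothesis is not rejected.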

Predictive Interpretation

This type of interpretation involves using data to make predictions about future outcomes. This can involve building predictive models using statistical techniques such as regression analysis, time-series analysis, or machine learning algorithms.
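To make the regression idea concrete, here is an ordinary least-squares fit of a straight line written out by hand. The ad-spend figures are hypothetical, and real analyses would normally use a statistics library, but the mechanics are the same.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical: monthly ad spend (in $1000s) vs. units sold.
ad_spend = [1, 2, 3, 4, 5]
units = [12, 19, 29, 37, 45]
slope, intercept = fit_line(ad_spend, units)
predicted = slope * 6 + intercept  # forecast for a hypothetical $6k month
print(f"y = {slope:.2f}x + {intercept:.2f}; forecast: {predicted:.1f} units")
```

Time-series models and machine-learning methods extend this same fit-then-predict pattern to richer data.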

Exploratory Interpretation

This type of interpretation involves exploring the data to identify patterns and relationships that were not previously known. This can involve data mining techniques such as clustering analysis, principal component analysis, or association rule mining.

Causal Interpretation

This type of interpretation involves identifying causal relationships between variables in the data. This can involve experimental designs, such as randomized controlled trials, or observational studies, such as regression analysis or propensity score matching.

Data Interpretation Methods

There are various methods for data interpretation that can be used to analyze and make sense of data. Here are some of the most common methods:

Statistical Analysis

This method involves using statistical techniques to analyze the data. Statistical analysis can involve descriptive statistics (such as measures of central tendency and dispersion), inferential statistics (such as hypothesis testing and confidence interval estimation), and predictive modeling (such as regression analysis and time-series analysis).

Data Visualization

This method involves using visual representations of the data to identify patterns and trends. Data visualization can involve creating charts, graphs, and other visualizations, such as heat maps or scatterplots.

Text Analysis

This method involves analyzing text data, such as survey responses or social media posts, to identify patterns and themes. Text analysis can involve techniques such as sentiment analysis, topic modeling, and natural language processing.
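A toy version of theme identification and lexicon-based sentiment scoring can be built with only the standard library. The responses and the tiny positive/negative word lists below are invented for illustration; real text analysis uses much larger lexicons and NLP tooling.

```python
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "Great service and friendly staff",
    "Slow delivery but great product",
    "Terrible support, slow response",
]

# Theme identification via word frequencies.
words = [w.strip(",.").lower() for r in responses for w in r.split()]
top_words = Counter(words).most_common(3)

# Naive lexicon-based sentiment: count positive vs. negative words.
positive, negative = {"great", "friendly"}, {"slow", "terrible"}
score = sum(w in positive for w in words) - sum(w in negative for w in words)
print(top_words, score)
```

A positive score suggests overall positive sentiment, a negative score the opposite; here the naive counts balance out.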

Machine Learning

This method involves using algorithms to identify patterns in the data and make predictions or classifications. Machine learning can involve techniques such as decision trees, neural networks, and random forests.
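Machine-learning work usually relies on libraries, but the core idea of a classifier can be shown in a few lines. Below is a one-nearest-neighbor sketch; the customer features and labels are entirely hypothetical.

```python
import math

def nearest_neighbor_predict(train, label_of, point):
    """Classify `point` by the label of its nearest training example
    (1-nearest-neighbor, a minimal machine-learning classifier)."""
    nearest = min(train, key=lambda t: math.dist(t, point))
    return label_of[nearest]

# Hypothetical customers described by (visits per month, avg spend).
train = [(1, 10), (2, 15), (8, 90), (9, 80)]
labels = {(1, 10): "casual", (2, 15): "casual", (8, 90): "loyal", (9, 80): "loyal"}

print(nearest_neighbor_predict(train, labels, (7, 85)))  # lands near the "loyal" cluster
```

Decision trees, neural networks, and random forests follow the same train-then-predict pattern with far more sophisticated decision rules.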

Qualitative Analysis

This method involves analyzing non-numeric data, such as interviews or focus group discussions, to identify themes and patterns. Qualitative analysis can involve techniques such as content analysis, grounded theory, and narrative analysis.

Geospatial Analysis

This method involves analyzing spatial data, such as maps or GPS coordinates, to identify patterns and relationships. Geospatial analysis can involve techniques such as spatial autocorrelation, hot spot analysis, and clustering.
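A basic building block of techniques like hot spot analysis and spatial clustering is the distance between two coordinates. Here is the standard haversine (great-circle) formula; the two example locations are arbitrary.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Example: distance between two hypothetical sensor sites (London and Paris).
print(f"{haversine_km(51.5074, -0.1278, 48.8566, 2.3522):.0f} km")
```

Clustering or hot spot methods then group points whose pairwise distances fall below some threshold.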

Applications of Data Interpretation

Data interpretation has a wide range of applications across different fields, including business, healthcare, education, social sciences, and more. Here are some examples of how data interpretation is used in different applications:

  • Business: Data interpretation is widely used in business to inform decision-making, identify market trends, and optimize operations. For example, businesses may analyze sales data to identify the most popular products or customer demographics, or use predictive modeling to forecast demand and adjust pricing accordingly.
  • Healthcare: Data interpretation is critical in healthcare for identifying disease patterns, evaluating treatment effectiveness, and improving patient outcomes. For example, healthcare providers may use electronic health records to analyze patient data and identify risk factors for certain diseases or conditions.
  • Education: Data interpretation is used in education to assess student performance, identify areas for improvement, and evaluate the effectiveness of instructional methods. For example, schools may analyze test scores to identify students who are struggling and provide targeted interventions to improve their performance.
  • Social sciences: Data interpretation is used in social sciences to understand human behavior, attitudes, and perceptions. For example, researchers may analyze survey data to identify patterns in public opinion or use qualitative analysis to understand the experiences of marginalized communities.
  • Sports: Data interpretation is increasingly used in sports to inform strategy and improve performance. For example, coaches may analyze performance data to identify areas for improvement or use predictive modeling to assess the likelihood of injuries or other risks.

When to use Data Interpretation

Data interpretation is used to make sense of complex data and to draw conclusions from it. It is particularly useful when working with large datasets or when trying to identify patterns or trends in the data. Data interpretation can be used in a variety of settings, including scientific research, business analysis, and public policy.

In scientific research, data interpretation is often used to draw conclusions from experiments or studies. Researchers use statistical analysis and data visualization techniques to interpret their data and to identify patterns or relationships between variables. This can help them to understand the underlying mechanisms of their research and to develop new hypotheses.

In business analysis, data interpretation is used to analyze market trends and consumer behavior. Companies can use data interpretation to identify patterns in customer buying habits, to understand market trends, and to develop marketing strategies that target specific customer segments.

In public policy, data interpretation is used to inform decision-making and to evaluate the effectiveness of policies and programs. Governments and other organizations use data interpretation to track the impact of policies and programs over time, to identify areas where improvements are needed, and to develop evidence-based policy recommendations.

In general, data interpretation is useful whenever large amounts of data need to be analyzed and understood in order to make informed decisions.

Data Interpretation Examples

Here are some real-time examples of data interpretation:

  • Social media analytics : Social media platforms generate vast amounts of data every second, and businesses can use this data to analyze customer behavior, track sentiment, and identify trends. Data interpretation in social media analytics involves analyzing data in real-time to identify patterns and trends that can help businesses make informed decisions about marketing strategies and customer engagement.
  • Healthcare analytics: Healthcare organizations use data interpretation to analyze patient data, track outcomes, and identify areas where improvements are needed. Real-time data interpretation can help healthcare providers make quick decisions about patient care, such as identifying patients who are at risk of developing complications or adverse events.
  • Financial analysis: Real-time data interpretation is essential for financial analysis, where traders and analysts need to make quick decisions based on changing market conditions. Financial analysts use data interpretation to track market trends, identify opportunities for investment, and develop trading strategies.
  • Environmental monitoring : Real-time data interpretation is important for environmental monitoring, where data is collected from various sources such as satellites, sensors, and weather stations. Data interpretation helps to identify patterns and trends that can help predict natural disasters, track changes in the environment, and inform decision-making about environmental policies.
  • Traffic management: Real-time data interpretation is used in traffic management, where traffic sensors collect data on traffic flow, congestion, and accidents. Interpreting this data identifies areas of high congestion and helps traffic management authorities make decisions about road maintenance, traffic signal timing, and other strategies to improve traffic flow.

Data Interpretation Questions

Here are some sample data interpretation questions:

  • Medical : What is the correlation between a patient’s age and their risk of developing a certain disease?
  • Environmental Science: What is the trend in the concentration of a certain pollutant in a particular body of water over the past 10 years?
  • Finance : What is the correlation between a company’s stock price and its quarterly revenue?
  • Education : What is the trend in graduation rates for a particular high school over the past 5 years?
  • Marketing : What is the correlation between a company’s advertising budget and its sales revenue?
  • Sports : What is the trend in the number of home runs hit by a particular baseball player over the past 3 seasons?
  • Social Science: What is the correlation between a person’s level of education and their income level?

In order to answer these questions, you would need to analyze and interpret the data using statistical methods, graphs, and other visualization tools.

Purpose of Data Interpretation

The purpose of data interpretation is to make sense of complex data by analyzing and drawing insights from it. The process of data interpretation involves identifying patterns and trends, making comparisons, and drawing conclusions based on the data. The ultimate goal of data interpretation is to use the insights gained from the analysis to inform decision-making.

Data interpretation is important because it allows individuals and organizations to:

  • Understand complex data : Data interpretation helps individuals and organizations to make sense of complex data sets that would otherwise be difficult to understand.
  • Identify patterns and trends : Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships.
  • Make informed decisions: Data interpretation provides individuals and organizations with the information they need to make informed decisions based on the insights gained from the data analysis.
  • Evaluate performance : Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made.
  • Communicate findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.

Characteristics of Data Interpretation

Here are some characteristics of data interpretation:

  • Contextual : Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.
  • Iterative : Data interpretation is an iterative process, meaning that it often involves multiple rounds of analysis and refinement as more data becomes available or as new insights are gained from the analysis.
  • Subjective : Data interpretation is often subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. It is important to acknowledge and address these biases when interpreting data.
  • Analytical : Data interpretation involves the use of analytical tools and techniques to analyze and draw insights from data. These may include statistical analysis, data visualization, and other data analysis methods.
  • Evidence-based : Data interpretation is evidence-based, meaning that it is based on the data and the insights gained from the analysis. It is important to ensure that the data used in the analysis is accurate, relevant, and reliable.
  • Actionable : Data interpretation is actionable, meaning that it provides insights that can be used to inform decision-making and to drive action. The ultimate goal of data interpretation is to use the insights gained from the analysis to improve performance or to achieve specific goals.

Advantages of Data Interpretation

Data interpretation has several advantages, including:

  • Improved decision-making: Data interpretation provides insights that can be used to inform decision-making. By analyzing data and drawing insights from it, individuals and organizations can make informed decisions based on evidence rather than intuition.
  • Identification of patterns and trends: Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships. This information can be used to improve performance or to achieve specific goals.
  • Evaluation of performance: Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made. By analyzing data, organizations can identify strengths and weaknesses and make changes to improve their performance.
  • Communication of findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.
  • Better resource allocation: Data interpretation can help organizations allocate resources more efficiently by identifying areas where resources are needed most. By analyzing data, organizations can identify areas where resources are being underutilized or where additional resources are needed to improve performance.
  • Improved competitiveness : Data interpretation can give organizations a competitive advantage by providing insights that help to improve performance, reduce costs, or identify new opportunities for growth.

Limitations of Data Interpretation

Data interpretation has some limitations, including:

  • Limited by the quality of data: The quality of data used in data interpretation can greatly impact the accuracy of the insights gained from the analysis. Poor quality data can lead to incorrect conclusions and decisions.
  • Subjectivity: Data interpretation can be subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. This can lead to different interpretations of the same data.
  • Limited by analytical tools: The analytical tools and techniques used in data interpretation can also limit the accuracy of the insights gained from the analysis. Different analytical tools may yield different results, and some tools may not be suitable for certain types of data.
  • Time-consuming: Data interpretation can be a time-consuming process, particularly for large and complex data sets. This can make it difficult to quickly make decisions based on the insights gained from the analysis.
  • Incomplete data: Data interpretation can be limited by incomplete data sets, which may not provide a complete picture of the situation being analyzed. Incomplete data can lead to incorrect conclusions and decisions.
  • Limited by context: Because data interpretation is always contextual, insights drawn in one setting may not transfer to another. The same data may support different conclusions depending on the context in which it is analyzed.

Difference between Data Interpretation and Data Analysis

Data interpretation and data analysis are two different but closely related processes in data-driven decision-making.

Data analysis refers to the process of inspecting and examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover patterns, relationships, and trends that can help in understanding the underlying phenomena.

Data interpretation, on the other hand, refers to the process of making sense of the findings from the data analysis by contextualizing them within the larger problem domain. It involves identifying the key takeaways from the data analysis, assessing their relevance and significance to the problem at hand, and communicating the insights in a clear and actionable manner.

In short, data analysis is about uncovering insights from the data, while data interpretation is about making sense of those insights and translating them into actionable recommendations.
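That division of labor can be sketched in a few lines of code. In this illustrative Python fragment (the sales figures and wording are hypothetical), the first half performs the analysis (cleaning, transforming, and modeling) and the second half performs the interpretation, turning the resulting number into an actionable takeaway.

```python
from statistics import mean

# Hypothetical monthly sales figures; None marks a missing entry.
raw = [120, 135, None, 150, 162, 171]

# Data analysis: clean, transform, and model the data.
clean = [x for x in raw if x is not None]           # cleaning
growth = [b - a for a, b in zip(clean, clean[1:])]  # transforming: month-over-month change
avg_growth = mean(growth)                           # modeling: average monthly change

# Data interpretation: contextualize the result and make it actionable.
if avg_growth > 0:
    takeaway = f"Sales are trending up by about {avg_growth:.1f} units per month."
else:
    takeaway = "Sales are flat or declining; investigate possible causes."

print(takeaway)
```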

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Making Inferences and Drawing Conclusions

Read with purpose and meaning.

Drawing conclusions refers to information that is implied or inferred.  This means that the information is never clearly stated.

Writers often tell you more than they say directly.  They give you hints or clues that help you "read between the lines."  Using these clues to give you a deeper understanding of your reading is called inferring . When you infer , you go beyond the surface details to see other meanings that the details suggest or imply (not stated).  When the meanings of words are not stated clearly in the context of the text, they may be implied  – that is, suggested or hinted at.  When meanings are implied, you may infer them.

Inference is just a big word that means a conclusion or judgement.  If you infer that something has happened, you do not see, hear, feel, smell, or taste the actual event.  But from what you know, it makes sense to think that it has happened.  You make inferences every day.  Most of the time you do so without thinking about it.  Suppose you are sitting in your car stopped at a red signal light.  You hear screeching tires, then a loud crash and breaking glass.  You see nothing, but you infer that there has been a car accident.  We all know the sounds of screeching tires and a crash.  We know that these sounds almost always mean a car accident.  But there could be some other reason, and therefore another explanation, for the sounds.  Perhaps it was not an accident involving two moving vehicles.  Maybe an angry driver rammed a parked car.  Or maybe someone played the sound of a car crash from a recording.  Making inferences means choosing the most likely explanation from the facts at hand.

There are several ways to help you draw conclusions from what an author may be implying.  The following are descriptions of the various ways to aid you in reaching a conclusion.

General Sense

The meaning of a word may be implied by the general sense of its context, as the meaning of the word incarcerated is implied in the following sentence:

Murderers are usually incarcerated for longer periods of time than robbers.

You may infer the meaning of incarcerated by answering the question "What usually happens to those found guilty of murder or robbery?"  What have you inferred as the meaning of the word incarcerated ?

If you answered that they are locked up in jail, prison, or a penitentiary, you correctly inferred the meaning of incarcerated .

When the meaning of the word is not implied by the general sense of its context, it may be implied by examples.  For instance,

Those who enjoy belonging to clubs, going to parties, and inviting friends often to their homes for dinner are gregarious .

You may infer the meaning of gregarious by answering the question, "What word or words describe people who belong to clubs, go to parties a lot, and often invite friends over to their homes for dinner?"  What have you inferred as the meaning of the word gregarious ?

If you answered social or something like: "people who enjoy the company of others", you correctly inferred the meaning of gregarious .

Antonyms and Contrasts

When the meaning of a word is not implied by the general sense of its context or by examples, it may be implied by an antonym or by a contrasting thought in a context.  Antonyms are words that have opposite meanings, such as happy and sad.  For instance,

Ben is fearless, but his brother is timorous .

You may infer the meaning of timorous by answering the question, "If Ben is fearless and his brother is very different from Ben with regard to fear, then what word describes the brother?"

If you answered a word such as timid , or afraid , or fearful , you inferred the meaning of timorous .

A contrast in the following sentence implies the meaning of credence :

Dad gave credence to my story, but Mom's reaction was one of total disbelief.

You may infer the meaning of credence by answering the question, "If Mom's reaction was disbelief and Dad's reaction was very different from Mom's, what was Dad's reaction?"

If you answered that Dad believed the story, you correctly inferred the meaning of credence; it means belief .

Be Careful of the Meaning You Infer!

When a sentence contains an unfamiliar word, it is sometimes possible to infer the general meaning of the sentence without inferring the exact meaning of the unknown word.  For instance,

When we invite the Paulsons for dinner, they never invite us to their home for a meal; however, when we have the Browns to dinner, they always reciprocate .

In reading this sentence, some students infer that the Browns are more desirable dinner guests than the Paulsons without inferring the exact meaning of reciprocate .  Other students conclude that the Browns differ from the Paulsons in that they do something in return when they are invited for dinner; these students conclude correctly that reciprocate means "to do something in return."

In drawing conclusions (making inferences), you are really getting at the ultimate meaning of things – what is important, why it is important, how one event influences another, how one happening leads to another. 

Simply getting the facts in reading is not enough. You must think about what those facts mean to you.


How to write a strong conclusion for your research paper

Last updated

17 February 2024

Writing a research paper is a chance to share your knowledge and hypothesis. It's an opportunity to demonstrate your many hours of research and prove your ability to write convincingly.

Ideally, by the end of your research paper, you'll have brought your readers on a journey to reach the conclusions you've pre-determined. However, if you don't stick the landing with a good conclusion, you'll risk losing your reader’s trust.

Writing a strong conclusion for your research paper involves a few important steps, including restating the thesis and summing up everything properly.

Find out what to include and what to avoid, so you can effectively demonstrate your understanding of the topic and prove your expertise.

  • Why is a good conclusion important?

A good conclusion can cement your paper in the reader’s mind. Making a strong impression in your introduction can draw your readers in, but it's the conclusion that will inspire them.

  • What to include in a research paper conclusion

There are a few specifics you should include in your research paper conclusion. Offer your readers some sense of urgency or consequence by pointing out why they should care about the topic you have covered. Discuss any common problems associated with your topic and provide suggestions as to how these problems can be solved or addressed.

The conclusion should include a restatement of your initial thesis. Thesis statements are strengthened after you’ve presented supporting evidence (as you will have done in the paper), so make a point to reintroduce it at the end.

Finally, recap the main points of your research paper, highlighting the key takeaways you want readers to remember. If you've made multiple points throughout the paper, refer to the ones with the strongest supporting evidence.

  • Steps for writing a research paper conclusion

Many writers find the conclusion the most challenging part of any research project. By following these three steps, you'll be prepared to write a conclusion that is effective and concise.

  • Step 1: Restate the problem

Always begin by restating the research problem in the conclusion of a research paper. This serves to remind the reader of your hypothesis and refresh them on the main point of the paper. 

When restating the problem, take care to avoid using exactly the same words you employed earlier in the paper.

  • Step 2: Sum up the paper

After you've restated the problem, sum up the paper by revealing your overall findings. The method for this differs slightly, depending on whether you're crafting an argumentative paper or an empirical paper.

Argumentative paper: Restate your thesis and arguments

Argumentative papers involve introducing a thesis statement early on. In crafting the conclusion for an argumentative paper, always restate the thesis, outlining the way you've developed it throughout the entire paper.

It might be appropriate to mention any counterarguments in the conclusion, so you can demonstrate how your thesis is correct or how the data best supports your main points.

Empirical paper: Summarize research findings

Empirical papers break down a series of research questions. In your conclusion, discuss the findings your research revealed, including any information that surprised you.

Be clear about the conclusions you reached, and explain whether or not you expected to arrive at these particular ones.

  • Step 3: Discuss the implications of your research

Argumentative papers and empirical papers also differ in this part of a research paper conclusion. Here are some tips on crafting conclusions for argumentative and empirical papers.

Argumentative paper: Powerful closing statement

In an argumentative paper, you'll have spent a great deal of time expressing the opinions you formed after doing a significant amount of research. Make a strong closing statement in your argumentative paper's conclusion to share the significance of your work.

You can outline the next steps through a bold call to action, or restate how powerful your ideas turned out to be.

Empirical paper: Directions for future research

Empirical papers are broader in scope. They usually cover a variety of aspects and can include several points of view.

To write a good conclusion for an empirical paper, suggest the type of research that could be done in the future, including methods for further investigation or outlining ways other researchers might proceed.

If you feel your research had any limitations, even if they were outside your control, you could mention these in your conclusion.

After you finish outlining your conclusion, ask someone to read it and offer feedback. In any research project you're especially close to, it can be hard to identify problem areas. Having a close friend or someone whose opinion you value read the research paper and provide honest feedback can be invaluable. Take note of any suggested edits and consider incorporating them into your paper if they make sense.

  • Things to avoid in a research paper conclusion

Keep these aspects to avoid in mind as you're writing your conclusion and refer to them after you've created an outline.

Dry summary

Writing a memorable, succinct conclusion is arguably more important than a strong introduction. Take care to avoid just rephrasing your main points, and don't fall into the trap of repeating dry facts or citations.

You can provide a new perspective for your readers to think about or contextualize your research. Either way, make the conclusion vibrant and interesting, rather than a rote recitation of your research paper’s highlights.

Clichéd or generic phrasing

Your research paper conclusion should feel fresh and inspiring. Avoid generic phrases like "to sum up" or "in conclusion." These phrases tend to be overused, especially in an academic context, and might turn your readers off.

The conclusion also isn't the time to introduce colloquial phrases or informal language. Retain a professional, confident tone consistent with the rest of your paper so the conclusion feels exciting and bold.

New data or evidence

While you should present strong data throughout your paper, the conclusion isn't the place to introduce new evidence. This is because readers are engaged in actively learning as they read through the body of your paper.

By the time they reach the conclusion, they will have formed an opinion one way or the other (hopefully in your favor!). Introducing new evidence in the conclusion will only serve to surprise or frustrate your reader.

Ignoring contradictory evidence

If your research reveals contradictory evidence, don't ignore it in the conclusion. This will damage your credibility as an expert and might even serve to highlight the contradictions.

Be as transparent as possible and admit to any shortcomings in your research, but don't dwell on them for too long.

Ambiguous or unclear resolutions

The point of a research paper conclusion is to provide closure and bring all your ideas together. You should wrap up any arguments you introduced in the paper and tie up any loose ends, while demonstrating why your research and data are strong.

Use direct language in your conclusion and avoid ambiguity. Even if some of the data and sources you cite are inconclusive or contradictory, note this in your conclusion to come across as confident and trustworthy.

  • Examples of research paper conclusions

Your research paper should provide a compelling close to the paper as a whole, highlighting your research and hard work. While the conclusion should represent your unique style, these examples offer a starting point:

Ultimately, the data we examined all point to the same conclusion: Encouraging a good work-life balance improves employee productivity and benefits the company overall. The research suggests that when employees feel their personal lives are valued and respected by their employers, they are more likely to be productive when at work. In addition, company turnover tends to be reduced when employees have a balance between their personal and professional lives. While additional research is required to establish ways companies can support employees in creating a stronger work-life balance, it's clear the need is there.

Social media is a primary method of communication among young people. As we've seen in the data presented, most young people in high school use a variety of social media applications at least every hour, including Instagram and Facebook. While social media is an avenue for connection with peers, research increasingly suggests that social media use correlates with body image issues. Young girls with lower self-esteem tend to use social media more often than those who don't log onto social media apps every day. As new applications continue to gain popularity, and as more high school students are given smartphones, more research will be required to measure the effects of prolonged social media use.

What are the different kinds of research paper conclusions?

There are no formal types of research paper conclusions. Ultimately, the conclusion depends on the outline of your paper and the type of research you’re presenting. While some experts note that a research paper can end with a new perspective or with commentary on the findings, most papers work best with a combination of the two. The most important aspect of a good research paper conclusion is that it accurately represents the body of the paper.

Can I present new arguments in my research paper conclusion?

Research paper conclusions are not the place to introduce new data or arguments. The body of your paper is where you should share research and insights, where the reader is actively absorbing the content. By the time a reader reaches the conclusion of the research paper, they should have formed their opinion. Introducing new arguments in the conclusion can take a reader by surprise, and not in a positive way. It might also serve to frustrate readers.

How long should a research paper conclusion be?

There's no set length for a research paper conclusion. However, it's a good idea not to run on too long, since conclusions are supposed to be succinct. A good rule of thumb is to keep your conclusion around 5 to 10 percent of the paper's total length. If your paper is 10 pages, try to keep your conclusion under one page.

What should I include in a research paper conclusion?

A good research paper conclusion should always include a sense of urgency, so the reader can see how and why the topic should matter to them. You can also note some recommended actions to help fix the problem and some obstacles they might encounter. A conclusion should also remind the reader of the thesis statement, along with the main points you covered in the paper. At the end of the conclusion, add a powerful closing statement that helps cement the paper in the mind of the reader.

Should you be using a customer insights hub?

Do you want to discover previous research faster?

Do you share your research findings with others?

Do you analyze research data?

Start for free today, add your research, and get to key insights faster

Editor’s picks

Last updated: 18 April 2023

Last updated: 27 February 2023

Last updated: 22 August 2024

Last updated: 5 February 2023

Last updated: 16 August 2024

Last updated: 9 March 2023

Last updated: 30 April 2024

Last updated: 12 December 2023

Last updated: 11 March 2024

Last updated: 4 July 2024

Last updated: 6 March 2024

Last updated: 5 March 2024

Last updated: 13 May 2024

Latest articles

Related topics, .css-je19u9{-webkit-align-items:flex-end;-webkit-box-align:flex-end;-ms-flex-align:flex-end;align-items:flex-end;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-flex-direction:row;-ms-flex-direction:row;flex-direction:row;-webkit-box-flex-wrap:wrap;-webkit-flex-wrap:wrap;-ms-flex-wrap:wrap;flex-wrap:wrap;-webkit-box-pack:center;-ms-flex-pack:center;-webkit-justify-content:center;justify-content:center;row-gap:0;text-align:center;max-width:671px;}@media (max-width: 1079px){.css-je19u9{max-width:400px;}.css-je19u9>span{white-space:pre;}}@media (max-width: 799px){.css-je19u9{max-width:400px;}.css-je19u9>span{white-space:pre;}} decide what to .css-1kiodld{max-height:56px;display:-webkit-box;display:-webkit-flex;display:-ms-flexbox;display:flex;-webkit-align-items:center;-webkit-box-align:center;-ms-flex-align:center;align-items:center;}@media (max-width: 1079px){.css-1kiodld{display:none;}} build next, decide what to build next, log in or sign up.

Get started for free



COMMENTS

  1. Statistical Methods: What It Is, Process, Analyze & Present

    Techniques like factor analysis, regression, and correlation are used to analyze survey data, identify trends, and draw conclusions about populations. Behavioral Studies: Researchers use statistical methods to explore underlying patterns in human behavior, such as consumer preferences, social interactions, and decision-making processes.

  2. Black Family Members' Experiences and Interpretations of Supportive

    The researcher (ENM) conducted the preliminary analysis process in collaboration with co-authors, who served as members of the research committee. Weekly meetings, typically held via Zoom, were conducted between the first author and the second author (RB) to review data analysis and development of themes.

  3. TRAILS Faculty Launch New Study on Perception Bias and AI Systems

    Perception bias is a cognitive bias that occurs when we subconsciously draw conclusions based on what we expect to see or experience. It has been studied extensively, particularly as it relates to health information, the workplace environment, and even social gatherings.

  4. Bosses want to work from home more than employees do, says new survey

    It's still hard to draw any definitive conclusions about employees' and managers' remote work preferences.

  5. Planning and Writing a Research Paper: Draw Conclusions

    Key Takeaways. Because research generates further research, the conclusions you draw from your research are important. To test the validity of your conclusions, you will have to review both the content of your paper and the way in which you arrived at the content.

  6. Drawing Conclusions

    Drawing Conclusions. For any research project and any scientific discipline, drawing conclusions is the final, and most important, part of the process. Whichever reasoning processes and research methods were used, the final conclusion is critical, determining success or failure. If an otherwise excellent experiment is summarized by a weak ...

  7. Research Process

    Test hypotheses: The research process allows researchers to test hypotheses and make evidence-based conclusions. Through the systematic analysis of data, researchers can draw conclusions about the relationships between variables and develop new theories or models.

  8. Writing a Research Paper Conclusion

    Step 1: Restate the problem. The first task of your conclusion is to remind the reader of your research problem. You will have discussed this problem in depth throughout the body, but now the point is to zoom back out from the details to the bigger picture. While you are restating a problem you've already introduced, you should avoid phrasing ...

  9. Drawing Conclusions and Reporting the Results

    The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences. The most prestigious way to report one's findings is by writing a manuscript and having it published in a ...

  10. Chapter 15: Interpreting results and drawing conclusions

    This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes ... Table 15.6.a shows how review authors may be aided in their interpretation of the body of evidence and drawing conclusions about future research and practice. Table 15.6.a Implications for research and practice ...

  11. 12 Ways To Draw Conclusions From Information

    As you'll learn in a moment, it encompasses a wide variety of techniques, so there isn't one single definition. 1. Deduction. Common in: philosophy, mathematics. Structure: If X, then Y, due to the definitions of X and Y. X applies to this case. Therefore Y applies to this case.

  12. How to Write a Conclusion for Research Papers (with Examples)

    The conclusion in a research paper is the final section, where you need to summarize your research, presenting the key findings and insights derived from your study. ... Here's a step-by-step process to help you create and know what to put in the conclusion of a research paper: ... If you are drawing a direct quote or paraphrasing a ...

  13. The Process of Writing a Research Paper Guide: The Conclusion

    The conclusion is intended to help the reader understand why your research should matter to them after they have finished reading the paper. A conclusion is not merely a summary of the main topics covered or a re-statement of your research problem, but a synthesis of key points and, if applicable, where you recommend new areas for future ...

  14. Drawing Conclusions in Psychological Research: From Data to Insights

    Conclusion. Drawing conclusions is a defining moment in psychological research. It's the culmination of a complex process that starts with a question and, through careful design and analysis, ends with insights that can deepen our understanding of the human mind and behavior. This step is not the end of the journey; it's a bridge to further ...

  15. Drawing Conclusions and Reporting the Results

    Drawing Conclusions. Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather theories are supported, refuted, or modified based on the results of research. If the results are statistically significant and ...

  16. 2.7 Drawing Conclusions and Reporting the Results

    The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences. The most prestigious way to report one's findings is by writing a manuscript and having it published in a ...

  17. Analyzing, Applying, and Drawing Conclusions From Research to Make

    One of the most critical parts of the research is to be able to analyze, apply and draw conclusions from the information and then ultimately make the best recommendations. This is the ability to ...

  18. 11 Steps in Research Process

    Conclusion and Recommendations: Summarizing the research results, drawing conclusions, and suggesting potential future research or practical implications. Throughout the research process, researchers must maintain objectivity, rigor, and ethical considerations to ensure the validity and reliability of the results.

  19. Research Paper Conclusion

    Research Paper Conclusion. Definition: A research paper conclusion is the final section of a research paper that summarizes the key findings, significance, and implications of the research. It is the writer's opportunity to synthesize the information presented in the paper, draw conclusions, and make recommendations for future research or ...

  20. Inductive Reasoning

    Inductive reasoning in research. In inductive research, you start by making observations or gathering data. Then, you take a broad view of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

  21. Data Interpretation

    The purpose of data interpretation is to make sense of complex data by analyzing and drawing insights from it. The process of data interpretation involves identifying patterns and trends, making comparisons, and drawing conclusions based on the data. The ultimate goal of data interpretation is to use the insights gained from the analysis to ...

  22. Inferences and Conclusions

    In drawing conclusions (making inferences), you are really getting at the ultimate meaning of things - what is important, why it is important, how one event influences another, how one happening leads to another. Simply getting the facts in reading is not enough. You must think about what those facts mean to you.

  23. How to Write a Conclusion for Your Research Paper

    Step 1: Restate the problem. Always begin by restating the research problem in the conclusion of a research paper. This serves to remind the reader of your hypothesis and refresh them on the main point of the paper. When restating the problem, take care to avoid using exactly the same words you employed earlier in the paper.

  24. PDF Module 6: Summarizing Results and Drawing Conclusions

    Summarizing Results and Drawing Conclusions 5. What new questions do you have? Your findings might also help to drive future research studies by generating new questions. Follow the guidance below and try to answer the questions asked as they apply to your results. Most research uncovers more questions than answers. This is one of the most