Data Analysis in Research: Types & Methods


Content Index

  • Why analyze data in research?
  • Types of data in research
  • Finding patterns in the qualitative data
  • Methods used for data analysis in qualitative research
  • Preparing data for analysis
  • Methods used for data analysis in quantitative research
  • Considerations in research data analysis
  • What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction, in which summarization and categorization together help find patterns and themes in the data for easy identification and linking. The third and last is data analysis itself, which researchers carry out in both top-down and bottom-up fashion.

LEARN ABOUT: Research Process Steps

On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that data analysis and data interpretation represent the application of deductive and inductive logic to research.

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But what if there is no question to ask? It is possible to explore data even without a problem; we call this 'data mining', which often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience's vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, data analysis sometimes tells the most unforeseen yet exciting stories that were not expected when it began. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Every kind of data describes something once a specific value is assigned to it. For analysis, these values need to be organized, processed, and presented in a given context to make them useful. Data can come in different forms; here are the primary data types.

  • Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphical formats and charts, or apply statistical analysis methods to it. Outcomes Measurement Systems (OMS) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups; an item included in categorical data cannot belong to more than one group. Example: a person responding to a survey by indicating their lifestyle, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data.
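As a quick illustration of the chi-square test mentioned above, here is a minimal sketch in Python using only the standard library; the 2x2 contingency table (smoking habit vs. drinking habit) is entirely hypothetical:

```python
# Hypothetical 2x2 contingency table:
# rows = smoking habit (yes/no), columns = drinking habit (yes/no)
observed = [[30, 20],
            [10, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

print(f"chi-square statistic: {chi2:.2f}")
# With 1 degree of freedom, a statistic above the critical value
# 3.84 (alpha = 0.05) suggests the two variables are not independent.
```

In practice a library routine (e.g. one from scipy.stats) would also report the p-value and degrees of freedom; the sketch above only shows the core computation.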

Learn More : Examples of Qualitative Data in Education

Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Extracting insight from such complicated information is a complex process; hence, qualitative data is typically used for exploratory research and data analysis.

Although there are several ways to find patterns in textual information, a word-based method is the most relied upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and find repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
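The word-frequency approach described above can be sketched with Python's standard library; the open-ended survey responses below are invented for illustration:

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses
responses = [
    "Food prices keep rising and hunger is widespread",
    "Hunger affects children the most in our village",
    "We need better access to food and clean water",
]

# Tokenize, lowercase, and count word occurrences,
# skipping common filler words
words = re.findall(r"[a-z]+", " ".join(responses).lower())
stopwords = {"and", "the", "is", "in", "to", "we", "our", "most"}
counts = Counter(w for w in words if w not in stopwords)

# "food" and "hunger" surface as the most frequent terms
print(counts.most_common(3))
```

A real study would use a fuller stopword list and stemming, but the core idea of surfacing repeated words for further analysis is the same.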

LEARN ABOUT: Level of Analysis

The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended text analysis methods used to identify patterns in qualitative data. Compare and contrast is the widely used method under this technique to differentiate how specific texts are similar to or different from each other.

For example: to find out the "importance of a resident doctor in a company," the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.

LEARN ABOUT: Qualitative Research Questions and Questionnaires

There are several techniques to analyze data in qualitative research; here are some commonly used methods:

  • Content Analysis: This is a widely accepted and frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. The research questions determine when and where to use this method.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and surveys. Most of the time, the stories or opinions shared by people are examined to find answers to the research questions.
  • Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. However, this particular method considers the social context in which the communication between the researcher and respondent takes place. Discourse analysis also considers lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When using this method, researchers might alter explanations or produce new ones until they arrive at a conclusion.

LEARN ABOUT: 12 Best Tools for Researchers

Data analysis in quantitative research

The first stage in quantitative research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire
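A completeness check like the one above can be sketched in a few lines of Python; the field names and responses here are hypothetical:

```python
# Hypothetical survey responses; None marks a skipped question
required_fields = ["age", "gender", "city", "satisfaction"]
responses = [
    {"age": 34, "gender": "F", "city": "Pune", "satisfaction": 4},
    {"age": 41, "gender": "M", "city": None, "satisfaction": 5},
]

def is_complete(response, required):
    """A response is complete when every required field has a value."""
    return all(response.get(field) is not None for field in required)

# Keep only fully answered responses for analysis
complete = [r for r in responses if is_complete(r, required_fields)]
print(f"{len(complete)} of {len(responses)} responses are complete")
```

The same pattern extends to the other validation stages, e.g. a screening check that each respondent matches the research criteria.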

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is a process wherein researchers confirm that the provided data is free of such errors. They conduct necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to survey responses. For example, if a survey is completed with a sample size of 1,000, the researcher might create age brackets to distinguish respondents by age. It then becomes easier to analyze small data buckets rather than deal with a massive data pile.
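The age-bracket coding described above can be sketched with Python's standard library; the ages and bracket boundaries are illustrative choices, not part of the original example:

```python
from bisect import bisect_right
from collections import Counter

# Hypothetical respondent ages (a truncated sample of a larger survey)
ages = [19, 23, 35, 42, 57, 61, 28, 45]

# Bracket boundaries and labels (illustrative choices)
bounds = [25, 40, 55]
labels = ["18-24", "25-39", "40-54", "55+"]

def age_bracket(age):
    """Map an age to the bracket label it falls into."""
    return labels[bisect_right(bounds, age)]

# Coded data: counts of respondents per age bracket
buckets = Counter(age_bracket(a) for a in ages)
print(buckets)
```

With the responses coded into buckets, each bracket can be analyzed separately instead of working with the raw list of ages.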

LEARN ABOUT: Steps in Qualitative Research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is the most favored way to analyze numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: 'descriptive statistics', used to describe data, and 'inferential statistics', which help in comparing data.

Descriptive statistics

This method is used to describe the basic features of versatile types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not support conclusions beyond the data at hand; any conclusions rest on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to demonstrate the central point of a distribution.
  • Researchers use this method when they want to showcase the most common or average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • Range = difference between the highest and lowest scores.
  • Variance and standard deviation = measures of how far observed scores deviate from the mean.
  • It is used to identify the spread of scores by stating intervals.
  • Researchers use this method to showcase how spread out the data is, since the spread directly affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores, helping researchers identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
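The descriptive measures listed above (frequency, central tendency, dispersion, position) can all be computed with Python's standard statistics module; the scores below are made up for illustration:

```python
import statistics

# Hypothetical test scores from a survey sample
scores = [62, 70, 70, 75, 81, 88, 94]

# Central tendency
print("mean:", round(statistics.mean(scores), 2))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))

# Dispersion
print("range:", max(scores) - min(scores))
print("stdev:", round(statistics.stdev(scores), 2))

# Position: quartile cut points
print("quartiles:", statistics.quantiles(scores, n=4))
```

Each line corresponds to one of the descriptive families above, which makes it easy to see what each measure adds to the picture of the same data.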

For quantitative research, descriptive analysis often provides absolute numbers, but it is not sufficient on its own to explain the rationale behind those numbers. Nevertheless, it is important to think about which method of research and data analysis best suits your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students' average scores in schools. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it. For example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask around 100 audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected sample to infer that about 80-90% of people like the movie.

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and uses them to infer something about the population parameter.
  • Hypothesis testing: It's about sampling research data to answer the survey research questions. For example, researchers might be interested in understanding whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables,  cross-tabulation  is used to analyze the relationship between multiple variables.  Suppose provided data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation helps for seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: For understanding the strong relationship between two variables, researchers do not look beyond the primary and commonly used regression analysis method, which is also a type of predictive analysis used. In this method, you have an essential factor called the dependent variable. You also have multiple independent variables in regression analysis. You undertake efforts to find out the impact of independent variables on the dependent variable. The values of both independent and dependent variables are assumed as being ascertained in an error-free random manner.
  • Frequency tables: This statistical procedure summarizes how often each response or value occurs in the data, making it easy to spot the most and least common answers.
  • Analysis of variance: The statistical procedure is used for testing the degree to which two or more vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
  • Researchers must have the necessary research skills to analyze and manipulate the data, and be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of analysis helps design a survey questionnaire, select data collection methods , and choose samples.

LEARN ABOUT: Best Data Collection Tools

  • The primary aim of research data analysis is to derive insights that are unbiased. Any mistake in, or bias while, collecting data, selecting an analysis method, or choosing an audience sample will lead to a biased inference.
  • No level of sophistication in research data analysis can rectify poorly defined objective outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid this practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges like outliers, missing data, data altering, data mining , or developing graphical representation.

LEARN MORE: Descriptive Research vs Correlational Research

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.

LEARN ABOUT: Average Order Value

QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.

How to Write the Methods Section of a Research Paper


Writing a research paper is both an art and a skill, and knowing how to write the methods section of a research paper is the first crucial step in mastering scientific writing. If, like the majority of early career researchers, you believe that the methods section is the simplest to write and needs little careful consideration or thought, this article will help you understand that it is not 1 .

We have all probably asked our supervisors, coworkers, or search engines “ how to write a methods section of a research paper ” at some point in our scientific careers, so you are not alone if that’s how you ended up here.  Even for seasoned researchers, selecting what to include in the methods section from a wealth of experimental information can occasionally be a source of distress and perplexity.   

Additionally, journal specifications may in some cases make it a requirement rather than a choice to provide a selective yet descriptive account of the experimental procedure. Hence, knowing these nuances of how to write the methods section of a research paper is critical to its success. The methods section is not supposed to be the detail-heavy, dull section that some researchers tend to write; rather, it should be the central component of the study that justifies the validity and reliability of the research.

Are you still unsure how the methods section of a research paper forms the basis of every investigation? Consider the last article you read, ignore the methods section, and concentrate on the other parts of the paper. Now ask whether you could repeat the study and be sure of the credibility of the findings, despite knowing the literature review and even having the data in front of you. You have the answer!


Having established the importance of the methods section, the next question is how to write a methods section that unifies the overall study. The purpose of the methods section, earlier called Materials and Methods, is to describe how the authors went about answering the "research question" at hand. The objective is to tell a coherent story that gives a detailed account of how the study was conducted, the rationale behind specific experimental procedures, the experimental setup, the objects (variables) involved, the research protocol employed, the tools used for measurement, the calculations and measurements, and the analysis of the collected data 2 .

In this article, we will take a deep dive into this topic and provide a detailed overview of how to write the methods section of a research paper . For the sake of clarity, we have separated the subject into various sections with corresponding subheadings.  

Table of Contents

What is the methods section of a research paper ?  

The methods section is a fundamental part of any paper since it typically discusses the 'what', 'how', 'which', and 'why' of the study, which is necessary to arrive at the final conclusions. In a research article, the introduction, which serves to set the foundation for comprehending the background and results, is usually followed by the methods section, which precedes the results and discussion sections. The methods section must explicitly state what was done, how it was done, which equipment, tools, and techniques were utilized, how the measurements/calculations were taken, and why specific research protocols, software, and analytical methods were employed.

Why is the methods section important?  

The primary goal of the methods section is to provide pertinent details about the experimental approach so that the reader may put the results in perspective and, if necessary, replicate the findings 3 . This section offers readers the chance to evaluate the reliability and validity of any study. It also serves as the study's blueprint, helping readers who might be unsure about any other portion establish the study's context and validity. The methods section plays a crucial role in determining the fate of the article; an incomplete and unreliable methods section can frequently result in early rejection and may lead to numerous rounds of modifications during the publication process. Reviewers often use the methods section to assess the reliability and validity of the research protocol and the data analysis employed to address the research topic. In other words, the purpose of the methods section is to demonstrate the research acumen and subject-matter expertise of the author(s) in their field.

Structure of methods section of a research paper  

Similar to the research paper, the methods section also follows a defined structure; this may be dictated by the guidelines of a specific journal or can be presented in a chronological or thematic manner based on the study type. When writing the methods section , authors should keep in mind that they are telling a story about how the research was conducted. They should only report relevant information to avoid confusing the reader and include details that would aid in connecting various aspects of the entire research activity together. It is generally advisable to present experiments in the order in which they were conducted. This facilitates the logical flow of the research and allows readers to follow the progression of the study design.   


It is also essential to clearly state the rationale behind each experiment and how the findings of earlier experiments informed the design or interpretation of later experiments. This allows the readers to understand the overall purpose of the study design and the significance of each experiment within that context. However, depending on the particular research question and method, it may make sense to present information in a different order; therefore, authors must select the best structure and strategy for their individual studies.   

In cases where there is a lot of information, divide the sections into subheadings to cover the pertinent details. If the journal guidelines pose restrictions on the word limit , additional important information can be supplied in the supplementary files. A simple rule of thumb for sectioning the method section is to begin by explaining the methodological approach ( what was done ), describing the data collection methods ( how it was done ), providing the analysis method ( how the data was analyzed ), and explaining the rationale for choosing the methodological strategy. This is described in detail in the upcoming sections.    

How to write the methods section of a research paper  

Contrary to widespread assumption, the methods section of a research paper should be prepared once the study is complete, to prevent missing any key parameter. Hence, make sure all relevant experiments are done before you start writing the methods section. The next step for authors is to look up any applicable academic style manuals or journal-specific standards to ensure that the methods section is formatted correctly. The methods section of a research paper typically comprises materials and methods; while writing this section, authors usually arrange the information under each category.

The materials category describes the samples, materials, treatments, and instruments, while experimental design, sample preparation, data collection, and data analysis are part of the methods category. Depending on the nature of the study, authors should include additional subsections within the methods section, such as ethical considerations like the Declaration of Helsinki (for studies involving human subjects), demographic information of the participants, and any other crucial information that can affect the output of the study. Simply put, the methods section has two major components: content and format. Here is an easy checklist to consider if you are struggling with how to write the methods section of a research paper.

  • Explain the research design, subjects, and sample details  
  • Include information on inclusion and exclusion criteria  
  • Mention ethical or any other permission required for the study  
  • Include information about materials, experimental setup, tools, and software  
  • Add details of data collection and analysis methods  
  • Incorporate how research biases were avoided or confounding variables were controlled  
  • Evaluate and justify the experimental procedure selected to address the research question  
  • Provide precise and clear details of each experiment  
  • Flowcharts, infographics, or tables can be used to present complex information     
  • Use past tense to show that the experiments have been done   
  • Follow academic style guides (such as APA or MLA ) to structure the content  
  • Citations should be included as per standard protocols in the field  

Now that you know how to write the methods section of a research paper, let's address another challenge researchers face while writing this section: what to include. How much information is too much is not always obvious when deciding what to put in the methods section of a paper. In the next section, we examine this issue and explore potential solutions.


What to include in the methods section of a research paper  

The technical nature of the methods section occasionally makes it harder to present the information clearly and concisely while staying within the study context. Many young researchers tend to veer off subject and frequently become bogged down in itty-bitty details, making the text harder to read and impairing its overall flow. The best way to write the methods section is to start with the crucial components of the experiments. If you have trouble deciding which elements are essential, leave out only those whose omission would not make it more challenging to comprehend the context or replicate the results. This top-down approach helps ensure all relevant information is incorporated and vital information is not lost in technicalities. Next, remember to add details that are significant for assessing the validity and reliability of the study. Here is a simple checklist for you to follow (bonus tip: you can also make a checklist for your own study to avoid missing any critical information while writing the methods section).

  • Structuring the methods section : Authors should diligently follow journal guidelines and adhere to the specific author instructions provided when writing the methods section . Journals typically have specific guidelines for formatting the methods section ; for example, Frontiers in Plant Sciences advises arranging the materials and methods section by subheading and citing relevant literature. There are several standardized checklists available for different study types in the biomedical field, including CONSORT (Consolidated Standards of Reporting Trials) for randomized clinical trials, PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analysis) for systematic reviews and meta-analysis, and STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) for cohort, case-control, cross-sectional studies. Before starting the methods section , check the checklist available in your field that can function as a guide.     
  • Organizing different sections to tell a story: Once you are sure of the format required for structuring the methods section, the next step is to present the sections in a logical manner; as mentioned earlier, they can be organized chronologically or thematically. In the chronological arrangement, you discuss the methods in the order in which the experiments were carried out. For example, the methods section of an animal study should ideally first include information about the species, weight, sex, strain, and age, followed by the number of animals, their initial conditions, and their living and housing conditions. Next, describe how the groups were assigned and the intervention (drug treatment, stress, or other) given to each group, and finally, the details of the tools and techniques used to measure, collect, and analyze the data. Experiments involving animal or human subjects should additionally state an ethics approval statement. It is best to arrange the section thematically when discussing distinct experiments that do not follow a sequential order.
  • Define and explain the objects and procedures: The experimental procedure should be clearly stated in the methods section. Include the samples, the necessary preparations (of samples, treatments, and drugs), and the methods of manipulation. All variables (control, dependent, independent, and confounding) must be clearly defined, particularly confounding variables that can affect the outcome of the study.
  • Match the order of the methods section with the order of results: Though not mandatory, organizing the manuscript in a logical, coherent manner improves its readability and clarity. When a consistent structure is followed throughout the manuscript, readers can easily navigate the different sections and understand the methods and results in relation to each other. Using experiment names as headings for both the methods and results sections also makes it simpler for readers to locate specific information and corroborate it if needed.
  • Relevant information must always be included: The methods section should clearly describe all experiments conducted. If your target journal has strict word limits, ask whether additional information can be provided in supplementary files or external repositories. For example, Nature Communications encourages authors to deposit step-by-step protocols in Protocol Exchange, an open repository that allows protocols to be linked with the manuscript upon publication. Providing access to detailed protocols also increases the transparency and reproducibility of the research.
  • It’s all in the details: The methods section should meticulously list all the materials, tools, instruments, and software used in the different experiments. Specify the equipment on which data were obtained, together with the manufacturer’s name and location (city and state or country), as well as any other stimuli used to manipulate the variables. Describe the research process you employed; if it followed a standard protocol, cite previous studies that used the same protocol. Include any modifications made to the protocol, along with any other factors considered when planning the study or gathering data. Any new or modified techniques should be explained in full. Readers typically evaluate the reliability and validity of the procedures from the cited literature, and a widely accepted checklist helps to support the credibility of the methodology. Note: Authors should include a statement on sample size estimation (if applicable), which is often missed. It enables the reader to determine how many subjects are required to detect the expected change in the outcome variables within a given confidence interval.
  • Write for the audience: While explaining the details in the methods section, authors should be mindful of their target audience, as some of the rationale or assumptions behind specific procedures might not be obvious to readers, particularly a general audience. When in doubt, state the objective of a procedure in relation to the research question or the overall protocol.
  • Data interpretation and analysis: Add information on data processing, statistical testing, levels of significance, and the analysis tools and software used. Mention whether the recommendations of an experienced statistician were followed. Also, justify the statistical methods chosen for the study and their significance.
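On the sample size estimation point noted above: for a simple two-group comparison of means, a widely used normal-approximation formula is n = 2(z₁₋α/₂ + z₁₋β)²σ²/δ² per group. As a rough illustrative sketch (the function name and example numbers below are ours, not from any journal guideline), it can be computed with Python’s standard library:

```python
from statistics import NormalDist
import math

def sample_size_two_means(sigma, delta, alpha=0.05, power=0.80):
    """Approximate per-group sample size needed to detect a difference
    `delta` between two group means with common standard deviation
    `sigma`, using the classic normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# Detecting a 5-unit difference with SD 10, alpha = 0.05, 80% power:
print(sample_size_two_means(sigma=10, delta=5))  # → 63 per group
```

Reporting a calculation like this, together with the assumed effect size and variability, lets readers judge whether the study was adequately powered.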

What NOT to include in the methods section of a research paper  

To address “how to write the methods section of a research paper”, authors should pay careful attention not only to what to include but also to what to leave out. Here is a list of don’ts for the methods section:

  • Do not elaborate on the specifics of standard methods/procedures: Refrain from adding unnecessary details of experiments and practices that are well established and have been described previously. Instead, simply cite the relevant literature or mention that the manufacturer’s protocol was followed.
  • Do not add unnecessary details : Do not include minute details of the experimental procedure and materials/instruments used that are not significant for the outcome of the experiment. For example, there is no need to mention the brand name of the water bath used for incubation.    
  • Do not discuss the results: The methods section is not the place to discuss results or refer to tables and figures; save that for the results and discussion sections. Also, focus on the methods selected for the study, and avoid digressing into other methods or commenting on their pros and cons.
  • Do not make the section bulky : For extensive methods and protocols, provide the essential details and share the rest of the information in the supplemental files. The writing should be clear yet concise to maintain the flow of the section.  

We hope that by this point, you understand how crucial it is to write a thoughtful and precise methods section and the ins and outs of how to write the methods section of a research paper . To restate, the entire purpose of the methods section is to enable others to reproduce the results or verify the research. We sincerely hope that this post has cleared up any confusion and given you a fresh perspective on the methods section .

As a parting gift, we’re leaving you with a handy checklist that will help you understand how to write the methods section of a research paper. Feel free to download this checklist and use it or share it with anyone who may benefit from it.


References  

  • Bhattacharya, D. How to write the Methods section of a research paper. Editage Insights (2018). https://www.editage.com/insights/how-to-write-the-methods-section-of-a-research-paper
  • Kallet, R. H. How to Write the Methods Section of a Research Paper. Respiratory Care 49, 1229–1232 (2004). https://pubmed.ncbi.nlm.nih.gov/15447808/
  • Grindstaff, T. L. & Saliba, S. A. AVOIDING MANUSCRIPT MISTAKES. Int J Sports Phys Ther 7, 518–524 (2012). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3474299/




What Is a Research Methodology? | Steps & Tips

Published on 25 February 2019 by Shona McCombes . Revised on 10 October 2022.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research.

It should include:

  • The type of research you conducted
  • How you collected and analysed your data
  • Any tools or materials you used in the research
  • Why you chose these methods

Keep in mind:

  • Your methodology section should generally be written in the past tense.
  • Academic style guides in your field may provide detailed guidelines on what to include for different types of studies.
  • Your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Table of contents

  • How to write a research methodology
  • Why is a methods section important?
  • Step 1: Explain your methodological approach
  • Step 2: Describe your data collection methods
  • Step 3: Describe your analysis method
  • Step 4: Evaluate and justify the methodological choices you made
  • Tips for writing a strong methodology chapter
  • Frequently asked questions about methodology


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you did. It’s also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate? For example, did you:

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research ?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

For your findings to be considered generalisable, you should describe your quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalised your concepts and measured your variables. Discuss your sampling method or inclusion/exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.

Surveys: Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on 4–8 July 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant or a passive observer?).

Interviews or focus groups: Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation: Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data: Explain how you selected case study materials for your analysis.

  • What type of materials did you analyse?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness shop’s product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods here.


Step 3: Describe your analysis method

Next, you should indicate how you processed and analysed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research , your analysis will be based on numbers. In your methods section, you can include:

  • How you prepared the data before analysing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
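The preparation and testing steps listed above can be sketched in code. The following minimal Python example is illustrative only; the data, the outlier cutoff, and the function names are invented for the sketch, and a real analysis would typically use dedicated software such as SPSS, Stata, or R. It prepares two groups of measurements by dropping missing values and trimming outliers, then computes a Welch-style two-sample test:

```python
from statistics import NormalDist, mean, stdev

def clean(values, z_cut=2.0):
    """Drop missing values (None) and trim outliers more than `z_cut`
    standard deviations from the mean (2-3 SD cutoffs are common; the
    right choice depends on the study and should be reported)."""
    xs = [v for v in values if v is not None]
    mu, sd = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - mu) <= z_cut * sd]

def welch_t(a, b):
    """Welch's two-sample t statistic, with a two-tailed p-value from
    the normal approximation (adequate for reasonably large samples)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    t = (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

# Invented example data: one missing value and one obvious outlier (9.9).
group_a = clean([4.1, 4.3, None, 3.9, 4.2, 9.9, 4.0, 4.4])
group_b = clean([3.1, 3.4, 3.2, None, 3.3, 3.0, 3.5])
t, p = welch_t(group_a, group_b)
print(f"t = {t:.2f}, p = {p:.4f}")
```

In a methods section you would report exactly these decisions: how missing data were handled, what outlier cutoff was used, and which statistical test was applied.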

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include:

  • Content analysis : Categorising and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviours, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalised beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalisable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives  and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments , surveys , statistical tests ).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results , discussion and conclusion .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Cite this Scribbr article


McCombes, S. (2022, October 10). What Is a Research Methodology? | Steps & Tips. Scribbr. Retrieved 12 August 2024, from https://www.scribbr.co.uk/thesis-dissertation/methodology/


Quantitative Analysis: the guide for beginners

Julián Cárdenas, University of Valencia


Qualitative Data Analysis Methods 101:

The “big 6” methods + examples.

By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)

Qualitative data analysis methods. Wow, that’s a mouthful. 

If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!

Don’t worry – in this post, we’ll unpack the most popular analysis methods , one at a time, so that you can approach your analysis with confidence and competence – whether that’s for a dissertation, thesis or really any kind of research project.


What (exactly) is qualitative data analysis?

To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.

Qualitative data refers to pretty much any data that’s “not numbers” . In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.

So, if it’s not numbers, what is it?

Words, you guessed it? Well… sometimes, yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative isn’t just limited to text-based data.

So, how’s that different from quantitative data, you ask?

Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics . Qualitative research investigates the “softer side” of things to explore and describe , while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here .


So, qualitative analysis is easier than quantitative, right?

Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.

Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.


In this post, we’ll explore qualitative data analysis by looking at some of the most common analysis methods we encounter. We’re not going to cover every possible qualitative method and we’re not going to go into heavy detail – we’re just going to give you the big picture. That said, we will of course include links to loads of extra resources so that you can learn more about whichever analysis method interests you.

Without further delay, let’s get into it.

The “Big 6” Qualitative Analysis Methods 

There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses . We’ll start by outlining the analysis methods and then we’ll dive into the details for each.

The 6 most popular methods (or at least the ones we see at Grad Coach) are:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Thematic analysis
  • Grounded theory (GT)
  • Interpretive phenomenological analysis (IPA)

Let’s take a look at each of them…

QDA Method #1: Qualitative Content Analysis

Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication. For example, a collection of newspaper articles or political speeches.

With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.

Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes , summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.

Naturally, while content analysis is widely useful, it’s not without its drawbacks . One of the main issues with content analysis is that it can be very time-consuming , as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.

Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations , so don’t be put off by these – just be aware of them ! If you’re interested in learning more about content analysis, the video below provides a good starting point.
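The coding-and-tabulating idea described above can be illustrated with a tiny sketch. Everything below is invented for illustration – hypothetical restaurant reviews and a made-up keyword-based coding frame (real coding is usually done with QDA software and far more nuance) – but it shows the “small splash of quantitative thinking” at work:

```python
from collections import Counter
import re

# Toy corpus standing in for, e.g., open-ended survey responses.
documents = [
    "The fresh ingredients were amazing, truly fresh sushi.",
    "Friendly staff and fresh fish. Staff were welcoming.",
    "Fresh fish, but slow service from the staff.",
]

# A minimal coding frame: each code maps to keywords that signal it.
codes = {
    "freshness": {"fresh"},
    "service": {"staff", "service", "friendly", "welcoming"},
}

# Tabulate how often each code appears across the corpus.
counts = Counter()
for doc in documents:
    words = re.findall(r"[a-z]+", doc.lower())
    for code, keywords in codes.items():
        counts[code] += sum(1 for w in words if w in keywords)

print(counts.most_common())  # → [('service', 6), ('freshness', 4)]
```

The frequencies are only as meaningful as the coding frame behind them – which is exactly why content analysis needs a specific question and goal going in.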

QDA Method #2: Narrative Analysis 

As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means . Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.

You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives . Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.

Of course, the narrative approach has its weaknesses , too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.

Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions . If you’re keen to learn more about narrative analysis, the video below provides a great introduction to this qualitative analysis method.

QDA Method #3: Discourse Analysis 

Discourse is simply a fancy word for written or spoken language or debate . So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation, a speech, etc – within the culture and society it takes place. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.

To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.

So, as you can see, by using discourse analysis, you can identify how culture , history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.

Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast . Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might land up going down a winding rabbit hole.

Discourse analysis can also be very time-consuming  as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method. Again, if you’re keen to learn more, the video below presents a good starting point.

QDA Method #4: Thematic Analysis

Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes. These themes help us make sense of the content and derive meaning from it.

Let’s take a look at an example.

With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.
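The counting step in a review example like this can even be sketched in a few lines of code. The snippet below is a toy illustration only: the reviews and theme phrases are made up, and in real thematic analysis codes and themes emerge from careful, iterative reading rather than simple keyword matching.

```python
from collections import Counter

# Hypothetical reviews of a sushi restaurant (invented for illustration).
reviews = [
    "Loved the fresh ingredients and the friendly wait staff.",
    "Fresh ingredients every time, though parking is a pain.",
    "Friendly wait staff, great atmosphere.",
]

# Candidate themes a researcher might have identified through reading.
themes = ["fresh ingredients", "friendly wait staff"]

# Count how many reviews mention each theme.
counts = Counter()
for review in reviews:
    text = review.lower()
    for theme in themes:
        if theme in text:
            counts[theme] += 1

print(dict(counts))  # {'fresh ingredients': 2, 'friendly wait staff': 2}
```

A tally like this only supplements the qualitative work – the interpretive step of deciding what the themes mean still has to be done by the researcher.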

So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences, views, and opinions. Therefore, if your research aims and objectives involve understanding people’s experiences or views of something, thematic analysis can be a great choice.

Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop, or even change, as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.

Thematic analysis takes bodies of data and groups them according to similarities (themes), which help us make sense of the content.

QDA Method #5: Grounded theory (GT) 

Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “tests” and “revisions”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.

What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name). 

Let’s look at an example of GT in action.

Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using grounded theory, you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.

After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to watch a video about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.

From here, you’ll look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’ll look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop. As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.

So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature. In other words, it’s a bit of a “chicken or the egg” situation.

Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way up from the ground.

Grounded theory is used to create a new theory (or theories) by using the data at hand, as opposed to existing theories and frameworks.

QDA Method #6: Interpretive Phenomenological Analysis (IPA)

Interpretive. Phenomenological. Analysis. IPA . Try saying that three times fast…

Let’s just stick with IPA, okay?

IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation. This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.

It’s important to remember that IPA is subject-centred. In other words, it’s focused on the experiencer. This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay, as long as it aligns with your research aims and objectives.

Another thing to be aware of with IPA is personal bias. While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.

IPA can help you understand the personal experiences of a person or group concerning a major life event, an experience or a situation.

How to choose the right analysis method

In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “How do I choose the right one?”

Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions. In other words, the best tool for the job depends on what you’re trying to build. For example:

  • Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
  • Perhaps your research aims to develop an understanding of the unique personal experiences of people that have experienced a certain event, or
  • Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.

As you can probably see, each of these research aims is distinctly different, and therefore a different analysis method would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant.

It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect. So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation). Keep in mind though that this will of course be quite time-consuming.

As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.

Start by reviewing your research aims, objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.

No single analysis method is perfect, so it can often make sense to adopt more than one method (this is called triangulation).

Let’s recap on QDA methods…

In this post, we looked at six popular qualitative data analysis methods:

  • First, we looked at content analysis, a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
  • Then we looked at narrative analysis, which is about analysing how stories are told.
  • Next up was discourse analysis – which is about analysing conversations and interactions.
  • Then we moved on to thematic analysis – which is about identifying themes and patterns.
  • From there, we dug into grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
  • And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.

Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.




Research Paper Analysis: How to Analyze a Research Article + Example

Why might you need to analyze research? First of all, when you analyze a research article, you begin to understand your assigned reading better. It is also the first step toward learning how to write your own research articles and literature reviews. However, if you have never written a research paper before, it may be difficult for you to analyze one. After all, you may not know what criteria to use to evaluate it. But don’t panic! We will help you figure it out!

In this article, our team has explained how to analyze research papers quickly and effectively. At the end, you will also find a research analysis paper example to see how everything works in practice.

  • 🔤 Research Analysis Definition
  • 📊 How to Analyze a Research Article
  • ✍️ How to Write a Research Analysis
  • 📝 Analysis Example
  • 🔎 More Examples
  • 🔗 References

🔤 Research Paper Analysis: What Is It?

A research paper analysis is an academic writing assignment in which you analyze a scholarly article’s methodology, data, and findings. In essence, “to analyze” means to break something down into components and assess each of them individually and in relation to each other. The goal of an analysis is to gain a deeper understanding of a subject. So, when you analyze a research article, you dissect it into elements like data sources, research methods, and results and evaluate how they contribute to the study’s strengths and weaknesses.

📋 Research Analysis Format

A research analysis paper has a pretty straightforward structure. Check it out below!

  • Introduction. This section should state the analyzed article’s title and author and outline its main idea. The introduction should end with a strong thesis statement presenting your conclusions about the article’s strengths, weaknesses, or scientific value.
  • Summary. Here, you need to summarize the major concepts presented in your research article. This section should be brief.
  • Analysis. The analysis should contain your evaluation of the paper. It should explain whether the research meets its intentions and purpose and whether it provides a clear and valid interpretation of results.
  • Conclusion. The closing paragraph should include a rephrased thesis, a summary of core ideas, and an explanation of the analyzed article’s relevance and importance.
  • References. At the end of your work, you should add a reference list. It should include the analyzed article’s citation in your required format (APA, MLA, etc.). If you’ve cited other sources in your paper, they must also be indicated in the list.

Research articles usually include the following sections: introduction, methods, results, and discussion. In the following paragraphs, we will discuss how to analyze a scientific article with a focus on each of its parts.

This image shows the main sections of a research article.

How to Analyze a Research Paper: Purpose

The purpose of the study is usually outlined in the introductory section of the article. Analyzing the research paper’s objectives is critical to establish the context for the rest of your analysis.

When analyzing the research aim, you should evaluate whether it was justified for the researchers to conduct the study. In other words, you should assess whether their research question was significant and whether it arose from existing literature on the topic.

Here are some questions that may help you analyze a research paper’s purpose:

  • Why was the research carried out?
  • What gaps does it try to fill, or what controversies does it aim to settle?
  • How does the study contribute to its field?
  • Do you agree with the author’s justification for approaching this particular question in this way?

How to Analyze a Paper: Methods

When analyzing the methodology section, you should indicate the study’s research design (qualitative, quantitative, or mixed) and methods used (for example, experiment, case study, correlational research, survey, etc.). After that, you should assess whether these methods suit the research purpose. In other words, do the chosen methods allow scholars to answer their research questions within the scope of their study?

For example, if scholars wanted to study US students’ average satisfaction with their higher education experience, they could conduct a quantitative survey. However, if they wanted to gain an in-depth understanding of the factors influencing US students’ satisfaction with higher education, qualitative interviews would be more appropriate.

When analyzing methods, you should also look at the research sample. Did the scholars use randomization to select study participants? Was the sample big enough for the results to be generalizable to a larger population?
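As a side note, the mechanics of simple random sampling are easy to illustrate. The sketch below is a hypothetical example (the population of student IDs is invented) showing a seeded draw without replacement using Python’s standard library:

```python
import random

# Hypothetical sampling frame of 500 student IDs.
population = [f"student_{i}" for i in range(500)]

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=50)  # simple random sample, no replacement

print(len(sample))  # 50
```

Because every member of the frame has an equal chance of selection, a draw like this avoids the selection bias that convenience sampling introduces.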

You can also answer the following questions in your methodology analysis:

  • Is the methodology valid? In other words, did the researchers use methods that accurately measure the variables of interest?
  • Is the research methodology reliable? A research method is reliable if it can produce stable and consistent results under the same circumstances.
  • Is the study biased in any way?
  • What are the limitations of the chosen methodology?
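To make the reliability question above more concrete, one common internal-consistency check for multi-item scales is Cronbach’s alpha. The sketch below is purely illustrative: the 3-item scale and the respondents’ scores are invented, and it simply applies the textbook formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import statistics

# Invented scores: three scale items answered by five respondents.
items = [
    [4, 5, 3, 4, 4],  # item 1 scores across respondents
    [4, 4, 3, 5, 4],  # item 2
    [3, 5, 2, 4, 4],  # item 3
]

k = len(items)
item_vars = [statistics.variance(scores) for scores in items]
totals = [sum(resp) for resp in zip(*items)]  # each respondent's total score
alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.variance(totals))

print(round(alpha, 2))  # values above ~0.7 are conventionally taken as acceptable
```

A low alpha would suggest the items do not measure the same underlying construct, which is one concrete way a paper’s measurement reliability can fall short.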

How to Analyze Research Articles’ Results

You should start the analysis of the article results by carefully reading the tables, figures, and text. Check whether the findings correspond to the initial research purpose. See whether the results answered the author’s research questions or supported the hypotheses stated in the introduction.

To analyze the results section effectively, answer the following questions:

  • What are the major findings of the study?
  • Did the author present the results clearly and unambiguously?
  • Are the findings statistically significant?
  • Does the author provide sufficient information on the validity and reliability of the results?
  • Have you noticed any trends or patterns in the data that the author did not mention?
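If you want a rough, hands-on sense of what “statistically significant” means in the questions above, the sketch below compares two invented groups of scores using Welch’s t statistic (the difference in means divided by its standard error). This is only an illustration; a proper test would also compute degrees of freedom and a p-value from the t distribution.

```python
import math
import statistics

# Made-up scores for two groups (e.g., two teaching conditions).
group_a = [78, 82, 88, 74, 90, 85, 80, 86]
group_b = [70, 75, 72, 68, 77, 74, 71, 73]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Welch's t statistic: mean difference over the combined standard error.
se = math.sqrt(var_a / len(group_a) + var_b / len(group_b))
t = (mean_a - mean_b) / se

print(round(t, 2))  # |t| well above ~2 hints that the difference is unlikely to be chance
```

When reading a results section, checking that the reported test statistic, sample sizes, and p-values are consistent with each other is a quick sanity check on the analysis.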

How to Analyze Research: Discussion

Finally, you should analyze the authors’ interpretation of results and its connection with research objectives. Examine what conclusions the authors drew from their study and whether these conclusions answer the original question.

You should also pay attention to how the authors used findings to support their conclusions. For example, you can reflect on why their findings support that particular inference and not another one. Moreover, more than one conclusion can sometimes be made based on the same set of results. If that’s the case with your article, you should analyze whether the authors addressed other interpretations of their findings.

Here are some useful questions you can use to analyze the discussion section:

  • What findings did the authors use to support their conclusions?
  • How do the researchers’ conclusions compare to other studies’ findings?
  • How does this study contribute to its field?
  • What future research directions do the authors suggest?
  • What additional insights can you share regarding this article? For example, do you agree with the results? What other questions could the researchers have answered?

This image shows how to analyze a research article.

Now, you know how to analyze an article that presents research findings. However, it’s just a part of the work you have to do to complete your paper. So, it’s time to learn how to write research analysis! Check out the steps below!

1. Introduce the Article

As with most academic assignments, you should start your research article analysis with an introduction. Here’s what it should include:

  • The article’s publication details. Specify the title of the scholarly work you are analyzing, its authors, and publication date. Remember to enclose the article’s title in quotation marks and write it in title case.
  • The article’s main point. State what the paper is about. What did the authors study, and what was their major finding?
  • Your thesis statement. End your introduction with a strong claim summarizing your evaluation of the article. Consider briefly outlining the research paper’s strengths, weaknesses, and significance in your thesis.

Keep your introduction brief. Save the word count for the “meat” of your paper — that is, for the analysis.

2. Summarize the Article

Now, you should write a brief and focused summary of the scientific article. It should be shorter than your analysis section and contain all the relevant details about the research paper.

Here’s what you should include in your summary:

  • The research purpose. Briefly explain why the research was done. Identify the authors’ purpose and research questions or hypotheses.
  • Methods and results. Summarize what happened in the study. State only facts, without the authors’ interpretations of them. Avoid using too many numbers and details; instead, include only the information that will help readers understand what happened.
  • The authors’ conclusions. Outline what conclusions the researchers made from their study. In other words, describe how the authors explained the meaning of their findings.

If you need help summarizing an article, you can use our free summary generator.

3. Write Your Research Analysis

The analysis of the study is the most crucial part of this assignment type. Its key goal is to evaluate the article critically and demonstrate your understanding of it.

We’ve already covered how to analyze a research article in the section above. Here’s a quick recap:

  • Analyze whether the study’s purpose is significant and relevant.
  • Examine whether the chosen methodology allows for answering the research questions.
  • Evaluate how the authors presented the results.
  • Assess whether the authors’ conclusions are grounded in findings and answer the original research questions.

Although you should analyze the article critically, it doesn’t mean you only should criticize it. If the authors did a good job designing and conducting their study, be sure to explain why you think their work is well done. Also, it is a great idea to provide examples from the article to support your analysis.

4. Conclude Your Analysis of Research Paper

A conclusion is your chance to reflect on the study’s relevance and importance. Explain how the analyzed paper can contribute to the existing knowledge or lead to future research. Also, you need to summarize your thoughts on the article as a whole. Avoid making value judgments — saying that the paper is “good” or “bad.” Instead, use more descriptive words and phrases such as “This paper effectively showed…”

Need help writing a compelling conclusion? Try our free essay conclusion generator!

5. Revise and Proofread

Last but not least, you should carefully proofread your paper to find any punctuation, grammar, and spelling mistakes. Start by reading your work out loud to ensure that your sentences fit together and sound cohesive. Also, it can be helpful to ask your professor or peer to read your work and highlight possible weaknesses or typos.

This image shows how to write a research analysis.

📝 Research Paper Analysis Example

We have prepared an analysis of a research paper example to show how everything works in practice.

No Homework Policy: Research Article Analysis Example

This paper aims to analyze the research article entitled “No Assignment: A Boon or a Bane?” by Cordova, Pagtulon-an, and Tan (2019). This study examined the effects of having and not having assignments on weekends on high school students’ performance and transmuted mean scores. This article effectively shows the value of homework for students, but larger studies are needed to support its findings.

Cordova et al. (2019) conducted a descriptive quantitative study using a sample of 115 Grade 11 students of the Central Mindanao University Laboratory High School in the Philippines. The sample was divided into two groups: the first received homework on weekends, while the second didn’t. The researchers compared students’ performance records made by teachers and found that students who received assignments performed better than their counterparts without homework.

The purpose of this study is highly relevant and justified, as this research was conducted in response to the debates about the “No Homework Policy” in the Philippines. Although the descriptive research design used by the authors allows them to answer the research question, the study could benefit from an experimental design. This way, the authors would have firmer control over variables. Additionally, the study’s sample size was not large enough for the findings to be generalized to a larger population.

The study results are presented clearly, logically, and comprehensively and correspond to the research objectives. The researchers found that students’ mean grades decreased in the group without homework and increased in the group with homework. Based on these findings, the authors concluded that homework positively affected students’ performance. This conclusion is logical and grounded in data.

This research effectively showed the importance of homework for students’ performance. Yet, since the sample size was relatively small, larger studies are needed to ensure the authors’ conclusions can be generalized to a larger population.

🔎 More Research Analysis Paper Examples

Do you want another research analysis example? Check out the best analysis research paper samples below:

  • Gracious Leadership Principles for Nurses: Article Analysis
  • Effective Mental Health Interventions: Analysis of an Article
  • Nursing Turnover: Article Analysis
  • Nursing Practice Issue: Qualitative Research Article Analysis
  • Quantitative Article Critique in Nursing
  • LIVE Program: Quantitative Article Critique
  • Evidence-Based Practice Beliefs and Implementation: Article Critique
  • “Differential Effectiveness of Placebo Treatments”: Research Paper Analysis
  • “Family-Based Childhood Obesity Prevention Interventions”: Analysis Research Paper Example
  • “Childhood Obesity Risk in Overweight Mothers”: Article Analysis
  • “Fostering Early Breast Cancer Detection” Article Analysis
  • Space and the Atom: Article Analysis
  • “Democracy and Collective Identity in the EU and the USA”: Article Analysis
  • China’s Hegemonic Prospects: Article Review
  • Article Analysis: Fear of Missing Out
  • Codependence, Narcissism, and Childhood Trauma: Analysis of the Article
  • Relationship Between Work Intensity, Workaholism, Burnout, and MSC: Article Review

We hope that our article on research paper analysis has been helpful. If you liked it, please share this article with your friends!

  • Analyzing Research Articles: A Guide for Readers and Writers | Sam Mathews
  • Summary and Analysis of Scientific Research Articles | San José State University Writing Center
  • Analyzing Scholarly Articles | Texas A&M University
  • Article Analysis Assignment | University of Wisconsin-Madison
  • How to Summarize a Research Article | University of Connecticut
  • Critique/Review of Research Articles | University of Calgary
  • Art of Reading a Journal Article: Methodically and Effectively | PubMed Central
  • Write a Critical Review of a Scientific Journal Article | McLaughlin Library
  • How to Read and Understand a Scientific Paper: A Guide for Non-scientists | LSE
  • How to Analyze Journal Articles | Classroom



Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available along with their strengths and weaknesses, is thus imperative to make an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be selected and adapted depending on the research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ between qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and leaves little room for speculation; it is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. In contrast, qualitative data analysis identifies patterns and themes within the data rather than providing statistical measures of it. It is an iterative process in which the researcher moves back and forth, gauging the larger implications of the data from different perspectives and revising the analysis as required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address, and specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable. 

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


PW Skills | Blog

Data Analysis Techniques in Research – Methods, Tools & Examples


Varun Saharawat is a seasoned professional in the fields of SEO and content writing. With a profound knowledge of the intricate aspects of these disciplines, Varun has established himself as a valuable asset in the world of digital marketing and online content creation.


Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.

Data Analysis Techniques in Research: While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.


A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah. And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.


What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps:

  • Inspecting: Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning: Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming: Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting: Analyzing the transformed data to identify patterns, trends, and relationships.
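The four steps above can be sketched in a few lines of plain Python. The survey records below are made up purely for illustration:

```python
# Minimal sketch of the four steps on a small, hypothetical set of survey records.
records = [
    {"id": 1, "score": 72},
    {"id": 2, "score": None},  # missing value
    {"id": 3, "score": 88},
    {"id": 4, "score": 95},
]

# 1) Inspecting: understand size and completeness.
n_total = len(records)
n_missing = sum(1 for r in records if r["score"] is None)

# 2) Cleaning: drop records with missing scores.
clean = [r for r in records if r["score"] is not None]

# 3) Transforming: normalize scores to the 0-1 range (min-max scaling).
low = min(r["score"] for r in clean)
high = max(r["score"] for r in clean)
for r in clean:
    r["norm"] = (r["score"] - low) / (high - low)

# 4) Interpreting: summarize the transformed data.
mean_norm = sum(r["norm"] for r in clean) / len(clean)
```

In real projects each step is far richer (type checks, deduplication, aggregation), but the flow from raw records to an interpretable summary stays the same.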

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.
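As a quick illustration, the three groups of measures above can be computed with Python’s standard library; the grades are hypothetical:

```python
from collections import Counter
import statistics

grades = [85, 90, 90, 75, 95, 90, 80]

# Frequency distribution: occurrences of each distinct grade.
freq = Counter(grades)

# Central tendency.
mean = statistics.mean(grades)
median = statistics.median(grades)
mode = statistics.mode(grades)

# Dispersion (sample variance and standard deviation).
variance = statistics.variance(grades)
std_dev = statistics.stdev(grades)
```

Note that `statistics.variance` and `statistics.stdev` compute *sample* statistics (dividing by n − 1); use `pvariance`/`pstdev` for a full population.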

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects.

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes.
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty.
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.
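To give a small taste of one of these techniques, here is a minimal Monte Carlo simulation in Python that estimates π by random sampling; the sample size and seed are arbitrary choices for this sketch:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Sample random points in the unit square and count how many fall
# inside the quarter circle of radius 1; that fraction approaches pi/4.
n = 100_000
inside = sum(
    1
    for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / n
```

The same sampling idea scales to risk analysis: replace the quarter circle with any model of uncertain inputs and summarize the distribution of simulated outcomes.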

Also Read: AI and Predictive Analytics: Examples, Tools, Uses, Ai Vs Predictive Analytics

Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.
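To make the diagnostic step concrete, here is a sketch of a one-way ANOVA computed from first principles in Python, using made-up grades for the two groups:

```python
import statistics

# Hypothetical final grades for the two groups in the example study.
online = [78, 85, 90, 88, 84]
classroom = [70, 75, 72, 78, 74]
groups = [online, classroom]

grand_mean = statistics.mean(online + classroom)

# Between-group sum of squares: variation of group means around the grand mean.
ss_between = sum(
    len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups
)

# Within-group sum of squares: variation of scores around their own group mean.
ss_within = sum(
    (x - statistics.mean(g)) ** 2 for g in groups for x in g
)

df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

With only two groups, one-way ANOVA is equivalent to a two-sample t-test (F = t²); a large F statistic relative to the critical value for (1, 8) degrees of freedom would indicate a significant difference between group means.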

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.

Also Read: Learning Path to Become a Data Analyst in 2024

Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis.
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.
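A minimal sketch of simple linear regression, computed directly from the closed-form least-squares formulas; the study-hours and score data are invented:

```python
# Fit y = slope * x + intercept by ordinary least squares
# (hypothetical hours of study vs. exam scores).
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 61, 67, 72]

n = len(hours)
mean_x = sum(hours) / n
mean_y = sum(scores) / n

# slope = covariance(x, y) / variance(x)
slope = sum(
    (x - mean_x) * (y - mean_y) for x, y in zip(hours, scores)
) / sum((x - mean_x) ** 2 for x in hours)
intercept = mean_y - slope * mean_x

# Use the fitted line to predict the score for 6 hours of study.
predicted_6h = slope * 6 + intercept
```

Libraries such as statsmodels or scikit-learn add standard errors, p-values, and multiple predictors, but the fitted line they produce for this simple case is the same.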

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.
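The Pearson correlation coefficient mentioned above can be computed directly from its definition; the paired data below are hypothetical:

```python
import math

x = [2, 4, 6, 8, 10]      # e.g. hours on a learning platform (hypothetical)
y = [65, 70, 74, 81, 85]  # e.g. exam scores (hypothetical)

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# r = covariance(x, y) / (std(x) * std(y)), written as sums of deviations.
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
r = cov / math.sqrt(
    sum((a - mean_x) ** 2 for a in x) * sum((b - mean_y) ** 2 for b in y)
)
```

Here r comes out close to +1, indicating a strong positive linear association; values near −1 indicate a strong negative association, and values near 0 indicate little linear relationship.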

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.
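As a small worked example, here is a simple moving average implemented from scratch; the monthly figures are invented:

```python
def moving_average(series, window):
    """Simple moving average: mean of each consecutive window-sized slice."""
    return [
        sum(series[i : i + window]) / window
        for i in range(len(series) - window + 1)
    ]

monthly_sales = [10, 12, 11, 15, 14, 16]
smoothed = moving_average(monthly_sales, window=3)
```

Smoothing trades detail for clarity: a larger window dampens noise more aggressively but reacts more slowly to genuine changes in the trend.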

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.
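Here is a sketch of the chi-square test of independence computed by hand on a hypothetical 2×2 contingency table:

```python
# Rows = two groups, columns = pass/fail counts (hypothetical data).
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected count under independence: row_total * col_total / n.
# Chi-square statistic: sum of (observed - expected)^2 / expected.
chi_square = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2)
    for j in range(2)
)
```

For a 2×2 table there is 1 degree of freedom, so a statistic above the 5% critical value of 3.84 would lead to rejecting independence between group and outcome.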

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.

Also Read: Analysis vs. Analytics: How Are They Different?

Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

1) Descriptive Statistics:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset.
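A common EDA step is flagging outliers with the interquartile-range (IQR) rule; here is a minimal sketch using Python’s standard library on made-up data:

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]  # one suspicious value

# Quartiles via linear interpolation between data points.
q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1

# Tukey's rule: flag values more than 1.5 * IQR beyond the quartiles.
lower = q1 - 1.5 * iqr
upper = q3 + 1.5 * iqr
outliers = [x for x in data if x < lower or x > upper]
```

In practice this numeric check is paired with the visual tools listed above (box plots draw exactly these fences), and flagged points are investigated rather than automatically discarded.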

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.

Also Read: Quantitative Data Analysis: Types, Analysis & Examples

Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language:

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization, business intelligence, and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.

Also Read: How to Analyze Survey Data: Methods & Examples

Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

However, it is important to remember that mastering these techniques requires practice and continuous learning. That’s why we highly recommend the Data Analytics Course by Physics Wallah. Not only does it cover all the fundamentals of data analysis, but it also provides hands-on experience with various tools such as Excel, Python, and Tableau. Plus, if you use the “READER” coupon code at checkout, you can get a special discount on the course.

For Latest Tech Related Information, Join Our Official Free Telegram Group : PW Skills Telegram Group

Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis?

The five techniques for data analysis are Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, Prescriptive Analysis, and Qualitative Analysis.

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are Qualitative Analysis, Quantitative Analysis, and Mixed-Methods Analysis.

What are the four types of data analysis techniques?

The four types of data analysis techniques are Descriptive Analysis, Diagnostic Analysis, Predictive Analysis, and Prescriptive Analysis.


Neurol Res Pract

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data.

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Fig. 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterised by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Fig. 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MaxQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Fig. 3: From data collection to data analysis

Attributions for icons: see Fig. 2; also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
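The coding step described above can be made concrete with a small sketch. This is a hypothetical illustration only: the segments, sources and codes are invented, and a real analysis would use the software mentioned above. It shows how tagging segments with codes makes raw data sortable across sources, e.g. extracting every segment about a tele-neurology consultation.

```python
from collections import defaultdict

# Hypothetical coded segments: (source, text, codes). All examples are invented.
segments = [
    ("staff interview 1", "The tele-neurology cart is often still charging.",
     ["tele-neurology", "equipment"]),
    ("ER observation 2", "Consultant joined via video link within five minutes.",
     ["tele-neurology", "response time"]),
    ("SOP document", "Tele-consultation must be initiated for all suspected strokes.",
     ["tele-neurology", "protocol"]),
    ("patient interview 3", "I waited a long time before anyone examined me.",
     ["response time"]),
]

# Coding makes raw data sortable: group segments by code so that all
# material on one topic can be examined across data sources.
by_code = defaultdict(list)
for source, text, codes in segments:
    for code in codes:
        by_code[code].append((source, text))

# Extract everything tagged "tele-neurology", regardless of source.
for source, text in by_code["tele-neurology"]:
    print(f"{source}: {text}")
```

The subsequent synthesis step (grouping codes into categories) would operate on the keys of such a structure rather than on the raw text.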

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

Fig. 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help to inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
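The saturation logic can be sketched in code. This is a toy illustration with invented code sets: after each round of data collection, the team counts how many codes are new relative to the running codebook, and stops sampling once a round yields no new relevant codes.

```python
# Each entry is the set of codes found in one round of data collection
# (e.g. five interviews); the codes are invented for illustration.
rounds = [
    {"transport", "bus service", "cost"},        # round 1
    {"transport", "family support", "cost"},     # round 2: one new code
    {"bus service", "family support"},           # round 3: nothing new
]

codebook = set()
saturated_after = None
for i, codes in enumerate(rounds, start=1):
    new_codes = codes - codebook   # codes not seen in any earlier round
    codebook |= codes
    print(f"round {i}: {len(new_codes)} new code(s)")
    if not new_codes:              # no relevant new information: saturation
        saturated_after = i
        break

print(f"saturation reached after round {saturated_after}")
```

In practice this judgement is, of course, made by the research team on substantive grounds rather than by set arithmetic; the sketch only makes the stopping rule explicit.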

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
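A purposive sampling frame for the hypothetical EVT example can be sketched as a coverage check over pre-defined variants (the groups, shifts and recruited participants below are invented):

```python
from itertools import product

# Pre-defined variants that must be covered: professional group x shift.
groups = ["neurologist", "radiologist", "nurse"]
shifts = ["day", "night", "weekend"]
required = set(product(groups, shifts))

# Participants recruited so far (invented):
recruited = {
    ("neurologist", "day"),
    ("nurse", "night"),
    ("radiologist", "weekend"),
}

# Coverage check: which variants still need to be sampled?
missing = required - recruited
print(f"{len(missing)} of {len(required)} variants still to be sampled")
```

Such a frame is a planning aid, not a quota: variants can be added or dropped as the iterative process reveals which distinctions actually matter.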

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is the pilot interview, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation, refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of a sample size that is too large as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant to the quality or “objectivity” of qualitative research to separate the people who recruited the study participants from those who collected and analysed the data. Experience even shows that it can be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of the data, e.g. on whether something might have been meant as a joke [ 18 ].
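Where such a score is nonetheless reported, Cohen's kappa is one common choice. The sketch below is purely illustrative: the paper does not prescribe any particular statistic, implementation or coding scheme, and the theme labels are invented.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: share of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if both coders labelled independently,
    # estimated from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two co-coders assigning (invented) themes to eight interview segments.
coder_1 = ["barrier", "barrier", "facilitator", "barrier",
           "other", "facilitator", "barrier", "other"]
coder_2 = ["barrier", "facilitator", "facilitator", "barrier",
           "other", "facilitator", "barrier", "barrier"]
kappa = cohens_kappa(coder_1, coder_2)  # 0.6: substantial but imperfect overlap
```

Even then, the paper's point stands: the number is only meaningful alongside an account of how coding disagreements were discussed and resolved.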

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equals will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

Typical research problems addressed by qualitative research:

• Assessing complex multi-component interventions or systems (of change)

• What works for whom, when, how and why?

• Focussing on intervention improvement

Typical data collection methods:

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

Typical data analysis steps:

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

Mixed methods research:

• Combinations of quantitative and/or qualitative methods, e.g.:

• Parallel design: quali and quanti in parallel

• Sequential explanatory design: quanti followed by quali

• Sequential exploratory design: quali followed by quanti

How to assess qualitative research:

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

How not to assess qualitative research:

• Protocol adherence

• Sample size

• Randomisation

• Interrater reliability, variability and other “objectivity checks”

• Not being quantitative research

Acknowledgements

Abbreviations

EVT: Endovascular treatment
RCT: Randomised Controlled Trial
SOP: Standard Operating Procedure
SRQR: Standards for Reporting Qualitative Research

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

American Psychological Association

Title Page Setup

A title page is required for all APA Style papers. There are both student and professional versions of the title page. Students should use the student version of the title page unless their instructor or institution has requested they use the professional version. APA provides a student title page guide (PDF, 199KB) to assist students in creating their title pages.

Student title page

The student title page includes the paper title, author names (the byline), author affiliation, course number and name for which the paper is being submitted, instructor name, assignment due date, and page number, as shown in this example.

[Figure: diagram of a student title page]

Title page setup is covered in the seventh edition APA Style manuals in the Publication Manual Section 2.3 and the Concise Guide Section 1.6


Related handouts

  • Student Title Page Guide (PDF, 263KB)
  • Student Paper Setup Guide (PDF, 3MB)

Student papers do not include a running head unless requested by the instructor or institution.

Follow the guidelines described next to format each element of the student title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Cecily J. Sinclair and Adam Gonzaga
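These joining rules are mechanical enough to express in code. The `format_byline` helper below is a hypothetical illustration of the rule just described, not part of any official APA tooling:

```python
def format_byline(authors):
    """Join author names for the byline: 'and' between two authors;
    commas plus 'and' before the final name for three or more."""
    if not authors:
        raise ValueError("at least one author is required")
    if len(authors) == 1:
        return authors[0]
    if len(authors) == 2:
        return f"{authors[0]} and {authors[1]}"
    return ", ".join(authors[:-1]) + ", and " + authors[-1]

print(format_byline(["Cecily J. Sinclair", "Adam Gonzaga"]))
# Cecily J. Sinclair and Adam Gonzaga
```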

Author affiliation

For a student paper, the affiliation is the institution where the student attends school. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author name(s).

Department of Psychology, University of Georgia

Course number and name

Provide the course number as shown on instructional materials, followed by a colon and the course name. Center the course number and name on the next double-spaced line after the author affiliation.

PSY 201: Introduction to Psychology

Instructor name

Provide the name of the instructor for the course using the format shown on instructional materials. Center the instructor name on the next double-spaced line after the course number and name.

Dr. Rowan J. Estes

Assignment due date

Provide the due date for the assignment. Center the due date on the next double-spaced line after the instructor name. Use the date format commonly used in your country.

October 18, 2020
18 October 2020

Page number

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1

Professional title page

The professional title page includes the paper title, author names (the byline), author affiliation(s), author note, running head, and page number, as shown in the following example.

[Figure: diagram of a professional title page]

Follow the guidelines described next to format each element of the professional title page.

Paper title

Place the title three to four lines down from the top of the title page. Center it and type it in bold font. Capitalize major words of the title. Place the main title and any subtitle on separate double-spaced lines if desired. There is no maximum length for titles; however, keep titles focused and include key terms.

Author names

 

Place one double-spaced blank line between the paper title and the author names. Center author names on their own line. If there are two authors, use the word “and” between authors; if there are three or more authors, place a comma between author names and use the word “and” before the final author name.

Francesca Humboldt

When different authors have different affiliations, use superscript numerals after author names to connect the names to the appropriate affiliation(s). If all authors have the same affiliation, superscript numerals are not used (see Section 2.3 of the Publication Manual for more on how to set up bylines and affiliations).

Tracy Reuter¹, Arielle Borovsky², and Casey Lew-Williams¹

Author affiliation

 

For a professional paper, the affiliation is the institution at which the research was conducted. Include both the name of any department and the name of the college, university, or other institution, separated by a comma. Center the affiliation on the next double-spaced line after the author names; when there are multiple affiliations, center each affiliation on its own line.

 

Department of Nursing, Morrigan University

When different authors have different affiliations, use superscript numerals before affiliations to connect the affiliations to the appropriate author(s). Do not use superscript numerals if all authors share the same affiliations (see Section 2.3 of the Publication Manual for more).

¹ Department of Psychology, Princeton University
² Department of Speech, Language, and Hearing Sciences, Purdue University

Author note

Place the author note in the bottom half of the title page. Center and bold the label “Author Note.” Align the paragraphs of the author note to the left. For further information on the contents of the author note, see Section 2.7 of the Publication Manual.


Running head

The running head appears in all-capital letters in the page header of all pages, including the title page. Align the running head to the left margin. Do not use the label “Running head:” before the running head.

PREDICTION ERRORS SUPPORT CHILDREN’S WORD LEARNING

Page number

Use the page number 1 on the title page. Use the automatic page-numbering function of your word processing program to insert page numbers in the top right corner of the page header.

1


Sustainability Constraints on Rural Road Infrastructure


1. Introduction

2. Methodology

2.1. Defining the Indicator System

Serial Number | Main Contributions | Reference
1 | Complex topography is not conducive to infrastructure construction | Li et al., 2022 [ ]
2 | The diversity of geological formations will affect the safety of the infrastructure | Zhu et al., 2022 [ ]
3 | Development and construction costs have a direct impact on its sustainability | Chen, 2022 [ ]
4 | There is a lack of funding for construction due to regional development | Bueno et al., 2015 [ ]
5 | Local government financial support may not be sufficient to meet sustainability needs | Yuan et al., 2019 [ ]
6 | Fewer local sources of financing may be unable to support infrastructure development | Wang et al., 2018 [ ]
7 | Remote areas are generally economically weak | Wang, 2018 [ ]
8 | Rural economic decline leads to an inability to support sustainable infrastructure development | Wang et al., 2022 [ ]
9 | Rural infrastructure makes it difficult to attract investors | Wang et al., 2016 [ ]
10 | Rural resources for infrastructure sustainability are relatively scarce | She et al., 2022 [ ]
11 | Rural development potential is generally low | Gao et al., 2018 [ ]
12 | Inadequate management capacity of rural administrations | Shen et al., 2011 [ ]
13 | Rural infrastructure maintenance regulations are relatively weak | Yan et al., 2014 [ ]
14 | There is a lack of incentive in rural areas | Wong et al., 2017 [ ]
15 | Participation of residents in infrastructure sustainability is low | Wong et al., 2017 [ ]
16 | Communes pay little attention to rural infrastructure development | Huang et al., 2023 [ ]
17 | Infrastructure lacks appropriate management or is managed inefficiently | Shen et al., 2011 [ ]
18 | Residents lack motivation for infrastructure sustainability | Wong et al., 2017 [ ]
19 | There is inadequate maintenance of infrastructure in rural areas | Xie, 2011 [ ]
20 | The wages for labor to build and operate infrastructure in rural areas are low | Wong et al., 2013 [ ]
21 | Rural infrastructure construction technology is relatively underdeveloped | Zhou et al., 2023 [ ]
22 | Early planning for infrastructure in rural areas is inadequate | Sima et al., 2011 [ ]
23 | There is inadequate awareness of sustainable development in rural areas | Liu et al., 2023 [ ]
24 | There is a low level of education in rural areas | Hannum et al., 2003 [ ]
25 | The number of permanent residents in rural areas is constantly changing | Zhang et al., 2018 [ ]
2.2. Key Indicators

2.3. Questionnaire Survey

3. Data Processing

3.1. Model Building

3.2. Modeling Steps

3.3. Model Construction

3.4. Model Experiment

3.5. Modeling Amendments

4. Discussion

4.1. Improving Economic and Management Mechanisms

4.2. Sound Policy Support and Public Participation

4.3. Optimizing the Construction Layout Plan

5. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

References

  • Slee, B. Collaborative Action, Policy Support and Rural Sustainability Transitions in Advanced Western Economies: The Case of Scotland. Sustainability 2024 , 16 , 870. [ Google Scholar ] [ CrossRef ]
  • Yeganeh, A.; McCoy, A.; Agee, P.; Hankey, S. The role of new green construction in neighborhood change and gentrification. Cities 2024 , 150 , 105101. [ Google Scholar ] [ CrossRef ]
  • Ullah, A.; Hao, P.W.; Ullah, Z.; Ali, B.; Khan, D. Evaluation of high modulus asphalts in China, France, and USA for durable road infrastructure, a theoretical approach. Constr. Build. Mater. 2024 , 432 , 136622. [ Google Scholar ] [ CrossRef ]
  • Koirala, S.; Jakus, M.P.; Watson, P. Identifying Constraints to Rural Economic Development: A Development Guidance Function Approach. J. Agric. Resour. Econ. 2023 , 48 , 461–482. [ Google Scholar ]
  • Erica, Q.; Julia, C.; Clare, N.; Gregory, R. Comparing Travel Behavior and Opportunities to Increase Transportation Sustainability in Small Cities, Towns, and Rural Communities. Transp. Res. Rec. 2023 , 2677 , 1439–1452. [ Google Scholar ]
  • Wang, H.Z.; Bai, K.; Pei, L.L.; Lu, X.R.; Polish, M. The Motivation Mechanism and Evolutionary Logic of Tourism Promoting Rural Revitalisation: Empirical Evidence from China. Sustainability 2023 , 15 , 2336. [ Google Scholar ] [ CrossRef ]
  • Jing, W.L.; Zhang, W.; Luo, P.P.; Wu, L.; Wang, L.; Yu, K.H. Assessment of Synergistic Development Potential between Tourism and Rural Restructuring Using a Coupling Analysis: A Case Study of Southern Shaanxi, China. Land 2022 , 11 , 1352. [ Google Scholar ] [ CrossRef ]
  • Guan, H.F.; Lin, Y.L. Current Situation and Countermeasures of Rural Habitat Improvement in Wind and Sandy Grassland Areas of Northern Shaanxi Province. Mod. Rural Sci. Technol. 2023 , 12 , 85–86. (In Chinese) [ Google Scholar ]
  • Wang, C.; Huang, M.H.; Yi, X. Research on the characteristics of rural habitat environment in loess hill and gully areas of northern Shaanxi Province. Urban Dev. Stud. 2023 , 30 , 38–46. (In Chinese) [ Google Scholar ]
  • Jiang, J.K.; Zhu, S.L.; Wang, W.H.; Li, Y.; Li, N. Coupling coordination between new urbanisation and carbon emissions in China. Sci. Total Environ. 2022 , 850 , 158076. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Ma, J. Discussion on the Path of Transformation of China’s Traditional Rural Areas to New Rural Areas. China Collect. Econ. 2020 , 4 , 1–3. (In Chinese) [ Google Scholar ]
  • Sierra, A.L.; Yepes, V.; Pellicer, E. Assessing the social sustainability contribution of an infrastructure project under conditions of uncertainty. Environ. Impact Assess. Rev. 2017 , 67 , 61–72. [ Google Scholar ] [ CrossRef ]
  • Li, X.S.; Wang, H.Y.; Zhou, S.F.; Sun, B.; Gao, Z.H. Did ecological engineering projects have a significant effect on large-scale vegetation restoration in Beijing-Tianjin Sand Source Region, China? A remote sensing approach. Chin. Geogr. Sci. 2016 , 26 , 216–228. [ Google Scholar ] [ CrossRef ]
  • Wang, J.N.; Huang, J.; Liu, B.S.; Wang, R. ANP-based sustainability evaluation of urbanisation infrastructure projects. J. Tianjin Univ. Soc. Sci. 2016 , 18 , 432–438. (In Chinese) [ Google Scholar ]
  • Chang, A.S.; Chang, H.J.; Tsaic, C.Y.; Yang, S.H.; Muench, S.T. Strategy of Indicator Incorporation for Roadway Sustainability Certification. J. Clean. Prod. 2018 , 203 , 836–847. [ Google Scholar ] [ CrossRef ]
  • Wagale, M.; Singh, A.P.; Sarkar, A.K. Exploring Rural Road Impacts Using Fuzzy Multi-criteria Approach. Adv. Transp. Eng. 2019 , 34 , 1–12. [ Google Scholar ]
  • Gopalkrishna, G.; Sarji, P.U.; Rao, G.N.; Jayaram, A.N.; Venkatesh, P. Burden, pattern and outcomes of road traffic injuries in a rural district of India. Int. J. Inj. Control Saf. Promot. 2016 , 23 , 64–71. [ Google Scholar ]
  • Gregorio, G.; Riccardo, C.; Riccardo, R.; Massimiliano, G. A flexible approach to select road traffic counting locations: System design and application of a fuzzy Delphi analytic hierarchy process. Transp. Eng. 2023 , 12 , 100167. [ Google Scholar ]
  • Badu, E.; Owusu-Manu, D.; Edwards, J.D.; Adesi, M.; Lichtenstein, S. Rural infrastructure development in the Volta region of Ghana: Barriers and interventions. J. Financ. Manag. Prop. Constr. 2013 , 18 , 142–159. [ Google Scholar ] [ CrossRef ]
  • Li, Y.; DaCosta, N.M. Transportation and income inequality in China: 1978–2007. Transp. Res. Part A 2013 , 55 , 56–71. [ Google Scholar ] [ CrossRef ]
  • Li, X.L. Research on Performance Evaluation of New Rural Infrastructure Projects in China. Master’s Thesis, Central South University of Forestry and Technology, Changsha, China, 2018. (In Chinese). [ Google Scholar ]
  • Li, X.; Fan, Y.L.; John, S.; Qi, Y.L. A Fuzzy AHP Approach to Compare Transit System Performance in US Urbanized Areas. J. Public Transp. 2017 , 20 , 66–89. [ Google Scholar ] [ CrossRef ]
  • Ashton, K.; Cotter-Roberts, A.; Clemens, T.; Green, L.; Dyakova, M. Advancing the social return on investment framework to capture the social value of public health interventions: Semistructured interviews and a review of scoping reviews. Public Health 2024 , 226 , 122–127. [ Google Scholar ] [ CrossRef ]
  • Li, Z.H.; Miao, X.R.; Wang, M.Y.; Jiang, S.G.; Wang, Y.X. The Classification and Regulation of Mountain Villages in the Context of Rural Revitalization—The Example of Zhaotong, Yunnan Province. Sustainability 2022 , 14 , 11381. [ Google Scholar ] [ CrossRef ]
  • Zhu, Y.T.; Pang, X.R.; Zhou, C.S.; He, X. Coupling Coordination Degree between the Socioeconomic and Eco-Environmental Benefits of Koktokay Global Geopark in China. Int. J. Environ. Res. Public Health 2022 , 19 , 8498. [ Google Scholar ] [ CrossRef ]
  • Chen, B.Y. Public–Private Partnership Infrastructure Investment and Sustainable Economic Development: An Empirical Study Based on Efficiency Evaluation and Spatial Spillover in China. Sustainability 2021 , 13 , 8146. [ Google Scholar ] [ CrossRef ]
  • Bueno, P.; Vassallo, J.; Cheung, K. Sustainability Assessment of Transport Infrastructure Projects: A Review of Existing Tools and Methods. Transp. Rev. 2015 , 35 , 622–649. [ Google Scholar ] [ CrossRef ]
  • Yuan, W.; Yang, H.D. The Current Situation, Existing Problems and Solutions of Municipal Government Investment and Financing Companies. In Proceedings of the 2019 International Conference on Education, E-Learning and Economic Research (IC3ER 2019), Weihai, China, 28–29 December 2019; Francis Academic Press: London, UK, 2019. [ Google Scholar ]
  • Wang, J.; Li, B.Q. Governance and Finance: Availability of Community and Social Development Infrastructures in Rural China. Asia Pac. Policy Stud. 2018 , 5 , 4–17. [ Google Scholar ] [ CrossRef ]
  • Wang, Y.W. Regional differences in the impact of science and technology investment on China’s economic development. In Proceedings of the 2018 International Conference on Humanities Education and Social Sciences (ICHESS 2018), Kuala Lumpur, Malaysia, 23–25 November 2018; Francis Academic Press: London, UK, 2018. [ Google Scholar ]
  • Wang, L.S.; Zhang, F.; Wang, Z.H.; Tan, Q. The impact of rural infrastructural investment on farmers’ income growth in China. China Agric. Econ. Rev. 2022 , 14 , 202–219. [ Google Scholar ] [ CrossRef ]
  • Wang, Z.Y.; Sun, S.Z. Transportation infrastructure and rural development in China. China Agric. Econ. Rev. 2016 , 8 , 516–525. [ Google Scholar ] [ CrossRef ]
  • She, Y.J.; Hu, C.L.; Ma, D.J.; Zhu, Y.H.; Tam Vivian, W.Y.; Chen, X.J. Contribution of Infrastructure to the Township’s Sustainable Development in Southwest China. Buildings 2022 , 12 , 164. [ Google Scholar ] [ CrossRef ]
  • Gao, T.M.; Ivolga, A.; Erokhin, V. Sustainable Rural Development in Northern China: Caught in a Vice between Poverty, Urban Attractions, and Migration. Sustainability 2018 , 10 , 1467. [ Google Scholar ] [ CrossRef ]
  • Shen, L.Y.; Jiang, S.J.; Yuan, H.P. Critical indicators for assessing the contribution of infrastructure projects to coordinated urban–rural development in China. Habitat Int. 2011 , 36 , 237–246. [ Google Scholar ] [ CrossRef ]
  • Yan, H.; Yi, J. Study on Status of Rural Poverty Relief Development in Western China and Countermeasures in New Period: Taking Yibin, in Sichuan Province as an Example. Can. Soc. Sci. 2014 , 10 , 164–170. [ Google Scholar ]
  • Wong, L.H.; Wang, Y.; Luo, R.F.; Zhang, L.X.; Rozelle, S. Local governance and the quality of local infrastructure: Evidence from village road projects in rural China. J. Public Econ. 2017 , 152 , 119–132. [ Google Scholar ] [ CrossRef ]
  • Huang, D.; Zhang, S.S.; Fu, J.X.; Yang, D.; Xi, Z.Y. Study on the Innovative Development of Rural Infrastructure Construction in Guizhou Province in the Context of Rural Revitalisation. Rural Econ. Sci. -Technol. 2023 , 34 , 155–157. (In Chinese) [ Google Scholar ]
  • Xie, X.R. The Study on Maintenance Mechanism of Rural Highway in China. Adv. Mater. Res. 2011 , 403–408 , 2915–2918. [ Google Scholar ] [ CrossRef ]
  • Wong, L.H.; Luo, R.F.; Zhang, L.X.; Rozelle, S. Providing quality infrastructure in rural villages: The case of rural roads in China. J. Dev. Econ. 2013 , 103 , 262–274. [ Google Scholar ] [ CrossRef ]
  • Zhou, F.; Guo, X.R.; Liu, C.Y.; Ma, Q.Y.; Guo, S.D. Analysis on the Influencing Factors of Rural Infrastructure in China. Agriculture 2023 , 13 , 986. [ Google Scholar ] [ CrossRef ]
  • Sima, W.H.; Liu, M.Z. Practice and exploration of infrastructure planning for new socialist countryside. Dev. Small Cities Towns 2011 , 8 , 54–57. (In Chinese) [ Google Scholar ]
  • Liu, B.S.; Zhang, X.H.; Tian, J.F.; Cao, R.M.; Sun, X.Z.; Xue, B. Rural sustainable development: A case study of the Zaozhuang Innovation Demonstration Zone in China. Reg. Sustain. 2023 , 4 , 390–404. [ Google Scholar ] [ CrossRef ]
  • Hannum, E. Poverty and Basic Education in Rural China: Villages, Households, and Girls’ and Boys’ Enrollment. Comp. Educ. Rev. 2003 , 47 , 141–159. [ Google Scholar ] [ CrossRef ]
  • Zhang, L.J.; Zhang, G.Y. Applying the Strategy of Village Revitalization to Manage the Rural Hollowing in Daba Mountains Area of China. Appl. Econ. Financ. 2018 , 5 , 108–115. [ Google Scholar ]
  • Afshari, R.A. Selection of construction project manager by using Delphi and fuzzy linguistic decision making. J. Intell. Fuzzy Syst. 2015 , 28 , 2827–2838. [ Google Scholar ] [ CrossRef ]
  • Yang, Y.L.; Chen, Y.X.; Liu, Y.Y.; He, T.Y.; Chen, L.Y. Evaluation and Optimization of Cultural Perception of Coastal Greenway Landscape Based on Structural Equation Model. Int. J. Environ. Res. Public Health 2023 , 20 , 2540. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Song, C. Evaluation of Greenway Usage Satisfaction Based on Structural Equation Modelling (SEM). Master’s Thesis, Central South University of Forestry and Technology, Changsha, China, 2019. (In Chinese). [ Google Scholar ]
  • Yang, H.W.; Xu, L.H.; Malisa, M.; Xu, M.L.; Hu, Q.T.; Liu, X.; Kim, H.; Yuan, J. Analysing observed categorical data in SPSS AMOS: A Bayesian approach. Int. J. Quant. Res. Educ. 2022 , 5 , 399–430. [ Google Scholar ] [ CrossRef ]
  • Watfa, M.K.; Ahmed, A.; Nada, S.; Kamal, J. A structural equation model to assess the impact of sustainability management on the success of construction projects. Int. J. Constr. Manag. 2023 , 23 , 1653–1664. [ Google Scholar ] [ CrossRef ]
  • Cheng, W.S. Exploration of Rural Supply Chain Finance Mode in Shandong Province and Credit Risk Warning in the “Internet+” Environment. J. Asia Trade Bus. 2021 , 8 , 31–42. [ Google Scholar ] [ CrossRef ]
  • Ye, Z.J.; Wei, Y.; Yang, S.L.; Li, P.P.; Yang, F.; Yang, B.Y.; Wang, L.B. IoT-enhanced smart road infrastructure systems for comprehensive real-time monitoring. Internet Things Cyber-Phys. Syst. 2024 , 4 , 235–249. [ Google Scholar ] [ CrossRef ]
  • Borghetti, F.; Beretta, G.; Bongiorno, N.; Padova, D.M. Road infrastructure maintenance: Operative method for interventions’ ranking. Transp. Res. Interdiscip. Perspect. 2024 , 25 , 101100. [ Google Scholar ] [ CrossRef ]
  • Zhang, C.; Zhang, N. The Cultivation of Farmers’ Social Capital from the Perspective of the New Rural Construction. Int. J. Bus. Manag. 2008 , 3 , 76. [ Google Scholar ] [ CrossRef ]
  • Nyame, J.A.; Ernest, K.; Alex, A.; Edward, B. Assessment of competencies to promote best project management practices for road infrastructure projects in Ghana. J. Eng. Des. Technol. 2024 , 22 , 438–455. [ Google Scholar ]
  • Meyer, J.; Manzini, P.; Lubbe, S.; Klopper, R. Improvement of infrastructure delivery through effective supply chain management at North West Provincial Department of Public Works and Roads. J. Public Adm. 2019 , 54 , 117–129. [ Google Scholar ]
  • Lu, M. The Problems and Countermeasures of Current Rural Ecological Environment Governance. Agric. Stud. 2024 , 8 , 002. [ Google Scholar ]
  • Zeng, H. Analysis on the Current Situation and Countermeasures of Rural Ecological Environment Governance. J. Glob. Econ. Bus. Financ. 2023 , 5 , 33. [ Google Scholar ]
  • Gutierrez-Velez, V.H.; Gilbert, M.R.; Kinsey, D.; Behm, J.E. Beyond the ‘urban’ and the ‘rural’: Conceptualizing a new generation of infrastructure systems to enable rural–urban sustainability. Curr. Opin. Environ. Sustain. 2022 , 56 , 101177. [ Google Scholar ] [ CrossRef ]
  • Bai, Z.F.; Han, L.; Liu, H.Q.; Li, L.Z.; Jiang, X.H. Applying the projection pursuit and DPSIR model for evaluation of ecological carrying capacity in Inner Mongolia Autonomous Region, China. Environ. Sci. Pollut. Res. Int. 2024 , 31 , 3259–3275. [ Google Scholar ]
  • Ma, L.; Qin, Y.T.; Zhang, H.; Zheng, J.; Hou, T.L.; Wen, Y.L. Improving Well-Being of Farmers Using Ecological Awareness around Protected Areas: Evidence from Qinling Region, China. Int. J. Environ. Res. Public Health 2021 , 18 , 9792. [ Google Scholar ] [ CrossRef ]


Regional Division | Prefecture-Level City | Number of Villages Surveyed | Number of Roads | Sample Size by Number of Roads: ≤1 / (1, 5) / ≥5 | Average Main Road Width (Meters)
Southern Shaanxi | Hanzhong | 13 | 28 | 1 / 12 / 0 | 3.5~4.5
Southern Shaanxi | Shangluo | 9 | 17 | 2 / 6 / 1 | 3.6~5
Southern Shaanxi | Ankang | 8 | 15 | 1 / 7 / 0 | 4~5
Guanzhong Plain | Xi’an | 5 | 26 | 0 / 1 / 4 | 6~18
Guanzhong Plain | Xianyang | 7 | 26 | 0 / 5 / 2 | 4~7
Guanzhong Plain | Baoji | 12 | 30 | 2 / 9 / 1 | 3~6
Guanzhong Plain | Weinan | 10 | 37 | 1 / 7 / 2 | 5~7
Guanzhong Plain | Tongchuan | 8 | 26 | 2 / 6 / 0 | 4~6
Guanzhong Plain | Yangling | 8 | 29 | 1 / 6 / 1 | 6~8
Northern Shaanxi | Yulin | 9 | 21 | 2 / 5 / 2 | 6~8
Northern Shaanxi | Yan’an | 12 | 23 | 2 / 9 / 1 | 5~7.8
Targets | Research Methods | Number of Respondents | Effective Number of Persons | Effective Percentage
Residents | Questionnaire | 412 | 379 | 0.92
Neighborhood councils | Visits, Questionnaires | 60 | 59 | 0.98
Township government | Visits, Telephone interview | 45 | 43 | 0.96
County government | Telephone interview | 21 | 19 | 0.90
Enterprises | Visit, Questionnaire | 28 | 27 | 0.96
Others | Questionnaire | 13 | 8 | 0.62
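The Effective Percentage column is simply the effective number of persons divided by the number of respondents, rounded to two decimals. A quick illustrative check in Python, using the survey table's own figures:

```python
# (target group, respondents, effective responses) from the survey table
survey_rows = [
    ("Residents", 412, 379),
    ("Neighborhood councils", 60, 59),
    ("Township government", 45, 43),
    ("County government", 21, 19),
    ("Enterprises", 28, 27),
    ("Others", 13, 8),
]

def effective_percentage(respondents, effective):
    """Share of usable responses, rounded to two decimals as in the table."""
    return round(effective / respondents, 2)

rates = {group: effective_percentage(n, k) for group, n, k in survey_rows}
# e.g. rates["Others"] is 0.62, the lowest effective percentage in the table
```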
Fit Parameter | Numerical Value | Standard
— | 2.3 | 1.5~5.0
NFI | 0.91 | ≥0.9
NNFI | 0.92 | ≥0.9
CFI | 0.93 | ≥0.9
IFC | 0.93 | ≥0.9
RFI | 0.92 | ≥0.9
GFI | 0.92 | ≥0.9
RMSEA | 0.04 | ≤0.1
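The fit table can be read as a set of pass/fail rules. In the sketch below, the table's unnamed first parameter (value 2.3, standard 1.5~5.0) is assumed to be the χ²/df ratio, which is a guess on my part; the other index names and thresholds follow the table as printed:

```python
# Acceptance rules mirroring the fit table; "chi2/df" is an assumed name
# for the table's unnamed first parameter.
FIT_STANDARDS = {
    "chi2/df": lambda v: 1.5 <= v <= 5.0,
    "NFI":   lambda v: v >= 0.9,
    "NNFI":  lambda v: v >= 0.9,
    "CFI":   lambda v: v >= 0.9,
    "IFC":   lambda v: v >= 0.9,
    "RFI":   lambda v: v >= 0.9,
    "GFI":   lambda v: v >= 0.9,
    "RMSEA": lambda v: v <= 0.1,
}

def failing_indices(values):
    """Return the fit indices that miss their standard; empty means acceptable fit."""
    return {name: v for name, v in values.items() if not FIT_STANDARDS[name](v)}

observed = {"chi2/df": 2.3, "NFI": 0.91, "NNFI": 0.92, "CFI": 0.93,
            "IFC": 0.93, "RFI": 0.92, "GFI": 0.92, "RMSEA": 0.04}
failures = failing_indices(observed)  # empty here: every index meets its standard
```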
Number | Constraint | Weight | Ranking
F1 | Policy Support and Public Participation | 0.21 | 3
F2 | Management Capacity | 0.27 | 2
F3 | Economic Capacity | 0.39 | 1
F4 | Terrain Features | 0.13 | 4
Latent Variable | Measured Variable | Weighting | Weighted Weight | Ranking | Ranking Within Indicator
Economic capacity (0.39) | Higher construction costs | 0.83 | 0.32 | 3 | 3
Economic capacity (0.39) | Insufficient construction funds | 0.79 | 0.31 | 5 | 5
Economic capacity (0.39) | Difficulty in attracting capital | 0.89 | 0.35 | 1 | 2
Economic capacity (0.39) | Weak economic strength | 0.87 | 0.34 | 2 | 2
Economic capacity (0.39) | Insufficient financial allocation | 0.81 | 0.32 | 4 | 4
Economic capacity (0.39) | Insufficient attention from superiors | 0.76 | 0.30 | 6 | 6
Management capacity (0.27) | Insufficient early planning | 0.87 | 0.23 | 7 | 1
Management capacity (0.27) | Inadequate management of functional departments | 0.82 | 0.22 | 9 | 3
Management capacity (0.27) | Lack of sustainability awareness in government agencies | 0.83 | 0.22 | 10 | 4
Management capacity (0.27) | Poor management and maintenance | 0.84 | 0.23 | 8 | 2
Management capacity (0.27) | Insufficient quota of building land | 0.76 | 0.21 | 12 | 6
Management capacity (0.27) | Lack of relevant laws and regulations | 0.82 | 0.22 | 11 | 5
Policy support and public participation (0.21) | Lack of policy support | 0.83 | 0.17 | 14 | 2
Policy support and public participation (0.21) | Insufficient public participation in decision-making | 0.72 | 0.15 | 17 | 5
Policy support and public participation (0.21) | Lack of policy incentives | 0.81 | 0.17 | 15 | 3
Policy support and public participation (0.21) | Population outflow | 0.89 | 0.19 | 13 | 1
Policy support and public participation (0.21) | Lack of public awareness of sustainability | 0.80 | 0.17 | 16 | 4
Terrain features (0.13) | More complex topography and geomorphology | 0.82 | 0.11 | 19 | 2
Terrain features (0.13) | Low ecological carrying capacity | 0.87 | 0.11 | 18 | 1
Terrain features (0.13) | Frequent ecological disasters | 0.73 | 0.09 | 20 | 3
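Each entry in the Weighted Weight column is the measured variable's weighting multiplied by the weight of its latent variable, rounded to two decimals. An illustrative sketch over a few rows sampled from the table:

```python
# Latent-variable weights from the constraint ranking table
latent_weights = {
    "Economic capacity": 0.39,
    "Management capacity": 0.27,
    "Policy support and public participation": 0.21,
    "Terrain features": 0.13,
}

# A few (latent variable, measured variable, weighting) rows from the table
rows = [
    ("Economic capacity", "Difficulty in attracting capital", 0.89),
    ("Economic capacity", "Weak economic strength", 0.87),
    ("Policy support and public participation", "Population outflow", 0.89),
    ("Terrain features", "Low ecological carrying capacity", 0.87),
]

def weighted_weight(latent, weighting):
    """Weighted weight = weighting x latent-variable weight, two decimals."""
    return round(weighting * latent_weights[latent], 2)

# Sorting the sampled rows by weighted weight reproduces the table's ordering.
ranked = sorted(rows, key=lambda r: weighted_weight(r[0], r[2]), reverse=True)
```

Note that the same weighting (0.89) contributes 0.35 under Economic capacity but only 0.19 under Policy support and public participation, which is how the latent-variable weights drive the overall ranking.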

Share and Cite

Li, Q.; Lv, S.; Cui, J.; Hou, D.; Liu, Y.; Li, W. Sustainability Constraints on Rural Road Infrastructure. Sustainability 2024, 16, 7066. https://doi.org/10.3390/su16167066



Biostatistics Graduate Program

Siwei Zhang is first author of JAMIA paper.

Posted by duthip1 on Tuesday, August 13, 2024 in News .

Congratulations to PhD candidate Siwei Zhang , alumnus Nicholas Strayer (PhD 2020; now at Posit), senior biostatistician Yajing Li , and assistant professor Yaomin Xu on the publication of “ PheMIME: an interactive web app and knowledge base for phenome-wide, multi-institutional multimorbidity analysis ” in the  Journal of the American Medical Informatics Association on August 10. As stated in the abstract, “PheMIME provides an extensive multimorbidity knowledge base that consolidates data from three EHR systems, and it is a novel interactive tool designed to analyze and visualize multimorbidities across multiple EHR datasets. It stands out as the first of its kind to offer extensive multimorbidity knowledge integration with substantial support for efficient online analysis and interactive visualization.” Collaborators on the paper include members of Vanderbilt’s Division of Genetic Medicine, Department of Biomedical Informatics, Department of Urology, Department of Obstetrics and Gynecology, Division of Hematology and Oncology, VICTR , Department of Pharmacology, Center for Drug Safety and Immunology, and Department of Psychiatry and Behavioral Sciences, as well as colleagues at Massachusetts General Hospital, North Carolina State University, Murdoch University (Australia), and the Broad Institute. Dr. Xu is corresponding author.

Three-part figure comprising visualization tools for analyzing schizophrenia

Tags: cloud computing , EHR , methods , network analysis , R , schizophrenia , Shiny


Dr. Michael Andreae’s Manuscript Wins Best Paper of the Year Award


The University of Utah School of Medicine Department of Anesthesiology has research opportunities for students and a research grant program for our academic faculty.

We are thrilled to announce that Dr. Michael Andreae and his research team have been honored with the Best Paper of the Year award by the Journal of Cognitive Engineering and Decision Making for their manuscript titled, "Adapting Cognitive Task Analysis Methods for Use in a Large Sample Simulation Study of High-Risk Healthcare Events." This prestigious award recognizes their exceptional work in the field of medical decision-making.

The manuscript is part of an AHRQ-funded multi-center study led by Dr. Weinger at Vanderbilt University. It delves into the use of cognitive task analysis methods to study decision-making processes during simulated perioperative crises. By adapting cognitive interviews for a large-scale trial involving over 100 anesthesiologists, Dr. Andreae’s team has provided groundbreaking insights into how clinicians navigate high-stakes medical situations.

Congratulations to Dr. Andreae and his team for this outstanding achievement and for advancing our understanding of critical decision-making in healthcare.

The cover of the Journal of Cognitive Engineering and Decision Making next to a portrait of Dr. Andreae. The text reads, "Congrats Dr. Andreae!"


The Beginner's Guide to Statistical Analysis | 5 Steps & Examples

Statistical analysis means investigating trends, patterns, and relationships using quantitative data . It is an important research tool used by scientists, governments, businesses, and other organizations.

To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process . You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure.

After collecting data from your sample, you can organize and summarize the data using descriptive statistics . Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalize your findings.

This article is a practical introduction to statistical analysis for students and researchers. We’ll walk you through the steps using two research examples. The first investigates a potential cause-and-effect relationship, while the second investigates a potential correlation between variables.

Table of contents

  • Step 1: Write your hypotheses and plan your research design
  • Step 2: Collect data from a sample
  • Step 3: Summarize your data with descriptive statistics
  • Step 4: Test hypotheses or make estimates with inferential statistics
  • Step 5: Interpret your results
  • Other interesting articles

To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design.

Writing statistical hypotheses

The goal of research is often to investigate a relationship between variables within a population . You start with a prediction, and use statistical analysis to test that prediction.

A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data.

While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.

  • Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers.
  • Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.
  • Null hypothesis: Parental income and GPA have no relationship with each other in college students.
  • Alternative hypothesis: Parental income and GPA are positively correlated in college students.

Planning your research design

A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on.

First, decide whether your research will use a descriptive, correlational, or experimental design. Experiments directly influence variables, whereas descriptive and correlational studies only measure variables.

  • In an experimental design , you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression.
  • In a correlational design , you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests.
  • In a descriptive design , you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.

Your research design also concerns whether you’ll compare participants at the group level or individual level, or both.

  • In a between-subjects design , you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn’t).
  • In a within-subjects design , you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
  • In a mixed (factorial) design , one variable is altered between subjects and another is altered within subjects (e.g., pretest and posttest scores from participants who either did or didn’t do a meditation exercise).
Example: Experimental research design
First, you’ll take baseline test scores from participants. Then, your participants will undergo a 5-minute meditation exercise. Finally, you’ll record participants’ scores from a second math test.

In this experiment, the independent variable is the 5-minute meditation exercise, and the dependent variable is the math test score from before and after the intervention.

Example: Correlational research design
In a correlational study, you test whether there is a relationship between parental income and GPA in graduating college students. To collect your data, you will ask participants to fill in a survey and self-report their parents’ incomes and their own GPA.

Measuring variables

When planning a research design, you should operationalize your variables and decide exactly how you will measure them.

For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain:

  • Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability).
  • Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).

Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical.

Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data.
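As a quick illustration of why measurement level matters, Python's standard `statistics` module can take the mode of categorical data, but a mean is only defined for quantitative data (the values below are invented for illustration):

```python
import statistics

# Quantitative (ratio) data: a mean is meaningful.
ages = [19, 21, 20, 22, 21]
print(statistics.mean(ages))      # 20.6

# Categorical (nominal) data: only a mode makes sense.
genders = ["female", "male", "female", "nonbinary"]
print(statistics.mode(genders))   # female

# statistics.mean(genders) would raise a TypeError,
# because averaging category labels is undefined.
```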

In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.

Variable Type of data
Age Quantitative (ratio)
Gender Categorical (nominal)
Race or ethnicity Categorical (nominal)
Baseline test scores Quantitative (interval)
Final test scores Quantitative (interval)
Parental income Quantitative (ratio)
GPA Quantitative (interval)


In most cases, it’s too difficult or expensive to collect data from every member of the population you’re interested in studying. Instead, you’ll collect data from a sample.

Statistical analysis allows you to apply your findings beyond your own sample as long as you use appropriate sampling procedures . You should aim for a sample that is representative of the population.

Sampling for statistical analysis

There are two main approaches to selecting a sample.

  • Probability sampling: every member of the population has a chance of being selected for the study through random selection.
  • Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.

In theory, for highly generalizable findings, you should use a probability sampling method. Random selection reduces several types of research bias , like sampling bias , and ensures that data from your sample is actually typical of the population. Parametric tests can be used to make strong statistical inferences when data are collected using probability sampling.

But in practice, it’s rarely possible to gather the ideal sample. While non-probability samples are more at risk for biases like self-selection bias , they are much easier to recruit and collect data from. Non-parametric tests are more appropriate for non-probability samples, but they result in weaker inferences about the population.

If you want to use parametric tests for non-probability samples, you have to make the case that:

  • your sample is representative of the population you’re generalizing your findings to.
  • your sample lacks systematic bias.

Keep in mind that external validity means that you can only generalize your conclusions to others who share the characteristics of your sample. For instance, results from Western, Educated, Industrialized, Rich and Democratic samples (e.g., college students in the US) aren’t automatically applicable to all non-WEIRD populations.

If you apply parametric tests to data from non-probability samples, be sure to elaborate on the limitations of how far your results can be generalized in your discussion section .

Create an appropriate sampling procedure

Based on the resources available for your research, decide on how you’ll recruit participants.

  • Will you have resources to advertise your study widely, including outside of your university setting?
  • Will you have the means to recruit a diverse sample that represents a broad population?
  • Do you have time to contact and follow up with members of hard-to-reach groups?

Your participants are self-selected by their schools. Although you’re using a non-probability sample, you aim for a diverse and representative sample.

Example: Sampling (correlational study)
Your main population of interest is male college students in the US. Using social media advertising, you recruit senior-year male college students from a smaller subpopulation: seven universities in the Boston area.

Calculate sufficient sample size

Before recruiting participants, decide on your sample size either by looking at other studies in your field or using statistics. A sample that’s too small may be unrepresentative of the population, while a sample that’s too large will be more costly than necessary.

There are many sample size calculators online. Different formulas are used depending on whether you have subgroups or how rigorous your study should be (e.g., in clinical research). As a rule of thumb, a minimum of 30 units per subgroup is usually necessary.

To use these calculators, you have to understand and input these key components:

  • Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Statistical power : the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher.
  • Expected effect size : a standardized indication of how large the expected result of your study will be, usually based on other similar studies.
  • Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
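As an illustration only (not the exact formula any particular online calculator uses), the standard normal-approximation for comparing two group means combines these inputs. With a significance level of 0.05 (two-tailed), power of 0.80, and an expected standardized effect size of 0.5, it gives roughly 63 participants per group:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means,
    using the normal-approximation sample size formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(effect_size=0.5))  # 63
```

Dedicated tools (for example, statsmodels' `TTestIndPower.solve_power`) use the exact t distribution and give a slightly more conservative answer, but the pattern is the same: larger expected effects need smaller samples.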

Once you’ve collected all of your data, you can inspect them and calculate descriptive statistics that summarize them.

Inspect your data

There are various ways to inspect your data, including the following:

  • Organizing data from each variable in frequency distribution tables .
  • Displaying data from a key variable in a bar chart to view the distribution of responses.
  • Visualizing the relationship between two variables using a scatter plot .

By visualizing your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data.

A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.

Mean, median, mode, and standard deviation in a normal distribution
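A property worth remembering: in a normal distribution, about 68% of values fall within one standard deviation of the mean and about 95% within two. The stdlib `NormalDist` confirms this:

```python
from statistics import NormalDist

std_normal = NormalDist(mu=0, sigma=1)

# Probability mass within ±1 and ±2 standard deviations of the mean.
within_1sd = std_normal.cdf(1) - std_normal.cdf(-1)
within_2sd = std_normal.cdf(2) - std_normal.cdf(-2)

print(round(within_1sd, 4))  # 0.6827
print(round(within_2sd, 4))  # 0.9545
```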

In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions.

Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.
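One common systematic approach (though not the only one) is the 1.5 × IQR rule: flag any value more than 1.5 interquartile ranges outside the quartiles. A minimal sketch with made-up data:

```python
import statistics

data = [1, 2, 2, 3, 3, 3, 4, 4, 50]  # 50 is an obvious outlier

# method="inclusive" matches the common linear-interpolation quartile definition.
q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [x for x in data if x < low or x > high]
print(outliers)  # [50]
```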

Calculate measures of central tendency

Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported:

  • Mode : the most popular response or value in the data set.
  • Median : the value in the exact middle of the data set when ordered from low to high.
  • Mean : the sum of all values divided by the number of values.

However, depending on the shape of the distribution and level of measurement, only one or two of these measures may be appropriate. For example, many demographic characteristics can only be described using the mode or proportions, while a variable like reaction time may not have a mode at all.
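All three measures of central tendency are one-liners with the stdlib `statistics` module (the scores below are invented for illustration):

```python
import statistics

scores = [66, 70, 70, 75, 88]  # made-up test scores

print(statistics.mean(scores))    # 73.8
print(statistics.median(scores))  # 70
print(statistics.mode(scores))    # 70
```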

Calculate measures of variability

Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported:

  • Range : the highest value minus the lowest value of the data set.
  • Interquartile range : the range of the middle half of the data set.
  • Standard deviation : the average distance between each value in your data set and the mean.
  • Variance : the square of the standard deviation.

Once again, the shape of the distribution and level of measurement should guide your choice of variability statistics. The interquartile range is the best measure for skewed distributions, while standard deviation and variance provide the best information for normal distributions.
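The four measures above can also be computed with the stdlib `statistics` module (made-up data; `pstdev` and `pvariance` are the population versions, while `stdev` and `variance` would give the sample versions):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up values, already sorted

rng = max(data) - min(data)                              # range: 7
q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1                                            # interquartile range: 1.5
sd = statistics.pstdev(data)                             # population SD: 2.0
var = statistics.pvariance(data)                         # population variance: 4.0

print(rng, iqr, sd, var)
```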

Using your table, you should check whether the units of the descriptive statistics are comparable for pretest and posttest scores. For example, are the variance levels similar across the groups? Are there any extreme values? If there are, you may need to identify and remove extreme outliers in your data set or transform your data before performing a statistical test.

 | Pretest scores | Posttest scores
Mean | 68.44 | 75.25
Standard deviation | 9.43 | 9.88
Variance | 88.96 | 97.96
Range | 36.25 | 45.12
n | 30 | 30

From this table, we can see that the mean score increased after the meditation exercise, and the variances of the two scores are comparable. Next, we can perform a statistical test to find out if this improvement in test scores is statistically significant in the population.

Example: Descriptive statistics (correlational study)
After collecting data from 653 students, you tabulate descriptive statistics for annual parental income and GPA.

It’s important to check whether you have a broad range of data points. If you don’t, your data may be skewed towards some groups more than others (e.g., high academic achievers), and only limited inferences can be made about a relationship.

 | Parental income (USD) | GPA
Mean | 62,100 | 3.12
Standard deviation | 15,000 | 0.45
Variance | 225,000,000 | 0.16
Range | 8,000–378,000 | 2.64–4.00
n | 653 | 653

A number that describes a sample is called a statistic , while a number describing a population is called a parameter . Using inferential statistics , you can make conclusions about population parameters based on sample statistics.

Researchers often use two main methods (simultaneously) to make inferences in statistics.

  • Estimation: calculating population parameters based on sample statistics.
  • Hypothesis testing: a formal process for testing research predictions about the population using samples.

You can make two types of estimates of population parameters from sample statistics:

  • A point estimate : a value that represents your best guess of the exact parameter.
  • An interval estimate : a range of values that represent your best guess of where the parameter lies.

If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper.

You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters).

There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate.

A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
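Using the posttest summary statistics from the experimental example (mean 75.25, SD 9.88, n = 30), a 95% z-based confidence interval can be sketched as follows. This uses the z score for simplicity; with n = 30, a t-based interval would be slightly wider:

```python
from math import sqrt
from statistics import NormalDist

mean, sd, n = 75.25, 9.88, 30   # summary statistics from the example table

se = sd / sqrt(n)                     # standard error of the mean
z = NormalDist().inv_cdf(0.975)       # z score for a 95% interval (≈ 1.96)
margin = z * se

ci = (round(mean - margin, 2), round(mean + margin, 2))
print(ci)  # (71.71, 78.79)
```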

Hypothesis testing

Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.

Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs:

  • A test statistic tells you how much your data differs from the null hypothesis of the test.
  • A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.

Statistical tests come in three main varieties:

  • Comparison tests assess group differences in outcomes.
  • Regression tests assess cause-and-effect relationships between variables.
  • Correlation tests assess relationships between variables without assuming causation.

Your choice of statistical test depends on your research questions, research design, sampling method, and data characteristics.

Parametric tests

Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.

A regression models the extent to which changes in a predictor variable result in changes in the outcome variable(s).

  • A simple linear regression includes one predictor variable and one outcome variable.
  • A multiple linear regression includes two or more predictor variables and one outcome variable.

Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean.

  • A t test is for exactly 1 or 2 groups when the sample is small (30 or fewer).
  • A z test is for exactly 1 or 2 groups when the sample is large.
  • An ANOVA is for 3 or more groups.

The z and t tests have subtypes based on the number and types of samples and the hypotheses:

  • If you have only one sample that you want to compare to a population mean, use a one-sample test .
  • If you have paired measurements (within-subjects design), use a dependent (paired) samples test .
  • If you have completely separate measurements from two unmatched groups (between-subjects design), use an independent (unpaired) samples test .
  • If you expect a difference between groups in a specific direction, use a one-tailed test .
  • If you don’t have any expectations for the direction of a difference between groups, use a two-tailed test .

The only parametric correlation test is Pearson’s r . The correlation coefficient ( r ) tells you the strength of a linear relationship between two quantitative variables.

However, to test whether the correlation in the sample is strong enough to be important in the population, you also need to perform a significance test of the correlation coefficient, usually a t test, to obtain a p value. This test uses your sample size to calculate how much the correlation coefficient differs from zero in the population.

You use a dependent-samples, one-tailed t test to assess whether the meditation exercise significantly improved math test scores. The test gives you:

  • a t value (test statistic) of 3.00
  • a p value of 0.0028
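The dependent-samples t statistic can be computed by hand from the difference scores: t = mean(d) / (sd(d) / √n). A sketch with invented pre/post scores (not the study's actual data):

```python
from math import sqrt
from statistics import mean, stdev

pre = [70, 68, 75, 72]    # hypothetical pretest scores
post = [74, 70, 77, 75]   # hypothetical posttest scores

diffs = [b - a for a, b in zip(pre, post)]   # within-subject differences
n = len(diffs)

t = mean(diffs) / (stdev(diffs) / sqrt(n))   # paired-samples t statistic
print(round(t, 2))  # 5.74

# For a one-tailed test with df = n - 1 = 3, compare t to the critical
# value from a t table (2.353 at alpha = 0.05): here t > 2.353, so the
# improvement is significant at the 5% level.
```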

Although Pearson’s r is a test statistic, it doesn’t tell you anything about how significant the correlation is in the population. You also need to test whether this sample correlation coefficient is large enough to demonstrate a correlation in the population.

A t test can also determine how significantly a correlation coefficient differs from zero based on sample size. Since you expect a positive correlation between parental income and GPA, you use a one-sample, one-tailed t test. The t test gives you:

  • a t value of 3.08
  • a p value of 0.001
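Both numbers can be reproduced from first principles: r from the sums of products of deviations, and its t statistic from t = r·√((n − 2) / (1 − r²)). A sketch with a tiny invented dataset (not the study's actual data):

```python
from math import sqrt
from statistics import mean

x = [1, 2, 3, 4, 5]   # hypothetical predictor values
y = [2, 4, 5, 4, 5]   # hypothetical outcome values
n = len(x)

mx, my = mean(x), mean(y)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

r = sxy / sqrt(sxx * syy)              # Pearson correlation coefficient
t = r * sqrt((n - 2) / (1 - r ** 2))   # t statistic for H0: rho = 0

print(round(r, 4), round(t, 4))  # 0.7746 2.1213
```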


The final step of statistical analysis is interpreting your results.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.

Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.

This means that you believe the meditation intervention, rather than random factors, directly caused the increase in test scores.

Example: Interpret your results (correlational study)
You compare your p value of 0.001 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This indicates a statistically significant correlation between parental income and GPA in male college students.

Note that correlation doesn’t always mean causation, because there are often many underlying factors contributing to a complex variable like GPA. Even if one variable is related to another, this may be because of a third variable influencing both of them, or indirect links between the two variables.

Effect size

A statistically significant result doesn’t necessarily mean that there are important real life applications or clinical outcomes for a finding.

In contrast, the effect size indicates the practical significance of your results. It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper .

With a Cohen’s d of 0.72, there’s medium to high practical significance to your finding that the meditation exercise improved test scores.

Example: Effect size (correlational study)
To determine the effect size of the correlation coefficient, you compare your Pearson’s r value to Cohen’s effect size criteria.
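Cohen's d can be sketched as the mean difference divided by a pooled standard deviation. The example below uses the pre/post summary statistics from the descriptive table and the independent-groups pooled-SD formula; a paired design is often instead scaled by the SD of the difference scores, which is why published d values for the same data can differ slightly:

```python
from math import sqrt

m_pre, sd_pre = 68.44, 9.43     # pretest summary from the example table
m_post, sd_post = 75.25, 9.88   # posttest summary

# Pooled SD for equal group sizes; d is the standardized mean difference.
pooled_sd = sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
d = (m_post - m_pre) / pooled_sd

print(round(d, 2))  # 0.71

# Rough benchmarks (Cohen): 0.2 small, 0.5 medium, 0.8 large.
```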

Decision errors

Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

You can aim to minimize the risk of these errors by selecting an optimal significance level and ensuring high power . However, there’s a trade-off between the two errors, so a fine balance is necessary.
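The significance level is exactly the long-run Type I error rate. A quick simulation makes this concrete: draw many samples from a population where the null hypothesis is true, run a two-tailed z test on each at alpha = 0.05, and count how often the test falsely rejects (a sketch with a seeded random generator):

```python
import random
from math import sqrt
from statistics import NormalDist

random.seed(42)
alpha, n, trials = 0.05, 25, 2000
norm = NormalDist()

false_rejections = 0
for _ in range(trials):
    # Sample from a population where the null (mean = 0) is true.
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(sample) / n) / (1 / sqrt(n))   # z statistic, known sigma = 1
    p = 2 * (1 - norm.cdf(abs(z)))          # two-tailed p value
    if p < alpha:
        false_rejections += 1

print(false_rejections / trials)  # close to 0.05
```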

Frequentist versus Bayesian statistics

Traditionally, frequentist statistics emphasizes null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations.

Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis rather than making a conclusion about rejecting the null hypothesis or not.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s  t -distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval

Methodology

  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hostile attribution bias
  • Affect heuristic


Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • Open access
  • Published: 14 August 2024

A Scottish provenance for the Altar Stone of Stonehenge

  • Anthony J. I. Clarke   ORCID: orcid.org/0000-0002-0304-0484 1 ,
  • Christopher L. Kirkland   ORCID: orcid.org/0000-0003-3367-8961 1 ,
  • Richard E. Bevins 2 ,
  • Nick J. G. Pearce   ORCID: orcid.org/0000-0003-3157-9564 2 ,
  • Stijn Glorie 3 &
  • Rob A. Ixer 4  

Nature volume 632, pages 570–575 (2024)


  • Archaeology

Understanding the provenance of megaliths used in the Neolithic stone circle at Stonehenge, southern England, gives insight into the culture and connectivity of prehistoric Britain. The source of the Altar Stone, the central recumbent sandstone megalith, has remained unknown, with recent work discounting an Anglo-Welsh Basin origin 1 , 2 . Here we present the age and chemistry of detrital zircon, apatite and rutile grains from within fragments of the Altar Stone. The detrital zircon load largely comprises Mesoproterozoic and Archaean sources, whereas rutile and apatite are dominated by a mid-Ordovician source. The ages of these grains indicate derivation from an ultimate Laurentian crystalline source region that was overprinted by Grampian (around 460 million years ago) magmatism. Detrital age comparisons to sedimentary packages throughout Britain and Ireland reveal a remarkable similarity to the Old Red Sandstone of the Orcadian Basin in northeast Scotland. Such a provenance implies that the Altar Stone, a 6 tonne shaped block, was sourced at least 750 km from its current location. The difficulty of long-distance overland transport of such massive cargo from Scotland, navigating topographic barriers, suggests that it was transported by sea. Such routing demonstrates a high level of societal organization with intra-Britain transport during the Neolithic period.


Stonehenge, the Neolithic standing stone circle located on the Salisbury Plain in Wiltshire, England, offers valuable insight into prehistoric Britain. Construction at Stonehenge began as early as 3000  bc , with subsequent modifications during the following two millennia 3 , 4 . The megaliths of Stonehenge are divided into two major categories: sarsen stones and bluestones (Fig. 1a ). The larger sarsens comprise duricrust silcrete predominantly sourced from the West Woods, Marlborough, approximately 25 km north of Stonehenge 5 , 6 . Bluestone, the generic term for rocks considered exotic to the local area, includes volcanic tuff, rhyolite, dolerite and sandstone lithologies 4 (Fig. 1a ). Some lithologies are linked with Neolithic quarrying sites in the Mynydd Preseli area of west Wales 7 , 8 . An unnamed Lower Palaeozoic sandstone, associated with the west Wales area on the basis of acritarch fossils 9 , is present only as widely disseminated debitage at Stonehenge and possibly as buried stumps (Stones 40g and 42c).

Figure 1

a , Plan view of Stonehenge showing exposed constituent megaliths and their provenance. The plan of Stonehenge was adapted from ref.  6 under a CC BY 4.0 license. Changes in scale and colour were made, and annotations were added. b , An annotated photograph shows the Altar Stone during a 1958 excavation. The Altar Stone photograph is from the Historic England archive. Reuse is not permitted.

The central megalith of Stonehenge, the Altar Stone (Stone 80), is the largest of the bluestones, measuring 4.9 × 1.0 × 0.5 m, and is a recumbent stone (Fig. 1b ), weighing 6 t and composed of pale green micaceous sandstone with distinctive mineralogy 1 , 2 , 10 (containing baryte, calcite and clay minerals, with a notable absence of K-feldspar) (Fig. 2 ).

Figure 2

Minerals with a modal abundance above 0.5% are shown with compositional values averaged across both thin sections. U–Pb ablation pits from laser ablation inductively coupled plasma mass spectrometry (LA-ICP–MS) are shown with age (in millions of years ago, Ma), with uncertainty at the 2 σ level.

Previous petrographic work on the Altar Stone has implied an association to the Old Red Sandstone 10 , 11 , 12 (ORS). The ORS is a late Silurian to Devonian sedimentary rock assemblage that crops out widely throughout Great Britain and Ireland (Extended Data Fig. 1 ). ORS lithologies are dominated by terrestrial siliciclastic sedimentary rocks deposited in continental fluvial, lacustrine and aeolian environments 13 . Each ORS basin reflects local subsidence and sediment infill and thus contains proximal crystalline signatures 13 , 14 .

Constraining the provenance of the Altar Stone could give insights into the connectivity of Neolithic people who left no written record 15 . When the Altar Stone arrived at Stonehenge is uncertain; however, it may have been placed within the central trilithon horseshoe during the second construction phase around 2620–2480  bc 3 . Whether the Altar Stone once stood upright as an approximately 4 m high megalith is unclear 15 ; nevertheless, the current arrangement has Stones 55b and 156 from the collapsed Great Trilithon resting atop the prone and broken Altar Stone (Fig. 1b ).

An early proposed source for the Altar Stone from Mill Bay, Pembrokeshire (Cosheston Subgroup of the Anglo-Welsh ORS Basin), close to the Mynydd Preseli source of the doleritic and rhyolitic bluestones, strongly influenced the notion of a sea transport route via the Bristol Channel 12 . However, inconsistencies in petrography and detrital zircon ages between the Altar Stone and the Cosheston Subgroup have ruled this source out 1 , 11 . Nonetheless, a source from elsewhere in the ORS of the Anglo-Welsh Basin was still considered likely, with an inferred collection and overland transport of the Altar Stone en route to Stonehenge from the Mynydd Preseli 1 . However, a source from the Senni Formation is likewise inconsistent with geochemical and petrographic data, which show that the Anglo-Welsh Basin is highly unlikely to be the source 2 . Thus, the ultimate provenance of the Altar Stone had remained an open question.

Studies of detrital mineral grains are widely deployed to address questions throughout the Earth sciences and have utility in archaeological investigations 16 , 17 . Sedimentary rocks commonly contain a detrital component derived from a crystalline igneous basement, which may reflect a simple or complex history of erosion, transport and deposition cycles. This detrital cargo can fingerprint a sedimentary rock and its hinterland. More detailed insights become evident when a multi-mineral strategy is implemented, which benefits from the varying degrees of robustness to sedimentary transportation in the different minerals 18 , 19 , 20 .

Here, we present in situ U–Pb, Lu–Hf and trace element isotopic data for zircon, apatite and rutile from two fragments of the Altar Stone collected at Stonehenge: MS3 and 2010K.240 21 , 22 . In addition, we present comparative apatite U–Pb dates for the Orcadian Basin from Caithness and Orkney. We utilize statistical tools (Fig. 3 ) to compare the obtained detrital mineral ages and chemistry (Supplementary Information  1 – 3 ) to crystalline terranes and ORS successions across Great Britain, Ireland and Europe (Fig. 4 and Extended Data Fig. 1 ).

Figure 3

a , Multidimensional scaling (MDS) plot of concordant zircon U–Pb ages from the Altar Stone and comparative age datasets, with ellipses at the 95% confidence level 58 . DIM 1 and DIM 2, dimensions 1 and 2. b , Cumulative probability plot of zircon U–Pb ages from crystalline terranes, the Orcadian Basin and the Altar Stone. For a cumulative probability plot of all ORS basins, see Extended Data Fig. 8 .

Figure 4

a , Schematic map of Britain, showing outcrops of ORS and other Devonian sedimentary rocks, basement terranes and major faults. Potential Caledonian source plutons are colour-coded on the basis of age 28 . b , Kernel density estimate diagrams displaying zircon U–Pb age (histogram) and apatite Lu–Hf age (dashed line) spectra from the Altar Stone, the Orcadian Basin 25 and plausible crystalline source terranes. The apatite age components for the Altar Stone and Orcadian Basins are shown below their respective kernel density estimates. Extended Data Fig. 3 contains kernel density estimates of other ORS and New Red Sandstone (NRS) age datasets.

Laurentian basement signatures

The crystalline basement terranes of Great Britain and Ireland, from north to south, are Laurentia, Ganderia, Megumia and East Avalonia (Fig. 4a and Extended Data Fig. 1 ). Cadomia-Armorica is south of the Rheic Suture and encompasses basement rocks in western Europe, including northern France and Spain. East Avalonia, Megumia and Ganderia are partly separated by the Menai Strait Fault System (Fig. 4a ). Each terrane has discrete age components, which have imparted palaeogeographic information into overlying sedimentary basins 13 , 14 , 23 . Laurentia was a palaeocontinent that collided with Baltica and Avalonia (a peri-Gondwanan microcontinent) during the early Palaeozoic Caledonian Orogeny to form Laurussia 14 , 24 . West Avalonia is a terrane that includes parts of eastern Canada and comprised the western margin of Avalonia (Extended Data Fig. 1 ).

Statistical comparisons, using a Kolmogorov–Smirnov test, between zircon ages from the Laurentian crystalline basement and the Altar Stone indicate that at a 95% confidence level, no distinction in provenance is evident between Altar Stone detrital zircon U–Pb ages and those from the Laurentian basement. That is, we cannot reject the null hypothesis that both samples are from the same underlying age distribution (Kolmogorov–Smirnov test: P  > 0.05) (Fig. 3a ).

Detrital zircon age components, defined by concordant analyses from at least 4 grains in the Altar Stone, include maxima at 1,047, 1,091, 1,577, 1,663 and 1,790 Ma (Extended Data Fig. 2 ), corresponding to known tectonomagmatic events and sources within Laurentia and Baltica, including the Grenville (1,095–980 Ma), Labrador (1,690–1,590 Ma), Gothian (1,660–1,520 Ma) and Svecokarelian (1,920–1,770 Ma) orogenies 25 .

Laurentian terranes are crystalline lithologies north of the Iapetus Suture Zone (which marks the collision zone between Laurentia and Avalonia) and include the Southern Uplands, Midland Valley, Grampian, Northern Highlands and Hebridean Terranes (Fig. 4a ). Together, these terranes preserve a Proterozoic to Archaean record of zircon production 24 , distinct from the southern Gondwanan-derived terranes of Britain 20 , 26 (Fig. 4a and Extended Data Fig. 3 ).

Age data from Altar Stone rutile grains also point towards an ultimate Laurentian source with several discrete age components (Extended Data Fig. 4 and Supplementary Information  1 ). Group 2 rutile U–Pb analyses from the Altar Stone include Proterozoic ages from 1,724 to 591 Ma, with 3 grains constituting an age peak at 1,607 Ma, overlapping with Laurentian magmatism, including the Labrador and Pinwarian (1,690–1,380 Ma) orogenies 24 . Southern terranes in Britain are not characterized by a large Laurentian (Mesoproterozoic) crystalline age component 25 (Fig. 4b and Extended Data Fig. 3 ). Instead, terranes south of the Iapetus Suture are defined by Neoproterozoic to early Palaeozoic components, with a minor component from around two billion years ago (Figs. 3b and  4b ).

U–Pb analyses of apatite from the Altar Stone define two distinct age groupings. Group 2 apatite U–Pb analyses define a lower intercept age of 1,018 ± 24 Ma ( n  = 9) (Extended Data Fig. 5 ), which overlaps, within uncertainty, with a zircon age component at 1,047 Ma, consistent with a Grenville source 25 . Apatite Lu–Hf dates at 1,496 and 1,151 Ma also imply distinct Laurentian sources 25 (Fig. 4b , Extended Data Fig. 6 and Supplementary Information  2 ). Ultimately, the presence of Grenvillian apatite in the Altar Stone suggests direct derivation from the Laurentian basement, given the lability of apatite during prolonged chemical weathering 20 , 27 .

Grampian Terrane detrital grains

Apatite and rutile U–Pb analyses from the Altar Stone are dominated by regressions from common Pb that yield lower intercepts of 462 ± 4 Ma ( n  = 108) and 451 ± 8 Ma ( n  = 83), respectively (Extended Data Figs. 4 and 5 ). A single concordant zircon analysis also yields an early Palaeozoic age of 498 ± 17 Ma. Hence, with uncertainty from both lower intercepts, Group 1 apatite and rutile analyses demonstrate a mid-Ordovician (443–466 Ma) age component in the Altar Stone. These mid-Ordovician ages are confirmed by in situ apatite Lu–Hf analyses, which define a lower intercept of 470 ± 29 Ma ( n  = 16) (Extended Data Fig. 6 and Supplementary Information  2 ).
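
The "regression from common Pb to a lower intercept" described above can be illustrated numerically in Tera–Wasserburg space. The sketch below uses invented, exactly collinear analyses (a ~462 Ma radiogenic end-member mixed with common Pb at 207Pb/206Pb = 0.86); real data would require uncertainty-weighted regression, and none of these values are the study's measurements.

```python
import math
import numpy as np

L238, L235, U238_235 = 1.55125e-10, 9.8485e-10, 137.88

def concordia_tw(t_ma):
    """Tera-Wasserburg concordia point (x = 238U/206Pb*, y = 207Pb*/206Pb*)."""
    t = t_ma * 1e6
    e8 = math.exp(L238 * t) - 1.0
    e5 = math.exp(L235 * t) - 1.0
    return 1.0 / e8, e5 / (U238_235 * e8)

# Hypothetical discordant analyses: mixtures of ~462 Ma radiogenic Pb
# with a common-Pb component at 207Pb/206Pb = 0.86 (carrying no U)
xr, yr = concordia_tw(462.0)
f = np.linspace(0.2, 0.9, 8)        # radiogenic fraction of total 206Pb
x_obs = f * xr
y_obs = f * yr + (1.0 - f) * 0.86

# Straight-line fit through the discordia array
slope, intercept = np.polyfit(x_obs, y_obs, 1)

# Lower intercept: solve line(x(t)) = concordia(t) by bisection
def gap(t_ma):
    x, y = concordia_tw(t_ma)
    return slope * x + intercept - y

lo, hi = 100.0, 1000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(lo) * gap(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
age_ma = 0.5 * (lo + hi)
print(round(age_ma, 1))  # recovers the radiogenic end-member age
```

The y-axis intercept of the fitted line recovers the common-Pb 207Pb/206Pb composition, which is the quantity reported as 207Pb/206Pb i in the following sections.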

Throughout the Altar Stone are sub-planar 100–200-µm bands of concentrated heavy resistive minerals. These resistive minerals are interpreted to be magmatic in origin, given internal textures (oscillatory zonation), lack of mineral overgrowths (in all dated minerals) (Fig. 2 ) and the igneous apatite trace element signatures 27 (Extended Data Fig. 7 and Supplementary Information  3 ). Moreover, there is a general absence of detrital metamorphic zircon grains, further supporting a magmatic origin for these grains.

The most appropriate source region for such mid-Ordovician grains within Laurentian basement is the Grampian Terrane of northeast Scotland (Fig. 4a ). Situated between the Great Glen Fault to the north and the Highland Boundary Fault to the south, the terrane comprises Neoproterozoic to Lower Palaeozoic metasediments termed the Dalradian Supergroup 28 , which are intruded by a compositionally diverse suite of early Palaeozoic granitoids and gabbros (Fig. 4a ). The 466–443 Ma age component from Group 1 apatite and rutile U–Pb analyses overlaps with the terminal stages of Grampian magmatism and subsequent granite pluton emplacement north of the Highland Boundary Fault 28 (Fig. 4a ).

Geochemical classification plots for the Altar Stone apatite imply a compositionally diverse source, much like the lithological diversity within the Grampian Terrane 28 , with 61% of apatite classified as coming from felsic sources, 35% from mafic sources and 4% from alkaline sources (Extended Data Fig. 7 and Supplementary Information  3 ). Specifically, igneous rocks within the Grampian Terrane are largely granitoids, thus accounting for the predominance of felsic-classified apatite grains 29 . We posit that the dominant supply of detritus from 466–443 Ma came from the numerous similarly aged granitoids formed on the Laurentian margin 28 , which are present in both the Northern Highlands and the Grampian Terranes 28 (Fig. 4a ). The alkaline to calc-alkaline suites in these terranes are volumetrically small, consistent with the scarcity of alkaline apatite grains within the Altar Stone (Extended Data Fig. 7 ). Indeed, the Glen Dessary syenite at 447 ± 3 Ma is the only age-appropriate felsic-alkaline pluton in the Northern Highlands Terrane 30 .

The Stacey and Kramers 31 model of terrestrial Pb isotopic evolution predicts a 207 Pb/ 206 Pb isotopic ratio ( 207 Pb/ 206 Pb i ) of 0.8601 for 465 Ma continental crust. Mid-Ordovician regressions through Group 1 apatite and rutile U–Pb analyses yield upper intercepts for 207 Pb/ 206 Pb i of 0.8603 ± 0.0033 and 0.8564 ± 0.0014, respectively (Extended Data Figs. 4 and 5 and Supplementary Information  1 ). The similarity between apatite and rutile 207 Pb/ 206 Pb i implies they were sourced from the same Mid-Ordovician magmatic fluids. Ultimately, the calculated 207 Pb/ 206 Pb i value is consistent with the older (Laurentian) crust north of the Iapetus Suture in Britain 32 (Fig. 4a ).
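
The model prediction can be reproduced from the two-stage parameterization of Stacey and Kramers 31 (second stage from 3.7 Ga with μ = 9.735); a minimal sketch follows, noting that the precise value quoted depends on the constants adopted. As a sanity check, the same function at t = 0 returns the familiar present-day ratios 206Pb/204Pb ≈ 18.70 and 207Pb/204Pb ≈ 15.63.

```python
import math

# Decay constants (a^-1) and present-day 238U/235U
LAMBDA_238 = 1.55125e-10
LAMBDA_235 = 9.8485e-10
U_RATIO = 137.88

def stacey_kramers_pb(t_ga):
    """Common-Pb ratios at time t (Ga before present) from the two-stage
    Stacey & Kramers (1975) model: second stage starts at 3.7 Ga with
    mu = 9.735, 206Pb/204Pb = 11.152 and 207Pb/204Pb = 12.998."""
    t = t_ga * 1e9
    T = 3.7e9
    mu = 9.735
    pb64 = 11.152 + mu * (math.exp(LAMBDA_238 * T) - math.exp(LAMBDA_238 * t))
    pb74 = 12.998 + (mu / U_RATIO) * (math.exp(LAMBDA_235 * T) - math.exp(LAMBDA_235 * t))
    return pb64, pb74

pb64, pb74 = stacey_kramers_pb(0.465)  # common Pb at 465 Ma
print(round(pb74 / pb64, 4))
```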

Orcadian Basin ORS

The detrital zircon age spectra confirm petrographic associations between the Altar Stone and the ORS. Furthermore, the Altar Stone cannot be a New Red Sandstone (NRS) lithology of Permo-Triassic age. The NRS, deposited from around 280–240 Ma, unconformably overlies the ORS 14 . NRS, such as that within the Wessex Basin (Extended Data Fig. 1 ), has characteristic detrital zircon age components, including Carboniferous to Permian zircon grains, which are not present in the Altar Stone 1 , 23 , 26 , 33 , 34 (Extended Data Fig. 3 ).

An ORS classification for the Altar Stone provides the basis for further interpretation of provenance (Extended Data Figs. 1 and 8 ), given that the ORS crops out in distinct areas of Great Britain and Ireland, including the Anglo-Welsh border and south Wales, the Midland Valley and northeast Scotland, reflecting former Palaeozoic depocentres 14 (Fig. 4a ).

Previously reported detrital zircon ages and petrography show that ORS outcrops of the Anglo-Welsh Basin in the Cosheston Subgroup 1 and Senni Formation 2 are unlikely to be the sources of the Altar Stone (Fig. 4a ). ORS within the Anglo-Welsh Basin is characterized by mid-Palaeozoic zircon age maxima and minor Proterozoic components (Fig. 4a ). Ultimately, the detrital zircon age spectra of the Altar Stone are statistically distinct from the Anglo-Welsh Basin (Fig. 3a ). In addition, the ORS outcrops of southwest England (that is, south of the Variscan front), including north Devon and Cornwall (Cornubian Basin) (Fig. 4a ), show characteristic facies, including marine sedimentary structures and fossils along with a metamorphic fabric 13 , 26 , inconsistent with the unmetamorphosed, terrestrial facies of the Altar Stone 1 , 11 .

Another ORS succession with published age data for comparison is the Dingle Peninsula Basin, southwest Ireland. However, the presence of late Silurian (430–420 Ma) and Devonian (400–350 Ma) apatite, zircon and muscovite in the Dingle Peninsula ORS discounts a source for the Altar Stone from southern Ireland 20 . The conspicuous absence of apatite grains younger than 450 Ma in the Altar Stone precludes the input of Late Caledonian magmatic grains to the source sediment of the Altar Stone and demonstrates that the ORS of the Altar Stone was deposited prior to or distally from areas of Late Caledonian magmatism, unlike the ORS of the Dingle Peninsula 20 . Notably, no distinction in provenance between the Anglo-Welsh Basin and the Dingle Peninsula ORS is evident (Kolmogorov–Smirnov test: P  > 0.05), suggesting that ORS basins south of the Iapetus Suture are relatively homogeneous in terms of their detrital zircon age components (Fig. 4a ).

In Scotland, ORS predominantly crops out in the Midland Valley and Orcadian Basins (Fig. 4a ). The Midland Valley Basin is bound between the Highland Boundary Fault and the Iapetus Suture and is located within the Midland Valley and Southern Uplands Terranes. Throughout Midland Valley ORS stratigraphy, detrital zircon age spectra broadly show a bimodal age distribution between Lower Palaeozoic and Mesoproterozoic components 35 , 36 (Extended Data Fig. 3 ). Indeed, throughout 9 km of ORS stratigraphy in the Midland Valley Basin and across the Southern Uplands Fault, no major changes in provenance are recognized 36 (Fig. 4a ). Devonian zircon, including grains as young as 402 ± 5 Ma from the northern ORS in the Midland Valley Basin 36 , further differentiates this basin from the Altar Stone (Fig. 3a and Extended Data Fig. 3 ). The scarcity of Archaean to late Palaeoproterozoic zircon grains within the Midland Valley ORS shows that the Laurentian basement was not a dominant detrital source for those rocks 35 . Instead, ORS of the Midland Valley is primarily defined by zircon of around 475 Ma, interpreted to represent the detrital remnants of Ordovician volcanism within the Midland Valley Terrane, with only minor and periodic input from Caledonian plutonism 35 .

The Orcadian Basin of northeast Scotland, within the Grampian and Northern Highlands terranes, contains a thick package of mostly Mid-Devonian ORS, around 4 km thick in Caithness and up to around 8 km thick in Shetland 14 (Fig. 4a ). The detrital zircon age spectra from Orcadian Basin ORS provide the closest match to the Altar Stone detrital ages 25 (Fig. 3 and Extended Data Fig. 8 ). A Kolmogorov–Smirnov test on age spectra from the Altar Stone and the Orcadian Basin fails to reject the null hypothesis that they are derived from the same underlying distribution (Kolmogorov–Smirnov test: P  > 0.05) (Fig. 3a ). To the north, ORS on the Svalbard archipelago formed on Laurentian and Baltican basement rocks 37 . Similar Kolmogorov–Smirnov test results, where each detrital zircon dataset is statistically indistinguishable, are obtained for ORS from Svalbard, the Orcadian Basin and the Altar Stone.

Apatite U–Pb age components from Orcadian Basin samples from Spittal, Caithness (AQ1) and Cruaday, Orkney (CQ1) (Fig. 4a ) match those from the Altar Stone. Group 2 apatite from the Altar Stone at 1,018 ± 24 Ma is coeval with a Grenvillian age from Spittal at 1,013 ± 35 Ma. Early Palaeozoic apatite components at 473 ± 25 Ma and 466 ± 6 Ma, from Caithness and Orkney, respectively (Extended Data Fig. 5 and Supplementary Information  1 ), are also identical, within uncertainty, to Altar Stone Group 1 (462 ± 4 Ma) apatite U–Pb analyses and a Lu–Hf component at 470 ± 28 Ma, supporting a provenance from the Orcadian Basin for the Altar Stone (Extended Data Fig. 6 and Supplementary Information  2 ).

During the Palaeozoic, the Orcadian Basin was situated between Laurentia and Baltica on the Laurussian palaeocontinent 14 . Correlations between detrital zircon age components imply that both Laurentia and Baltica supplied sediment into the Orcadian Basin 25 , 36 . Detrital grains older than 900 Ma within the Altar Stone are consistent with sediment recycling from intermediary Neoproterozoic supracrustal successions (for example, the Dalradian Supergroup) within the Grampian Terrane, but also from the Särv and Sparagmite successions of Baltica 25 , 36 . At around 470 Ma, the Grampian Terrane began to denude 28 . Subsequently, first-cycle detritus, such as that represented by Group 1 apatite and rutile, was shed towards the Orcadian Basin from the southeast 25 .

Thus, the resistive mineral cargo in the Altar Stone represents a complex mix of first and multi-cycle grains from multiple sources. Regardless of total input from Baltica versus Laurentia into the Orcadian Basin, crystalline terranes north of the Iapetus Suture (Fig. 4a ) have distinct age components that match the Altar Stone in contrast to Gondwanan-derived terranes to the south.

The Altar Stone and Neolithic Britain

Isotopic data for detrital zircon and rutile (U–Pb) and apatite (U–Pb, Lu–Hf and trace elements) indicate that the Altar Stone of Stonehenge has a provenance from the ORS in the Orcadian Basin of northeast Scotland (Fig. 4a ). Given this detrital mineral provenance, the Altar Stone cannot have been sourced from southern Britain (that is, south of the Iapetus Suture) (Fig. 4a ), including the Anglo-Welsh Basin 1 , 2 .

Some postulate a glacial transport mechanism for the Mynydd Preseli (Fig. 4a ) bluestones to Salisbury Plain 38 , 39 . However, such transport for the Altar Stone is difficult to reconcile with ice-sheet reconstructions that show a northwards movement of glaciers (and erratics) from the Grampian Mountains towards the Orcadian Basin during the Last Glacial Maximum and, indeed, previous Pleistocene glaciations 40 , 41 . Moreover, there is little evidence of extensive glacial deposition in central southern Britain 40 , nor are Scottish glacial erratics found at Stonehenge 42 . Sr and Pb isotopic signatures from animal and human remains from henges on Salisbury Plain demonstrate the mobility of Neolithic people within Britain 32 , 43 , 44 , 45 . Furthermore, shared architectural elements and rock art motifs between Neolithic monuments in Orkney, northern Britain, and Ireland point towards the long-distance movement of people and construction materials 46 , 47 .

Thus, we posit that the Altar Stone was anthropogenically transported to Stonehenge from northeast Scotland, consistent with evidence of Neolithic inhabitation in this region 48 , 49 . Whereas the igneous bluestones were brought around 225 km from the Mynydd Preseli to Stonehenge 50 (Fig. 4a ), a Scottish provenance for the Altar Stone demands a transport distance of at least 750 km (Fig. 4a ). Even with assistance from beasts of burden 51 , overland transport of such a megalith would have faced formidable obstacles: rivers, topographical barriers including the Grampians, the Southern Uplands and the Pennines, and the heavily forested landscape of prehistoric Britain 52 .

At around 5000  bc , Neolithic people introduced the common vole ( Microtus arvalis ) from continental Europe to Orkney, consistent with the long-distance marine transport of cattle and goods 53 . A Neolithic marine trade network of quarried stone tools is found throughout Britain, Ireland and continental Europe 54 . For example, a saddle quern, a large stone grinding tool, was discovered in Dorset and determined to have a provenance in central Normandy 55 , implying the shipping of stone cargo over open water during the Neolithic. Furthermore, the river transport of shaped sandstone blocks in Britain is known from at least around 1500  bc (Hanson Log Boat) 56 . In Britain and Ireland, sea levels approached present-day heights from around 4000  bc 57 , and although coastlines have shifted, the geography of Britain and Ireland would have permitted sea routes southward from the Orcadian Basin towards southern England (Fig. 4a ). A Scottish provenance for the Altar Stone implies Neolithic transport spanning the length of Great Britain.

Methods

This work analysed two 30-µm polished thin sections of the Altar Stone (MS3 and 2010K.240) and two sections of ORS from northeast Scotland (Supplementary Information  4 ). CQ1 is from Cruaday, Orkney (59° 04′ 34.2″ N, 3° 18′ 54.6″ W), and AQ1 is from near Spittal, Caithness (58° 28′ 13.8″ N, 3° 27′ 33.6″ W). Conventional optical microscopy (transmitted and reflected light) and automated mineralogy via a TESCAN Integrated Mineral Analyser gave insights into texture and mineralogy and guided spot placement during LA-ICP–MS analysis. A CLARA field emission scanning electron microscope was used for textural characterization of individual minerals (zircon, apatite and rutile) through high-resolution micrometre-scale imaging under both back-scatter electron and cathodoluminescence. The Altar Stone is a fine-grained and well-sorted sandstone with a mean grain diameter of ≤300 µm. Quartz grains are sub-rounded and monocrystalline. Feldspars are variably altered to fine-grained white mica. MS3 and 2010K.240 have a weakly developed planar fabric and non-planar heavy mineral laminae approximately 100–200 µm thick. Resistive heavy mineral bands are dominated by zircon, rutile and apatite, with grains typically 10–40 µm wide. The rock is mainly cemented by carbonate, with localized areas of baryte and quartz cement. A detailed account of Altar Stone petrography is provided in refs. 1 , 59 .

Zircon isotopic analysis

Zircon U–Pb methods

Two zircon U–Pb analysis sessions were completed at the GeoHistory facility in the John De Laeter Centre (JdLC), Curtin University, Australia. Ablations within zircon grains were created using an excimer laser RESOlution LE193 nm ArF with a Laurin Technic S155 cell. Isotopic data was collected with an Agilent 8900 triple quadrupole mass spectrometer, with high-purity Ar as the plasma carrier gas (flow rate 1 l min −1 ). An on-sample energy of ~2.3–2.7 J cm −2 with a 5–7 Hz repetition rate was used to ablate minerals for 30–40 s (with 25–60 s of background capture). Two cleaning pulses preceded analyses, and ultra-high-purity He (0.68 l min −1 ) and N 2 (2.8 ml min −1 ) were used to flush the sample cell. A block of reference mineral was analysed following 15–20 unknowns. The small, highly rounded target grains of the Altar Stone (usually <30 µm in width) necessitated using a spot size diameter of ~24 µm for all ablations. Isotopic data was reduced using Iolite 4 60 with the U-Pb Geochronology data reduction scheme, followed by additional calculation and plotting via IsoplotR 61 . The primary matrix-matched reference zircon 62 used to correct instrumental drift and mass fractionation was GJ-1, 601.95 ± 0.40 Ma. Secondary reference zircon included Plešovice 63 , 337.13 ± 0.37 Ma, 91500 64 , 1,063.78 ± 0.65 Ma, OG1 65 , 3,465.4 ± 0.6 Ma and Maniitsoq 66 , 3,008.7 ± 0.6 Ma. Weighted mean U–Pb ages for secondary reference materials were within 2 σ uncertainty of reported values (Supplementary Information  5 ).
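
Drift and mass-fractionation correction by bracketing unknowns with a primary reference of known age can be sketched as follows. The session layout, measured values and drift magnitudes here are invented for illustration; only the GJ-1 age is taken from the text, and this is not the Iolite implementation.

```python
import numpy as np

# Reference 206Pb/238U ratio for GJ-1, computed from its accepted age
L238 = 1.55125e-10
TRUE_GJ1 = np.exp(L238 * 601.95e6) - 1.0

# Hypothetical session: measured GJ-1 ratios drift with analysis index
std_idx = np.array([0, 10, 20, 30])
std_meas = TRUE_GJ1 * np.array([1.010, 1.014, 1.019, 1.025])  # instrumental bias

# Per-standard correction factors, interpolated across the session
factors = TRUE_GJ1 / std_meas
unk_idx = np.array([3, 7, 15, 24])
unk_factors = np.interp(unk_idx, std_idx, factors)

# Apply to (hypothetical) unknown measurements and convert to ages
unk_meas = np.array([0.100, 0.085, 0.120, 0.076])
unk_corr = unk_meas * unk_factors
ages_ma = np.log(unk_corr + 1.0) / L238 / 1e6
print(np.round(ages_ma, 1))
```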

Zircon U–Pb results

Across two LA-ICP–MS sessions, 83 U–Pb measurements were obtained on as many zircon grains; 41 were concordant (≤10% discordant), where discordance is defined using the concordia log distance (%) approach 67 . We report single-spot (grain) concordia ages, which have numerous benefits over conventional U–Pb/Pb–Pb ages, including providing an objective measure of discordance that is directly coupled to age and avoiding the arbitrary switch between 206 Pb/ 238 U and 207 Pb/ 206 Pb ages. Furthermore, given the spread in ages (Early Palaeozoic to Archaean), concordia ages make optimum use of both U–Pb/Pb–Pb ratios, offering greater precision than 206 Pb/ 238 U or 207 Pb/ 206 Pb ages alone.
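
The concordia log-distance metric of ref. 67 involves log-ratio geometry; as a simpler stand-in, the conventional age-based definition of discordance (comparing 206Pb/238U and 207Pb/206Pb ages) can be computed as below. This is an illustration only, not the metric used in this work.

```python
import math

L238, L235, U238_235 = 1.55125e-10, 9.8485e-10, 137.88

def age_68(r68):
    """206Pb/238U age (Ma) from the measured ratio."""
    return math.log(r68 + 1.0) / L238 / 1e6

def age_76(r76):
    """207Pb/206Pb age (Ma), solved by bisection on the age equation."""
    def f(t_ma):
        t = t_ma * 1e6
        return (math.exp(L235 * t) - 1.0) / (U238_235 * (math.exp(L238 * t) - 1.0)) - r76
    lo, hi = 1.0, 4600.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def discordance_pct(r68, r76):
    """Conventional % discordance between the two age systems."""
    return 100.0 * (1.0 - age_68(r68) / age_76(r76))

# A concordant grain: both ratios computed for the same 1,047 Ma age
t = 1047e6
r68 = math.exp(L238 * t) - 1.0
r76 = (math.exp(L235 * t) - 1.0) / (U238_235 * (math.exp(L238 * t) - 1.0))
print(round(discordance_pct(r68, r76), 2))  # ~0 for a concordant grain
```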

Given that no direct sampling of the Altar Stone is permitted, we are limited in the amount of material available for destructive analysis, such as LA-ICP–MS. We collate our zircon age data with the U–Pb analyses 1 of FN593 (another fragment of the Altar Stone), filtered using the same concordia log distance (%) discordance filter 67 . The total number of concordant analyses used in this work is thus 56 across 3 thin sections, each showing no discernible provenance differences. Zircon concordia ages span from 498 to 2,812 Ma. Age maxima (peaks) were calculated after Gehrels 68 , and peak ages defined by ≥4 grains include 1,047, 1,091, 1,577, 1,663 and 1,790 Ma.
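
Peak picking of this kind can be sketched with a Gaussian kernel density estimate, flagging local maxima supported by at least four grains. The ages below are synthetic, and counting supporting grains within ±1 bandwidth of a peak is an assumption made for illustration, not the exact procedure of ref. 68.

```python
import numpy as np

def kde(ages, grid, bw=50.0):
    """Gaussian kernel density estimate over an age grid (bandwidth in Ma)."""
    d = (grid[:, None] - ages[None, :]) / bw
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(ages) * bw * np.sqrt(2 * np.pi))

def peak_ages(ages, min_grains=4, bw=50.0):
    """Local KDE maxima supported by >= min_grains within +/- bw."""
    grid = np.arange(ages.min() - 3 * bw, ages.max() + 3 * bw, 1.0)
    dens = kde(ages, grid, bw)
    peaks = []
    for i in range(1, len(grid) - 1):
        if dens[i] > dens[i - 1] and dens[i] > dens[i + 1]:
            support = np.sum(np.abs(ages - grid[i]) <= bw)
            if support >= min_grains:
                peaks.append(grid[i])
    return peaks

# Synthetic spectrum: two age clusters plus a lone grain that is discarded
rng = np.random.default_rng(1)
ages = np.concatenate([rng.normal(1047, 15, 8), rng.normal(1790, 15, 6), [2600.0]])
print(peak_ages(ages))
```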

For 56 concordant ages from 56 grains, the largest unmissed fraction at >95% certainty is calculated at 9% of the entire uniform detrital population 69 . In any case, the most prevalent, and hence most important, provenance components will be sampled for any number of analyses 69 . We analysed all zircon grains within the spatial limit of the technique in the thin sections 70 . We used in situ thin-section analysis, which can mitigate contamination and sampling biases in detrital studies 71 . Adding apatite (U–Pb and Lu–Hf) and rutile (U–Pb) analyses bolsters our confidence in provenance interpretations, as these minerals respond dissimilarly during sedimentary transport.

Comparative zircon datasets

Zircon U–Pb compilations of the basement terranes of Britain and Ireland were sourced from refs. 20 , 26 . ORS detrital zircon datasets used for comparison include isotopic data from the Dingle Peninsula Basin 20 , Anglo-Welsh Basin 72 , Midland Valley Basin 35 , Svalbard ORS 37 and Orcadian Basin 25 . NRS zircon U–Pb ages were sourced from the Wessex Basin 33 . Comparative datasets were filtered for discordance as per our definition above 20 , 26 . Kernel density estimates for age populations were created within IsoplotR 61 using a kernel and histogram bandwidth of 50 Ma.

A two-sample Kolmogorov–Smirnov statistical test was implemented to compare the compiled zircon age datasets with the Altar Stone (Supplementary Information  6 ). This two-sided test evaluates the maximum difference between two empirical cumulative age distribution functions, testing the null hypothesis that both age spectra are drawn from the same distribution against a critical value that depends on the number of analyses and the chosen confidence level.
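As a sketch of this comparison, the two-sample test is available as `scipy.stats.ks_2samp`; the datasets below are synthetic stand-ins (deterministic quantile draws from normal distributions), not the measured age spectra.

```python
import numpy as np
from scipy.stats import ks_2samp, norm

# Synthetic stand-ins for two age datasets (Ma) drawn from one population,
# plus a third from a clearly different population.
same_a = norm.ppf(np.linspace(0.01, 0.99, 56), loc=1050, scale=60)
same_b = norm.ppf(np.linspace(0.01, 0.99, 212), loc=1050, scale=60)
diff_c = norm.ppf(np.linspace(0.01, 0.99, 150), loc=600, scale=60)

d_same, p_same = ks_2samp(same_a, same_b)  # small D, high p: cannot reject H0
d_diff, p_diff = ks_2samp(same_a, diff_c)  # large D, tiny p: reject H0
```

The D statistic is the maximum vertical separation between the two empirical cumulative distributions; the p-value converts it to a significance given the two sample sizes.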

The number of zircon ages within the comparative datasets varies from the Altar Stone ( n  = 56) to Laurentia ( n  = 2,469). Therefore, to address the degree of dependence on sample size, we also implemented a Monte Carlo resampling (1,000 times) procedure for the Kolmogorov–Smirnov test, incorporating the uncertainty on each age determination to recalculate P values and standard deviations from the resampled distribution of each sample (Supplementary Information  7 ). The results of these Kolmogorov–Smirnov tests with Monte Carlo resampling (and of the multidimensional analysis), which take uncertainty due to sample size into account, also support the interpretation that, at >95% certainty, no distinction in provenance can be made between the Altar Stone zircon age dataset ( n  = 56) and those from the Orcadian Basin ( n  = 212), Svalbard ORS ( n  = 619) and the Laurentian basement (Supplementary Information  7 ).
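A minimal sketch of such a resampling procedure (our own illustration; the paper's exact implementation is described in Supplementary Information 7): each trial perturbs every age by its 1σ uncertainty, bootstrap-resamples both datasets at their original sizes, and records the K-S p-value, so the reported p becomes a distribution rather than a single number.

```python
import numpy as np
from scipy.stats import ks_2samp

def mc_ks(ages_a, err_a, ages_b, err_b, n_trials=1000, seed=0):
    """Monte Carlo K-S test: perturb ages by their 1-sigma uncertainties,
    bootstrap-resample each dataset, and collect the p-values."""
    rng = np.random.default_rng(seed)
    pvals = np.empty(n_trials)
    for i in range(n_trials):
        a = rng.normal(ages_a, err_a)                  # propagate age uncertainty
        b = rng.normal(ages_b, err_b)
        a = rng.choice(a, size=a.size, replace=True)   # bootstrap: sample-size effect
        b = rng.choice(b, size=b.size, replace=True)
        pvals[i] = ks_2samp(a, b).pvalue
    return pvals.mean(), pvals.std()
```

Returning the mean and standard deviation of the p-value distribution mirrors the recalculated P values and standard deviations quoted above.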

Multidimensional scaling (MDS) plots for zircon datasets were created using the MATLAB script of ref.  58 . We adopted bootstrap resampling (>1,000 times) with Procrustes rotation of Kolmogorov–Smirnov values, which outputs uncertainty ellipses at the 95% confidence level (Fig. 3a ). In MDS plots, stress is an indicator of the goodness of fit between dissimilarities in the datasets and distances on the plot; stress values below 0.15 are desirable 58 . For the MDS plot in Fig. 3a , the value is 0.043, indicating an “excellent” fit 58 .
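The stress statistic can be illustrated with a dependency-light sketch: build a dissimilarity matrix of K-S D values, embed it with classical (Torgerson) MDS, and compute Kruskal's stress-1. This mirrors the idea only; it is not the ref. 58 MATLAB implementation and omits the bootstrap/Procrustes step.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_matrix(samples):
    """Pairwise two-sample K-S D statistics as an MDS dissimilarity matrix."""
    n = len(samples)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = ks_2samp(samples[i], samples[j]).statistic
    return d

def classical_mds(d, k=2):
    """Torgerson classical MDS: double-centre the squared dissimilarities
    and take the top-k eigenvectors as map coordinates."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (d**2) @ j
    w, v = np.linalg.eigh(b)
    order = np.argsort(w)[::-1][:k]
    return v[:, order] * np.sqrt(np.clip(w[order], 0.0, None))

def stress1(d, xy):
    """Kruskal stress-1; values below 0.15 indicate a good configuration."""
    e = np.sqrt(((xy[:, None, :] - xy[None, :, :])**2).sum(-1))
    iu = np.triu_indices_from(d, k=1)
    return np.sqrt(((d[iu] - e[iu])**2).sum() / (e[iu]**2).sum())
```

Datasets drawn from the same population plot close together, and a low stress-1 confirms the 2D map faithfully represents the dissimilarity matrix.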

Rutile isotopic analysis

Rutile U–Pb methods

One rutile U–Pb analysis session was completed at the GeoHistory facility in the JdLC, Curtin University, Australia. Rutile grains were ablated (24 µm spots) using a Resonetics RESOlution M-50A-LR sampling system incorporating a Compex 102 excimer laser, and isotopes were measured with an Agilent 8900 triple quadrupole mass analyser. Analytical parameters included an on-sample energy of 2.7 J cm −2 , a repetition rate of 7 Hz for a total analysis time of 45 s, and 60 s of background data capture. The sample chamber was purged with ultrahigh-purity He at a flow rate of 0.68 l min −1 and N 2 at 2.8 ml min −1 .

U–Pb data for rutile were reduced against the R-10 rutile primary reference material 73 (1,091 ± 4 Ma). The secondary reference material used to monitor the accuracy of U–Pb ratios was R-19 rutile. The weighted mean 238 U/ 206 Pb age obtained for R-19 was 491 ± 10 Ma (mean squared weighted deviation (MSWD) = 0.87, p ( χ 2 ) = 0.57), within uncertainty of the accepted age 74 of 489.5 ± 0.9 Ma.
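The weighted mean and MSWD quoted here (and throughout for reference materials and regressions) reduce to a few lines; this is a generic textbook sketch, not the Iolite or IsoplotR implementation.

```python
import numpy as np

def weighted_mean(ages, sigmas):
    """Inverse-variance weighted mean with MSWD.
    MSWD ~ 1 indicates scatter consistent with analytical uncertainty alone;
    MSWD >> 1 indicates excess (geological) scatter."""
    ages = np.asarray(ages, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2
    mean = (w * ages).sum() / w.sum()
    mean_err = np.sqrt(1.0 / w.sum())  # 1-sigma uncertainty on the mean
    mswd = ((ages - mean)**2 * w).sum() / (len(ages) - 1)
    return mean, mean_err, mswd
```

For example, three ages of 490, 492 and 488 Ma, each ±2 Ma (1σ), give a weighted mean of 490 Ma with MSWD = 1.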

Rutile grains with negligible Th concentrations can be corrected for common Pb using a 208 Pb correction 74 . Previously used thresholds for applicability have included 75 , 76 Th/U < 0.1 or a Th concentration below 5% of the U concentration. However, Th/U ratios for rutile from MS3 are typically >1; thus, a 208 Pb correction is not applicable. Instead, we use a 207 Pb-based common Pb correction 31 to account for the presence of common Pb. Rutile isotopic data were reduced within Iolite 4 60 using the U–Pb Geochronology reduction scheme and IsoplotR 61 .
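The 207Pb-based correction can be sketched as an iterative calculation: estimate the radiogenic 207Pb/206Pb for a trial age, partition the measured 207Pb/206Pb between common and radiogenic components, strip the common fraction from 206Pb, and update the age. The common-Pb composition of 0.86 below echoes the upper intercepts reported in this study; the function itself is our illustration, not the Iolite reduction scheme.

```python
import numpy as np

L238, L235 = 1.55125e-10, 9.8485e-10  # U decay constants (1/yr)
U85 = 137.818                          # natural 238U/235U

def pb207_corrected_age(u238_pb206, pb207_pb206, pb76_common=0.86):
    """Iterative 207Pb-based common-Pb correction.
    Returns the corrected 206Pb/238U age (Ma) and f207 (% common 206Pb).
    pb76_common is the assumed common-Pb 207Pb/206Pb (a Stacey-Kramers
    model value could be used instead of a regression upper intercept)."""
    t = 500e6  # initial age guess (yr)
    for _ in range(50):
        # radiogenic 207Pb/206Pb expected at the current trial age
        r76_rad = (np.expm1(L235 * t) / np.expm1(L238 * t)) / U85
        f207 = (pb207_pb206 - r76_rad) / (pb76_common - r76_rad)
        f207 = min(max(f207, 0.0), 1.0)
        # strip the common fraction from the measured 206Pb and update the age
        r68_rad = (1.0 - f207) / u238_pb206
        t = np.log1p(r68_rad) / L238
    return t / 1e6, 100.0 * f207
```

A grain whose measured ratios mix 90% radiogenic Pb (450 Ma) with 10% common Pb is recovered at ~450 Ma with f207 ≈ 10%.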

Rutile U–Pb results

Ninety-two rutile U–Pb analyses were obtained in a single session, defining two coherent age groupings on a Tera–Wasserburg plot.

Group 1 constitutes 83 U–Pb rutile analyses, forming a well-defined mixing array on a Tera–Wasserburg plot between common and radiogenic Pb components. This array yields an upper intercept of 207 Pb/ 206 Pb i  = 0.8563 ± 0.0014; the lower intercept implies an age of 451 ± 8 Ma. The scatter about the line (MSWD = 2.7) is interpreted to reflect the variable passage of rutile grains of diverse sizes through the ~600 °C closure temperature for radiogenic Pb during and after magmatic crystallization 77 .
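The lower-intercept age corresponds to where the regression line meets the Tera–Wasserburg concordia. A bisection sketch (ours, not IsoplotR's maximum-likelihood regression) recovers that age from a given slope and intercept:

```python
import numpy as np

L238, L235 = 1.55125e-10, 9.8485e-10  # U decay constants (1/yr)
U85 = 137.818                          # natural 238U/235U

def tw_concordia(t_ma):
    """Tera-Wasserburg concordia point (x = 238U/206Pb, y = 207Pb/206Pb) at age t_ma."""
    t = t_ma * 1e6
    x = 1.0 / np.expm1(L238 * t)
    y = (np.expm1(L235 * t) / np.expm1(L238 * t)) / U85
    return x, y

def lower_intercept(slope, intercept, t_lo=1.0, t_hi=4000.0):
    """Age (Ma) where the line y = slope*x + intercept crosses concordia,
    found by bisection on the sign of (line - concordia)."""
    def g(t_ma):
        x, y = tw_concordia(t_ma)
        return slope * x + intercept - y
    for _ in range(100):
        mid = 0.5 * (t_lo + t_hi)
        if g(t_lo) * g(mid) <= 0:
            t_hi = mid
        else:
            t_lo = mid
    return 0.5 * (t_lo + t_hi)
```

A line anchored at a common-Pb 207Pb/206Pb of 0.86 on the y axis and passing through the 450 Ma concordia point returns a lower intercept of 450 Ma.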

Group 2 comprises 9 grains, with 207 Pb-corrected 238 U/ 206 Pb ages ranging from 591 to 1,724 Ma. Three grains from Group 2 define an age peak 68 at 1,607 Ma. Given the spread in U–Pb ages, we interpret these Proterozoic grains as detrital rutile derived from various sources.

Apatite isotopic analysis

Apatite U–Pb methods

Two apatite U–Pb LA-ICP–MS analysis sessions were conducted at the GeoHistory facility in the JdLC, Curtin University, Australia. For both sessions, ablations were created using a RESOlution LE193 nm ArF excimer laser ablation system with a Laurin Technic S155 sample cell, connected to an Agilent 8900 ICP–MS. Other analytical details include a fluence of 2 J cm −2 and a 5 Hz repetition rate. For the Altar Stone section (MS3) and the Orcadian Basin samples (Supplementary Information  4 ), spot sizes of 24 and 20 µm, respectively, were used.

The matrix-matched primary reference material used for apatite U–Pb analyses was the Madagascar apatite (MAD-1) 78 . A range of secondary reference apatites was analysed, including FC-1 79 (Duluth Complex; 1,099.1 ± 0.6 Ma), Mount McClure 80 , 81 (526 ± 2.1 Ma), Otter Lake 82 (913 ± 7 Ma) and Durango 83 (31.44 ± 0.18 Ma). Anchored regressions (through reported 207 Pb/ 206 Pb i values) for the secondary reference materials yielded lower intercept ages within 2 σ uncertainty of the reported values (Supplementary Information  8 ).

Altar Stone apatite U–Pb results

The first apatite U–Pb session, on MS3 from the Altar Stone, yielded 117 analyses. On a Tera–Wasserburg plot, these analyses form two discordant mixing arrays between common and radiogenic Pb components, with distinct lower intercepts.

The array from Group 2 apatite, comprising 9 analyses, yields a lower intercept equivalent to an age of 1,018 ± 24 Ma (MSWD = 1.4), with an upper intercept 207 Pb/ 206 Pb i  = 0.8910 ± 0.0251. The f 207 % (the percentage of common Pb estimated using the 207 Pb method) of apatite analyses in Group 2 ranges from 16.66 to 88.8%, with a mean of 55.76%.

Group 1 apatite is defined by 108 analyses yielding a lower intercept of 462 ± 4 Ma (MSWD = 2.4), with an upper intercept 207 Pb/ 206 Pb i  = 0.8603 ± 0.0033. The f 207 % of apatite analyses in Group 1 ranges from 10.14 to 99.91%, with a mean of 78.65%. The slight over-dispersion of the apatite regression line may reflect some variation in Pb closure temperature in these crystals 84 .

Orcadian Basin apatite U–Pb results

The second apatite U–Pb session yielded 138 analyses from samples CQ1 and AQ1. These data form three discordant mixing arrays between radiogenic and common Pb components on a Tera–Wasserburg plot.

An unanchored regression through Group 1 apatite ( n  = 14) from the Cruaday sample (CQ1) yields a lower intercept of 473 ± 25 Ma (MSWD = 1.8) with an upper intercept of 207 Pb/ 206 Pb i  = 0.8497 ± 0.0128. The f 207 % spans 38–99%, with a mean value of 85%.

Group 1 from the Spittal sample (AQ1), comprising 109 analyses, yields a lower intercept equal to 466 ± 6 Ma (MSWD = 1.2); the upper 207 Pb/ 206 Pb i is 0.8745 ± 0.0038. f 207 % values for this group range from 6 to 99%, with a mean of 83%. A regression through Group 2 analyses ( n  = 17) from the Spittal sample yields a lower intercept of 1,013 ± 35 Ma (MSWD = 1) and an upper intercept 207 Pb/ 206 Pb i of 0.9038 ± 0.0101; f 207 % values span 25 to 99%, with a mean of 76%. Combined Group 1 U–Pb analyses from CQ1 and AQ1 ( n  = 123) yield a lower intercept equivalent to 466 ± 6 Ma (MSWD = 1.4) and an upper intercept 207 Pb/ 206 Pb i of 0.8726 ± 0.0036, which is presented beneath the Orcadian Basin kernel density estimate in Fig. 4b .

Apatite Lu–Hf methods

Apatite grains were dated in thin section by the in situ Lu–Hf method at the University of Adelaide, using a RESOlution-LR 193 nm excimer laser ablation system coupled to an Agilent 8900 ICP–MS/MS 85 , 86 . A gas mixture of NH 3 in He was used in the mass spectrometer reaction cell to promote high-order Hf reaction products, while the equivalent Lu and Yb reaction products were negligible. The mass-shifted (+82 amu) reaction products 176+82 Hf and 178+82 Hf gave the highest sensitivity within the measurable range and were analysed free from isobaric interferences. 177 Hf was calculated from 178 Hf, assuming natural abundances. 175 Lu was measured on mass as a proxy 85 for 176 Lu. Laser ablation was conducted with a 43 µm beam at a 7.5 Hz repetition rate and a fluence of approximately 3.5 J cm −2 . The analysed isotopes (with dwell times in ms in parentheses) were 27 Al (2), 43 Ca (2), 57 Fe (2), 88 Sr (2), 89+85 Y (2), 90+83 Zr (2), 140+15 Ce (2), 146 Nd (2), 147 Sm (2), 172 Yb (5), 175 Lu (10), 175+82 Lu (50), 176+82 Hf (200) and 178+82 Hf (150). Isotopes with short dwell times (<10 ms) were measured to confirm apatite chemistry and to monitor for inclusions. 175+82 Lu was monitored for interferences on 176+82 Hf.

Relevant isotope ratios were calculated in LADR 87 using NIST 610 as the primary reference material 88 . Subsequently, reference apatite OD-306 78 (1,597 ± 7 Ma) was used to correct the Lu–Hf isotope ratios for matrix-induced fractionation 86 , 89 . Reference apatites Bamble-1 (1,597 ± 5 Ma), HR-1 (344 ± 2 Ma) and Wallaroo (1,574 ± 6 Ma) were monitored for accuracy verification 85 , 86 , 90 . Measured Lu–Hf dates of 1,098 ± 7 Ma, 346.0 ± 3.7 Ma and 1,575 ± 12 Ma, respectively, are in agreement with published values. All reference materials have negligible initial Hf, and weighted mean Lu–Hf dates were calculated in IsoplotR 61 directly from the (matrix-corrected) 176 Hf/ 176 Lu ratios.

For the Altar Stone apatites, which have variable 177 Hf/ 176 Hf compositions, single-grain Lu–Hf dates were calculated by anchoring isochrons to an initial 177 Hf/ 176 Hf composition 90 of 3.55 ± 0.05, which spans the entire range of initial 177 Hf/ 176 Hf ratios of the terrestrial reservoir (for example, ref. 91 ). The reported uncertainties for the single-grain Lu–Hf dates are presented as 95% confidence intervals, and dates are displayed on a kernel density estimate plot.
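For a single grain, anchoring to an assumed initial composition reduces the isochron to a one-line age equation. The sketch below is our illustration (decay constant 1.867 × 10⁻¹¹ yr⁻¹; function and argument names are ours), using the 177Hf/176Hf anchor of 3.55 quoted above:

```python
import numpy as np

LAMBDA_LU176 = 1.867e-11  # 176Lu decay constant (1/yr)

def anchored_lu_hf_age(hf176_hf177, lu176_hf177, hf177_hf176_init=3.55):
    """Single-grain Lu-Hf date (Ma) anchored to an assumed initial
    177Hf/176Hf composition (3.55 +/- 0.05 spans terrestrial values).
    hf176_hf177 and lu176_hf177 are the measured present-day ratios."""
    r_init = 1.0 / hf177_hf176_init    # initial 176Hf/177Hf
    radiogenic = hf176_hf177 - r_init  # ingrown 176Hf/177Hf from 176Lu decay
    return np.log1p(radiogenic / lu176_hf177) / LAMBDA_LU176 / 1e6
```

High Lu/Hf phases such as apatite accumulate radiogenic 176Hf quickly, which is why a single-grain date anchored this way remains precise despite the assumed initial ratio.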

Apatite Lu–Hf results

Forty-five apatite Lu–Hf analyses were obtained from sample 2010K.240. Analyses with sufficient radiogenic Hf ingrowth, or lacking common Hf, yielded Lu–Hf ages defining four coherent isochrons and age groups.

Group 1, defined by 16 grains, yields a Lu–Hf isochron with a lower intercept of 470 ± 28 Ma (MSWD = 0.16, p ( χ 2 ) = 1). A second isochron through 5 analyses (Group 2) constitutes a lower intercept equivalent to 604 ± 38 Ma (MSWD = 0.14, p ( χ 2 ) = 0.94). Twelve apatite Lu–Hf analyses define Group 3 with a lower intercept of 1,123 ± 42 Ma (MSWD = 0.75, p ( χ 2 ) = 0.68). Three grains constitute the oldest grouping, Group 4 at 1,526 ± 186 Ma (MSWD = 0.014, p ( χ 2 ) = 0.91).

Apatite trace element methods

A separate session of apatite trace element analysis was undertaken, with instrumentation and analytical set-up identical to those described for the apatite U–Pb analyses above. NIST 610 glass was the primary reference material for apatite trace element analyses, with 43 Ca as the internal reference isotope, assuming an apatite Ca concentration of 40 wt%. Secondary reference materials included NIST 612 and BHVO-2g glasses 92 . Elemental abundances for the secondary reference materials were generally within 5–10% of accepted values. Apatite trace element data were examined using the Geochemical Data Toolkit 93 .
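Internal-standard quantification against 43Ca works by ratioing analyte counts to Ca counts and calibrating that ratio against the reference material of known composition. The sketch below is a generic formulation with hypothetical variable names, not the actual reduction software:

```python
def conc_from_internal_standard(cps_el, cps_ca, cps_el_rm, cps_ca_rm,
                                conc_el_rm, ca_rm_wt, ca_sample_wt=40.0):
    """Element concentration in an unknown from LA-ICP-MS count rates,
    using 43Ca as the internal standard.

    cps_* are background-corrected count rates; *_rm values refer to the
    calibration reference material; Ca contents are in wt% (apatite is
    assumed to contain 40 wt% Ca, as stated in the text)."""
    # relative sensitivity of the element vs Ca, fixed by the reference material
    sens = (cps_el_rm / cps_ca_rm) * (ca_rm_wt / conc_el_rm)
    # apply the same sensitivity to the unknown's measured count-rate ratio
    return (cps_el / cps_ca) * ca_sample_wt / sens
```

If the unknown were chemically identical to the reference material, the function returns the reference concentration exactly; doubling the analyte count rate doubles the result.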

Apatite trace element results

One hundred and thirty-six apatite trace element analyses were obtained from as many grains. Geochemical classification schemes for apatite were used 29 , and three compositional groupings (felsic, mafic-intermediate, and alkaline) were defined.

Felsic-classified apatite grains ( n  = 83 (61% of analyses)) are defined by La/Nd of <0.6 and (La + Ce + Pr)/ΣREE (rare earth elements) of <0.5. The median values of felsic grains show a flat to slightly negative gradient on the chondrite-normalized REE plot from light to heavy REEs 94 . Felsic apatite’s median europium anomaly (Eu/Eu*) is 0.59, a moderately negative signature.

Mafic-intermediate apatite grains 29 ( n  = 48 (35% of grains)) are defined by (La + Ce + Pr)/ΣREE of 0.5–0.7 and La/Nd of 0.5–1.5. In addition, apatite grains of this group typically exhibit chondrite-normalized Ce/Yb of >5 and ΣREE up to 1.25 wt%. Apatite grains classified as mafic-intermediate show a negative gradient from light to heavy REEs on a chondrite-normalized REE plot and generally show the greatest REE enrichment relative to chondrite 94 . The median europium anomaly (Eu/Eu*) of mafic-intermediate apatite is 0.62, moderately negative.

Lastly, alkaline apatite grains 29 ( n  = 5 (4% of analyses)) are characterized by La/Nd > 1.5 and a (La + Ce + Pr)/ΣREE > 0.8. The median europium anomaly of this group is 0.45. This grouping also shows elevated chondrite-normalized Ce/Yb of >10 and >0.5 wt% for the ΣREEs.
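The classification thresholds and the Eu anomaly quoted above can be encoded directly. Chondrite values follow Boynton 94 ; the decision function is our paraphrase of the ref. 29 scheme, and the handling of compositions falling outside all three fields is an assumption.

```python
import math

# Chondrite normalization values (ppm), after Boynton (1984) - subset used here
CHONDRITE = {"La": 0.310, "Ce": 0.808, "Pr": 0.122, "Nd": 0.600,
             "Sm": 0.195, "Eu": 0.0735, "Gd": 0.259, "Yb": 0.209}

def classify_apatite(ree_ppm):
    """Assign an apatite REE composition (dict of ppm) to the felsic,
    mafic-intermediate or alkaline field using the thresholds in the text."""
    lree_frac = (ree_ppm["La"] + ree_ppm["Ce"] + ree_ppm["Pr"]) / sum(ree_ppm.values())
    la_nd = ree_ppm["La"] / ree_ppm["Nd"]
    if la_nd > 1.5 and lree_frac > 0.8:
        return "alkaline"
    if la_nd < 0.6 and lree_frac < 0.5:
        return "felsic"
    if 0.5 <= lree_frac <= 0.7 and 0.5 <= la_nd <= 1.5:
        return "mafic-intermediate"
    return "unclassified"

def eu_anomaly(sm_ppm, eu_ppm, gd_ppm):
    """Eu/Eu* from chondrite-normalized Sm, Eu and Gd, with Eu*
    interpolated geometrically between its neighbours."""
    eu_n = eu_ppm / CHONDRITE["Eu"]
    sm_n = sm_ppm / CHONDRITE["Sm"]
    gd_n = gd_ppm / CHONDRITE["Gd"]
    return eu_n / math.sqrt(sm_n * gd_n)
```

An Eu/Eu* below 1 (for example the medians of 0.59, 0.62 and 0.45 reported above) records a negative anomaly, typically inherited from feldspar fractionation in the source magma.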

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The isotopic and chemical data supporting the findings of this study are available within the paper and its supplementary information files.

Bevins, R. E. et al. Constraining the provenance of the Stonehenge ‘Altar Stone’: evidence from automated mineralogy and U–Pb zircon age dating. J. Archaeolog. Sci. 120 , 105188 (2020).

Bevins, R. E. et al. The Stonehenge Altar Stone was probably not sourced from the Old Red Sandstone of the Anglo-Welsh Basin: time to broaden our geographic and stratigraphic horizons? J. Archaeolog. Sci. Rep. 51 , 104215 (2023).

Pearson, M. P. et al. in Stonehenge for the Ancestors: Part 2: Synthesis (eds Pearson, M. P. et al.) 47–75 (Sidestone Press, 2022).

Pitts, M. W. How to Build Stonehenge (Thames & Hudson, 2022).

Nash, D. J. et al. Origins of the sarsen megaliths at Stonehenge. Sci. Adv. 6 , eabc0133 (2020).

Nash, D. J. et al. Petrological and geochemical characterisation of the sarsen stones at Stonehenge. PLoS ONE 16 , e0254760 (2021).

Pearson, M. P. et al. Megalith quarries for Stonehenge’s bluestones. Antiquity 93 , 45–62 (2019).

Pearson, M. P. et al. Craig Rhos-y-felin: a Welsh bluestone megalith quarry for Stonehenge. Antiquity 89 , 1331–1352 (2015).

Ixer, R., Turner, P., Molyneux, S. & Bevins, R. The petrography, geological age and distribution of the Lower Palaeozoic Sandstone debitage from the Stonehenge landscape. Wilts. Archaeol. Nat. Hist. Mag. 110 , 1–16 (2017).

Ixer, R. & Turner, P. A detailed re-examination of the petrography of the Altar Stone and other non-sarsen sandstones from Stonehenge as a guide to their provenance. Wilts. Archaeol. Nat. Hist. Mag. 99 , 1–9 (2006).

Ixer, R., Bevins, R. E., Pirrie, D., Turner, P. & Power, M. No provenance is better than wrong provenance: Milford Haven and the Stonehenge sandstones. Wilts. Archaeol. Nat. Hist. Mag. 113 , 1–15 (2020).

Thomas, H. H. The source of the stones of Stonehenge. The Antiq. J. 3 , 239–260 (1923).

Kendall, R. S. The Old Red Sandstone of Britain and Ireland—a review. Proc. Geol. Assoc. 128 , 409–421 (2017).

Woodcock, N., Holdsworth, R. E. & Strachan, R. A. in Geological History of Britain and Ireland (eds Woodcock, N. & Strachan, R. A.) Ch. 6 91–109 (Wiley-Blackwell, 2012).

Pearson, M. P., Pollard, J., Richards, C., Thomas, J. & Welham, K. Stonehenge: Making Sense of a Prehistoric Mystery (Council for British Archaeology, 2015).

Shewan, L. et al. Dating the megalithic culture of Laos: radiocarbon, optically stimulated luminescence and U/Pb zircon results. PLoS ONE 16 , e0247167 (2021).

Kelloway, S. et al. Sourcing olive jars using U–Pb ages of detrital zircons: a study of 16th century olive jars recovered from the Solomon Islands. Geoarchaeology 29 , 47–60 (2014).

Barham, M. et al. The answers are blowin’ in the wind: ultra-distal ashfall zircons, indicators of Cretaceous super-eruptions in eastern Gondwana. Geology 44 , 643–646 (2016).

Gillespie, J., Glorie, S., Khudoley, A. & Collins, A. S. Detrital apatite U–Pb and trace element analysis as a provenance tool: Insights from the Yenisey Ridge (Siberia). Lithos 314–315 , 140–155 (2018).

Fairey, B. J. et al. The provenance of the Devonian Old Red Sandstone of the Dingle Peninsula, SW Ireland; the earliest record of Laurentian and peri-Gondwanan sediment mixing in Ireland. J. Geol. Soc. 175 , 411–424 (2018).

Bevins, R. E. et al. Assessing the authenticity of a sample taken from the Altar Stone at Stonehenge in 1844 using portable XRF and automated SEM-EDS. J. Archaeol. Sci. Rep. 49 , 103973 (2023).

Bevins, R. E. et al. Linking derived debitage to the Stonehenge Altar Stone using portable X-ray fluorescence analysis. Mineral. Mag. 86 , 688–700 (2022).

Morton, A. C., Chisholm, J. I. & Frei, D. Provenance of Carboniferous sandstones in the central and southern parts of the Pennine Basin, UK: evidence from detrital zircon ages. Proc. York. Geol. Soc. 63 , https://doi.org/10.1144/pygs2020-010 (2021).

Cawood, P. A., Nemchin, A. A., Strachan, R., Prave, T. & Krabbendam, M. Sedimentary basin and detrital zircon record along East Laurentia and Baltica during assembly and breakup of Rodinia. J. Geol. Soc. 164 , 257–275 (2007).

Strachan, R. A., Olierook, H. K. H. & Kirkland, C. L. Evidence from the U–Pb–Hf signatures of detrital zircons for a Baltican provenance for basal Old Red Sandstone successions, northern Scottish Caledonides. J. Geol. Soc. 178 , https://doi.org/10.1144/jgs2020-241 (2021).

Stevens, T. & Baykal, Y. Detrital zircon U–Pb ages and source of the late Palaeocene Thanet Formation, Kent, SE England. Proc. Geol. Assoc. 132 , 240–248 (2021).

O’Sullivan, G., Chew, D. M., Kenny, G., Heinrichs, I. & Mulligan, D. The trace element composition of apatite and its application to detrital provenance studies. Earth Sci. Rev. 201 , 103044 (2020).

Oliver, G., Wilde, S. & Wan, Y. Geochronology and geodynamics of Scottish granitoids from the late Neoproterozoic break-up of Rodinia to Palaeozoic collision. J. Geol. Soc. 165 , 661–674 (2008).

Fleischer, M. & Altschuler, Z. S. The lanthanides and yttrium in minerals of the apatite group-an analysis of the available data. Neu. Jb. Mineral. Mh. 10 , 467–480 (1986).

Goodenough, K. M., Millar, I., Strachan, R. A., Krabbendam, M. & Evans, J. A. Timing of regional deformation and development of the Moine Thrust Zone in the Scottish Caledonides: constraints from the U–Pb geochronology of alkaline intrusions. J. Geol. Soc. 168 , 99–114 (2011).

Stacey, J. S. & Kramers, J. D. Approximation of terrestrial lead isotope evolution by a two-stage model. Earth Planet. Sci. Lett. 26 , 207–221 (1975).

Evans, J. A. et al. Applying lead (Pb) isotopes to explore mobility in humans and animals. PLoS ONE 17 , e0274831 (2022).

Morton, A., Knox, R. & Frei, D. Heavy mineral and zircon age constraints on provenance of the Sherwood Sandstone Group (Triassic) in the eastern Wessex Basin, UK. Proc. Geol. Assoc. 127 , 514–526 (2016).

Morton, A., Hounslow, M. W. & Frei, D. Heavy-mineral, mineral-chemical and zircon-age constraints on the provenance of Triassic sandstones from the Devon coast, southern Britain. Geologos 19 , 67–85 (2013).

Phillips, E. R., Smith, R. A., Stone, P., Pashley, V. & Horstwood, M. Zircon age constraints on the provenance of Llandovery to Wenlock sandstones from the Midland Valley terrane of the Scottish Caledonides. Scott. J. Geol. 45 , 131–146 (2009).

McKellar, Z., Hartley, A. J., Morton, A. C. & Frei, D. A multidisciplinary approach to sediment provenance analysis of the late Silurian–Devonian Lower Old Red Sandstone succession, northern Midland Valley Basin, Scotland. J. Geol. Soc. 177 , 297–314 (2019).

Beranek, L. P., Gee, D. G. & Fisher, C. M. Detrital zircon U–Pb–Hf isotope signatures of Old Red Sandstone strata constrain the Silurian to Devonian paleogeography, tectonics, and crustal evolution of the Svalbard Caledonides. GSA Bull. 132 , 1987–2003 (2020).

John, B. The Stonehenge Bluestones (Greencroft Books, 2018).

John, B. The Stonehenge bluestones did not come from Waun Mawn in West Wales. The Holocene https://doi.org/10.1177/09596836241236318 (2024).

Clark, C. D. et al. Growth and retreat of the last British–Irish Ice Sheet, 31 000 to 15 000 years ago: the BRITICE-CHRONO reconstruction. Boreas 51 , 699–758 (2022).

Gibbard, P. L. & Clark, C. D. in Developments in Quaternary Sciences , Vol. 15 (eds Ehlers, J. et al.) 75–93 (Elsevier, 2011).

Bevins, R., Ixer, R., Pearce, N., Scourse, J. & Daw, T. Lithological description and provenancing of a collection of bluestones from excavations at Stonehenge by William Hawley in 1924 with implications for the human versus ice transport debate of the monument’s bluestone megaliths. Geoarchaeology 38 , 771–785 (2023).

Snoeck, C. et al. Strontium isotope analysis on cremated human remains from Stonehenge support links with west Wales. Sci. Rep. 8 , 10790 (2018).

Viner, S., Evans, J., Albarella, U. & Pearson, M. P. Cattle mobility in prehistoric Britain: strontium isotope analysis of cattle teeth from Durrington Walls (Wiltshire, Britain). J. Archaeolog. Sci. 37 , 2812–2820 (2010).

Evans, J. A., Chenery, C. A. & Fitzpatrick, A. P. Bronze Age childhood migration of individuals near Stonehenge, revealed by strontium and oxygen isotope tooth enamel analysis. Archaeometry 48 , 309–321 (2006).

Bradley, R. Beyond the bluestones: links between distant monuments in Late Neolithic Britain and Ireland. Antiquity 98 , 821–828 (2024).

Bradley, R. Long distance connections within Britain and Ireland: the evidence of insular rock art. Proc. Prehist. Soc. 89 , 249–271 (2023).

Fairweather, A. D. & Ralston, I. B. M. The Neolithic timber hall at Balbridie, Grampian Region, Scotland: the building, the date, the plant macrofossils. Antiquity 67 , 313–323 (1993).

Bayliss, A., Marshall, P., Richards, C. & Whittle, A. Islands of history: the Late Neolithic timescape of Orkney. Antiquity 91 , 1171–1188 (2017).

Parker Pearson, M. et al. in Megaliths and Geology (eds Bouventura, R. et al.) 151–169 (Archaeopress Publishing, 2020).

Pigière, F. & Smyth, J. First evidence for cattle traction in Middle Neolithic Ireland: A pivotal element for resource exploitation. PLoS ONE 18 , e0279556 (2023).

Godwin, H. History of the natural forests of Britain: establishment, dominance and destruction. Philos. Trans. R. Soc. B 271 , 47–67 (1975).

Martínková, N. et al. Divergent evolutionary processes associated with colonization of offshore islands. Mol. Ecol. 22 , 5205–5220 (2013).

Bradley, R. & Edmonds, M. Interpreting the Axe Trade: Production and Exchange in Neolithic Britain (Cambridge Univ. Press, 2005).

Peacock, D., Cutler, L. & Woodward, P. A Neolithic voyage. Int. J. Naut. Archaeol. 39 , 116–124 (2010).

Pinder, A. P., Panter, I., Abbott, G. D. & Keely, B. J. Deterioration of the Hanson Logboat: chemical and imaging assessment with removal of polyethylene glycol conserving agent. Sci. Rep. 7 , 13697 (2017).

Harff, J. et al. in Submerged Landscapes of the European Continental Shelf: Quaternary Paleoenvironments (eds Flemming, N. C. et al.) 11–49 (2017).

Nordsvan, A. R., Kirscher, U., Kirkland, C. L., Barham, M. & Brennan, D. T. Resampling (detrital) zircon age distributions for accurate multidimensional scaling solutions. Earth Sci. Rev. 204 , 103149 (2020).

Ixer, R., Bevins, R. & Turner, P. Alternative Altar Stones? Carbonate-cemented micaceous sandstones from the Stonehenge landscape. Wilts. Archaeol. Nat. Hist. Mag. 112 , 1–13 (2019).

Paton, C., Hellstrom, J. C., Paul, B., Woodhead, J. D. & Hergt, J. M. Iolite: freeware for the visualisation and processing of mass spectrometric data. J. Anal. At. Spectrom. 26 , 2508–2518 (2011).

Vermeesch, P. IsoplotR: a free and open toolbox for geochronology. Geosci. Front. 9 , 1479–1493 (2018).

Jackson, S. E., Pearson, N. J., Griffin, W. L. & Belousova, E. A. The application of laser ablation-inductively coupled plasma-mass spectrometry to in situ U–Pb zircon geochronology. Chem. Geol. 211 , 47–69 (2004).

Sláma, J. et al. Plešovice zircon—A new natural reference material for U–Pb and Hf isotopic microanalysis. Chem. Geol. 249 , 1–35 (2008).

Wiedenbeck, M. et al. Three natural zircon standards for U-Th-Pb, Lu–Hf, trace element and REE analyses. Geostand. Newslett. 19 , 1–23 (1995).

Stern, R. A., Bodorkos, S., Kamo, S. L., Hickman, A. H. & Corfu, F. Measurement of SIMS instrumental mass fractionation of Pb isotopes during zircon dating. Geostand. Geoanal. Res. 33 , 145–168 (2009).

Marsh, J. H., Jørgensen, T. R. C., Petrus, J. A., Hamilton, M. A. & Mole, D. R. U-Pb, trace element, and hafnium isotope composition of the Maniitsoq zircon: a potential new Archean zircon reference material. Goldschmidt Abstr. 2019 , 18 (2019).

Vermeesch, P. On the treatment of discordant detrital zircon U–Pb data. Geochronology 3 , 247–257 (2021).

Gehrels, G. in Tectonics of Sedimentary Basins: Recent Advances (eds Busby, C. & Azor, A.) 45–62 (2011).

Vermeesch, P. How many grains are needed for a provenance study? Earth Planet. Sci. Lett. 224 , 441–451 (2004).

Dröllner, M., Barham, M., Kirkland, C. L. & Ware, B. Every zircon deserves a date: selection bias in detrital geochronology. Geol. Mag. 158 , 1135–1142 (2021).

Zutterkirch, I. C., Kirkland, C. L., Barham, M. & Elders, C. Thin-section detrital zircon geochronology mitigates bias in provenance investigations. J. Geol. Soc. 179 , jgs2021–070 (2021).

Morton, A., Waters, C., Fanning, M., Chisholm, I. & Brettle, M. Origin of Carboniferous sandstones fringing the northern margin of the Wales-Brabant Massif: insights from detrital zircon ages. Geol. J. 50 , 553–574 (2015).

Luvizotto, G. et al. Rutile crystals as potential trace element and isotope mineral standards for microanalysis. Chem. Geol. 261 , 346–369 (2009).

Zack, T. et al. In situ U–Pb rutile dating by LA-ICP-MS: 208 Pb correction and prospects for geological applications. Contrib. Mineral. Petrol. 162 , 515–530 (2011).

Dröllner, M., Barham, M. & Kirkland, C. L. Reorganization of continent-scale sediment routing based on detrital zircon and rutile multi-proxy analysis. Basin Res. 35 , 363–386 (2023).

Liebmann, J., Barham, M. & Kirkland, C. L. Rutile ages and thermometry along a Grenville anorthosite pathway. Geochem. Geophys. Geosyst. 24 , e2022GC010330 (2023).

Zack, T. & Kooijman, E. Petrology and geochronology of rutile. Rev. Mineral. Geochem. 83 , 443–467 (2017).

Thompson, J. et al. Matrix effects in Pb/U measurements during LA-ICP-MS analysis of the mineral apatite. J. Anal. At. Spectrom. 31 , 1206–1215 (2016).

Schmitz, M. D., Bowring, S. A. & Ireland, T. R. Evaluation of Duluth Complex anorthositic series (AS3) zircon as a U–Pb geochronological standard: new high-precision isotope dilution thermal ionization mass spectrometry results. Geochim. Cosmochim. Acta 67 , 3665–3672 (2003).

Schoene, B. & Bowring, S. U–Pb systematics of the McClure Mountain syenite: thermochronological constraints on the age of the 40 Ar/ 39 Ar standard MMhb. Contrib. Mineral. Petrol. 151 , 615–630 (2006).

Thomson, S. N., Gehrels, G. E., Ruiz, J. & Buchwaldt, R. Routine low-damage apatite U–Pb dating using laser ablation-multicollector-ICPMS. Geochem. Geophys. Geosyst. 13 , https://doi.org/10.1029/2011GC003928 (2012).

Barfod, G. H., Krogstad, E. J., Frei, R. & Albarède, F. Lu–Hf and PbSL geochronology of apatites from Proterozoic terranes: a first look at Lu–Hf isotopic closure in metamorphic apatite. Geochim. Cosmochim. Acta 69 , 1847–1859 (2005).

McDowell, F. W., McIntosh, W. C. & Farley, K. A. A precise 40 Ar– 39 Ar reference age for the Durango apatite (U–Th)/He and fission-track dating standard. Chem. Geol. 214 , 249–263 (2005).

Kirkland, C. L. et al. Apatite: a U–Pb thermochronometer or geochronometer? Lithos 318-319 , 143–157 (2018).

Simpson, A. et al. In situ Lu–Hf geochronology of garnet, apatite and xenotime by LA-ICP-MS/MS. Chem. Geol. 577 , 120299 (2021).

Glorie, S. et al. Robust laser ablation Lu–Hf dating of apatite: an empirical evaluation. Geol. Soc. Lond. Spec. Publ. 537 , 165–184 (2024).

Norris, C. & Danyushevsky, L. Towards estimating the complete uncertainty budget of quantified results measured by LA-ICP-MS. Goldschmidt Abstr. 2018 , 1894 (2018).

Nebel, O., Morel, M. L. A. & Vroon, P. Z. Isotope dilution determinations of Lu, Hf, Zr, Ta and W, and Hf isotope compositions of NIST SRM 610 and 612 glass wafers. Geostand. Geoanal. Res. 33 , 487–499 (2009).

Kharkongor, M. B. K. et al. Apatite laser ablation Lu–Hf geochronology: a new tool to date mafic rocks. Chem. Geol. 636 , 121630 (2023).

Glorie, S. et al. Detrital apatite Lu–Hf and U–Pb geochronology applied to the southwestern Siberian margin. Terra Nova 34 , 201–209 (2022).

Spencer, C. J., Kirkland, C. L., Roberts, N. M. W., Evans, N. J. & Liebmann, J. Strategies towards robust interpretations of in situ zircon Lu–Hf isotope analyses. Geosci. Front. 11 , 843–853 (2020).

Jochum, K. P. et al. GeoReM: a new geochemical database for reference materials and isotopic standards. Geostand. Geoanal. Res. 29 , 333–338 (2005).

Janousek, V., Farrow, C. & Erban, V. Interpretation of whole-rock geochemical data in igneous geochemistry: introducing Geochemical Data Toolkit (GCDkit). J. Petrol. 47 , 1255–1259 (2006).

Boynton, W. V. in Developments in Geochemistry , Vol. 2 (ed. Henderson, P.) 63–114 (Elsevier, 1984).

Landing, E., Keppie, J. D., Keppie, D. F., Geyer, G. & Westrop, S. R. Greater Avalonia—latest Ediacaran–Ordovician “peribaltic” terrane bounded by continental margin prisms (“Ganderia”, Harlech Dome, Meguma): review, tectonic implications, and paleogeography. Earth Sci. Rev. 224 , 103863 (2022).

Acknowledgements

Funding was provided by an Australian Research Council Discovery Project (DP200101881). Sample material was loaned from the Salisbury Museum and Amgueddfa Cymru–Museum Wales and sampled with permission. The authors thank A. Green for assistance in accessing the Salisbury Museum material; B. McDonald, N. Evans, K. Rankenburg and S. Gilbert for their help during isotopic analysis; and P. Sampaio for assistance with statistical analysis. Instruments in the John de Laeter Centre, Curtin University, were funded via AuScope, the Australian Education Investment Fund, the National Collaborative Research Infrastructure Strategy, and the Australian Government. R.E.B. acknowledges a Leverhulme Trust Emeritus Fellowship.

Author information

Authors and affiliations

Timescales of Mineral Systems Group, School of Earth and Planetary Sciences, Curtin University, Perth, Western Australia, Australia

Anthony J. I. Clarke & Christopher L. Kirkland

Department of Geography and Earth Sciences, Aberystwyth University, Aberystwyth, UK

Richard E. Bevins & Nick J. G. Pearce

Department of Earth Sciences, The University of Adelaide, Adelaide, South Australia, Australia

Stijn Glorie

Institute of Archaeology, University College London, London, UK

Rob A. Ixer

Contributions

A.J.I.C.: writing, original draft, formal analysis, investigation, visualization, project administration, conceptualization and methodology. C.L.K.: supervision, resources, formal analysis, funding acquisition, writing, review and editing, conceptualization and methodology. R.E.B.: writing, review and editing, resources and conceptualization. N.J.G.P.: writing, review and editing, resources and conceptualization. S.G.: resources, formal analysis, funding acquisition, writing, review and editing, supervision, and methodology. R.A.I.: writing, review and editing.

Corresponding author

Correspondence to Anthony J. I. Clarke .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature thanks Tim Kinnaird and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Geological maps of potential source terranes for the Altar Stone.

a, Schematic map of the North Atlantic region with the crystalline terranes in the Caledonian-Variscan orogens depicted prior to the opening of the North Atlantic, adapted after ref. 95. b, Schematic map of Britain and Ireland, showing outcrops of Old Red Sandstone, basement terranes, and major faults with reference to Stonehenge.

Extended Data Fig. 2 Altar Stone zircon U–Pb data.

a, Tera-Wasserburg plot for all concordant (≤10% discordant) zircon analyses reported from three samples of the Altar Stone. Discordance is defined using the concordia log % distance approach, and analytical ellipses are shown at the two-sigma uncertainty level. The ellipse colour denotes the sample. Replotted isotopic data for thin-section FN593 is from ref. 1. b, Kernel density estimate for concordia U–Pb ages of concordant zircon from the Altar Stone, using a kernel and histogram bandwidth of 50 Ma. Fifty-six concordant analyses are shown from 113 measurements. A rug plot is given below the kernel density estimate, marking the age of each measurement.
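
A fixed-bandwidth kernel density estimate of this kind is straightforward to reproduce. The sketch below uses synthetic ages (not the measured data) and a 50 Ma Gaussian kernel, mirroring the bandwidth quoted in the caption:

```python
import numpy as np

def kde_fixed_bandwidth(ages, grid, bandwidth=50.0):
    """Gaussian kernel density estimate with a fixed bandwidth (Ma)."""
    ages = np.asarray(ages, dtype=float)[:, None]   # shape (n, 1)
    grid = np.asarray(grid, dtype=float)[None, :]   # shape (1, m)
    kernels = np.exp(-0.5 * ((grid - ages) / bandwidth) ** 2)
    kernels /= bandwidth * np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=0)  # average of per-grain kernels

# Synthetic bimodal age population standing in for the 56 concordant analyses
rng = np.random.default_rng(0)
ages = np.concatenate([rng.normal(1050, 40, 30), rng.normal(1750, 60, 26)])
grid = np.linspace(500, 2500, 2001)  # evaluation grid, 1 Ma spacing
density = kde_fixed_bandwidth(ages, grid)
```

The rug plot in the figure simply marks each entry of `ages` along the age axis; the estimated density integrates to one over the grid.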

Extended Data Fig. 3 Comparative kernel density estimates of concordant zircon concordia ages from the Altar Stone, crystalline source terranes, and comparative sedimentary rock successions.

Each plot uses a kernel and histogram bandwidth of 50 Ma. The zircon U–Pb geochronology source for each comparative dataset is shown with its respective kernel density estimate. Zircon age data for basement terranes (right side of the plot) were sourced from refs. 20, 26.

Extended Data Fig. 4 Plots of rutile U–Pb ages.

a, Tera-Wasserburg plot of rutile U–Pb analyses from the Altar Stone (thin-section MS3). Isotopic data is shown at the two-sigma uncertainty level. b, Kernel density estimate for Group 2 rutile 207Pb-corrected 206Pb/238U ages, using a kernel and histogram bandwidth of 25 Ma. The rug plot below the kernel density estimate marks the age for each measurement.
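
For context, a 206Pb/238U age follows from the 238U decay equation with the standard decay constant. The sketch below is illustrative only, uses a synthetic ratio, and omits the 207Pb-based common-Pb correction applied to the Group 2 rutile ages:

```python
import math

LAMBDA_238 = 1.55125e-10  # 238U decay constant, yr^-1 (Jaffey et al., 1971)

def pb206_u238_age_ma(ratio_206_238):
    """Age (Ma) from a radiogenic 206Pb/238U ratio."""
    return math.log(1.0 + ratio_206_238) / LAMBDA_238 / 1e6

def ratio_for_age_ma(age_ma):
    """Inverse relation: age (Ma) to radiogenic 206Pb/238U ratio."""
    return math.exp(LAMBDA_238 * age_ma * 1e6) - 1.0

# Round trip: forward-model a hypothetical 450 Ma grain, then recover its age
age = pb206_u238_age_ma(ratio_for_age_ma(450.0))
```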

Extended Data Fig. 5 Apatite Tera-Wasserburg U–Pb plots for the Altar Stone and Orcadian Basin.

a, Altar Stone apatite U–Pb analyses from thin-section MS3. b, Orcadian Basin apatite U–Pb analyses from sample AQ1, Spittal, Caithness. c, Orcadian Basin apatite U–Pb analyses from sample CQ1, Cruaday, Orkney. All data are shown as ellipses at the two-sigma uncertainty level. Regressions through U–Pb data are unanchored.

Extended Data Fig. 6 Combined kernel density estimate and histogram for apatite Lu–Hf single-grain ages from the Altar Stone.

Lu–Hf apparent ages from thin-section 2010K.240. Kernel and histogram bandwidth of 50 Ma. The rug plot below the kernel density estimate marks each calculated age. Single spot ages are calculated assuming an initial average terrestrial 177Hf/176Hf composition (see Methods).
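
A single-spot Lu–Hf age of this kind follows from the 176Lu decay equation once an initial Hf isotope composition is assumed. The sketch below uses the conventional 176Hf/177Hf notation, the commonly used 176Lu decay constant, and hypothetical values rather than the paper's data:

```python
import math

LAMBDA_LU176 = 1.867e-11  # 176Lu decay constant, yr^-1 (Soderlund et al., 2004)

def lu_hf_single_spot_age_ma(hf_measured, hf_initial, lu_hf):
    """Age (Ma) from measured 176Hf/177Hf, an assumed initial 176Hf/177Hf,
    and the measured 176Lu/177Hf of the same spot."""
    return math.log(1.0 + (hf_measured - hf_initial) / lu_hf) / LAMBDA_LU176 / 1e6

# Round trip with hypothetical values: forward-model a 470 Ma apatite, invert it
hf_initial, lu_hf, t_ma = 0.28278, 0.05, 470.0
hf_measured = hf_initial + lu_hf * (math.exp(LAMBDA_LU176 * t_ma * 1e6) - 1.0)
age = lu_hf_single_spot_age_ma(hf_measured, hf_initial, lu_hf)
```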

Extended Data Fig. 7 Apatite trace element classification plots for the Altar Stone thin-section MS3.

Colours for all plots follow the geochemical discrimination defined in a. a, Reference 29 classification plot for apatite with an inset pie chart depicting the compositional groupings based on these geochemical ratios. b, The principal component plot of geochemical data from apatite shows the main eigenvectors of geochemical dispersion, highlighting enhanced Nd and La in the distinguishing groups. Medians for each group are denoted with a cross. c, Plot of total rare earth elements (REE) (%) versus (Ce/Yb)n with Mahalanobis ellipses around compositional classification centroids. A P = 0.5 in Mahalanobis distance analysis represents a two-sided probability, indicating that 50% of the probability mass of the chi-squared distribution for that compositional grouping is contained within the ellipse. This probability is calculated based on the cumulative distribution function of the chi-squared distribution. d, Chondrite-normalized REE plot of median apatite values for each defined apatite classification type.
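
The P = 0.5 ellipse described in c corresponds to a squared-Mahalanobis-distance threshold taken from the chi-squared distribution with two degrees of freedom. A minimal sketch with synthetic two-dimensional compositions (not the apatite data):

```python
import numpy as np

# Synthetic bivariate-normal "compositions" standing in for one apatite group
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[5.0, 1.0], cov=[[1.0, 0.3], [0.3, 0.5]], size=5000)

mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', X - mu, cov_inv, X - mu)  # squared Mahalanobis distances

# For 2 degrees of freedom the chi-squared CDF is 1 - exp(-x/2), so the
# P = 0.5 quantile has the closed form -2 ln(1 - 0.5) = 2 ln 2
threshold = -2.0 * np.log(1.0 - 0.5)
frac_inside = np.mean(d2 <= threshold)  # close to 0.5 for near-normal data
```

Points with squared distance below the threshold fall inside the ellipse, so roughly half of each group's probability mass is enclosed, as the caption states.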

Extended Data Fig. 8 Cumulative probability density function plot.

Cumulative probability density function plot of comparative Old Red Sandstone detrital zircon U–Pb datasets (concordant ages) versus the Altar Stone. Proximity between cumulative density probability lines implies similar detrital zircon age populations.
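
Such a comparison rests on empirical cumulative distribution functions; the maximum vertical gap between two ECDFs is the two-sample Kolmogorov-Smirnov D statistic. A sketch with synthetic detrital ages (not the measured datasets):

```python
import numpy as np

def ecdf(ages):
    """Sorted ages and cumulative probabilities for plotting an ECDF."""
    x = np.sort(np.asarray(ages, dtype=float))
    return x, np.arange(1, x.size + 1) / x.size

def max_ecdf_gap(a, b):
    """Largest vertical separation between two ECDFs (the KS D statistic)."""
    xs = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), xs, side='right') / len(a)
    fb = np.searchsorted(np.sort(b), xs, side='right') / len(b)
    return float(np.max(np.abs(fa - fb)))

rng = np.random.default_rng(2)
altar_like = rng.normal(1050, 60, 50)   # synthetic "Altar Stone" ages
similar = rng.normal(1050, 60, 80)      # similar population: curves track closely
dissimilar = rng.normal(1700, 60, 80)   # different population: curves diverge
gap_similar = max_ecdf_gap(altar_like, similar)
gap_dissimilar = max_ecdf_gap(altar_like, dissimilar)
```

Small gaps between the cumulative curves imply similar detrital age populations; gaps near one imply disjoint populations.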

Supplementary information

Supplementary Information 1

Zircon, rutile, and apatite U–Pb data for the Altar Stone and Orcadian Basin samples. A) Zircon U–Pb data for MS3, 2010K.240, and FN593. B) Zircon U–Pb data for secondary references. C) Rutile U–Pb data for MS3. D) Rutile U–Pb data for secondary references. E) Session 1 apatite U–Pb data for MS3. F) Session 1 apatite U–Pb data for secondary references. G) Session 2 apatite U–Pb data for Orcadian Basin samples. H) Session 2 apatite U–Pb data for secondary references.

Reporting Summary

Peer Review File

Supplementary Information 2

Apatite Lu–Hf data for the Altar Stone. A) Apatite Lu–Hf isotopic data and ages for thin-section 2010K.240. B) Apatite Lu–Hf data for secondary references.

Supplementary Information 3

Apatite trace elements for the Altar Stone. A) Apatite trace element data for MS3. B) Apatite trace element secondary reference values.

Supplementary Information 4–8

Supplementary Information 4: Summary of analyses. Summary table of analyses undertaken in this work on samples from the Altar Stone and the Orcadian Basin. Supplementary Information 5: Summary of zircon U–Pb reference material. Summary table of analyses obtained for zircon U–Pb secondary reference material run during this work. Supplementary Information 6: Kolmogorov–Smirnov test results. Table of D and P values for the Kolmogorov–Smirnov test on zircon ages from the Altar Stone and potential source regions. Supplementary Information 7: Kolmogorov–Smirnov test results, with Monte Carlo resampling. Table of D and P values for the Kolmogorov–Smirnov test (with Monte Carlo resampling) on zircon ages from the Altar Stone and potential source regions. Supplementary Information 8: Summary of apatite U–Pb reference material. Summary table of analyses obtained for the apatite U–Pb secondary reference material run during this work.
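
A Kolmogorov-Smirnov test with Monte Carlo resampling of the kind tabulated in Supplementary Information 7 can be approximated in a few lines. This sketch uses a permutation-style resampling of the pooled ages on synthetic data; the paper's exact resampling scheme may differ:

```python
import numpy as np

def ks_d(a, b):
    """Two-sample Kolmogorov-Smirnov D statistic."""
    xs = np.sort(np.concatenate([a, b]))
    fa = np.searchsorted(np.sort(a), xs, side='right') / len(a)
    fb = np.searchsorted(np.sort(b), xs, side='right') / len(b)
    return float(np.max(np.abs(fa - fb)))

def ks_monte_carlo(a, b, n_iter=2000, seed=0):
    """Observed D plus a Monte Carlo P value from pooled-age permutations."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    d_obs = ks_d(a, b)
    exceed = 0
    for _ in range(n_iter):
        perm = rng.permutation(pooled)
        if ks_d(perm[:len(a)], perm[len(a):]) >= d_obs:
            exceed += 1
    return d_obs, (exceed + 1) / (n_iter + 1)

rng = np.random.default_rng(3)
a = rng.normal(1050, 60, 60)       # synthetic "Altar Stone" ages
b_same = rng.normal(1050, 60, 60)  # plausible source region
b_diff = rng.normal(1400, 60, 60)  # implausible source region
d_same, p_same = ks_monte_carlo(a, b_same)
d_diff, p_diff = ks_monte_carlo(a, b_diff)
```

A low P value rejects the hypothesis that the two age sets were drawn from the same population, which is how the D and P tables in Supplementary Information 6 and 7 are read.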

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Clarke, A.J.I., Kirkland, C.L., Bevins, R.E. et al. A Scottish provenance for the Altar Stone of Stonehenge. Nature 632, 570–575 (2024). https://doi.org/10.1038/s41586-024-07652-1

Received: 16 December 2023

Accepted: 03 June 2024

Published: 14 August 2024

Issue Date: 15 August 2024

DOI: https://doi.org/10.1038/s41586-024-07652-1

