Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

  • What Is a Research Design | Types, Guide & Examples

What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes . Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your   research question  using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

Step 1: consider your aims and approach, step 2: choose a type of research design, step 3: identify your population and sampling method, step 4: choose your data collection methods, step 5: plan your data collection procedures, step 6: decide on your data analysis strategies, other interesting articles, frequently asked questions about research design.

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach Quantitative approach
and describe frequencies, averages, and correlations about relationships between variables

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Here's why students love Scribbr's proofreading services

Discover proofreading & editing

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
Type of design Purpose and characteristics
Experimental relationships effect on a
Quasi-experimental )
Correlational
Descriptive

With descriptive and correlational designs, you can get a clear picture of characteristics, trends and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

Type of design Purpose and characteristics
Grounded theory
Phenomenology

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

  • Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalize your results to the population as a whole.

Probability sampling Non-probability sampling

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study , your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question .

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews .

Questionnaires Interviews
)

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Quantitative observation

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Prevent plagiarism. Run a free check.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

Reliability Validity
) )

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method , you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method , it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method , how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability ).

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved August 9, 2024, from https://www.scribbr.com/methodology/research-design/

Is this article helpful?

Shona McCombes

Shona McCombes

Other students also liked, guide to experimental design | overview, steps, & examples, how to write a research proposal | examples & templates, ethical considerations in research | types & examples, get unlimited documents corrected.

✔ Free APA citation check included ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology

Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

Step 1: consider your aims and approach, step 2: choose a type of research design, step 3: identify your population and sampling method, step 4: choose your data collection methods, step 5: plan your data collection procedures, step 6: decide on your data analysis strategies, frequently asked questions.

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach Quantitative approach

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Prevent plagiarism, run a free check.

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

Type of design Purpose and characteristics
Experimental
Quasi-experimental
Correlational
Descriptive

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Type of design Purpose and characteristics
Grounded theory
Phenomenology

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling Non-probability sampling

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Questionnaires Interviews

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Quantitative observation

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

Reliability Validity

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 5 August 2024, from https://www.scribbr.co.uk/research-methods/research-design/

Is this article helpful?

Shona McCombes

Shona McCombes

research design and framework

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

What is research design.

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as helps ensure that your approach is suitable  given your research aims, objectives and questions , as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of potentially making misaligned choices in terms of your methodology – especially your sampling , data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Free Webinar: Research Methodology 101

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions , and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which will look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…

Need a helping hand?

research design and framework

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables . With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling others (dependent variables). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

research design and framework

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variation s, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design , multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design , a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well generally will typically earn better marks than a highly-complex methodology executed poorly.

research design and framework

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive , correlational , experimental and quasi-experimenta l designs.
  • Research designs for qualitative studies include phenomenological , grounded theory , ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services .

research design and framework

Psst... there’s more!

This post was based on one of our popular Research Bootcamps . If you're working on a research project, you'll definitely want to check this out ...

10 Comments

Wei Leong YONG

Is there any blog article explaining more on Case study research design? Is there a Case study write-up template? Thank you.

Solly Khan

Thanks this was quite valuable to clarify such an important concept.

hetty

Thanks for this simplified explanations. it is quite very helpful.

Belz

This was really helpful. thanks

Imur

Thank you for your explanation. I think case study research design and the use of secondary data in researches needs to be talked about more in your videos and articles because there a lot of case studies research design tailored projects out there.

Please is there any template for a case study research design whose data type is a secondary data on your repository?

Sam Msongole

This post is very clear, comprehensive and has been very helpful to me. It has cleared the confusion I had in regard to research design and methodology.

Robyn Pritchard

This post is helpful, easy to understand, and deconstructs what a research design is. Thanks

kelebogile

how to cite this page

Peter

Thank you very much for the post. It is wonderful and has cleared many worries in my mind regarding research designs. I really appreciate .

ali

how can I put this blog as my reference(APA style) in bibliography part?

Submit a Comment Cancel reply

Your email address will not be published. Required fields are marked *

Save my name, email, and website in this browser for the next time I comment.

  • Print Friendly

John Latham

Research Design

Whether you are working to solve a unique organizational issue or contributing to theory, the research design framework will help you align the “DNA” of your study to deliver the insights you need. The research framework includes nine components with clear linkages. Each of the nine components links to the previous and subsequent components and the conceptual framework. The components form two groups – the “T” or the foundation of the problem, purpose, research questions, and conceptual framework, and the “U” or methodology, including the literature review, overall approach, data collection, data analysis, and drawing conclusions.

research design and framework

“I use terms like ‘canvas’ and ‘design’ because research requires both analytical and creative knowledge, skills, and abilities. There is no one best way to conduct research, and the answer to ALL research methods questions is, ‘it depends.’” ( Latham, 2022 ).

While this framework provides structure to facilitate the development of an aligned and internally consistent research design, developing a good research design is an iterative and often “messy” process. Download the free eBook for a more detailed description of the Research Design Framework.

Research Agenda

My research agenda focuses on two interrelated areas of leading transformation and organization design. Shaped by my professional interests and practice, research literature, and collaboration with other researchers, research projects provide insights to improve organizational designs and the design process. The three articles below provide additional insights into my research interests.

Selected Publications

Latham, J. R. (2008). Building Bridges between Researchers and Practitioners: A Collaborative Approach to Research in Performance Excellence . Quality Management Journal, 15 (1), 20.

Evans, J. R., & QMJ Editorial Board. (2013). Insights on the Future of Quality Management Research . Quality Management Journal, 20 (1), 8.

Latham, J. R. (2014). Leadership for Quality and Innovation: Challenges, Theories, and a Framework for Future Research [Perspectives Paper]. Quality Management Journal, 21 (1). 5.

Research Canvas Book

The Research Canvas is a FREE ebook about the “art” and “science” of research design. It is a “how-to” guide for getting the “DNA” of your research study designed and aligned before writing more detailed descriptions of the methodology. The book emerged from my experience in doing my research over the past several years and helping other researchers learn the “craft” of research. The book addresses the nine components of the research design framework. | Download Here

research design and framework

  • Already have a WordPress.com account? Log in now.
  • Subscribe Subscribed
  • Copy shortlink
  • Report this content
  • View post in Reader
  • Manage subscriptions
  • Collapse this bar

Educational resources and simple solutions for your research journey

What is research design? Types, elements, and examples

What is Research Design? Understand Types of Research Design, with Examples

Have you been wondering “ what is research design ?” or “what are some research design examples ?” Are you unsure about the research design elements or which of the different types of research design best suit your study? Don’t worry! In this article, we’ve got you covered!   

Table of Contents

What is research design?  

Have you been wondering “ what is research design ?” or “what are some research design examples ?” Don’t worry! In this article, we’ve got you covered!  

A research design is the plan or framework used to conduct a research study. It involves outlining the overall approach and methods that will be used to collect and analyze data in order to answer research questions or test hypotheses. A well-designed research study should have a clear and well-defined research question, a detailed plan for collecting data, and a method for analyzing and interpreting the results. A well-thought-out research design addresses all these features.  

Research design elements  

Research design elements include the following:  

  • Clear purpose: The research question or hypothesis must be clearly defined and focused.  
  • Sampling: This includes decisions about sample size, sampling method, and criteria for inclusion or exclusion. The approach varies for different research design types .  
  • Data collection: This research design element involves the process of gathering data or information from the study participants or sources. It includes decisions about what data to collect, how to collect it, and the tools or instruments that will be used.  
  • Data analysis: All research design types require analysis and interpretation of the data collected. This research design element includes decisions about the statistical tests or methods that will be used to analyze the data, as well as any potential confounding variables or biases that may need to be addressed.  
  • Type of research methodology: This includes decisions about the overall approach for the study.  
  • Time frame: An important research design element is the time frame, which includes decisions about the duration of the study, the timeline for data collection and analysis, and follow-up periods.  
  • Ethical considerations: The research design must include decisions about ethical considerations such as informed consent, confidentiality, and participant protection.  
  • Resources: A good research design takes into account decisions about the budget, staffing, and other resources needed to carry out the study.  

The elements of research design should be carefully planned and executed to ensure the validity and reliability of the study findings. Let’s go deeper into the concepts of research design .    

research design and framework

Characteristics of research design  

Some basic characteristics of research design are common to different research design types . These characteristics of research design are as follows:  

  • Neutrality : Right from the study assumptions to setting up the study, a neutral stance must be maintained, free of pre-conceived notions. The researcher’s expectations or beliefs should not color the findings or interpretation of the findings. Accordingly, a good research design should address potential sources of bias and confounding factors to be able to yield unbiased and neutral results.   
  •   Reliability : Reliability is one of the characteristics of research design that refers to consistency in measurement over repeated measures and fewer random errors. A reliable research design must allow for results to be consistent, with few errors due to chance.   
  •   Validity : Validity refers to the minimization of nonrandom (systematic) errors. A good research design must employ measurement tools that ensure validity of the results.  
  •   Generalizability: The outcome of the research design should be applicable to a larger population and not just a small sample . A generalized method means the study can be conducted on any part of a population with similar accuracy.   
  •   Flexibility: A research design should allow for changes to be made to the research plan as needed, based on the data collected and the outcomes of the study  

A well-planned research design is critical for conducting a scientifically rigorous study that will generate neutral, reliable, valid, and generalizable results. At the same time, it should allow some level of flexibility.  

Different types of research design  

A research design is essential to systematically investigate, understand, and interpret phenomena of interest. Let’s look at different types of research design and research design examples .  

Broadly, research design types can be divided into qualitative and quantitative research.  

Qualitative research is subjective and exploratory. It determines relationships between collected data and observations. It is usually carried out through interviews with open-ended questions, observations that are described in words, etc.  

Quantitative research is objective and employs statistical approaches. It establishes the cause-and-effect relationship among variables using different statistical and computational methods. This type of research is usually done using surveys and experiments.  

Qualitative research vs. Quantitative research  

   
Deals with subjective aspects, e.g., experiences, beliefs, perspectives, and concepts.  Measures different types of variables and describes frequencies, averages, correlations, etc. 
Deals with non-numerical data, such as words, images, and observations.  Tests hypotheses about relationships between variables. Results are presented numerically and statistically. 
In qualitative research design, data are collected via direct observations, interviews, focus groups, and naturally occurring data. Methods for conducting qualitative research are grounded theory, thematic analysis, and discourse analysis. 

 

Quantitative research design is empirical. Data collection methods involved are experiments, surveys, and observations expressed in numbers. The research design categories under this are descriptive, experimental, correlational, diagnostic, and explanatory. 
Data analysis involves interpretation and narrative analysis.  Data analysis involves statistical analysis and hypothesis testing. 
The reasoning used to synthesize data is inductive. 

 

The reasoning used to synthesize data is deductive. 

 

Typically used in fields such as sociology, linguistics, and anthropology.  Typically used in fields such as economics, ecology, statistics, and medicine. 
Example: Focus group discussions with women farmers about climate change perception. 

 

Example: Testing the effectiveness of a new treatment for insomnia. 

Qualitative research design types and qualitative research design examples  

The following will familiarize you with the research design categories in qualitative research:  

  • Grounded theory: This design is used to investigate research questions that have not previously been studied in depth. Also referred to as exploratory design , it creates sequential guidelines, offers strategies for inquiry, and makes data collection and analysis more efficient in qualitative research.   

Example: A researcher wants to study how people adopt a certain app. The researcher collects data through interviews and then analyzes the data to look for patterns. These patterns are used to develop a theory about how people adopt that app.  

  •   Thematic analysis: This design is used to compare the data collected in past research to find similar themes in qualitative research.  

Example: A researcher examines an interview transcript to identify common themes, say, topics or patterns emerging repeatedly.  

  • Discourse analysis : This research design deals with language or social contexts used in data gathering in qualitative research.   

Example: Identifying ideological frameworks and viewpoints of writers of a series of policies.  

Quantitative research design types and quantitative research design examples  

Note the following research design categories in quantitative research:  

  • Descriptive research design : This quantitative research design is applied where the aim is to identify characteristics, frequencies, trends, and categories. It may not often begin with a hypothesis. The basis of this research type is a description of an identified variable. This research design type describes the “what,” “when,” “where,” or “how” of phenomena (but not the “why”).   

Example: A study on the different income levels of people who use nutritional supplements regularly.  

  • Correlational research design : Correlation reflects the strength and/or direction of the relationship among variables. The direction of a correlation can be positive or negative. Correlational research design helps researchers establish a relationship between two variables without the researcher controlling any of them.  

Example : An example of correlational research design could be studying the correlation between time spent watching crime shows and aggressive behavior in teenagers.  

  •   Diagnostic research design : In diagnostic design, the researcher aims to understand the underlying cause of a specific topic or phenomenon (usually an area of improvement) and find the most effective solution. In simpler terms, a researcher seeks an accurate “diagnosis” of a problem and identifies a solution.  

Example : A researcher analyzing customer feedback and reviews to identify areas where an app can be improved.    

  • Explanatory research design : In explanatory research design , a researcher uses their ideas and thoughts on a topic to explore their theories in more depth. This design is used to explore a phenomenon when limited information is available. It can help increase current understanding of unexplored aspects of a subject. It is thus a kind of “starting point” for future research.  

Example : Formulating hypotheses to guide future studies on delaying school start times for better mental health in teenagers.  

  •   Causal research design : This can be considered a type of explanatory research. Causal research design seeks to define a cause and effect in its data. The researcher does not use a randomly chosen control group but naturally or pre-existing groupings. Importantly, the researcher does not manipulate the independent variable.   

Example : Comparing school dropout levels and possible bullying events.  

  •   Experimental research design : This research design is used to study causal relationships . One or more independent variables are manipulated, and their effect on one or more dependent variables is measured.  

Example: Determining the efficacy of a new vaccine plan for influenza.  

Benefits of research design  

 T here are numerous benefits of research design . These are as follows:  

  • Clear direction: Among the benefits of research design , the main one is providing direction to the research and guiding the choice of clear objectives, which help the researcher to focus on the specific research questions or hypotheses they want to investigate.  
  • Control: Through a proper research design , researchers can control variables, identify potential confounding factors, and use randomization to minimize bias and increase the reliability of their findings.
  • Replication: Research designs provide the opportunity for replication. This helps to confirm the findings of a study and ensures that the results are not due to chance or other factors. Thus, a well-chosen research design also eliminates bias and errors.  
  • Validity: A research design ensures the validity of the research, i.e., whether the results truly reflect the phenomenon being investigated.  
  • Reliability: Benefits of research design also include reducing inaccuracies and ensuring the reliability of the research (i.e., consistency of the research results over time, across different samples, and under different conditions).  
  • Efficiency: A strong research design helps increase the efficiency of the research process. Researchers can use a variety of designs to investigate their research questions, choose the most appropriate research design for their study, and use statistical analysis to make the most of their data. By effectively describing the data necessary for an adequate test of the hypotheses and explaining how such data will be obtained, research design saves a researcher’s time.   

Overall, an appropriately chosen and executed research design helps researchers to conduct high-quality research, draw meaningful conclusions, and contribute to the advancement of knowledge in their field.

research design and framework

Frequently Asked Questions (FAQ) on Research Design

Q: What are th e main types of research design?

Broadly speaking there are two basic types of research design –

qualitative and quantitative research. Qualitative research is subjective and exploratory; it determines relationships between collected data and observations. It is usually carried out through interviews with open-ended questions, observations that are described in words, etc. Quantitative research , on the other hand, is more objective and employs statistical approaches. It establishes the cause-and-effect relationship among variables using different statistical and computational methods. This type of research design is usually done using surveys and experiments.

Q: How do I choose the appropriate research design for my study?

Choosing the appropriate research design for your study requires careful consideration of various factors. Start by clarifying your research objectives and the type of data you need to collect. Determine whether your study is exploratory, descriptive, or experimental in nature. Consider the availability of resources, time constraints, and the feasibility of implementing the different research designs. Review existing literature to identify similar studies and their research designs, which can serve as a guide. Ultimately, the chosen research design should align with your research questions, provide the necessary data to answer them, and be feasible given your own specific requirements/constraints.

Q: Can research design be modified during the course of a study?

Yes, research design can be modified during the course of a study based on emerging insights, practical constraints, or unforeseen circumstances. Research is an iterative process and, as new data is collected and analyzed, it may become necessary to adjust or refine the research design. However, any modifications should be made judiciously and with careful consideration of their impact on the study’s integrity and validity. It is advisable to document any changes made to the research design, along with a clear rationale for the modifications, in order to maintain transparency and allow for proper interpretation of the results.

Q: How can I ensure the validity and reliability of my research design?

Validity refers to the accuracy and meaningfulness of your study’s findings, while reliability relates to the consistency and stability of the measurements or observations. To enhance validity, carefully define your research variables, use established measurement scales or protocols, and collect data through appropriate methods. Consider conducting a pilot study to identify and address any potential issues before full implementation. To enhance reliability, use standardized procedures, conduct inter-rater or test-retest reliability checks, and employ appropriate statistical techniques for data analysis. It is also essential to document and report your methodology clearly, allowing for replication and scrutiny by other researchers.

Editage All Access is a subscription-based platform that unifies the best AI tools and services designed to speed up, simplify, and streamline every step of a researcher’s journey. The Editage All Access Pack is a one-of-a-kind subscription that unlocks full access to an AI writing assistant, literature recommender, journal finder, scientific illustration tool, and exclusive discounts on professional publication services from Editage.  

Based on 22+ years of experience in academia, Editage All Access empowers researchers to put their best research forward and move closer to success. Explore our top AI Tools pack, AI Tools + Publication Services pack, or Build Your Own Plan. Find everything a researcher needs to succeed, all in one place –  Get All Access now starting at just $14 a month !    

Related Posts

IMRAD format

What is IMRaD Format in Research?

ChatGPT for research

How to Use ChatGPT for Research?

Open Access is an initiative that aims to make scientific research freely available to all. To date our community has made over 100 million downloads. It’s based on principles of collaboration, unobstructed discovery, and, most importantly, scientific progression. As PhD students, we found it difficult to access the research we needed, so we decided to create a new Open Access publisher that levels the playing field for scientists across the world. How? By making research easy to access, and puts the academic needs of the researchers before the business interests of publishers.

We are a community of more than 103,000 authors and editors from 3,291 institutions spanning 160 countries, including Nobel Prize winners and some of the world’s most-cited researchers. Publishing on IntechOpen allows authors to earn citations and find new collaborators, meaning more people see your work not only from your own field of study, but from other related fields too.

Brief introduction to this section that descibes Open Access especially from an IntechOpen perspective

Want to get in touch? Contact our London head office or media team here

Our team is growing all the time, so we’re always on the lookout for smart people who want to help us reshape the world of scientific publishing.

Home > Books > Cyberspace

Research Design and Methodology

Submitted: 23 January 2019 Reviewed: 08 March 2019 Published: 07 August 2019

DOI: 10.5772/intechopen.85731

Cite this chapter

There are two ways to cite this chapter:

From the Edited Volume

Edited by Evon Abu-Taieh, Abdelkrim El Mouatasim and Issam H. Al Hadid

To purchase hard copies of this book, please contact the representative in India: CBS Publishers & Distributors Pvt. Ltd. www.cbspd.com | [email protected]

Chapter metrics overview

31,849 Chapter Downloads

Impact of this chapter

Total Chapter Downloads on intechopen.com

IntechOpen

Total Chapter Views on intechopen.com

Overall attention for this chapters

There are a number of approaches used in this research method design. The purpose of this chapter is to design the methodology of the research approach through mixed types of research techniques. The research approach also supports the researcher on how to come across the research result findings. In this chapter, the general design of the research and the methods used for data collection are explained in detail. It includes three main parts. The first part gives a highlight about the dissertation design. The second part discusses about qualitative and quantitative data collection methods. The last part illustrates the general research framework. The purpose of this section is to indicate how the research was conducted throughout the study periods.

  • research design
  • methodology
  • data sources

Author Information

Kassu jilcha sileyew *.

  • School of Mechanical and Industrial Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa, Ethiopia

*Address all correspondence to: [email protected]

1. Introduction

Research methodology is the path through which researchers need to conduct their research. It shows the path through which these researchers formulate their problem and objective and present their result from the data obtained during the study period. This research design and methodology chapter also shows how the research outcome at the end will be obtained in line with meeting the objective of the study. This chapter hence discusses the research methods that were used during the research process. It includes the research methodology of the study from the research strategy to the result dissemination. For emphasis, in this chapter, the author outlines the research strategy, research design, research methodology, the study area, data sources such as primary data sources and secondary data, population consideration and sample size determination such as questionnaires sample size determination and workplace site exposure measurement sample determination, data collection methods like primary data collection methods including workplace site observation data collection and data collection through desk review, data collection through questionnaires, data obtained from experts opinion, workplace site exposure measurement, data collection tools pretest, secondary data collection methods, methods of data analysis used such as quantitative data analysis and qualitative data analysis, data analysis software, the reliability and validity analysis of the quantitative data, reliability of data, reliability analysis, validity, data quality management, inclusion criteria, ethical consideration and dissemination of result and its utilization approaches. In order to satisfy the objectives of the study, a qualitative and quantitative research method is apprehended in general. The study used these mixed strategies because the data were obtained from all aspects of the data source during the study time. Therefore, the purpose of this methodology is to satisfy the research plan and target devised by the researcher.

2. Research design

The research design is intended to provide an appropriate framework for a study. A very significant decision in research design process is the choice to be made regarding research approach since it determines how relevant information for a study will be obtained; however, the research design process involves many interrelated decisions [ 1 ].

This study employed a mixed type of methods. The first part of the study consisted of a series of well-structured questionnaires (for management, employee’s representatives, and technician of industries) and semi-structured interviews with key stakeholders (government bodies, ministries, and industries) in participating organizations. The other design used is an interview of employees to know how they feel about safety and health of their workplace, and field observation at the selected industrial sites was undertaken.

Hence, this study employs a descriptive research design to agree on the effects of occupational safety and health management system on employee health, safety, and property damage for selected manufacturing industries. Saunders et al. [ 2 ] and Miller [ 3 ] say that descriptive research portrays an accurate profile of persons, events, or situations. This design offers to the researchers a profile of described relevant aspects of the phenomena of interest from an individual, organizational, and industry-oriented perspective. Therefore, this research design enabled the researchers to gather data from a wide range of respondents on the impact of safety and health on manufacturing industries in Ethiopia. And this helped in analyzing the response obtained on how it affects the manufacturing industries’ workplace safety and health. The research overall design and flow process are depicted in Figure 1 .

research design and framework

Research methods and processes (author design).

3. Research methodology

To address the key research objectives, this research used both qualitative and quantitative methods and combination of primary and secondary sources. The qualitative data supports the quantitative data analysis and results. The result obtained is triangulated since the researcher utilized the qualitative and quantitative data types in the data analysis. The study area, data sources, and sampling techniques were discussed under this section.

3.1 The study area

According to Fraenkel and Warren [ 4 ] studies, population refers to the complete set of individuals (subjects or events) having common characteristics in which the researcher is interested. The population of the study was determined based on random sampling system. This data collection was conducted from March 07, 2015 to December 10, 2016, from selected manufacturing industries found in Addis Ababa city and around. The manufacturing companies were selected based on their employee number, established year, and the potential accidents prevailing and the manufacturing industry type even though all criterions were difficult to satisfy.

3.2 Data sources

3.2.1 primary data sources.

It was obtained from the original source of information. The primary data were more reliable and have more confidence level of decision-making with the trusted analysis having direct intact with occurrence of the events. The primary data sources are industries’ working environment (through observation, pictures, and photograph) and industry employees (management and bottom workers) (interview, questionnaires and discussions).

3.2.2 Secondary data

Desk review has been conducted to collect data from various secondary sources. This includes reports and project documents at each manufacturing sectors (more on medium and large level). Secondary data sources have been obtained from literatures regarding OSH, and the remaining data were from the companies’ manuals, reports, and some management documents which were included under the desk review. Reputable journals, books, different articles, periodicals, proceedings, magazines, newsletters, newspapers, websites, and other sources were considered on the manufacturing industrial sectors. The data also obtained from the existing working documents, manuals, procedures, reports, statistical data, policies, regulations, and standards were taken into account for the review.

In general, for this research study, the desk review has been completed to this end, and it had been polished and modified upon manuals and documents obtained from the selected companies.

4. Population and sample size

4.1 population.

The study population consisted of manufacturing industries’ employees in Addis Ababa city and around as there are more representative manufacturing industrial clusters found. To select representative manufacturing industrial sector population, the types of the industries expected were more potential to accidents based on random and purposive sampling considered. The population of data was from textile, leather, metal, chemicals, and food manufacturing industries. A total of 189 sample sizes of industries responded to the questionnaire survey from the priority areas of the government. Random sample sizes and disproportionate methods were used, and 80 from wood, metal, and iron works; 30 from food, beverage, and tobacco products; 50 from leather, textile, and garments; 20 from chemical and chemical products; and 9 from other remaining 9 clusters of manufacturing industries responded.

4.2 Questionnaire sample size determination

A simple random sampling and purposive sampling methods were used to select the representative manufacturing industries and respondents for the study. The simple random sampling ensures that each member of the population has an equal chance for the selection or the chance of getting a response which can be more than equal to the chance depending on the data analysis justification. Sample size determination procedure was used to get optimum and reasonable information. In this study, both probability (simple random sampling) and nonprobability (convenience, quota, purposive, and judgmental) sampling methods were used as the nature of the industries are varied. This is because of the characteristics of data sources which permitted the researchers to follow the multi-methods. This helps the analysis to triangulate the data obtained and increase the reliability of the research outcome and its decision. The companies’ establishment time and its engagement in operation, the number of employees and the proportion it has, the owner types (government and private), type of manufacturing industry/production, types of resource used at work, and the location it is found in the city and around were some of the criteria for the selections.

The determination of the sample size was adopted from Daniel [ 5 ] and Cochran [ 6 ] formula. The formula used was for unknown population size Eq. (1) and is given as

research design and framework

where n  = sample size, Z  = statistic for a level of confidence, P  = expected prevalence or proportion (in proportion of one; if 50%, P  = 0.5), and d  = precision (in proportion of one; if 6%, d  = 0.06). Z statistic ( Z ): for the level of confidence of 95%, which is conventional, Z value is 1.96. In this study, investigators present their results with 95% confidence intervals (CI).

The expected sample number was 267 at the marginal error of 6% for 95% confidence interval of manufacturing industries. However, the collected data indicated that only 189 populations were used for the analysis after rejecting some data having more missing values in the responses from the industries. Hence, the actual data collection resulted in 71% response rate. The 267 population were assumed to be satisfactory and representative for the data analysis.

4.3 Workplace site exposure measurement sample determination

The sample size for the experimental exposure measurements of physical work environment has been considered based on the physical data prepared for questionnaires and respondents. The response of positive were considered for exposure measurement factors to be considered for the physical environment health and disease causing such as noise intensity, light intensity, pressure/stress, vibration, temperature/coldness, or hotness and dust particles on 20 workplace sites. The selection method was using random sampling in line with purposive method. The measurement of the exposure factors was done in collaboration with Addis Ababa city Administration and Oromia Bureau of Labour and Social Affair (AACBOLSA). Some measuring instruments were obtained from the Addis Ababa city and Oromia Bureau of Labour and Social Affair.

5. Data collection methods

Data collection methods were focused on the followings basic techniques. These included secondary and primary data collections focusing on both qualitative and quantitative data as defined in the previous section. The data collection mechanisms are devised and prepared with their proper procedures.

5.1 Primary data collection methods

Primary data sources are qualitative and quantitative. The qualitative sources are field observation, interview, and informal discussions, while that of quantitative data sources are survey questionnaires and interview questions. The next sections elaborate how the data were obtained from the primary sources.

5.1.1 Workplace site observation data collection

Observation is an important aspect of science. Observation is tightly connected to data collection, and there are different sources for this: documentation, archival records, interviews, direct observations, and participant observations. Observational research findings are considered strong in validity because the researcher is able to collect a depth of information about a particular behavior. In this dissertation, the researchers used observation method as one tool for collecting information and data before questionnaire design and after the start of research too. The researcher made more than 20 specific observations of manufacturing industries in the study areas. During the observations, it found a deeper understanding of the working environment and the different sections in the production system and OSH practices.

5.1.2 Data collection through interview

Interview is a loosely structured qualitative in-depth interview with people who are considered to be particularly knowledgeable about the topic of interest. The semi-structured interview is usually conducted in a face-to-face setting which permits the researcher to seek new insights, ask questions, and assess phenomena in different perspectives. It let the researcher to know the in-depth of the present working environment influential factors and consequences. It has provided opportunities for refining data collection efforts and examining specialized systems or processes. It was used when the researcher faces written records or published document limitation or wanted to triangulate the data obtained from other primary and secondary data sources.

This dissertation is also conducted with a qualitative approach and conducting interviews. The advantage of using interviews as a method is that it allows respondents to raise issues that the interviewer may not have expected. All interviews with employees, management, and technicians were conducted by the corresponding researcher, on a face-to-face basis at workplace. All interviews were recorded and transcribed.

5.1.3 Data collection through questionnaires

The main tool for gaining primary information in practical research is questionnaires, due to the fact that the researcher can decide on the sample and the types of questions to be asked [ 2 ].

In this dissertation, each respondent is requested to reply to an identical list of questions mixed so that biasness was prevented. Initially the questionnaire design was coded and mixed up from specific topic based on uniform structures. Consequently, the questionnaire produced valuable data which was required to achieve the dissertation objectives.

The questionnaires developed were based on a five-item Likert scale. Responses were given to each statement using a five-point Likert-type scale, for which 1 = “strongly disagree” to 5 = “strongly agree.” The responses were summed up to produce a score for the measures.

5.1.4 Data obtained from experts’ opinion

The data was also obtained from the expert’s opinion related to the comparison of the knowledge, management, collaboration, and technology utilization including their sub-factors. The data obtained in this way was used for prioritization and decision-making of OSH, improving factor priority. The prioritization of the factors was using Saaty scales (1–9) and then converting to Fuzzy set values obtained from previous researches using triangular fuzzy set [ 7 ].

5.1.5 Workplace site exposure measurement

The researcher has measured the workplace environment for dust, vibration, heat, pressure, light, and noise to know how much is the level of each variable. The primary data sources planned and an actual coverage has been compared as shown in Table 1 .

research design and framework

Planned versus actual coverage of the survey.

The response rate for the proposed data source was good, and the pilot test also proved the reliability of questionnaires. Interview/discussion resulted in 87% of responses among the respondents; the survey questionnaire response rate obtained was 71%, and the field observation response rate was 90% for the whole data analysis process. Hence, the data organization quality level has not been compromised.

This response rate is considered to be representative of studies of organizations. As the study agrees on the response rate to be 30%, it is considered acceptable [ 8 ]. Saunders et al. [ 2 ] argued that the questionnaire with a scale response of 20% response rate is acceptable. Low response rate should not discourage the researchers, because a great deal of published research work also achieves low response rate. Hence, the response rate of this study is acceptable and very good for the purpose of meeting the study objectives.

5.1.6 Data collection tool pretest

The pretest for questionnaires, interviews, and tools were conducted to validate that the tool content is valid or not in the sense of the respondents’ understanding. Hence, content validity (in which the questions are answered to the target without excluding important points), internal validity (in which the questions raised answer the outcomes of researchers’ target), and external validity (in which the result can generalize to all the population from the survey sample population) were reflected. It has been proved with this pilot test prior to the start of the basic data collections. Following feedback process, a few minor changes were made to the originally designed data collect tools. The pilot test made for the questionnaire test was on 10 sample sizes selected randomly from the target sectors and experts.

5.2 Secondary data collection methods

The secondary data refers to data that was collected by someone other than the user. This data source gives insights of the research area of the current state-of-the-art method. It also makes some sort of research gap that needs to be filled by the researcher. This secondary data sources could be internal and external data sources of information that may cover a wide range of areas.

Literature/desk review and industry documents and reports: To achieve the dissertation’s objectives, the researcher has conducted excessive document review and reports of the companies in both online and offline modes. From a methodological point of view, literature reviews can be comprehended as content analysis, where quantitative and qualitative aspects are mixed to assess structural (descriptive) as well as content criteria.

A literature search was conducted using the database sources like MEDLINE; Emerald; Taylor and Francis publications; EMBASE (medical literature); PsycINFO (psychological literature); Sociological Abstracts (sociological literature); accident prevention journals; US Statistics of Labor, European Safety and Health database; ABI Inform; Business Source Premier (business/management literature); EconLit (economic literature); Social Service Abstracts (social work and social service literature); and other related materials. The search strategy was focused on articles or reports that measure one or more of the dimensions within the research OSH model framework. This search strategy was based on a framework and measurement filter strategy developed by the Consensus-Based Standards for the Selection of Health Measurement Instruments (COSMIN) group. Based on screening, unrelated articles to the research model and objectives were excluded. Prior to screening, researcher (principal investigator) reviewed a sample of more than 2000 articles, websites, reports, and guidelines to determine whether they should be included for further review or reject. Discrepancies were thoroughly identified and resolved before the review of the main group of more than 300 articles commenced. After excluding the articles based on the title, keywords, and abstract, the remaining articles were reviewed in detail, and the information was extracted on the instrument that was used to assess the dimension of research interest. A complete list of items was then collated within each research targets or objectives and reviewed to identify any missing elements.

6. Methods of data analysis

Data analysis method follows the procedures listed under the following sections. The data analysis part answered the basic questions raised in the problem statement. The detailed analysis of the developed and developing countries’ experiences on OSH regarding manufacturing industries was analyzed, discussed, compared and contrasted, and synthesized.

6.1 Quantitative data analysis

Quantitative data were obtained from primary and secondary data discussed above in this chapter. This data analysis was based on their data type using Excel, SPSS 20.0, Office Word format, and other tools. This data analysis focuses on numerical/quantitative data analysis.

Before analysis, data coding of responses and analysis were made. In order to analyze the data obtained easily, the data were coded to SPSS 20.0 software as the data obtained from questionnaires. This task involved identifying, classifying, and assigning a numeric or character symbol to data, which was done in only one way pre-coded [ 9 , 10 ]. In this study, all of the responses were pre-coded. They were taken from the list of responses, a number of corresponding to a particular selection was given. This process was applied to every earlier question that needed this treatment. Upon completion, the data were then entered to a statistical analysis software package, SPSS version 20.0 on Windows 10 for the next steps.

Under the data analysis, exploration of data has been made with descriptive statistics and graphical analysis. The analysis included exploring the relationship between variables and comparing groups how they affect each other. This has been done using cross tabulation/chi square, correlation, and factor analysis and using nonparametric statistic.

6.2 Qualitative data analysis

Qualitative data analysis used for triangulation of the quantitative data analysis. The interview, observation, and report records were used to support the findings. The analysis has been incorporated with the quantitative discussion results in the data analysis parts.

6.3 Data analysis software

The data were entered using SPSS 20.0 on Windows 10 and analyzed. The analysis supported with SPSS software much contributed to the finding. It had contributed to the data validation and correctness of the SPSS results. The software analyzed and compared the results of different variables used in the research questionnaires. Excel is also used to draw the pictures and calculate some analytical solutions.

7. The reliability and validity analysis of the quantitative data

7.1 reliability of data.

The reliability of measurements specifies the amount to which it is without bias (error free) and hence ensures consistent measurement across time and across the various items in the instrument [ 8 ]. In reliability analysis, it has been checked for the stability and consistency of the data. In the case of reliability analysis, the researcher checked the accuracy and precision of the procedure of measurement. Reliability has numerous definitions and approaches, but in several environments, the concept comes to be consistent [ 8 ]. The measurement fulfills the requirements of reliability when it produces consistent results during data analysis procedure. The reliability is determined through Cranach’s alpha as shown in Table 2 .

research design and framework

Internal consistency and reliability test of questionnaires items.

K stands for knowledge; M, management; T, technology; C, collaboration; P, policy, standards, and regulation; H, hazards and accident conditions; PPE, personal protective equipment.

7.2 Reliability analysis

Cronbach’s alpha is a measure of internal consistency, i.e., how closely related a set of items are as a group [ 11 ]. It is considered to be a measure of scale reliability. The reliability of internal consistency most of the time is measured based on the Cronbach’s alpha value. Reliability coefficient of 0.70 and above is considered “acceptable” in most research situations [ 12 ]. In this study, reliability analysis for internal consistency of Likert-scale measurement after deleting 13 items was found similar; the reliability coefficients were found for 76 items were 0.964 and for the individual groupings made shown in Table 2 . It was also found internally consistent using the Cronbach’s alpha test. Table 2 shows the internal consistency of the seven major instruments in which their reliability falls in the acceptable range for this research.

7.3 Validity

Face validity used as defined by Babbie [ 13 ] is an indicator that makes it seem a reasonable measure of some variables, and it is the subjective judgment that the instrument measures what it intends to measure in terms of relevance [ 14 ]. Thus, the researcher ensured, in this study, when developing the instruments that uncertainties were eliminated by using appropriate words and concepts in order to enhance clarity and general suitability [ 14 ]. Furthermore, the researcher submitted the instruments to the research supervisor and the joint supervisor who are both occupational health experts, to ensure validity of the measuring instruments and determine whether the instruments could be considered valid on face value.

In this study, the researcher was guided by reviewed literature related to compliance with the occupational health and safety conditions and data collection methods before he could develop the measuring instruments. In addition, the pretest study that was conducted prior to the main study assisted the researcher to avoid uncertainties of the contents in the data collection measuring instruments. A thorough inspection of the measuring instruments by the statistician and the researcher’s supervisor and joint experts, to ensure that all concepts pertaining to the study were included, ensured that the instruments were enriched.

8. Data quality management

Insight has been given to the data collectors on how to approach companies, and many of the questionnaires were distributed through MSc students at Addis Ababa Institute of Technology (AAiT) and manufacturing industries’ experience experts. This made the data quality reliable as it has been continually discussed with them. Pretesting for questionnaire was done on 10 workers to assure the quality of the data and for improvement of data collection tools. Supervision during data collection was done to understand how the data collectors are handling the questionnaire, and each filled questionnaires was checked for its completeness, accuracy, clarity, and consistency on a daily basis either face-to-face or by phone/email. The data expected in poor quality were rejected out of the acting during the screening time. Among planned 267 questionnaires, 189 were responded back. Finally, it was analyzed by the principal investigator.

9. Inclusion criteria

The data were collected from the company representative with the knowledge of OSH. Articles written in English and Amharic were included in this study. Database information obtained in relation to articles and those who have OSH area such as interventions method, method of accident identification, impact of occupational accidents, types of occupational injuries/disease, and impact of occupational accidents, and disease on productivity and costs of company and have used at least one form of feedback mechanism. No specific time period was chosen in order to access all available published papers. The questionnaire statements which are similar in the questionnaire have been rejected from the data analysis.

10. Ethical consideration

Ethical clearance was obtained from the School of Mechanical and Industrial Engineering, Institute of Technology, Addis Ababa University. Official letters were written from the School of Mechanical and Industrial Engineering to the respective manufacturing industries. The purpose of the study was explained to the study subjects. The study subjects were told that the information they provided was kept confidential and that their identities would not be revealed in association with the information they provided. Informed consent was secured from each participant. For bad working environment assessment findings, feedback will be given to all manufacturing industries involved in the study. There is a plan to give a copy of the result to the respective study manufacturing industries’ and ministries’ offices. The respondents’ privacy and their responses were not individually analyzed and included in the report.

11. Dissemination and utilization of the result

The result of this study will be presented to the Addis Ababa University, AAiT, School of Mechanical and Industrial Engineering. It will also be communicated to the Ethiopian manufacturing industries, Ministry of Labor and Social Affair, Ministry of Industry, and Ministry of Health from where the data was collected. The result will also be availed by publication and online presentation in Google Scholars. To this end, about five articles were published and disseminated to the whole world.

12. Conclusion

The research methodology and design indicated overall process of the flow of the research for the given study. The data sources and data collection methods were used. The overall research strategies and framework are indicated in this research process from problem formulation to problem validation including all the parameters. It has laid some foundation and how research methodology is devised and framed for researchers. This means, it helps researchers to consider it as one of the samples and models for the research data collection and process from the beginning of the problem statement to the research finding. Especially, this research flow helps new researchers to the research environment and methodology in particular.

Conflict of interest

There is no “conflict of interest.”

  • 1. Aaker A, Kumar VD, George S. Marketing Research. New York: John Wiley & Sons Inc; 2000
  • 2. Saunders M, Lewis P, Thornhill A. Research Methods for Business Student. 5th ed. Edinburgh Gate: Pearson Education Limited; 2009
  • 3. Miller P. Motivation in the Workplace. Work and Organizational Psychology. Oxford: Blackwell Publishers; 1991
  • 4. Fraenkel FJ, Warren NE. How to Design and Evaluate Research in Education. 4th ed. New York: McGraw-Hill; 2002
  • 5. Danniel WW. Biostatist: A Foundation for Analysis in the Health Science. 7th ed. New York: John Wiley & Sons; 1999
  • 6. Cochran WG. Sampling Techniques. 3rd ed. New York: John Wiley & Sons; 1977
  • 7. Saaty TL. The Analytical Hierarchy Process. Pittsburg: PWS Publications; 1990
  • 8. Sekaran U, Bougie R. Research Methods for Business: A Skill Building Approach. 5th ed. New Delhi: John Wiley & Sons, Ltd; 2010. pp. 1-468
  • 9. Luck DJ, Rubin RS. Marketing Research. 7th ed. New Jersey: Prentice-Hall International; 1987
  • 10. Wong TC. Marketing Research. Oxford, UK: Butterworth-Heinemann; 1999
  • 11. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951; 16 :297-334
  • 12. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. International Journal of Medical Education. 2011; 2 :53-55. DOI: 10.5116/ijme.4dfb.8dfd
  • 13. Babbie E. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth; 2010
  • 14. Polit DF, Beck CT. Generating and Assessing Evidence for Nursing Practice. 8th ed. Williams and Wilkins: Lippincott; 2008

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Continue reading from the same book

Edited by Evon Abu-Taieh

Published: 17 June 2020

By Sabína Gáliková Tolnaiová and Slavomír Gálik

1052 downloads

By Carlos Pedro Gonçalves

1617 downloads

By Konstantinos-George Thanos, Andrianna Polydouri, A...

1085 downloads

IntechOpen Author/Editor? To get your discount, log in .

Discounts available on purchase of multiple copies. View rates

Local taxes (VAT) are calculated in later steps, if applicable.

Support: [email protected]

Research design: the methodology for interdisciplinary research framework

  • Open access
  • Published: 27 April 2017
  • Volume 52 , pages 1209–1225, ( 2018 )

Cite this article

You have full access to this open access article

research design and framework

  • Hilde Tobi   ORCID: orcid.org/0000-0002-8804-0298 1 &
  • Jarl K. Kampen 1 , 2  

93k Accesses

93 Citations

7 Altmetric

Explore all metrics

Many of today’s global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of methods’ combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework’s utility in research design in landscape architecture, mixed methods research, and provide an outlook to the framework’s potential in inclusive interdisciplinary research, and last but not least, research integrity.

Similar content being viewed by others

research design and framework

Designing an Interdisciplinary Research Culture in Higher Education: A Case Study

research design and framework

A practical guideline how to tackle interdisciplinarity—A synthesis from a post-graduate group project

research design and framework

Participatory Approaches to Curriculum Design From a Design Research Perspective

Avoid common mistakes on your manuscript.

1 Introduction

Current challenges, e.g., energy, water, food security, one world health and urbanization, involve the interaction between humans and their environment. A (mono)disciplinary approach, be it a psychological, economical or technical one, is too limited to capture any one of these challenges. The study of the interaction between humans and their environment requires knowledge, ideas and research methodology from different disciplines (e.g., ecology or chemistry in the natural sciences, psychology or economy in the social sciences). So collaboration between natural and social sciences is called for (Walsh et al. 1975 ).

Over the past decades, different forms of collaboration have been distinguished although the terminology used is diverse and ambiguous. For the present paper, the term interdisciplinary research is used for (Aboelela et al. 2007 , p. 341):

any study or group of studies undertaken by scholars from two or more distinct scientific disciplines. The research is based upon a conceptual model that links or integrates theoretical frameworks from those disciplines, uses study design and methodology that is not limited to any one field, and requires the use of perspectives and skills of the involved disciplines throughout multiple phases of the research process.

Scientific disciplines (e.g., ecology, chemistry, biology, psychology, sociology, economy, philosophy, linguistics, etc.) are categorized into distinct scientific cultures: the natural sciences, the social sciences and the humanities (Kagan 2009 ). Interdisciplinary research may involve different disciplines within a single scientific culture, and it can also cross cultural boundaries as in the study of humans and their environment.

A systematic review of the literature on natural-social science collaboration (Fischer et al. 2011 ) confirmed the general impression of this collaboration to be a challenge. The nearly 100 papers in their analytic set mentioned more instances of barriers than of opportunities (72 and 46, respectively). Four critical factors for success or failure in natural-social science collaboration were identified: the paradigms or epistemologies in the current (mono-disciplinary) sciences, the skills and competences of the scientists involved, the institutional context of the research, and the organization of collaborations (Fischer et al. 2011 ). The so-called “paradigm war” between neopositivist versus constructivists within the social and behavioral sciences (Onwuegbuzie and Leech 2005 ) may complicate pragmatic collaboration further.

It has been argued that interdisciplinary education programs are required to train truly interdisciplinary scientists with respect to the critical factor skills and competences (Frischknecht 2000 ) and accordingly, some interdisciplinary programs have been developed since (Baker and Little 2006 ; Spelt et al. 2009 ). The overall effect of interdisciplinary programs can be expected to be small as most programs are mono-disciplinary and based on a single paradigm (positivist-constructivist, qualitative-quantitative; see e.g., Onwuegbuzie and Leech 2005 ). We saw in our methodology teaching, consultancy and research practices working with heterogeneous groups of students and staff, that most had received mono-disciplinary training with a minority that had received multidisciplinary training, with few exceptions within the same paradigm. During our teaching and consultancy for heterogeneous groups of students and staff aimed at designing interdisciplinary research, we built the framework for methodology in interdisciplinary research (MIR). With the MIR framework, we aspire to contribute to the critical factors skills and competences (Fischer et al. 2011 ) for social and natural sciences collaboration. Note that the scale of interdisciplinary research projects we have in mind may vary from comparably modest ones (e.g., finding a link between noise reducing asphalt and quality of life; Vuye et al. 2016 ) to very large projects (finding a link between anthropogenic greenhouse gas emissions, climate change, and food security; IPCC 2015 ).

In the following section of this paper we describe the MIR framework and elaborate on its components. The third section gives two examples of the application of the MIR framework. The paper concludes with a discussion of the MIR framework in the broader contexts of mixed methods research, inclusive research, and other promising strains of research.

2 The methodology in interdisciplinary research framework

2.1 research as a process in the methodology in interdisciplinary research framework.

The Methodology for Interdisciplinary Research (MIR) framework was built on the process approach (Kumar 1999 ), because in the process approach, the research question or hypothesis is leading for all decisions in the various stages of research. That means that it helps the MIR framework to put the common goal of the researchers at the center, instead of the diversity of their respective backgrounds. The MIR framework also introduces an agenda: the research team needs to carefully think through different parts of the design of their study before starting its execution (Fig.  1 ). First, the team discusses the conceptual design of their study which contains the ‘why’ and ‘what’ of the research. Second, the team discusses the technical design of the study which contains the ‘how’ of the research. Only after the team agrees that the complete research design is sufficiently crystalized, the execution of the work (including fieldwork) starts.

The Methodology of Interdisciplinary Research framework

Whereas the conceptual and technical designs are by definition interdisciplinary team work, the respective team members may do their (mono)disciplinary parts of fieldwork and data analysis on a modular basis (see Bruns et al. 2017 : p. 21). Finally, when all evidence is collected, an interdisciplinary synthesis of analyses follows which conclusions are input for the final report. This implies that the MIR framework allows for a range of scales of research projects, e.g., a mixed methods project and its smaller qualitative and quantitative modules, or a multi-national sustainability project and its national sociological, economic and ecological modules.

2.2 The conceptual design

Interdisciplinary research design starts with the “conceptual design” which addresses the ‘why’ and ‘what’ of a research project at a conceptual level to ascertain the common goals pivotal to interdisciplinary collaboration (Fischer et al. 2011 ). The conceptual design includes mostly activities such as thinking, exchanging interdisciplinary knowledge, reading and discussing. The product of the conceptual design is called the “conceptual frame work” which comprises of the research objective (what is to be achieved by the research), the theory or theories that are central in the research project, the research questions (what knowledge is to be produced), and the (partial) operationalization of constructs and concepts that will be measured or recorded during execution. While the members of the interdisciplinary team and the commissioner of the research must reach a consensus about the research objective, the ‘why’, the focus in research design must be the production of the knowledge required to achieve that objective the ‘what’.

With respect to the ‘why’ of a research project, an interdisciplinary team typically starts with a general aim as requested by the commissioner or funding agency, and a set of theories to formulate a research objective. This role of theory is not always obvious to students from the natural sciences, who tend to think in terms of ‘models’ with directly observable variables. On the other hand, students from the social sciences tend to think in theories with little attention to observable variables. In the MIR framework, models as simplified descriptions or explanations of what is studied in the natural sciences play the same role in informing research design, raising research questions, and informing how a concept is understood, as do theories in social science.

Research questions concern concepts, i.e. general notions or ideas based on theory or common sense that are multifaceted and not directly visible or measurable. For example, neither food security (with its many different facets) nor a person’s attitude towards food storage may be directly observed. The operationalization of concepts, the transformation of concepts into observable indicators, in interdisciplinary research requires multiple steps, each informed by theory. For instance, in line with particular theoretical frameworks, sustainability and food security may be seen as the composite of a social, an economic and an ecological dimension (e.g., Godfray et al. 2010 ).

As the concept of interest is multi-disciplinary and multi-dimensional, the interdisciplinary team will need to read, discuss and decide on how these dimensions and their indicators are weighted to measure the composite interdisciplinary concept to get the required interdisciplinary measurements. The resulting measure or measures for the interdisciplinary concept may be of the nominal, ordinal, interval and ratio level, or a combination thereof. This operationalization procedure is known as the port-folio approach to widely defined measurements (Tobi 2014 ). Only after the research team has finalized the operationalization of the concepts under study, the research questions and hypotheses can be made operational. For example, a module with descriptive research questions may now be turned into an operational one like, what are the means and variances of X1, X2, and X3 in a given population? A causal research question may take on the form, is X (a composite of X1, X2 and X3) a plausible cause for the presence or absence of Y? A typical qualitative module could study, how do people talk about X1, X2 and X3 in their everyday lives?

2.3 The technical design

Members of an interdisciplinary team usually have had different training with respect to research methods, which makes discussing and deciding on the technical design more challenging but also potentially more creative than in a mono-disciplinary team. The technical design addresses the issues ‘how, where and when will research units be studied’ (study design), ‘how will measurement proceed’ (instrument selection or design), ‘how and how many research units will be recruited’ (sampling plan), and ‘how will collected data be analyzed and synthesized’ (analysis plan). The MIR framework provides the team a set of topics and their relationships to one another and to generally accepted quality criteria (see Fig.  1 ), which helps in designing this part of the project.

Interdisciplinary teams need be pragmatic as the research questions agreed on are leading in decisions on the data collection set-up (e.g., a cross-sectional study of inhabitants of a region, a laboratory experiment, a cohort study, a case control study, etc.), the so-called “study design” (e.g., Kumar 2014 ; De Vaus 2001 ; Adler and Clark 2011 ; Tobi and van den Brink 2017 ) instead of traditional ‘pet’ approaches. Typical study designs for descriptive research questions and research questions on associations are the cross-sectional study design. Longitudinal study designs are required to investigate development over time and cause-effect relationships ideally are studied in experiments (e.g., Kumar 2014 ; Shipley 2016 ). Phenomenological questions concern a phenomenon about which little is known and which has to be studied in the environment where it takes place, which calls for a case study design (e.g., Adler and Clark 2011 : p. 178). For each module, the study design is to be further explicated by the number of data collection waves, the level of control by the researcher and its reference period (e.g., Kumar 2014 ) to ensure the teams common understanding.

Then, decisions about the way data is to be collected, e.g., by means of certified instruments, observation, interviews, questionnaires, queries on existing data bases, or a combination of these are to be made. It is especially important to discuss the role of the observer (researcher) as this is often a source of misunderstanding in interdisciplinary teams. In the sciences, the observer is usually considered a neutral outsider when reading a standardized measurement instrument (e.g., a pyranometer to measure incoming solar radiation). In contrast, in the social sciences, the observer may be (part of) the measurement instrument, for example in participant observation or when doing in-depth interviews. After all, in participant observation the researcher observes from a member’s perspective and influences what is observed owing to the researcher’s participation (Flick 2006 : p. 220). Similarly in interviews, by which we mean “a conversation that has a structure and a purpose determined by the one party—the interviewer” (Kvale 2007 : p. 7), the interviewer and the interviewee are part of the measurement instrument (Kvale and Brinkmann 2009 : p. 2). In on-line and mail questionnaires the interviewer is eliminated as part of the instrument by standardizing the questions and answer options. Queries on existing data bases refer to the use of secondary data or secondary analysis. Different disciplines tend to use different bibliographic data bases (e.g., CAB Abstracts, ABI/INFORM or ERIC) and different data repositories (e.g., the European Social Survey at europeansocialsurvey.org or the International Council for Science data repository hosted by www.pangaea.de ).

Depending on whether or not the available, existing, measurement instruments tally with the interdisciplinary operationalisations from the conceptual design, the research team may or may not need to design instruments. Note that in some cases the social scientists’ instinct may be to rely on a questionnaire whereas the collaboration with another discipline may result in more objective possibilities (e.g., compare asking people about what they do with surplus medication, versus measuring chemical components from their input into the sewer system). Instrument design may take on different forms, such as the design of a device (e.g., pyranometer), a questionnaire (Dillman 2007 ) or a part thereof (e.g., a scale see DeVellis 2012 ; Danner et al. 2016 ), an interview guide with topics or questions for the interviewees, or a data extraction form in the context of secondary analysis and literature review (e.g., the Cochrane Collaboration aiming at health and medical sciences or the Campbell Collaboration aiming at evidence based policies).

Researchers from different disciplines are inclined to think of different research objects (e.g., animals, humans or plots), which is where the (specific) research questions come in as these identify the (possibly different) research objects unambiguously. In general, research questions that aim at making an inventory, whether it is an inventory of biodiversity or of lodging, call for a random sampling design. Both in the biodiversity and lodging example, one may opt for random sampling of geographic areas by means of a list of coordinates. Studies that aim to explain a particular phenomenon in a particular context would call for a purposive sampling design (non-random selection). Because studies of biodiversity and housing obey the same laws in terms of appropriate sampling design for similar research questions, individual students and researchers are sensitized to commonalities of their respective (mono)disciplines. For example, a research team interested in the effects of landslides on a socio-ecological system may select for their study one village that suffered from landslides and one village that did not suffer from landslides that have other characteristics in common (e.g., kind of soil, land use, land property legislation, family structure, income distribution, et cetera).

The data analysis plan describes how data will be analysed, for each of the separate modules and for the project at large. In the context of a multi-disciplinary quantitative research project, the data analysis plan will list the intended uni-, bi- and multivariate analyses such as measures for distributions (e.g., means and variances), measures for association (e.g., Pearson Chi square or Kendall Tau) and data reduction and modelling techniques (e.g., factor analysis and multiple linear regression or structural equation modelling) for each of the research modules using the data collected. When applicable, it will describe interim analyses and follow-up rules. In addition to the plans at modular level, the data analysis plan must describe how the input from the separate modules, i.e. different analyses, will be synthesized to answer the overall research question. In case of mixed methods research, the particular type of mixed methods design chosen describes how, when, and to what extent the team will synthesize the results from the different modules.

Unfortunately, in our experience, when some of the research modules rely on a qualitative approach, teams tend to refrain from designing a data analysis plan before starting the field work. While absence of a data analysis plan may be regarded acceptable in fields that rely exclusively on qualitative research (e.g., ethnography), failure to communicate how data will be analysed and what potential evidence will be produced posits a deathblow to interdisciplinarity. For many researchers not familiar with qualitative research, the black box presented as “qualitative data analysis” is a big hurdle, and a transparent and systematic plan is a sine qua non for any scientific collaboration. The absence of a data analysis plan for all modules results in an absence of synthesis of perspectives and skills of the disciplines involved, and in separate (disciplinary) research papers or separate chapters in the research report without an answer to the overall research question. So, although researchers may find it hard to write the data analysis plan for qualitative data, it is pivotal in interdisciplinary research teams.

Similar to the quantitative data analysis plan, the qualitative data analysis plan presents the description of how the researcher will get acquainted with the data collected (e.g., by constructing a narrative summary per interviewee or a paired-comparison of essays). Additionally, the rules to decide on data saturation need be presented. Finally, the types of qualitative analyses are to be described in the data analysis plan. Because there is little or no standardized terminology in qualitative data analysis, it is important to include a precise description as well as references to the works that describe the method intended (e.g., domain analysis as described by Spradley 1979 ; or grounded theory by means of constant-comparison as described by Boeije 2009 ).

2.4 Integration

To benefit optimally from the research being interdisciplinary the modules need to be brought together in the integration stage. The modules may be mono- or interdisciplinary and may rely on quantitative, qualitative or mixed methods approaches. So the MIR framework fits the view that distinguishes three multimethods approaches (quali–quali, quanti–quanti, and quali–quant).

Although the MIR framework has not been designed with the intention to promote mixed methods research, it is suitable for the design of mixed methods research as the kind of research that calls for both quantitative and qualitative components (Creswell and Piano Clark 2011 ). Indeed, just like the pioneers in mixed methods research (Creswell and Piano Clark 2011 : p. 2), the MIR framework deconstructs the package deals of paradigm and data to be collected. The synthesis of the different mono or interdisciplinary modules may benefit from research done on “the unique challenges and possibilities of integration of qualitative and quantitative approaches” (Fetters and Molina-Azorin 2017 : p. 5). We distinguish (sub) sets of modules being designed as convergent, sequential or embedded (adapted from mixed methods design e.g., Creswell and Piano Clark 2011 : pp. 69–70). Convergent modules, whether mono or interdisciplinary, may be done parallel and are integrated after completion. Sequential modules are done after one another and the first modules inform the latter ones (this includes transformative and multiphase mixed methods design). Embedded modules are intertwined. Here, modules depend on one another for data collection and analysis, and synthesis may be planned both during and after completion of the embedded modules.

2.5 Scientific quality and ethical considerations in the design of interdisciplinary research

A minimum set of jargon related to the assessment of scientific quality of research (e.g., triangulation, validity, reliability, saturation, etc.) can be found scattered in Fig.  1 . Some terms are reserved by particular paradigms, others may be seen in several paradigms with more or less subtle differences in meaning. In the latter case, it is important that team members are prepared to explain and share ownership of the term and respect the different meanings. By paying explicit attention to the quality concepts, researchers from different disciplines learn to appreciate each other’s concerns for good quality research and recognize commonalities. For example, the team may discuss measurement validity of both a standardized quantitative instrument and that of an interview and discover that the calibration of the machine serves a similar purpose as the confirmation of the guarantee of anonymity at the start of an interview.

Throughout the process of research design, ethics require explicit discussion among all stakeholders in the project. Ethical issues run through all components in the MIR framework in Fig.  1 . Where social and medical scientists may be more sensitive to ethical issues related to humans (e.g., the 1979 Belmont Report criteria of beneficence, justice, and respect), others may be more sensitive to issues related to animal welfare, ecology, legislation, the funding agency (e.g., implications for policy), data and information sharing (e.g., open access publishing), sloppy research practices, or long term consequences of the research. This is why ethics are an issue for the entire interdisciplinary team and cannot be discussed on project module level only.

3 The MIR framework in practice: two examples

3.1 teaching research methodology to heterogeneous groups of students, 3.1.1 institutional context and background of the mir framework.

Wageningen University and Research (WUR) advocates in its teaching and research an interdisciplinary approach to the study of global issues related to the motto “To explore the potential of nature to improve the quality of life.” Wageningen University’s student population is multidisciplinary and international (e.g., Tobi and Kampen 2013 ). Traditionally, this challenge of diversity in one classroom is met by covering a width of methodological topics and examples from different disciplines. However, when students of various programmes received methodological education in mixed classes, students of some disciplines would regard with disinterest or even disdain methods and techniques of the other disciplines. Different disciplines, especially from the qualitative respectively quantitative tradition in the social sciences (Onwuegbuzie and Leech 2005 : p. 273), claim certain study designs, methods of data collection and analysis as their territory, a claim reflected in many textbooks. We found that students from a qualitative tradition would not be interested, and would not even study, content like the design of experiments and quantitative data collection; and students from a quantitative tradition would ignore case study design and qualitative data collection. These students assumed they didn’t need any knowledge about ‘the other tradition’ for their future careers, despite the call for interdisciplinarity.

To enhance interdisciplinarity, WUR provides an MSc course mandatory for most students, in which multi-disciplinary teams do research for a commissioner. Students reported difficulties similar to the ones found in the literature: miscommunication due to talking different scientific languages and feelings of distrust and disrespect due to prejudice. This suggested that research methodology courses ought help prepare for interdisciplinary collaboration by introducing a single methodological framework that 1) creates sensitivity to the pros and challenges of interdisciplinary research by means of a common vocabulary and fosters respect for other disciplines, 2) starts from the research questions as pivotal in decision making on research methods instead of tradition or ontology, and 3) allows available methodologies and methods to be potentially applicable to any scientific research problem.

3.1.2 Teaching with MIR—the conceptual framework

As a first step, we replaced textbooks by ones refusing the idea that any scientific tradition has exclusive ownership of any methodological approach or method. The MIR framework further guides our methodology teaching in two ways. First, it presents a logical sequence of topics (first conceptual design, then technical design; first research question(s) or hypotheses, then study design; etc.). Second, it allows for a conceptual separation of topics (e.g., study design from instrument design). Educational programmes at Wageningen University and Research consistently stress the vital importance of good research design. In fact, 50% of the mark in most BSc and MSc courses in research methodology is based on the assessment of a research proposal that students design in small (2-4 students) and heterogeneous (discipline, gender and nationality) groups. The research proposal must describe a project which can be executed in practice, and which limitations (measurement, internal, and external validity) are carefully discussed.

Groups start by selecting a general research topic. They discuss together previously attained courses from a range of programs to identify personal and group interests, with the aim to reach an initial research objective and a general research question as input for the conceptual design. Often, their initial research objective and research question are too broad to be researchable (e.g., Kumar 2014 : p. 64; Adler and Clark 2011 : p. 71). In plenary sessions, the (basics of) critical assessment of empirical research papers is taught with special attention to the ‘what’ and ‘why’ section of research papers. During tutorials students generate research questions until the group agrees on a research objective, with one general research question that consists of a small set of specific research questions. Each of the specific research questions may stem from a different discipline, whereas answering the general research question requires integrating the answers to all specific research questions.

The group then identifies the key concepts in their research questions, while exchanging thoughts on possible attributes based on what they have learnt from previous courses (theories) and literature. When doing so they may judge the research question as too broad, in which case they will turn to the question strategies toolbox again. Once they agree on the formulation of the research questions and the choice of concepts, tasks are divided. In general, each student turns to the literature he/she is most familiar with or interested in, for the operationalization of the concept into measurable attributes and writes a paragraph or two about it. In the next meeting, the groups read and discuss the input and decide on the set-up and division of tasks with respect to the technical design.

3.1.3 Teaching with MIR—the technical framework

The technical part of research design distinguishes between study design, instrument design, sampling design, and the data analysis plan. In class, we first present students with a range of study designs (cross sectional, experimental, etc.). Student groups select an appropriate study design by comparing the demands made by the research questions with criteria for internal validity. When a (specific) research question calls for a study design that is not seen as practically feasible or ethically possible, they will rephrase the research question until the demands of the research question tally with the characteristics of at least one ethical, feasible and internally valid study design.

While following plenary sessions during which different random and non-random sampling or selection strategies are taught, groups start working on their sampling design. The groups make two decisions informed by their research question: the population(s) of research units, and the requirements of the sampling strategy for each population. Like many other aspects in research design, this can be an iterative process. For example, suppose the research question mentioned “local policy makers,” which is too vague for a sampling design. Then the decision may be to limit the study to “policy makers at the municipality level in the Netherlands” and adapt the general and the specific research questions accordingly. Next, the group identifies whether a sample design needs to focus on diversity (e.g., when the objective is to make an inventory of possible local policies), representativeness (e.g., when the objective is to estimate prevalence of types of local policies), or people with particular information (e.g., when the objective is to study people having experience with a given local policy). When a sample has to representative, the students must produce an assessment of external validity, whereas when the aim is to map diversity the students must discuss possible ways of source triangulation. Finally, in conjunction with the data analysis plan, students decide on the sample size and/or the saturation criteria.

When the group has agreed on their population(s) and the strategy for recruiting research units, the next step is to finalize the technical aspects of operationalisation i.e. addressing the issue of exactly how information will be extracted from the research units. Depending on what is practically feasible qua measurement, the choice of a data collection instrument may be a standardised (e.g., a spectrograph, a questionnaire) or less standardised (e.g., semi-structured interviews, visual inspection) one. The students have to discuss the possibilities of method triangulation, and explain the possible weaknesses of their data collection plan in terms of measurement validity and reliability.

3.1.4 Recent developments

Presently little attention is payed to the data analysis plan, procedures for synthesis and reporting because the programmes differ on their offer in data analysis courses, and because execution of the research is not part of the BSc and MSc methodology courses. Recently, we have designed one course for an interdisciplinary BSc program in which the research question is put central in learning and deciding on statistics and qualitative data analysis. Nonetheless, during the past years the number of methodology courses for graduate students that supported the MIR framework have been expanded, e.g., a course “From Topic to Proposal”; separate training modules on questionnaire construction, interviewing, and observation; and optional courses on quantitative and qualitative data analysis. These courses are open to (and attended by) PhD students regardless of their program. In Flanders (Belgium), the Flemish Training Network for Statistics and Methodology (FLAMES) has for the last four years successfully applied the approach outlined in Fig.  1 in its courses for research design and data collection methods. The division of the research process in terms of a conceptual design, technical design, operationalisation, analysis plan, and sampling plan, has proved to be appealing for students of disciplines ranging from linguistics to bioengineering.

3.2 Researching with MIR: noise reducing asphalt layers and quality of life

3.2.1 research objective and research question.

This example of the application of the MIR framework comes from a study about the effects of “noise reducing asphalt layers” on the quality of life (Vuye et al. 2016 ), a project commissioned by the City of Antwerp in 2015 and executed by a multidisciplinary research team of Antwerp University (Belgium). The principal researcher was an engineer from the Faculty of Applied Engineering (dept. Construction), supported by two researchers from the Faculty of Medicine and Health Sciences (dept. of Epidemiology and Social Statistics), one with a background in qualitative and one with a background in quantitative research methods. A number of meetings were held where the research team and the commissioners discussed the research objective (the ‘what’ and ‘why’).The research objective was in part dictated by the European Noise Directive 2002/49/EC, which forces all EU member states to draft noise action plans, and the challenge in this study was to produce evidence of a link between the acoustic and mechanical properties of different types of asphalt, and the quality of life of people living in the vicinity of the treated roads. While there was literature available about the effects of road surface on sound, and other studies had studied the link between noise and health, no study was found that produced evidence simultaneously about noise levels of roads and quality of life. The team therefore decided to test the hypothesis that traffic noise reduction has a beneficial effect on the quality of life of people into the central research. The general research question was, “to what extent does the placing of noise reducing asphalt layers increase the quality of life of the residents?”

3.2.2 Study design

In order to test the effect of types of asphalt, initially a pretest–posttest experiment was designed, which was expanded by several added experimental (change of road surface) and control (no change of road surface) groups. The research team gradually became aware that quality of life may not be instantly affected by lower noise levels, and that a time lag is involved. A second posttest aimed to follow up on this effect although it could only be implemented in a selection of experimental sites.

3.2.3 Instrument selection and design

Sound pressure levels were measured by an ISO-standardized procedure called the Statistical Pass-By (SPB) method. A detailed description of the method is in Vuye et al. ( 2016 ). No such objective procedure is available for measuring quality of life, which can only be assessed by self-reports of the residents. Some time was needed for the research team to accept that measuring a multidimensional concept like quality of life is more complicated than just having people rate their “quality of life” on a 10 point scale. For instance, questions had to be phrased in a way that gave not away the purpose of the research (Hawthorne effect), leading to the inclusion of questions about more nuisances than traffic noise alone. This led to the design of a self-administered questionnaire, with questions of Flanders Survey on Living Environment (Departement Leefmilieu, Natuur & Energie 2013 ) appended by new questions. Among other things, the questionnaire probed for experienced nuisance by sound, quality of sleep, effort to concentrate, effort to have a conversation inside or outside the home, physical complaints such as headaches, etc.

3.2.4 Sampling design

The selected sites needed to accommodate both types of measurements: that of noise from traffic and quality of life of residents. This was a complicating factor that required several rounds of deliberation. While countrywide only certain roads were available for changing the road surface, these roads had to be mutually comparable in terms of the composition of the population, type of residential area (e.g., reports from the top floor of a tall apartment building cannot be compared to those at ground level), average volume of traffic, vicinity of hospitals, railroads and airports, etc. At the level of roads therefore, targeted sampling was applied, whereas at the level of residents the aim was to realize a census of all households within a given perimeter from the treated road surfaces. Considerations about the reliability of applied instruments were guiding decisions with respect to sampling. While the measurements of the SPB method were sufficiently reliable to allow for relatively few measurements, the questionnaire suffered from considerable nonresponse which hampered statistical power. It was therefore decided to increase the power of the study by adding control groups in areas where the road surface was not replaced. This way, detecting an effect of the intervention did not solely depend on the turnout of the pre and the post-test.

3.2.5 Data analysis plan

The statistical analysis had to account for the fact that data were collected at two different levels: the level of the residents filling out the questionnaires, and the level of the roads which surface was changed. Because survey participation was confidential, results of the pre- and posttest could only be compared at aggregate (street) level. The analysis had to control for confounding variables (e.g., sample composition, variety in traffic volume, etc.), experimental factors (varieties in experimental conditions, and controls), and non-normal dependent variables. The statistical model appropriate for analysis of such data is a Generalised Linear Mixed Model.

3.2.6 Execution

Data were collected during the course of 2015, 2016 and 2017 and are awaiting final analysis in Spring 2017. Intermediate analyses resulted in several MSc theses, conference presentations, and working papers that reported on parts of the research.

4 Discussion

In this paper we presented the Methodology in Interdisciplinary Research framework that we developed over the past decade building on our experience as lecturers, consultants and researchers. The MIR framework recognizes research methodology and methods as important content in the critical factor skills and competences. It approaches research and collaboration as a process that needs to be designed with the sole purpose to answer the general research question. For the conceptual design the team members have to discuss and agree on the objective of their communal efforts without squeezing it into one single discipline and, thus, ignoring complexity. The specific research questions, when formulated, contribute to (self) respect in collaboration as they represent and stand witness of the need for interdisciplinarity. In the technical design, different parts were distinguished to stimulate researchers to think and design research out of their respective disciplinary boxes and consider, for example, an experimental design with qualitative data collection, or a case study design based on quantitative information.

In our teaching and consultancy, we first developed a MIR framework for social sciences, economics, health and environmental sciences interdisciplinarity. It was challenged to include research in the design discipline of landscape architecture. What characterizes research in landscape architecture and other design principles, is that the design product as well as the design process may be the object of study. Lenzholder et al. ( 2017 ) therefore distinguish three kinds of research in landscape architecture. The first kind, “Research into design” studies the design product post hoc and the MIR framework suits the interdisciplinary study of such a product. In contrast, “Research for design” generates knowledge that feeds into the noun and the verb ‘design’, which means it precedes the design(ing). The third kind, Research through Design(ing) employs designing as a research method. At first, just like Deming and Swaffield ( 2011 ), we were a bit skeptical about “designing” as a research method. Lenzholder et al. ( 2017 ) pose that the meaning of research through design has evolved through a (neo)positivist, constructivist and transformative paradigm to include a pragmatic stance that resembles the pragmatic stance assumed in the MIR framework. We learned that, because landscape architecture is such an interdisciplinary field, the process approach and the distinction between a conceptual and technical research design was considered very helpful and embraced by researchers in landscape architecture (Tobi and van den Brink 2017 ).

Mixed methods research (MMR) has been considered to study topics as diverse as education (e.g., Powell et al. 2008 ), environmental management (e.g., Molina-Azorin and Lopez-Gamero 2016 ), health psychology (e.g., Bishop 2015 ) and information systems (e.g., Venkatesh et al. 2013 ). Nonetheless, the MIR framework is the first to put MMR in the context of integrating disciplines beyond social inquiry (Greene 2008 ). The splitting of the research into modules stimulates the identification and recognition of the contribution of both distinct and collaborating disciplines irrespective of whether they contribute qualitative and/or quantitative research in the interdisciplinary research design. As mentioned in Sect.  2.4 the integration of the different research modules in one interdisciplinary project design may follow one of the mixed methods designs. For example, we witnessed at several occasions the integration of social and health sciences in interdisciplinary teams opting for sequential modules in a sequential exploratory mixed methods fashion (e.g., Adamson 2005 : 234). In sustainability science research, we have seen the design of concurrent modules for a concurrent nested mixed methods strategy (ibid) in research integrating the social and natural sciences and economics.

The limitations of the MIR framework are those of any kind of collaboration: it cannot work wonders in the absence of awareness of the necessity and it requires the willingness to work, learn, and research together. We developed MIR framework in and alongside our own teaching, consultancy and research, it has not been formally evaluated and compared in an experiment with teaching, consultancy and research with, for example, the regulative cycle for problem solving (van Strien 1986 ), or the wheel of science from Babbie ( 2013 ). In fact, although we wrote “developed” in the previous sentence, we are fully aware of the need to further develop and refine the framework as is.

The importance of the MIR framework lies in the complex, multifaceted nature of issues like sustainability, food security and one world health. For progress in the study of these pressing issues the understanding, construction and quality of interdisciplinary portfolio measurements (Tobi 2014 ) are pivotal and require further study as well as procedures facilitating the integration across different disciplines.

Another important strain of further research relates to the continuum of Responsible Conduct of Research (RCR), Questionable Research Practices (QRP), and deliberate misconduct (Steneck 2006 ). QRP includes failing to report all of a study’s conditions, stopping collecting data earlier than planned because one found the result one had been looking for, etc. (e.g., John et al. 2012 ; Simmons et al. 2011 ; Kampen and Tamás 2014 ). A meta-analysis on selfreports obtained through surveys revealed that about 2% of researchers had admitted to research misconduct at least once, whereas up to 33% admitted to QRPs (Fanelli 2009 ). While the frequency of QRPs may easily eclipse that of deliberate fraud (John et al. 2012 ) these practices have received less attention than deliberate misconduct. Claimed research findings may often be accurate measures of the prevailing biases and methodological rigor in a research field (Fanelli and Ioannidis 2013 ; Fanelli 2010 ). If research misconduct and QRP are to be understood then the disciplinary context must be grasped as a locus of both legitimate and illegitimate activity (Fox 1990 ). It would be valuable to investigate how working in interdisciplinary teams and, consequently, exposure to other standards of QRP and RCR influence research integrity as the appropriate research behavior from the perspective of different professional standards (Steneck 2006 : p. 56). These differences in scientific cultures concern criteria for quality in design and execution of research, reporting (e.g., criteria for authorship of a paper, preferred publication outlets, citation practices, etc.), archiving and sharing of data, and so on.

Other strains of research include interdisciplinary collaboration and negotiation, where we expect contributions from the “science of team science” (Falk-Krzesinski et al. 2010 ); and compatibility of the MIR framework with new research paradigms such as “inclusive research” (a mode of research involving people with intellectual disabilities as more than just objects of research; e.g., Walmsley and Johnson 2003 ). Because of the complexity and novelty of inclusive health research a consensus statement was developed on how to conduct health research inclusively (Frankena et al., under review). The eight attributes of inclusive health research identified may also be taken as guiding attributes in the design of inclusive research according to the MIR framework. For starters, there is the possibility of inclusiveness in the conceptual framework, particularly in determining research objectives, and in discussing possible theoretical frameworks with team members with an intellectual disability which Frankena et al. labelled the “Designing the study” attribute. There are also opportunities for inclusiveness in the technical design, and in execution. For example, the inclusiveness attribute “generating data” overlaps with the operationalization and measurement instrument design/selection and the attribute “analyzing data” aligns with the data analysis plan in the technical design.

On a final note, we hope to have aroused the reader’s interest in, and to have demonstrated the need for, a methodology for interdisciplinary research design. We further hope that the MIR framework proposed and explained in this article helps those involved in designing an interdisciplinary research project to get a clearer view of the various processes that must be secured during the project’s design and execution. And we look forward to further collaboration with scientists from all cultures to contribute to improving the MIR framework and make interdisciplinary collaborations successful.

Aboelela, S.W., Larson, E., Bakken, S., Carrasquillo, O., Formicola, A., Glied, S.A., Gebbie, K.M.: Defining interdisciplinary research: conclusions from a critical review of the literature. Health Serv. Res 42 (1), 329–346 (2007)

Article   Google Scholar  

Adamson, J.: Combined qualitative and quantitative designs. In: Bowling, A., Ebrahim, S. (eds.) Handbook of Health Research Methods: Investigation, Measurement and Analysis, pp. 230–245. Open University Press, Maidenhead (2005)

Google Scholar  

Adler, E.S., Clark, R.: An Invitation to Social Research: How it’s Done, 4th edn. Sage, London (2011)

Babbie, E.R.: The Practice of Social Research, 13th edn. Wadsworth Cengage Learning, Belmont Ca (2013)

Baker, G.H., Little, R.G.: Enhancing homeland security: development of a course on critical infrastructure systems. J. Homel. Secur. Emerg. Manag. (2006). doi: 10.2202/1547-7355.1263

Bishop, F.L.: Using mixed methods research designs in health psychology: an illustrated discussion from a pragmatist perspective. Br. J. Health. Psychol. 20 (1), 5–20 (2015)

Boeije, H.R.: Analysis in Qualitative Research. Sage, London (2009)

Bruns, D., van den Brink, A., Tobi, H., Bell, S.: Advancing landscape architecture research. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods And Methodology, pp. 11–23. Routledge, New York (2017)

Creswell, J.W., Piano Clark, V.L.: Designing and Conducting Mixed Methods Research, 2nd edn. Sage, Los Angeles (2011)

Danner, D., Blasius, J., Breyer, B., Eifler, S., Menold, N., Paulhus, D.L., Ziegler, M.: Current challenges, new developments, and future directions in scale construction. Eur. J. Psychol. Assess. 32 (3), 175–180 (2016)

Deming, M.E., Swaffield, S.: Landscape Architecture Research. Wiley, Hoboken (2011)

Departement Leefmilieu, Natuur en Energie: Uitvoeren van een uitgebreide schriftelijke enquête en een beperkte CAWI-enquête ter bepaling van het percentage gehinderden door geur, geluid en licht in Vlaanderen–SLO-3. Leuven: Market Analysis & Synthesis. www.lne.be/sites/default/files/atoms/files/lne-slo-3-eindrapport.pdf (2013). Accessed 8 March 2017

De Vaus, D.: Research Design in Social Research. Sage, London (2001)

DeVellis, R.F.: Scale Development: Theory and Applications, 3rd edn. Sage, Los Angeles (2012)

Dillman, D.A.: Mail and Internet Surveys, 2nd edn. Wiley, Hobroken (2007)

Falk-Krzesinski, H.J., Borner, K., Contractor, N., Fiore, S.M., Hall, K.L., Keyton, J., Uzzi, B., et al.: Advancing the science of team science. CTS Clin. Transl. Sci. 3 (5), 263–266 (2010)

Fanelli, D.: How many scientists fabricate and falsify research? A systematic review and metaanalysis of survey data. PLoS ONE (2009). doi: 10.1371/journal.pone.0005738

Fanelli, D.: Positive results increase down the hierarchy of the sciences. PLoS ONE (2010). doi: 10.1371/journal.pone.0010068

Fanelli, D., Ioannidis, J.P.A.: US studies may overestimate effect sizes in softer research. Proc. Natl. Acad. Sci. USA 110 (37), 15031–15036 (2013)

Fetters, M.D., Molina-Azorin, J.F.: The journal of mixed methods research starts a new decade: principles for bringing in the new and divesting of the old language of the field. J. Mixed Methods Res. 11 (1), 3–10 (2017)

Fischer, A.R.H., Tobi, H., Ronteltap, A.: When natural met social: a review of collaboration between the natural and social sciences. Interdiscip. Sci. Rev. 36 (4), 341–358 (2011)

Flick, U.: An Introduction to Qualitative Research, 3rd edn. Sage, London (2006)

Fox, M.F.: Fraud, ethics, and the disciplinary contexts of science and scholarship. Am. Sociol. 21 (1), 67–71 (1990)

Frischknecht, P.M.: Environmental science education at the Swiss Federal Institute of Technology (ETH). Water Sci. Technol. 41 (2), 31–36 (2000)

Godfray, H.C.J., Beddington, J.R., Crute, I.R., Haddad, L., Lawrence, D., Muir, J.F., Pretty, J., Robinson, S., Thomas, S.M., Toulmin, C.: Food security: the challenge of feeding 9 billion people. Science 327 (5967), 812–818 (2010)

Greene, J.C.: Is mixed methods social inquiry a distinctive methodology? J. Mixed Methods Res. 2 (1), 7–22 (2008)

IPCC.: Climate Change 2014 Synthesis Report. Geneva: Intergovernmental Panel on Climate Change. www.ipcc.ch/pdf/assessment-report/ar5/syr/SYR_AR5_FINAL_full_wcover.pdf (2015) Accessed 8 March 2017

John, L.K., Loewenstein, G., Prelec, D.: Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23 (5), 524–532 (2012)

Kagan, J.: The Three Cultures: Natural Sciences, Social Sciences and the Humanities in the 21st Century. Cambridge University Press, Cambridge (2009)

Book   Google Scholar  

Kampen, J.K., Tamás, P.: Should I take this seriously? A simple checklist for calling bullshit on policy supporting research. Qual. Quant. 48 , 1213–1223 (2014)

Kumar, R.: Research Methodology: A Step-by-Step Guide for Beginners, 1st edn. Sage, Los Angeles (1999)

Kumar, R.: Research Methodology: A Step-by-Step Guide for Beginners, 4th edn. Sage, Los Angeles (2014)

Kvale, S.: Doing Interviews. Sage, London (2007)

Kvale, S., Brinkmann, S.: Interviews: Learning the Craft of Qualitative Interviews, 2nd edn. Sage, London (2009)

Lenzholder, S., Duchhart, I., van den Brink, A.: The relationship between research design. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods and Methodology, pp. 54–64. Routledge, New York (2017)

Molina-Azorin, J.F., Lopez-Gamero, M.D.: Mixed methods studies in environmental management research: prevalence, purposes and designs. Bus. Strateg. Environ. 25 (2), 134–148 (2016)

Onwuegbuzie, A.J., Leech, N.L.: Taking the “Q” out of research: teaching research methodology courses without the divide between quantitative and qualitative paradigms. Qual. Quant. 39 (3), 267–296 (2005)

Powell, H., Mihalas, S., Onwuegbuzie, A.J., Suldo, S., Daley, C.E.: Mixed methods research in school psychology: a mixed methods investigation of trends in the literature. Psychol. Sch. 45 (4), 291–309 (2008)

Shipley, B.: Cause and Correlation in Biology, 2nd edn. Cambridge University Press, Cambridge (2016)

Simmons, J.P., Nelson, L.D., Simonsohn, U.: False positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol. Sci. 22 , 1359–1366 (2011)

Spelt, E.J.H., Biemans, H.J.A., Tobi, H., Luning, P.A., Mulder, M.: Teaching and learning in interdisciplinary higher education: a systematic review. Educ. Psychol. Rev. 21 (4), 365–378 (2009)

Spradley, J.P.: The Ethnographic Interview. Holt, Rinehart and Winston, New York (1979)

Steneck, N.H.: Fostering integrity in research: definitions, current knowledge, and future directions. Sci. Eng. Eth. 12 (1), 53–74 (2006)

Tobi, H.: Measurement in interdisciplinary research: the contributions of widely-defined measurement and portfolio representations. Measurement 48 , 228–231 (2014)

Tobi, H., Kampen, J.K.: Survey error in an international context: an empirical assessment of crosscultural differences regarding scale effects. Qual. Quant. 47 (1), 553–559 (2013)

Tobi, H., van den Brink, A.: A process approach to research in landscape architecture. In: van den Brink, A., Bruns, D., Tobi, H., Bell, S. (eds.) Research in Landscape Architecture: Methods and Methodology, pp. 24–34. Routledge, New York (2017)

van Strien, P.J.: Praktijk als wetenschap: Methodologie van het sociaal-wetenschappelijk handelen [Practice as science. Methodology of social scientific acting.]. Van Gorcum, Assen (1986)

Venkatesh, V., Brown, S.A., Bala, H.: Bridging the qualitative-quantitative divide: guidelines for conducting mixed methods research in information systems. MIS Q 37 (1), 21–54 (2013)

Vuye, C., Bergiers, A., Vanhooreweder, B.: The acoustical durability of thin noise reducing asphalt layers. Coatings (2016). doi: 10.3390/coatings6020021

Walmsley, J., Johnson, K.: Inclusive Research with People with Learning Disabilities: Past, Present and Futures. Jessica Kingsley, London (2003)

Walsh, W.B., Smith, G.L., London, M.: Developing an interface between engineering and social sciences- interdisciplinary team-approach to solving societal problems. Am. Psychol. 30 (11), 1067–1071 (1975)

Download references

Acknowledgements

The MIR framework is the result of many discussions with students, researchers and colleagues, with special thanks to Peter Tamás, Jennifer Barrett, Loes Maas, Giel Dik, Ruud Zaalberg, Jurian Meijering, Vanessa Torres van Grinsven, Matthijs Brink, Gerda Casimir, and, last but not least, Jenneken Naaldenberg.

Author information

Authors and affiliations.

Biometris, Wageningen University and Research, PO Box 16, 6700 AA, Wageningen, The Netherlands

Hilde Tobi & Jarl K. Kampen

Statua, Dept. of Epidemiology and Medical Statistics, Antwerp University, Venusstraat 35, 2000, Antwerp, Belgium

Jarl K. Kampen

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Hilde Tobi .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Reprints and permissions

About this article

Tobi, H., Kampen, J.K. Research design: the methodology for interdisciplinary research framework. Qual Quant 52 , 1209–1225 (2018). https://doi.org/10.1007/s11135-017-0513-8

Download citation

Published : 27 April 2017

Issue Date : May 2018

DOI : https://doi.org/10.1007/s11135-017-0513-8

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Research design
  • Interdisciplinarity
  • Consultancy
  • Multidisciplinary collaboration
  • Find a journal
  • Publish with us
  • Track your research
  • Skip to main content
  • Skip to primary sidebar
  • Skip to footer
  • QuestionPro

survey software icon

  • Solutions Industries Gaming Automotive Sports and events Education Government Travel & Hospitality Financial Services Healthcare Cannabis Technology Use Case AskWhy Communities Audience Contactless surveys Mobile LivePolls Member Experience GDPR Positive People Science 360 Feedback Surveys
  • Resources Blog eBooks Survey Templates Case Studies Training Help center

research design and framework

Home Market Research Research Tools and Apps

Research Design: What it is, Elements & Types

Research Design

Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

Creating a research topic explains the type of research (experimental,  survey research ,  correlational , semi-experimental, review) and its sub-type (experimental design, research problem , descriptive case-study). 

There are three main types of designs for research:

  • Data collection
  • Measurement
  • Data Analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as random , stratified random sampling , or convenience sampling.
  • Choose your data collection methods: Decide on the data collection methods , such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually creates a minimum bias in data and increases trust in the accuracy of collected data. A design that produces the slightest margin of error in experimental research is generally considered the desired outcome. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing research
  • The method applied for analyzing collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main characteristics of a design. There are four key characteristics:

Characteristics of Research Design

  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be free from research bias and neutral. Understand opinions about the final evaluated scores and conclusions from multiple individuals and consider those who agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so they should balance all the above characteristics in a good design. If you want, you can also learn about Selection Bias through our blog.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research determines relationships between collected data and observations based on mathematical calculations. Statistical methods can prove or disprove theories related to a naturally existing phenomenon. Researchers rely on qualitative observation research methods that conclude “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is for cases where statistical conclusions to collect actionable insights are essential. Numbers provide a better perspective for making critical business decisions. Quantitative research methods are necessary for the growth of any organization. Insights drawn from complex numerical data and analysis prove to be highly effective when making decisions about the business’s future.

Qualitative Research vs Quantitative Research

Here is a chart that highlights the major differences between qualitative and quantitative research:

Qualitative ResearchQuantitative Research
Focus on explaining and understanding experiences and perspectives.Focus on quantifying and measuring phenomena.
Use of non-numerical data, such as words, images, and observations.Use of numerical data, such as statistics and surveys.
Usually uses small sample sizes.Usually uses larger sample sizes.
Typically emphasizes in-depth exploration and interpretation.Typically emphasizes precision and objectivity.
Data analysis involves interpretation and narrative analysis.Data analysis involves statistical analysis and hypothesis testing.
Results are presented descriptively.Results are presented numerically and statistically.

In summary or analysis , the step of qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research is more focused on objective data and statistical analysis.

You can further break down the types of research design into five categories:

types of research design

1. Descriptive: In a descriptive composition, a researcher is solely interested in describing the situation or case under their research study. It is a theory-based design method created by gathering, analyzing, and presenting collected data. This allows a researcher to provide insights into the why and how of research. Descriptive design helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research. 

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design where one observes the impact caused by the independent variable on the dependent variable. For example, one monitors the influence of an independent variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variables are manipulated to monitor the change it has on the dependent variable. Social sciences often use it to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react to understand social psychology better.

3. Correlational research: Correlational research  is a non-experimental research technique. It helps researchers establish a relationship between two closely connected variables. There is no assumption while evaluating a relationship between two other variables, and statistical analysis techniques calculate the relationship between them. This type of research requires two different groups.

A correlation coefficient determines the correlation between two variables whose values range between -1 and +1. If the correlation coefficient is towards +1, it indicates a positive relationship between the variables, and -1 means a negative relationship between the two variables. 

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts of the research:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research : Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the research questions’ what, how, and why.

Benefits of Research Design

There are several benefits of having a well-designed research plan. Including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: To ensure the validity and reliability of results, research design help to minimize the risk of bias and helps to control extraneous variables.
  • Improved data collection: Research design helps to ensure that the proper data is collected and data is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed research helps ensure the results are clean and influential within the research team and external stakeholders.
  • Efficient use of resources: reducing the risk of waste and maximizing the impact of the research, research design helps to ensure that resources are used efficiently.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are practical.

QuestionPro offers a comprehensive solution for researchers looking to conduct research. With its user-friendly interface, robust data collection and analysis tools, and the ability to integrate results from multiple sources, QuestionPro provides a versatile platform for designing and executing research projects.

Our robust suite of research tools provides you with all you need to derive research results. Our online survey platform includes custom point-and-click logic and advanced question types. Uncover the insights that matter the most.

LEARN MORE         FREE TRIAL

MORE LIKE THIS

research design and framework

Life@QuestionPro: Thomas Maiwald-Immer’s Experience

Aug 9, 2024

Top 13 Reporting Tools to Transform Your Data Insights & More

Top 13 Reporting Tools to Transform Your Data Insights & More

Aug 8, 2024

Employee satisfaction

Employee Satisfaction: How to Boost Your  Workplace Happiness?

Aug 7, 2024

jotform vs formstack

Jotform vs Formstack: Which Form Builder Should You Choose?

Other categories.

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Tuesday CX Thoughts (TCXT)
  • Uncategorized
  • What’s Coming Up
  • Workforce Intelligence

CHAPTER FIVE RESEARCH DESIGN AND METHODOLOGY 5.1. Introduction Citation: Lelissa TB (2018); Research Methodology; University of South Africa, PHD Thesis

  • December 2018

Tesfaye Boru at University of South Africa

  • University of South Africa

Discover the world's research

  • 25+ million members
  • 160+ million publication pages
  • 2.3+ billion citations
  • Samah AbduLateif Buzieh

Alemayehu Bakalo

  • Chalchissa Amantie
  • Shani Naudé
  • Jannie Pretorius
  • Arlen Verta Ramadhan

Abdul Rehman

  • Asma Abdul Khaliq
  • Engelica Abriam
  • Joan Belvistre
  • Alyssa Marie Denna
  • Marilyn Gaoat

Moleboheng Ramulumo

  • LOVELY V. ECHALAR

Ali Dugaal

  • Newman Wadesango
  • Alice Nyandoro
  • Lovemore Sitsha
  • Recruit researchers
  • Join for free
  • Login Email Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google Welcome back! Please log in. Email · Hint Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google No account? Sign up

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • J Patient Cent Res Rev
  • v.11(1); Spring 2024
  • PMC11000705

Logo of jpcrr

Research Frameworks: Critical Components for Reporting Qualitative Health Care Research

Qualitative health care research can provide insights into health care practices that quantitative studies cannot. However, the potential of qualitative research to improve health care is undermined by reporting that does not explain or justify the research questions and design. The vital role of research frameworks for designing and conducting quality research is widely accepted, but despite many articles and books on the topic, confusion persists about what constitutes an adequate underpinning framework, what to call it, and how to use one. This editorial clarifies some of the terminology and reinforces why research frameworks are essential for good-quality reporting of all research, especially qualitative research.

Qualitative research provides valuable insights into health care interactions and decision-making processes – for example, why and how a clinician may ignore prevailing evidence and continue making clinical decisions the way they always have. 1 The perception of qualitative health care research has improved since a 2016 article by Greenhalgh et al. highlighted the higher contributions and citation rates of qualitative research than those of contemporaneous quantitative research. 2 The Greenhalgh et al. article was subsequently supported by an open letter from 76 senior academics spanning 11 countries to the editors of the British Medical Journal . 3 Despite greater recognition and acceptance, qualitative research continues to have an “uneasy relationship with theory,” 4 which contributes to poor reporting.

As an editor for the Journal of Patient-Centered Research and Reviews , as well as Human Resources for Health , I have seen several exemplary qualitative articles with clear and coherent reporting. On the other hand, I have often been concerned by a lack of rigorous reporting, which may reflect and reinforce the outdated perception of qualitative research as the “soft option.” 5 Qualitative research is more than conducting a few semi-structured interviews, transcribing the audio recordings verbatim, coding the transcripts, and developing and reporting themes, including a few quotes. Qualitative research that benefits health care is time-consuming and labor-intensive, requires robust design, and is rooted in theory, along with comprehensive reporting. 6

What Is “Theory”?

So fundamental is theory to qualitative research that I initially toyed with titling this editorial, “ Theory: the missing link in qualitative health care research articles ,” before deeming that focus too broad. As far back as 1967, Merton 6 warned that “the word theory threatens to become meaningless.” While it cannot be overstated that “atheoretical” studies lack the underlying logic that justifies researchers’ design choices, the word theory is so overused that it is difficult to understand what constitutes an adequate theoretical foundation and what to call it.

Theory, as used in the term theoretical foundation , refers to the existing body of knowledge. 7 , 8 The existing body of knowledge consists of more than formal theories , with their explanatory and predictive characteristics, so theory implies more than just theories . Box 1 9 – 12 defines the “building blocks of formal theories.” 9 Theorizing or theory-building starts with concepts at the most concrete, experiential level, becoming progressively more abstract until a higher-level theory is developed that explains the relationships between the building blocks. 9 Grand theories are broad, representing the most abstract level of theorizing. Middle-range and explanatory theories are progressively less abstract, more specific to particular phenomena or cases (middle-range) or variables (explanatory), and testable.

The Building Blocks of Formal Theories 9

words we assign to mental representations of events or phenomena ,
higher-order clusters of concepts
expressions of relationships among several constructs
“sets of interrelated constructs, definitions, and propositions that present a systematic view of phenomena by specifying relations among variables and phenomena” general sets “of principles that are independent of the specific case, situation, phenomenon or observation to be explained”

The Importance of Research Frameworks

Researchers may draw on several elements to frame their research. Generally, a framework is regarded as “a set of ideas that you use when you are forming your decisions and judgements” 13 or “a system of rules, ideas, or beliefs that is used to plan or decide something.” 14 Research frameworks may consist of a single formal theory or part thereof, any combination of several theories or relevant constructs from different theories, models (as simplified representations of formal theories), concepts from the literature and researchers’ experiences.

Although Merriam 15 was of the view that every study has a framework, whether explicit or not, there are advantages to using an explicit framework. Research frameworks map “the territory being investigated,” 8 thus helping researchers to be explicit about what informed their research design, from developing research questions and choosing appropriate methods to data analysis and interpretation. Using a framework makes research findings more meaningful 12 and promotes generalizability by situating the study and interpreting data in more general terms than the study itself. 16

Theoretical and Conceptual Frameworks

The variation in how the terms theoretical and conceptual frameworks are used may be confusing. Some researchers refer to only theoretical frameworks 17 , 18 or conceptual frameworks, 19 – 21 while others use the terms interchangeably. 7 Other researchers distinguish between the two. For example, Miles, Huberman & Saldana 8 see theoretical frameworks as based on formal theories and conceptual frameworks derived inductively from locally relevant concepts and variables, although they may include theoretical aspects. Conversely, some researchers believe that theoretical frameworks include formal theories and concepts. 18 Others argue that any differences between the two types of frameworks are semantic and, instead, emphasize using a research framework to provide coherence across the research questions, methods and interpretation of the results, irrespective of what that framework is called.

Like Ravitch and Riggan, 22 I regard conceptual frameworks (CFs) as the broader term. Including researchers’ perspectives and experiences in CFs provides valuable sources of originality. Novel perspectives guard against research repeating what has already been stated. 23 The term theoretical framework (TF) may be appropriate where formal published and identifiable theories or parts of such theories are used. 24 However, existing formal theories alone may not provide the current state of relevant concepts essential to understanding the motivation for and logic underlying a study. Some researchers may argue that relevant concepts may be covered in the literature review, but what is the point of literature reviews and prior findings unless authors connect them to the research questions and design? Indeed, Sutton & Straw 25 exclude literature reviews and lists of prior findings as an adequate foundation for a study, along with individual lists of variables or constructs (even when the constructs are defined), predictions or hypotheses, and diagrams that do not propose relationships. One or more of these aspects could be used in a research framework (eg, in a TF), and the literature review could (and should) focus on the theories or parts of theories (constructs), offer some critique of the theory and point out how they intend to use the theory. This would be more meaningful than merely describing the theory as the “background” to the study, without explicitly stating why and how it is being used. Similarly, a CF may include a discussion of the theories being used (basically, a TF) and a literature review of the current understanding of any relevant concepts that are not regarded as formal theory.

It may be helpful for authors to specify whether they are using a theoretical or a conceptual framework, but more importantly, authors should make explicit how they constructed and used their research framework. Some studies start with research frameworks of one type and end up with another type, 8 , 22 underscoring the need for authors to clarify the type of framework used and how it informed their research. Accepting the sheer complexity surrounding research frameworks and lamenting the difficulty of reducing the confusion around these terms, Box 2 26 – 31 and Box 3 offer examples highlighting the fundamental elements of theoretical and conceptual frameworks while acknowledging that they share a common purpose.

Examples of How Theoretical Frameworks May Be Used

The Southern African Association of Health Educationalist’s best publication of 2023 reported on a non-inferiority randomized control trial comparing video demonstrations and bedside tutorials for teaching pediatric clinical skills. The authors combined the social cognitive of sequential skill acquisition , and Peyton’s approach to teaching procedural and physical examination skills , to provide the justification for skill demonstrations forming the first step in bedside teaching. This premise formed the basis for the study and informed the interpretation of the results.
Maxwell describes how a researcher used a theoretical framework based on three formal theories to understand the “day-to-day work” of a medical group practice and to emphasize aspects of his results. This example illustrates the use of existing formal theories (one of which Maxwell describes as being less “identified than the other two”) to understand the phenomenon of interest and provide a frame of reference for interpreting the results.

Examples of How Conceptual Frameworks May Be Used

There is complexity around how conceptual frameworks are developed and used to inform research design, so consider the following examples: the first is based on the work of one of my doctoral students in medical education (with permission from Dr. Neetha Erumeda). The second is a fictitious account based on the normalization process model, which has been used in qualitative health care research.
In a study evaluating a postgraduate medical training program, Dr. Erumeda constructed a conceptual framework based on a logic . Logic models graphically represent causal relationships between programmatic inputs, activities, outputs, and outcomes linearly, and they can be based on different , eg, theories of action, which focus on programmatic inputs and activities, or theories of change, which focus on programmatic outcomes. Dr. Erumeda based her initial CF on a formal of change. She then selected to include in her logic model, based on the literature and of teaching in the program being evaluated. Once she had a diagrammatic representation of her logic model and the concepts she would focus on, she discussed the current understanding of each concept from the literature. After an analysis of her results, Dr. Erumeda modified her initial CF by incorporating her findings and the insights. Her final logic model represented a theory of action, allowing her to offer recommendations to improve the training program.
To study the implementation of a complex innovation into a health care system, one might employ the normalization process , which is a representation of . The model consists of four constructs regarding the innovation: 1) how it is enacted by the people doing it (interactional workability), 2) how it is understood within the networks of people around it (relational integration), 3) how it fits with existing divisions of labor (skill set workability), and 4) how it is sponsored or controlled by the organization in which it is taking place (contextual integration).
Constructing a would require researchers to consider how the innovation relates to each of the constructs in the model, to identify that make up the constructs and to consider their of the concepts (eg, how they conceive the prevailing work ethic or experience the managerial hierarchy). They may also be able to postulate between different constructs or concepts or decide to focus on particular aspects of the model, which they could explore conceptually using the literature. Their research design would be influenced by their areas of interest, which would, in turn, determine their research methods. The findings could allow them to modify their model with evidence-based relationships and new concepts.

Misconceptions About Qualitative Research

Qualitative research’s “uneasy relationship with theory” 4 may be due to several misconceptions. One possible misconception is that qualitative research aims to build theory and thus does not need theoretical grounding. The reality is that all qualitative research methods, not just Grounded Theory studies focused on theory building, may lead to theory construction. 16 Similarly, all types of qualitative research, including Grounded Theory studies, should be guided by research frameworks. 16

Not using a research framework may also be due to misconceptions that qualitative research aims to understand people’s perspectives and experiences without examining them from a particular theoretical perspective or that theoretical foundations may influence researchers’ interpretations of participants’ meanings. In fact, in the same way that participants’ meanings vary, qualitative researchers’ interpretations (as opposed to descriptions) of participants’ meaning-making will differ. 32 , 33 Research frameworks thus provide a frame of reference for “making sense of the data.” 34

Studies informed by well-defined research frameworks can make a world of difference in alleviating misconceptions. Good qualitative reporting requires research frameworks that make explicit the combination of relevant theories, theoretical constructs and concepts that will permeate every aspect of the research. Irrespective of the term used, research frameworks are critical components of reporting not only qualitative but also all types of research.

Acknowledgments

In memory of Martie Sanders: supervisor, mentor, and colleague. My deepest gratitude for your unfailing support and guidance. I feel your loss.

Conflicts of Interest: None.

Research Design vs. Research Methods

What's the difference.

Research design and research methods are two essential components of any research study. Research design refers to the overall plan or structure of the study, outlining the objectives, research questions, and the overall approach to be used. It involves making decisions about the type of study, the target population, and the data collection and analysis techniques to be employed. On the other hand, research methods refer to the specific techniques and tools used to gather and analyze data. This includes selecting the appropriate sampling method, designing surveys or interviews, and choosing statistical tests for data analysis. While research design provides the framework for the study, research methods are the practical tools used to implement the design and collect the necessary data.

Research Design

AttributeResearch DesignResearch Methods
DefinitionThe overall plan or strategy to answer research questionsThe specific techniques or tools used to collect and analyze data
ObjectiveTo provide a framework for conducting researchTo gather and analyze data to answer research questions
ScopeEncompasses the entire research processFocuses on data collection and analysis
TypesExperimental, quasi-experimental, descriptive, exploratory, etc.Surveys, interviews, observations, experiments, case studies, etc.
FlexibilityCan be flexible and adaptable based on research needsCan be rigid or flexible depending on the chosen methods
TimeframeEstablishes the overall timeline for the researchVaries based on the chosen methods and research goals
Data AnalysisMay involve statistical analysis, qualitative coding, etc.Includes statistical analysis, content analysis, thematic analysis, etc.
ValidityConcerned with the overall quality and accuracy of the researchFocuses on the reliability and validity of data collection methods

Research Methods

Further Detail

Introduction.

Research is a systematic process that aims to gather and analyze information to answer specific questions or solve problems. It involves careful planning and execution to ensure reliable and valid results. Two key components of any research study are the research design and research methods. While they are closely related, they serve distinct purposes and have different attributes. In this article, we will explore and compare the attributes of research design and research methods.

Research Design

Research design refers to the overall plan or strategy that guides the entire research process. It outlines the structure and framework of the study, including the objectives, research questions, and the overall approach to be used. The research design provides a roadmap for researchers to follow, ensuring that the study is conducted in a systematic and organized manner.

One of the key attributes of research design is its flexibility. Researchers can choose from various research designs, such as experimental, correlational, descriptive, or exploratory, depending on the nature of their research questions and the available resources. Each design has its own strengths and limitations, and researchers must carefully consider these factors when selecting the most appropriate design for their study.

Another important attribute of research design is its ability to establish the causal relationship between variables. Experimental research designs, for example, are specifically designed to determine cause and effect relationships by manipulating independent variables and measuring their impact on dependent variables. This attribute is particularly valuable when researchers aim to make causal inferences and draw conclusions about the effectiveness of interventions or treatments.

Research design also plays a crucial role in determining the generalizability of the findings. Some research designs, such as case studies or qualitative research, may provide rich and in-depth insights into a specific context or phenomenon but may lack generalizability to a larger population. On the other hand, quantitative research designs, such as surveys or experiments, often aim for a representative sample and strive for generalizability to a broader population.

Furthermore, research design influences the data collection methods and tools used in a study. It helps researchers decide whether to use qualitative or quantitative data, or a combination of both, and guides the selection of appropriate data collection techniques, such as interviews, observations, questionnaires, or experiments. The research design ensures that the chosen methods align with the research objectives and provide the necessary data to answer the research questions.

Research Methods

Research methods, on the other hand, refer to the specific techniques and procedures used to collect and analyze data within a research study. While research design provides the overall framework, research methods are the practical tools that researchers employ to gather the necessary information.

One of the key attributes of research methods is their diversity. Researchers can choose from a wide range of methods, such as surveys, interviews, observations, experiments, case studies, content analysis, or statistical analysis, depending on the nature of their research questions and the available resources. Each method has its own strengths and limitations, and researchers must carefully select the most appropriate methods to ensure the validity and reliability of their findings.

Another important attribute of research methods is their ability to provide empirical evidence. By collecting data through systematic and rigorous methods, researchers can obtain objective and measurable information that can be analyzed and interpreted. This attribute is crucial for generating reliable and valid results, as it ensures that the findings are based on evidence rather than personal opinions or biases.

Research methods also play a significant role in ensuring the ethical conduct of research. Ethical considerations, such as informed consent, privacy protection, and minimizing harm to participants, are essential in any research study. The choice of research methods should align with these ethical principles and guidelines to ensure the well-being and rights of the participants.

Furthermore, research methods allow researchers to analyze and interpret the collected data. Statistical analysis, for example, enables researchers to identify patterns, relationships, and trends within the data, providing a deeper understanding of the research questions. The choice of appropriate analysis methods depends on the nature of the data and the research objectives, and researchers must possess the necessary skills and knowledge to conduct the analysis accurately.

Lastly, research methods contribute to the reproducibility and transparency of research. By clearly documenting the methods used, researchers enable others to replicate the study and verify the findings. This attribute is crucial for the advancement of knowledge and the validation of research results.

Research design and research methods are two essential components of any research study. While research design provides the overall plan and structure, research methods are the practical tools used to collect and analyze data. Both have distinct attributes that contribute to the reliability, validity, and generalizability of research findings. By understanding and carefully considering the attributes of research design and research methods, researchers can conduct high-quality studies that contribute to the advancement of knowledge in their respective fields.

Comparisons may contain inaccurate information about people, places, or facts. Please report any issues.

  • Privacy Policy

Research Method

Home » Theoretical Framework – Types, Examples and Writing Guide

Theoretical Framework – Types, Examples and Writing Guide

Table of Contents

Theoretical Framework

Theoretical Framework

Definition:

Theoretical framework refers to a set of concepts, theories, ideas , and assumptions that serve as a foundation for understanding a particular phenomenon or problem. It provides a conceptual framework that helps researchers to design and conduct their research, as well as to analyze and interpret their findings.

In research, a theoretical framework explains the relationship between various variables, identifies gaps in existing knowledge, and guides the development of research questions, hypotheses, and methodologies. It also helps to contextualize the research within a broader theoretical perspective, and can be used to guide the interpretation of results and the formulation of recommendations.

Types of Theoretical Framework

Types of Types of Theoretical Framework are as follows:

Conceptual Framework

This type of framework defines the key concepts and relationships between them. It helps to provide a theoretical foundation for a study or research project .

Deductive Framework

This type of framework starts with a general theory or hypothesis and then uses data to test and refine it. It is often used in quantitative research .

Inductive Framework

This type of framework starts with data and then develops a theory or hypothesis based on the patterns and themes that emerge from the data. It is often used in qualitative research .

Empirical Framework

This type of framework focuses on the collection and analysis of empirical data, such as surveys or experiments. It is often used in scientific research .

Normative Framework

This type of framework defines a set of norms or values that guide behavior or decision-making. It is often used in ethics and social sciences.

Explanatory Framework

This type of framework seeks to explain the underlying mechanisms or causes of a particular phenomenon or behavior. It is often used in psychology and social sciences.

Components of Theoretical Framework

The components of a theoretical framework include:

  • Concepts : The basic building blocks of a theoretical framework. Concepts are abstract ideas or generalizations that represent objects, events, or phenomena.
  • Variables : These are measurable and observable aspects of a concept. In a research context, variables can be manipulated or measured to test hypotheses.
  • Assumptions : These are beliefs or statements that are taken for granted and are not tested in a study. They provide a starting point for developing hypotheses.
  • Propositions : These are statements that explain the relationships between concepts and variables in a theoretical framework.
  • Hypotheses : These are testable predictions that are derived from the theoretical framework. Hypotheses are used to guide data collection and analysis.
  • Constructs : These are abstract concepts that cannot be directly measured but are inferred from observable variables. Constructs provide a way to understand complex phenomena.
  • Models : These are simplified representations of reality that are used to explain, predict, or control a phenomenon.

How to Write Theoretical Framework

A theoretical framework is an essential part of any research study or paper, as it helps to provide a theoretical basis for the research and guide the analysis and interpretation of the data. Here are some steps to help you write a theoretical framework:

  • Identify the key concepts and variables : Start by identifying the main concepts and variables that your research is exploring. These could include things like motivation, behavior, attitudes, or any other relevant concepts.
  • Review relevant literature: Conduct a thorough review of the existing literature in your field to identify key theories and ideas that relate to your research. This will help you to understand the existing knowledge and theories that are relevant to your research and provide a basis for your theoretical framework.
  • Develop a conceptual framework : Based on your literature review, develop a conceptual framework that outlines the key concepts and their relationships. This framework should provide a clear and concise overview of the theoretical perspective that underpins your research.
  • Identify hypotheses and research questions: Based on your conceptual framework, identify the hypotheses and research questions that you want to test or explore in your research.
  • Test your theoretical framework: Once you have developed your theoretical framework, test it by applying it to your research data. This will help you to identify any gaps or weaknesses in your framework and refine it as necessary.
  • Write up your theoretical framework: Finally, write up your theoretical framework in a clear and concise manner, using appropriate terminology and referencing the relevant literature to support your arguments.

Theoretical Framework Examples

Here are some examples of theoretical frameworks:

  • Social Learning Theory : This framework, developed by Albert Bandura, suggests that people learn from their environment, including the behaviors of others, and that behavior is influenced by both external and internal factors.
  • Maslow’s Hierarchy of Needs : Abraham Maslow proposed that human needs are arranged in a hierarchy, with basic physiological needs at the bottom, followed by safety, love and belonging, esteem, and self-actualization at the top. This framework has been used in various fields, including psychology and education.
  • Ecological Systems Theory : This framework, developed by Urie Bronfenbrenner, suggests that a person’s development is influenced by the interaction between the individual and the various environments in which they live, such as family, school, and community.
  • Feminist Theory: This framework examines how gender and power intersect to influence social, cultural, and political issues. It emphasizes the importance of understanding and challenging systems of oppression.
  • Cognitive Behavioral Theory: This framework suggests that our thoughts, beliefs, and attitudes influence our behavior, and that changing our thought patterns can lead to changes in behavior and emotional responses.
  • Attachment Theory: This framework examines the ways in which early relationships with caregivers shape our later relationships and attachment styles.
  • Critical Race Theory : This framework examines how race intersects with other forms of social stratification and oppression to perpetuate inequality and discrimination.

When to Have A Theoretical Framework

Following are some situations When to Have A Theoretical Framework:

  • A theoretical framework should be developed when conducting research in any discipline, as it provides a foundation for understanding the research problem and guiding the research process.
  • A theoretical framework is essential when conducting research on complex phenomena, as it helps to organize and structure the research questions, hypotheses, and findings.
  • A theoretical framework should be developed when the research problem requires a deeper understanding of the underlying concepts and principles that govern the phenomenon being studied.
  • A theoretical framework is particularly important when conducting research in social sciences, as it helps to explain the relationships between variables and provides a framework for testing hypotheses.
  • A theoretical framework should be developed when conducting research in applied fields, such as engineering or medicine, as it helps to provide a theoretical basis for the development of new technologies or treatments.
  • A theoretical framework should be developed when conducting research that seeks to address a specific gap in knowledge, as it helps to define the problem and identify potential solutions.
  • A theoretical framework is also important when conducting research that involves the analysis of existing theories or concepts, as it helps to provide a framework for comparing and contrasting different theories and concepts.
  • A theoretical framework should be developed when conducting research that seeks to make predictions or develop generalizations about a particular phenomenon, as it helps to provide a basis for evaluating the accuracy of these predictions or generalizations.
  • Finally, a theoretical framework should be developed when conducting research that seeks to make a contribution to the field, as it helps to situate the research within the broader context of the discipline and identify its significance.

Purpose of Theoretical Framework

The purposes of a theoretical framework include:

  • Providing a conceptual framework for the study: A theoretical framework helps researchers to define and clarify the concepts and variables of interest in their research. It enables researchers to develop a clear and concise definition of the problem, which in turn helps to guide the research process.
  • Guiding the research design: A theoretical framework can guide the selection of research methods, data collection techniques, and data analysis procedures. By outlining the key concepts and assumptions underlying the research questions, the theoretical framework can help researchers to identify the most appropriate research design for their study.
  • Supporting the interpretation of research findings: A theoretical framework provides a framework for interpreting the research findings by helping researchers to make connections between their findings and existing theory. It enables researchers to identify the implications of their findings for theory development and to assess the generalizability of their findings.
  • Enhancing the credibility of the research: A well-developed theoretical framework can enhance the credibility of the research by providing a strong theoretical foundation for the study. It demonstrates that the research is based on a solid understanding of the relevant theory and that the research questions are grounded in a clear conceptual framework.
  • Facilitating communication and collaboration: A theoretical framework provides a common language and conceptual framework for researchers, enabling them to communicate and collaborate more effectively. It helps to ensure that everyone involved in the research is working towards the same goals and is using the same concepts and definitions.

Characteristics of Theoretical Framework

Some of the characteristics of a theoretical framework include:

  • Conceptual clarity: The concepts used in the theoretical framework should be clearly defined and understood by all stakeholders.
  • Logical coherence : The framework should be internally consistent, with each concept and assumption logically connected to the others.
  • Empirical relevance: The framework should be based on empirical evidence and research findings.
  • Parsimony : The framework should be as simple as possible, without sacrificing its ability to explain the phenomenon in question.
  • Flexibility : The framework should be adaptable to new findings and insights.
  • Testability : The framework should be testable through research, with clear hypotheses that can be falsified or supported by data.
  • Applicability : The framework should be useful for practical applications, such as designing interventions or policies.

Advantages of Theoretical Framework

Here are some of the advantages of having a theoretical framework:

  • Provides a clear direction : A theoretical framework helps researchers to identify the key concepts and variables they need to study and the relationships between them. This provides a clear direction for the research and helps researchers to focus their efforts and resources.
  • Increases the validity of the research: A theoretical framework helps to ensure that the research is based on sound theoretical principles and concepts. This increases the validity of the research by ensuring that it is grounded in established knowledge and is not based on arbitrary assumptions.
  • Enables comparisons between studies : A theoretical framework provides a common language and set of concepts that researchers can use to compare and contrast their findings. This helps to build a cumulative body of knowledge and allows researchers to identify patterns and trends across different studies.
  • Helps to generate hypotheses: A theoretical framework provides a basis for generating hypotheses about the relationships between different concepts and variables. This can help to guide the research process and identify areas that require further investigation.
  • Facilitates communication: A theoretical framework provides a common language and set of concepts that researchers can use to communicate their findings to other researchers and to the wider community. This makes it easier for others to understand the research and its implications.

About the author

' src=

Muhammad Hassan

Researcher, Academic Writer, Web developer

You may also like

Dissertation Methodology

Dissertation Methodology – Structure, Example...

Research Contribution

Research Contribution – Thesis Guide

Delimitations

Delimitations in Research – Types, Examples and...

Significance of the Study

Significance of the Study – Examples and Writing...

Research Methodology

Research Methodology – Types, Examples and...

Research Findings

Research Findings – Types Examples and Writing...

  • Study protocol
  • Open access
  • Published: 05 August 2024

A pragmatic, stepped-wedge, hybrid type II trial of interoperable clinical decision support to improve venous thromboembolism prophylaxis for patients with traumatic brain injury

  • Christopher J. Tignanelli   ORCID: orcid.org/0000-0002-8079-5565 1 , 2 , 3 , 4 ,
  • Surbhi Shah 5 ,
  • David Vock 6 ,
  • Lianne Siegel 6 ,
  • Carlos Serrano 6 ,
  • Elliott Haut 7 ,
  • Sean Switzer 8 ,
  • Christie L. Martin 9 ,
  • Rubina Rizvi 2 , 3 ,
  • Vincent Peta 1 ,
  • Peter C. Jenkins 10 ,
  • Nicholas Lemke 1 ,
  • Thankam Thyvalikakath 11 , 12 ,
  • Jerome A. Osheroff 13 ,
  • Denise Torres 14 ,
  • David Vawdrey 15 ,
  • Rachael A. Callcut 16 ,
  • Mary Butler 3 , 17 &
  • Genevieve B. Melton 1 , 2 , 3  

Implementation Science volume  19 , Article number:  57 ( 2024 ) Cite this article

198 Accesses

2 Altmetric

Metrics details

Venous thromboembolism (VTE) is a preventable medical condition which has substantial impact on patient morbidity, mortality, and disability. Unfortunately, adherence to the published best practices for VTE prevention, based on patient centered outcomes research (PCOR), is highly variable across U.S. hospitals, which represents a gap between current evidence and clinical practice leading to adverse patient outcomes.

This gap is especially large in the case of traumatic brain injury (TBI), where reluctance to initiate VTE prevention due to concerns for potentially increasing the rates of intracranial bleeding drives poor rates of VTE prophylaxis. This is despite research which has shown early initiation of VTE prophylaxis to be safe in TBI without increased risk of delayed neurosurgical intervention or death. Clinical decision support (CDS) is an indispensable solution to close this practice gap; however, design and implementation barriers hinder CDS adoption and successful scaling across health systems. Clinical practice guidelines (CPGs) informed by PCOR evidence can be deployed using CDS systems to improve the evidence to practice gap. In the Scaling AcceptabLE cDs (SCALED) study, we will implement a VTE prevention CPG within an interoperable CDS system and evaluate both CPG effectiveness (improved clinical outcomes) and CDS implementation.

The SCALED trial is a hybrid type 2 randomized stepped wedge effectiveness-implementation trial to scale the CDS across 4 heterogeneous healthcare systems. Trial outcomes will be assessed using the RE 2 -AIM planning and evaluation framework. Efforts will be made to ensure implementation consistency. Nonetheless, it is expected that CDS adoption will vary across each site. To assess these differences, we will evaluate implementation processes across trial sites using the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework (a determinant framework) using mixed-methods. Finally, it is critical that PCOR CPGs are maintained as evidence evolves. To date, an accepted process for evidence maintenance does not exist. We will pilot a “Living Guideline” process model for the VTE prevention CDS system.

The stepped wedge hybrid type 2 trial will provide evidence regarding the effectiveness of CDS based on the Berne-Norwood criteria for VTE prevention in patients with TBI. Additionally, it will provide evidence regarding a successful strategy to scale interoperable CDS systems across U.S. healthcare systems, advancing both the fields of implementation science and health informatics.

Trial registration

Clinicaltrials.gov – NCT05628207. Prospectively registered 11/28/2022, https://classic.clinicaltrials.gov/ct2/show/NCT05628207 .

Contributions to the Literature

This paper provides a study protocol for a new and novel stepped wedge study variation which includes external control sites to take into account external influences on the uptake of traumatic brain injury guidelines nationally

This paper provides a study design for one of the largest trauma pragmatic trials in the U.S. of 9 heterogenous hospitals

This study is also unique and first-in-kind feature as the guideline may change over time during the study due to the “living” nature of the guideline being implemented.

Introduction

Venous thromboembolism (VTE) is a preventable complication of traumatic brain injury (TBI), which has a substantial impact on patient morbidity, mortality, disability. It is also associated with significant economic burden > $1.5 billion per year [ 1 , 2 ]. VTE is considered a preventable medical condition in the majority of cases [ 2 , 3 ]. Unfortunately, adherence with patient centered outcomes research (PCOR)-informed VTE prevention best practices is highly variable and often poor across U.S. hospitals. Compliance with best practice is especially relevant in the case of TBI as 54% of TBI patients will develop a VTE if they do not receive appropriate anticoagulation [ 4 ]. The delivery of appropriate VTE prophylaxis to TBI patients is such an important quality measure that adherence is tracked nationally and benchmarked by the American College of Surgeons Trauma Quality Improvement Program (ACS-TQIP) [ 5 ]. We have previously shown that instituting a hospital-wide VTE prevention initiative modeled after the Berne-Norwood criteria for VTE prophylaxis in TBI was associated with significantly increased compliance with VTE-related process and improved outcome metrics [ 6 ]. Specifically, we observed improved adherence with the Berne-Norwood criteria [ 7 , 8 ], reduced time to initiation of VTE prophylaxis, and reduced VTE events [ 9 ]. Multiple studies have shown that VTE prophylaxis in trauma patients not only reduces VTE events, but also significantly reduces mortality [ 10 ]. We noted the same reduction in mortality for TBI patients following the initiation of a VTE prophylaxis guideline for patients with TBI [ 11 ]. Unfortunately, despite widely published PCOR-informed best practice, nationally there is reluctance to initiate VTE prevention due to concerns for progression of intracranial hemorrhage. This is despite research which has shown early initiation of VTE prophylaxis to be safe in TBI without increased risk of delayed neurosurgical intervention or death [ 12 , 13 , 14 , 15 , 16 ].

Since approximately 40% of TBI patients do not receive DVT prophylaxis in a timely manner, there is a critical and timely need to close the gap between current PCOR evidence and clinical practice. [ 17 , 18 , 19 , 20 , 21 , 22 , 23 ]. Clinical decision support (CDS) systems are an indispensable solution to close this practice gap; however, design and implementation barriers hinder CDS adoption [ 24 , 25 ]. Another significant challenge to the implementation of CDS is that health information technology (IT) needs a common language for PCOR evidence to translate it into practice across multiple organizations [ 26 ]. Because of these challenges, we will deploy CDS using fast healthcare interoperability resources (FHIR) standards to rapidly implement PCOR evidence into practice [ 27 , 28 ]. We hypothesize that, FHIR standards will reduce CDS development and maintenance costs, increase PCOR uptake in rural and other underserved sites, and speed the development timeline to build a comprehensive suite of CDS for PCOR evidence [ 29 ].

Few studies have investigated specific barriers to and facilitating factors for adoption of interoperable FHIR-based CDS [ 30 ]. For example, many current studies investigating barriers and facilitators for interoperable CDS are limited to expert opinion [ 30 , 31 ] or lack a formal implementation science framework-guided investigation [ 32 , 33 ]. Barriers to and facilitating factors for adoption of interoperable CDS following real-life implementation and multicenter scaling guided by validated implementation science frameworks should be rigorously investigated. This study will facilitate comprehensive exploration of clinician and environmental (internal and external) contextual elements that influence interoperable CDS implementation success. In this study, we will scale and assess the effectiveness of a CDS system for a VTE prophylaxis guideline in patients with TBI and evaluate implementation across 9 sites within 4 U.S. trauma systems.

Study aims and implementation framework

This trial consists of a stepped wedge hybrid effectiveness-implementation trial to scale the CDS system across 4 trauma systems and in parallel evaluate implementation strategy guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework (Fig.  1 a) [ 34 ]. We anticipate variability in CDS adoption across sites during the implementation trial. This variation represents a unique opportunity to study implementation at each site and understand what strategies, system factors, and engagement of specific stakeholders are associated with improved CDS adoption. We will rigorously evaluate each implementation phase, guided by The EPIS Implementation Framework [ 34 ], our determinant framework (Fig.  1 b). We will apply the EPIS framework to guide assessment of implementation phases, barriers, and facilitators (Fig.  2 ) [ 34 ]. EPIS comprises 16 constructs over 4 domains (outer context, inner context, bridging factors, and innovation factors). We selected EPIS as our determinant framework as it includes clearly delineated implementation stages and allows for examination of change at multiple levels, across time, and through phases that build toward implementation. While EPIS was initially developed for implementation in public service, it has since been translated to healthcare, especially for complex multi-institutional healthcare interventions [ 34 , 35 , 36 ].

figure 1

a Randomized Stepped Wedge design of the SCALED clinical trial. b Parallel, implementation evaluation guided by Explore, Preparation, Implementation and Sustain (EPIS) framework

figure 2

Implementation evaluation across study sites

Trial overview, setting, and inclusion/exclusion criteria

This trial will be conducted at 4 healthcare systems with 1–3 hospitals per system and is projected to occur over a 3 to 4-year period. The trial uses a randomized stepped-wedge design to scale an interoperable CDS system for the Berne-Norwood TBI CPG. Figure  1 a provides a schematic for the trial design. The order of health systems and sites will be randomly determined. This study will include a heterogeneous number of hospitals by trauma verification status, electronic health record (EHR) platform, bed size, and setting (Table  1 ). Our target population is adult patients admitted with an acute TBI defined as International Classification of Disease 10 Clinical Modification (ICD-10-CM): S06.1 – S06.9 or S06.A. Patients who die within 24 h of hospital admission and patients documented as “comfort cares” during the first 72 h of hospitalization will be excluded, as they would have a limited opportunity to receive adherence with the Berne-Norwood criteria. Additionally, patients with a pre-existing VTE or inferior vena cava (IVC) filter at the time of admission, and patients with a mechanical heart valve or ventricular assist device will be excluded from final analysis.

This study will also include up to 3 control sites (Fig.  1 a), a feature not typically included with historic stepped-wedge trial designs, which will strengthen our ability to understand external influences on the study findings. These control sites, which do not receive the CDS intervention and do not have any planned initiatives around guideline implementation, will allow the study to assess baseline adherence and variation in clinical practice over the study period.

CDS Intervention

TBI diagnosis upon admission will activate an interoperable CDS system leveraging the Stanson Health (Charlotte, NC) CDS platform [ 37 ], which is being expanded to include interoperable offerings for TBI VTE prophylaxis. This system provides a knowledge representation framework to faithfully express the intent of the Berne-Norwood prevention criteria computationally (Table  2 ). The interoperable FHIR data standard will be used for bi-directional data transfer between each site’s EHR and the CDS platform. Workflow integration includes a combination of both passive and interruptive provider and trauma system leader information and “nudges”. Table 2 represents the Standards-based, Machine-readable, Adaptive, Requirements-based, and Testable (SMART) L2 layer [ 38 ] of the Berne-Norwood criteria.

CDS user-centered design

We will complete a rapid cycle CDS evaluation to optimize CDS workflow integration by conducting a user-driven simulation and expert-driven heuristic usability optimization as we have previously done [ 39 ]. For rapid cycle CDS evaluation, multidisciplinary trauma end-user “teams” will complete up to 3 scenarios designed to represent various extremes in TBI VTE prevention decision making. Simulation usability testing will be overseen by usability experts, who will catalogue usability issues that arise during simulation. Via consensus ranking, the development and planning teams will rank usability issues from 0 (cosmetic) to 5 (usability catastrophe). Using 10 predefined heuristics for usability design [ 40 ], we will conduct a heuristic evaluation of the CDS, then catalogue and rank usability issues. These results will inform CDS application design, optimized for TBI workflow integration.

Implementation strategy

Following CDS development, our healthcare system relies on a time-tested approach for the implementation and scaling of user-centered CDS: this approach is called the Scaling AcceptabLE cDs (SCALED) Strategy [ 41 ]. This framework integrates multiple evidence-based implementation strategies (Table  3 ).

Study outcomes

The primary implementation outcome is patient-level adherence with the CPG: Specifically, did the patient received guideline-concordant care? Adherence will be measured as an all-or-none measure (binary endpoint at the encounter/patient-level). Thus, if a patient is low-risk for TBI progression, by 24 h they should have risk-specific VTE prevention ordered; if they receive this after 24 h, or if they receive the intermediate risk VTE prevention regimen, this would be deemed non-adherent. The primary effectiveness outcome is VTE (binary endpoint at the patient-encounter level). Safety outcomes evaluated include: TBI progression, in-hospital mortality, and bleeding events. A secondary hypothesis is that as the trial scales to additional sites, iterative implementations will be more efficient (reduced implementation time) and more effective (improved adoption). Secondary hypotheses will be evaluated using the RE 2 -AIM framework [ 42 , 43 ] and are displayed in Table  4 .

Clinical trial data collection methods

Data sources used in this trial include the Stanson Health CDS eCaseReport and site trauma registry. The eCaseReport is a living registry of all patients, and their associated clinical trial data elements, that were eligible for the CDS. All sites also maintain a trauma registry adhering to the National Trauma Data Standards [ 44 ], a requirement for ACS trauma center verification. This dataset is manually annotated by trained clinical abstractors. Data will be sent to the biostatistical team at 6-month intervals. Control and pre-implementation sites will provide their trauma registry in addition to supplemental standards-based EHR extraction of clinical trial data elements or manual abstraction. A data dictionary has been created for the study and will be made available on the trial webpage.

Multiple methods evaluation of implementation success at each EPIS phase

Survey instruments will be prepared using Likert-type scales. Outcomes will be calculated based on scoring guides for the following validated scales: Program Sustainability Assessment Tool (PSAT) [ 45 ], Clinical Sustainability Assessment Tool (CSAT) [ 46 ], Implementation Leadership Scale (ILS) [ 47 ], and Evidenced-based Practice Attitude Scale-36 (EBPAS-36) [ 48 ]. Two scales do not have scoring rubrics: the Organizational Readiness for Change Questionnaire [ 49 , 50 ] and the Normalization Measure Development (NoMAD) Questionnaire [ 51 , 52 , 53 ]. Since both of these scales group questions into constructs, they will be analyzed by generating mean Likert scores and standard deviations per construct, and a mean across constructs, at each of the four implementation phases [ 54 ].

To deeply investigate barriers and facilitators of successful implementation, semi-structured qualitative interviews of key personnel (clinical leadership and end-users, IT leadership and staff) will be conducted at each of the 4 implementation phases. Studies suggest saturation of new ideas occurs after approximately 12 interviews [ 55 ]. Additional samples will be added as needed if thematic saturation is not achieved. Following informed consent, interviews will be performed by a trained qualitative research assistant, audio recorded, and transcribed verbatim. An interview guide, informed by the EPIS framework, was developed to collect key informant experiences with CDS implementation with a focus on inner and outer context factors [ 56 ]. A hybrid approach, primarily deductive and secondarily inductive, approach will be applied. All interviews will be independently double-coded and coding discrepancies will be resolved through discussion. A descriptive thematic analysis approach [ 57 ] will be used to characterize the codes into themes and sub-themes representing the barriers and facilitators to implementation success.

Results for all instruments will be primarily stratified according to site implementation success at each study phase. Additional stratifications may include respondent role, discipline, and hospital system. Bar charts displaying mean survey domains with integrative quotations from the qualitative analysis will be used to facilitate data visualization and understanding of key themes representing barriers and facilitators to successful CDSS implementation.

Statistical analysis

Mixed-effects logistic regression models will be fit to test whether or not CDS implementation changes the likelihood of a VTE event during TBI admission (effectiveness outcome) and the likelihood that the clinical guideline was followed (implementation outcome). The models for these outcomes include fixed-effects for month (when available, to account for secular trends) and an indicator variable for whether the center had the CDS integrated in the EHR. The primary test statistic will be a Wald test of the coefficient for this treatment indicator. We will include random center-specific intercepts to account for correlation within center. Assuming there are 9 sites enrolled with an average of 400 TBI admissions per year and the typical site has between 20%-40% adherence to the clinical guidelines, we will have > 80.0% and > 99.9% power to detect a 5 and 10 percentage point increase in the adherence. Similarly, assuming the typical site has between a VTE event rate of 5–6%, we will have > 80.0% power to detect a 40%-50% reduction in VTE consistent with our published data [ 11 ].

Study oversight

This study is overseen by the University of Minnesota Surgical Clinical Trials Office and by an independent Data Safety Monitoring Board (DSMB). Even though this intervention is deploying a TBI clinical guideline that is currently considered best practice, we believe the addition of a DSMB will improve trial safety, data quality, and trial integrity [ 58 ]. DSMB membership will be independent from the study investigators and will consist of 3 members including: 1 trauma surgeon, 1 informaticist, and 1 statistician. Annual reports including data from all sites, including control sites, will be shared with the DSMB to assure timely monitoring of safety and data quality. The trial will not be stopped early in the event of CDS efficacy because a critical secondary outcome focuses on studying implementation and effectiveness over time.

VTE guideline monitoring and maintenance

Given the potential for a changing evidence-base, it is possible that best practice VTE prevention guidance may change during the study period or afterwards. A critical element in improving adherence with PCOR evidence is updating guidance based on this evidence – in this study, this requires ensuring that the CDS system remains current.

We will pilot a model for producing and maintaining TBI VTE prophylaxis 'Living Guidance and CDS' to ensure that the CDS remains current (Fig.  3 ). The University of Minnesota Evidence-based Practice Center (EPC) Evidence Generation team will conduct and maintain a “living” systematic review. Systematic review data will be uploaded to the AHRQ’s Systematic Review Data Repository (SRDR). “Living” implies that every 6 months the EPC team will evaluate and synthesize new evidence related to TBI VTE prophylaxis, update the existing systematic review and deliver it to a multi-stakeholder Guideline Committee. The Guideline Committee will then use the GRADE (Grading of Recommendations, Assessment, Development and Evaluations) evidence-to-decision (EtD) framework to develop VTE prophylaxis guidelines for patients with TBI [ 59 , 60 , 61 ]. A computational representation of these guidelines will be updated and maintained within the CDS platform by Stanson Health, the CDS Vendor.

figure 3

Pilot process for “Living Guideline”

Spreading successful results beyond study sites

The ultimate goal of this study is to spread successful CDS tools and strategies to broadly improve TBI VTE-related care processes and outcomes. The research outlined above will surface sharable insights about what information needs to be presented to which people in what formats through what channels at what times to reliably deliver guideline-based care – i.e., specific instantiations of the “CDS 5 Rights Framework” applied to this target [ 62 ]. We will use Health Service Blueprint tools to describe our recommended implementation approaches; these tools are being applied in an increasing number of public and private care delivery organizations as a structured approach to ‘get the CDS 5 Right right’ for various improvement targets. We will further adapt and apply Health Service Blueprint foundations supported by VA and AHRQ [ 63 ] to capture VTE care transformation guidance in Health Service Blueprint tooling [ 64 ]. Presenting recommended CDS-enabled workflow, information flow – as well as and related implementation considerations and broader healthcare ecosystem implications – in this structured format will help organizations beyond the initial study participants put study results into action efficiently and effectively.

In this paper, we present the protocol for the SCALED trial, a stepped-wedge cluster randomized trial of a CDS intervention to improve adherence with VTE prevention best practices for patients with TBI. As a hybrid type 2 trial, this study will evaluate both implementation and effectiveness outcomes. In addition to investigating effectiveness, we will also be able to provide insight into the implementation challenges for deploying interoperable CDS across heterogenous health systems. In our pilot study [ 9 ], while patients who received guideline-concordant care had significantly improved outcomes, we noted that not all patients receive guideline concordant care following implementation. Additionally, best strategies for scaling interoperable CDS systems are poorly studied. Thus, this study represents one of the earliest implementation evaluations of scaling interoperable CDS systems across heterogeneous health systems.

This study has several strengths. First, it will rigorously test implementation of a CPG for VTE prevention across 9 U.S. trauma centers using a multi-faceted CDS platform supporting both passive and interruptive decision support. Second, it will rigorously investigate scalable and interoperable CDS strategies to deploy CPGs. Third, this study leverages a centralized eCaseReport generated by the CDS system, a solution which can drive data collection for future pragmatic trials. Importantly, this study takes place at trauma centers which are geographically distinct, utilize different EHR vendors, include both ACS-verified level 1 through level 3 trauma centers, and include rural, community, and university-based trauma centers. In addition to helping spread recommended care transformation strategies beyond additional study sites, documenting these approaches in Health Service Blueprint tools will also support creation of learning communities for sharing, implementing, and enhancing these strategies.

This study also has limitations. First, we are only investigating 4 trauma systems which already have fairly advanced informatics divisions and experience implementing interoperable CDS systems. Thus, these findings may not be broadly applicable to health systems with less informatics experience and expertise. Second, we are only investigating implementation across two EHR vendors: Epic and Cerner, thus these findings may not be applicable to health systems with different EHR vendors such as Meditech or Allscripts. However, the Health Service Blueprint implementation strategy representations should still enable users of other systems to glean valuable insights about components of the transformation approach less dependent on specific EHRs used.

In summary, this study will implement and scale a CDS-enabled care transformation approach across a diverse collaborative CDS community, serving as an important demonstration of this critical healthcare challenge. We will integrate lessons learned for a planned national scaling in collaboration with U.S. trauma societies. Finally, we will pilot an approach for the “Living Guideline” and use that to maintain evidenced-based decision logic within CDS platforms.

Availability of data and materials

Following trial completion data will be made available upon request through the University of Minnesota Data Repository.

Heit JA. Venous thromboembolism: disease burden, outcomes and risk factors. J Thromb Haemost. 2005;3(8):1611–7.

Article   CAS   PubMed   Google Scholar  

Yorkgitis BK, Berndtson AE, Cross A, Kennedy R, Kochuba MP, Tignanelli C, Tominaga GT, Jacobs DG, Marx WH, Ashley DW, Ley EJ, Napolitano L, Costantini TW. American Association for the Surgery of Trauma/American College of Surgeons-Committee on Trauma Clinical Protocol for inpatient venous thromboembolism prophylaxis after trauma. J Trauma Acute Care Surg. 2022;92(3):597–604.

Article   PubMed   Google Scholar  

Nicholson M, Chan N, Bhagirath V, Ginsberg J. Prevention of Venous Thromboembolism in 2020 and Beyond. J Clin Med. 2020;9(8):2467.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Geerts WH, Code KI, Jay RM, Chen E, Szalai JP. A prospective study of venous thromboembolism after major trauma. N Engl J Med. 1994;331(24):1601–6.

Nathens AB, Cryer HG, Fildes J. The American College of Surgeons Trauma Quality Improvement Program. Surg Clin North Am. 2012;92(2):441–54, x−xi.

Ingraham NE, Lotfi-Emran S, Thielen BK, Techar K, Morris RS, Holtan SG, Dudley RA, Tignanelli CJ. Immunomodulation in COVID-19. Lancet Respir Med. 2020;8(6):544–6.

Phelan HA, Eastman AL, Madden CJ, Aldy K, Berne JD, Norwood SH, Scott WW, Bernstein IH, Pruitt J, Butler G, Rogers L, Minei JP. TBI risk stratification at presentation: a prospective study of the incidence and timing of radiographic worsening in the Parkland Protocol. J Trauma Acute Care Surg. 2012;73(2 Suppl 1):S122–7.

Pastorek RA, Cripps MW, Bernstein IH, Scott WW, Madden CJ, Rickert KL, Wolf SE, Phelan HA. The Parkland Protocol’s modified Berne-Norwood criteria predict two tiers of risk for traumatic brain injury progression. J Neurotrauma. 2014;31(20):1737–43.

Article   PubMed   PubMed Central   Google Scholar  

Tignanelli CJ, Gipson J, Nguyen A, Martinez R, Yang S, Reicks PL, Sybrant C, Roach R, Thorson M, West MA. Implementation of a Prophylactic Anticoagulation Guideline for Patients with Traumatic Brain Injury. Jt Comm J Qual Patient Saf. 2020;46(4):185–91.

PubMed   Google Scholar  

Jacobs BN, Cain-Nielsen AH, Jakubus JL, Mikhail JN, Fath JJ, Regenbogen SE, Hemmila MR. Unfractionated heparin versus low-molecular-weight heparin for venous thromboembolism prophylaxis in trauma. J Trauma Acute Care Surg. 2017;83(1):151–8.

Tignanelli CJ, Silverman GM, Lindemann EA, Trembley AL, Gipson JC, Beilman G, Lyng JW, Finzel R, McEwan R, Knoll BC, Pakhomov S, Melton GB. Natural language processing of prehospital emergency medical services trauma records allows for automated characterization of treatment appropriateness. J Trauma Acute Care Surg. 2020;88(5):607–14.

Kim J, Gearhart MM, Zurick A, Zuccarello M, James L, Luchette FA. Preliminary report on the safety of heparin for deep venous thrombosis prophylaxis after severe head injury. J Trauma. 2002;53(1):38–42; discussion 3.

Cothren CC, Smith WR, Moore EE, Morgan SJ. Utility of once-daily dose of low-molecular-weight heparin to prevent venous thromboembolism in multisystem trauma patients. World J Surg. 2007;31(1):98–104.

Norwood SH, Berne JD, Rowe SA, Villarreal DH, Ledlie JT. Early venous thromboembolism prophylaxis with enoxaparin in patients with blunt traumatic brain injury. J Trauma. 2008;65(5):1021–6; discussion 6-7.

CAS   PubMed   Google Scholar  

Scudday T, Brasel K, Webb T, Codner P, Somberg L, Weigelt J, Herrmann D, Peppard W. Safety and efficacy of prophylactic anticoagulation in patients with traumatic brain injury. J Am Coll Surg. 2011;213(1):148–53; discussion 53-4.

Byrne JP, Mason SA, Gomez D, Hoeft C, Subacius H, Xiong W, Neal M, Pirouzmand F, Nathens AB. Timing of Pharmacologic Venous Thromboembolism Prophylaxis in Severe Traumatic Brain Injury: A Propensity-Matched Cohort Study. J Am Coll Surg. 2016;223(4):621-31e5.

Lau R, Stevenson F, Ong BN, Dziedzic K, Eldridge S, Everitt H, Kennedy A, Kontopantelis E, Little P, Qureshi N, Rogers A, Treweek S, Peacock R, Murray E. Addressing the evidence to practice gap for complex interventions in primary care: a systematic review of reviews protocol. BMJ Open. 2014;4(6): e005548.

Tignanelli CJ, Vander Kolk WE, Mikhail JN, Delano MJ, Hemmila MR. Noncompliance with American College of Surgeons Committee on Trauma recommended criteria for full trauma team activation is associated with undertriage deaths. J Trauma Acute Care Surg. 2018;84(2):287–94.

Robbins AJ, Ingraham NE, Sheka AC, Pendleton KM, Morris R, Rix A, Vakayil V, Chipman JG, Charles A, Tignanelli CJ. Discordant Cardiopulmonary Resuscitation and Code Status at Death. J Pain Symptom Manage. 2021;61(4):770–780.e1.

Tignanelli CJ, Watarai B, Fan Y, Petersen A, Hemmila M, Napolitano L, Jarosek S, Charles A. Racial Disparities at Mixed-Race and Minority Hospitals: Treatment of African American Males With High-Grade Splenic Injuries. Am Surg. 2020;86(5):441–9.

Tignanelli CJ, Rix A, Napolitano LM, Hemmila MR, Ma S, Kummerfeld E. Association Between Adherence to Evidence-Based Practices for Treatment of Patients With Traumatic Rib Fractures and Mortality Rates Among US Trauma Centers. JAMA Netw Open. 2020;3(3): e201316.

Oliphant BW, Tignanelli CJ, Napolitano LM, Goulet JA, Hemmila MR. American College of Surgeons Committee on Trauma verification level affects trauma center management of pelvic ring injuries and patient mortality. J Trauma Acute Care Surg. 2019;86(1):1–10.

Tignanelli CJ, Wiktor AJ, Vatsaas CJ, Sachdev G, Heung M, Park PK, Raghavendran K, Napolitano LM. Outcomes of Acute Kidney Injury in Patients With Severe ARDS Due to Influenza A(H1N1) pdm09 Virus. Am J Crit Care. 2018;27(1):67–73.

Khairat S, Marc D, Crosby W, Al SA. Reasons For Physicians Not Adopting Clinical Decision Support Systems: Critical Analysis. JMIR Med Inform. 2018;6(2): e24.

Jones EK, Ninkovic I, Bahr M, Dodge S, Doering M, Martin D, Ottosen J, Allen T, Melton GB, Tignanelli CJ. A novel, evidence-based, comprehensive clinical decision support system improves outcomes for patients with traumatic rib fractures. J Trauma Acute Care Surg. 2023;95(2):161–71.

Marcos M, Maldonado JA, Martinez-Salvador B, Bosca D, Robles M. Interoperability of clinical decision-support systems and electronic health records using archetypes: a case study in clinical trial eligibility. J Biomed Inform. 2013;46(4):676–89.

FHIR Clinical Guidelines. http://build.fhir.org/ig/HL7/cqf-recommendations/ . Accessed 14 Sep 2021. 

Mandel JC, Kreda DA, Mandl KD, Kohane IS, Ramoni RB. SMART on FHIR: a standards-based, interoperable apps platform for electronic health records. J Am Med Inform Assoc. 2016;23(5):899–908.

Goldberg HS, Paterno MD, Rocha BH, Schaeffer M, Wright A, Erickson JL, Middleton B. A highly scalable, interoperable clinical decision support service. J Am Med Inform Assoc. 2014;21(e1):e55-62.

Marcial LH, Blumenfeld B, Harle C, Jing X, Keller MS, Lee V, Lin Z, Dover A, Midboe AM, Al-Showk S, Bradley V, Breen J, Fadden M, Lomotan E, Marco-Ruiz L, Mohamed R, O’Connor P, Rosendale D, Solomon H, Kawamoto K. Barriers, Facilitators, and Potential Solutions to Advancing Interoperable Clinical Decision Support: Multi-Stakeholder Consensus Recommendations for the Opioid Use Case. AMIA Annu Symp Proc. 2019;2019:637–46.

Lomotan EA, Meadows G, Michaels M, Michel JJ, Miller K. To Share is Human! Advancing Evidence into Practice through a National Repository of Interoperable Clinical Decision Support. Appl Clin Inform. 2020;11(1):112–21.

Dolin RH, Boxwala A, Shalaby J. A Pharmacogenomics Clinical Decision Support Service Based on FHIR and CDS Hooks. Methods Inf Med. 2018;57(S 02):e115–23.

Dorr DA, D’Autremont C, Pizzimenti C, Weiskopf N, Rope R, Kassakian S, Richardson JE, McClure R, Eisenberg F. Assessing Data Adequacy for High Blood Pressure Clinical Decision Support: A Quantitative Analysis. Appl Clin Inform. 2021;12(4):710–20.

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14(1):1.

Becan JE, Bartkowski JP, Knight DK, Wiley TRA, DiClemente R, Ducharme L, Welsh WN, Bowser D, McCollister K, Hiller M, Spaulding AC, Flynn PM, Swartzendruber A, Dickson MF, Fisher JH, Aarons GA. A model for rigorously applying the Exploration, Preparation, Implementation, Sustainment (EPIS) framework in the design and measurement of a large scale collaborative multi-site study. Health Justice. 2018;6(1):9.

Idalski Carcone A, Coyle K, Gurung S, Cain D, Dilones RE, Jadwin-Cakmak L, Parsons JT, Naar S. Implementation Science Research Examining the Integration of Evidence-Based Practices Into HIV Prevention and Clinical Care: Protocol for a Mixed-Methods Study Using the Exploration, Preparation, Implementation, and Sustainment (EPIS) Model. JMIR Res Protoc. 2019;8(5): e11202.

Jackson JM, Witek MA, Hupert ML, Brady C, Pullagurla S, Kamande J, Aufforth RD, Tignanelli CJ, Torphy RJ, Yeh JJ, Soper SA. UV activation of polymeric high aspect ratio microstructures: ramifications in antibody surface loading for circulating tumor cell selection. Lab Chip. 2014;14(1):106–17.

Mazzag B, Tignanelli CJ, Smith GD. The effect of residual Ca2+ on the stochastic gating of Ca2+-regulated Ca2+ channel models. J Theor Biol. 2005;235(1):121–50.

Jones EK, Hultman G, Schmoke K, Ninkovic I, Dodge S, Bahr M, Melton GB, Marquard J, Tignanelli CJ. Combined Expert and User-Driven Usability Assessment of Trauma Decision Support Systems Improves User-Centered Design. Surgery. 2022;172(5):1537–48.

Jakob N. Enhancing the explanatory power of usability heuristics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '94). New York: Association for Computing Machinery; 1994. p. 152–8. https://doi.org/10.1145/191666.191729 .

Shah S, Switzer S, Shippee ND, Wogensen P, Kosednar K, Jones E, Pestka DL, Badlani S, Butler M, Wagner B, White K, Rhein J, Benson B, Reding M, Usher M, Melton GB, Tignanelli CJ. Implementation of an Anticoagulation Practice Guideline for COVID-19 via a Clinical Decision Support System in a Large Academic Health System and Its Evaluation: Observational Study. JMIR Med Inform. 2021;9(11): e30743.

Ingraham NE, Jones EK, King S, Dries J, Phillips M, Loftus T, Evans HL, Melton GB, Tignanelli CJ. Re-Aiming Equity Evaluation in Clinical Decision Support: A Scoping Review of Equity Assessments in Surgical Decision Support Systems. Ann Surg. 2023;277(3):359–64.

Holtrop JS, Estabrooks PA, Gaglio B, Harden SM, Kessler RS, King DK, Kwan BM, Ory MG, Rabin BA, Shelton RC, Glasgow RE. Understanding and applying the RE-AIM framework: Clarifications and resources. J Clin Transl Sci. 2021;5(1): e126.

https://www.facs.org/-/media/files/quality-programs/trauma/ntdb/ntds/data-dictionaries/ntds_data_dictionary_2022.ashx . Accessed 14 Sep 2021. ACoSNTDSDDA.

https://www.cdc.gov/pcd/issues/2014/13_0184.htm . Accessed 1/3/2021.

Malone S, Prewitt K, Hackett R, Lin JC, McKay V, Walsh-Bailey C, Luke DA. The Clinical Sustainability Assessment Tool: measuring organizational capacity to promote sustainability in healthcare. Implement Sci Commun. 2021;2(1):77.

Aarons GA, Ehrhart MG, Farahnak LR. The Implementation Leadership Scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45.

Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The Evidence-based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Sci. 2017;12(1):44.

Holt DT, Armenakis AA, Feild HS, Harris SG. Readiness for Organizational Change. J Appl Behav Sci. 2007;43(2):232–55.

Article   Google Scholar  

Weiner BJ. A theory of organizational readiness for change. Implement Sci. 2009;4:67.

Goodridge D, Rana M, Harrison EL, Rotter T, Dobson R, Groot G, Udod S, Lloyd J. Assessing the implementation processes of a large-scale, multi-year quality improvement initiative: survey of health care providers. BMC Health Serv Res. 2018;18(1):237.

Vis C, Ruwaard J, Finch T, Rapley T, de Beurs D, van Stel H, van Lettow B, Mol M, Kleiboer A, Riper H, Smit J. Toward an Objective Assessment of Implementation Processes for Innovations in Health Care: Psychometric Evaluation of the Normalization Measure Development (NoMAD) Questionnaire Among Mental Health Care Professionals. J Med Internet Res. 2019;21(2): e12376.

NoMAD. https://www.implementall.eu/17-nomad.html . Accessed 1/2/2021.

Ng F, McGrath BA, Seth R, et al. Measuring multidisciplinary staff engagement in a national tracheostomy quality improvement project using the NoMAD instrument. Br J Anesth. 2019;123(4):e506.

Guest G, Bunce A, Johnson L. How Many Interviews Are Enough?: An Experiment with Data Saturation and Variability. Field Methods. 2006;18:59–82.

Beidas RS, Stewart RE, Adams DR, Fernandez T, Lustbader S, Powell BJ, Aarons GA, Hoagwood KE, Evans AC, Hurford MO, Rubin R, Hadley T, Mandell DS, Barg FK. A Multi-Level Examination of Stakeholder Perspectives of Implementation of Evidence-Based Practices in a Large Urban Publicly-Funded Mental Health System. Adm Policy Ment Health. 2016;43(6):893–908.

Braun V, Clarke V. Thematic analysis. In Cooper H, Camic PM, Long DL, Panter AT, Rindskopf D, Sher KJ, editors. APA handbooks in psychology®. APA handbook of research methods in psychology, vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological. American Psychological Association; 2012. p. 57–71.

Fiscella K, Sanders M, Holder T, Carroll JK, Luque A, Cassells A, Johnson BA, Williams SK, Tobin JN. The role of data and safety monitoring boards in implementation trials: When are they justified? J Clin Transl Sci. 2020;4(3):229–32.

Alonso-Coello P, Schunemann HJ, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Rada G, Rosenbaum S, Morelli A, Guyatt GH, Oxman AF, Group GW. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ. 2016;353:i2016.

Rosenbaum SE, Moberg J, Glenton C, Schunemann HJ, Lewin S, Akl E, Mustafa RA, Morelli A, Vogel JP, Alonso-Coello P, Rada G, Vasquez J, Parmelli E, Gulmezoglu AM, Flottorp SA, Oxman AD. Developing Evidence to Decision Frameworks and an Interactive Evidence to Decision Tool for Making and Using Decisions and Recommendations in Health Care. Glob Chall. 2018;2(9):1700081.

Alonso-Coello P, Oxman AD, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Vandvik PO, Meerpohl J, Guyatt GH, Schunemann HJ, Group GW. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 2: Clinical practice guidelines. BMJ. 2016;353:i2089.

Osheroff JA. CDS and and the CDS & LHS 5 Rights. CDS/PI Collaborative: Getting Better Faster Together.

ACTS Project Team. Patient Journey and Service Blueprint How Tos. AHRQ evidence-based Care Transformation Support (ACTS) Home. [Online] October 2021. https://cmext.ahrq.gov/confluence/display/PUB/Patient+Journey+and+Service+Blueprint+How+Tos .

CDS Approach for Optimizing VTE Prophylaxis (VTEP) Society of Hospital Medicine (SHM) Recommendations1 Version 2; March, 2013. [online] https://www.healthit.gov/sites/default/files/cds/Detailed%20Inpatient%20CDS-QI%20Worksheet%20-%20VTE%20Example%20-%20Recommendations.xlsx .

Download references

This research was supported by the Agency for Healthcare Research and Quality (AHRQ), grant R18HS028583, the University of Minnesota Center for Learning Health System Sciences – a partnership between the University of Minnesota Medical School and the School of Public Health. The authors have no other conflicts of interest.

Author information

Authors and affiliations.

Department of Surgery, University of Minnesota, 420 Delaware St SE, MMC 195, Minneapolis, MN, 55455, USA

Christopher J. Tignanelli, Vincent Peta, Nicholas Lemke & Genevieve B. Melton

Institute for Health Informatics, University of Minnesota, Minneapolis, MN, USA

Christopher J. Tignanelli, Rubina Rizvi & Genevieve B. Melton

Center for Learning Health Systems Science, University of Minnesota, Minneapolis, MN, USA

Christopher J. Tignanelli, Rubina Rizvi, Mary Butler & Genevieve B. Melton

Center for Quality Outcomes, Discovery and Evaluation, University of Minnesota, Minneapolis, MN, USA

Christopher J. Tignanelli

Department of Medicine, Mayo Clinic, Scottsdale, AZ, USA

Surbhi Shah

Division of Biostatistics and Health Data Science, University of Minnesota, Minneapolis, MN, USA

David Vock, Lianne Siegel & Carlos Serrano

Department of Surgery, Johns Hopkins University, Baltimore, MD, USA

Elliott Haut

M Health Fairview, Minneapolis, MN, USA

Sean Switzer

School of Nursing, University of Minnesota, Minneapolis, MN, USA

Christie L. Martin

Department of Surgery, Indiana University School of Medicine, Indianapolis, IN, USA

Peter C. Jenkins

Center for Biomedical Informatics, Regenstrief Institute, Indianapolis, IN, USA

Thankam Thyvalikakath

Indiana University School of Dentistry, Indianapolis, IN, USA

TMIT Consulting, LLC, Naples, FL, USA

Jerome A. Osheroff

Department of Surgery, Geisinger Health, Danville, PA, USA

Denise Torres

Department of Biomedical Informatics, Geisinger Health, Danville, PA, USA

David Vawdrey

Department of Surgery, UC Davis School of Medicine, Sacramento, CA, USA

Rachael A. Callcut

School of Publish Health, University of Minnesota, Minneapolis, MN, USA

Mary Butler

You can also search for this author in PubMed   Google Scholar

Contributions

CT conceived and jointly designed the study protocol and helped write and critically revise this protocol paper, SS conceived and jointly designed the study protocol and helped write and critically revise this protocol paper, DV jointly designed the study protocol and helped write and critically revise this protocol paper, LS jointly designed the study protocol and helped write and critically revise this protocol paper, CS jointly designed the study protocol and helped write and critically revise this protocol paper, EH jointly designed the study protocol and helped write and critically revise this protocol paper, SS jointly designed the study protocol and helped write and critically revise this protocol paper, CM jointly designed the study protocol and helped write and critically revise this protocol paper, RR jointly designed the study protocol and helped write and critically revise this protocol paper, VP jointly designed the study protocol and helped write and critically revise this protocol paper, PJ jointly designed the study protocol and helped write and critically revise this protocol paper, NL jointly designed the study protocol and helped write and critically revise this protocol paper, TT jointly designed the study protocol and helped write and critically revise this protocol paper, JO jointly designed the study protocol and helped write and critically revise this protocol paper, DT jointly designed the study protocol and helped write and critically revise this protocol paper, DV jointly designed the study protocol and helped write and critically revise this protocol paper, RC jointly designed the study protocol and helped write and critically revise this protocol paper, MB jointly designed the study protocol and helped write and critically revise this protocol paper, GM conceived and jointly designed the study protocol and helped write and critically revise this protocol paper.

Corresponding author

Correspondence to Christopher J. Tignanelli .

Ethics declarations

Ethics approval and consent to participate.

This study protocol was given the determination of “Exempt” as secondary research for which consent is not required. The Mixed Methods investigation was given the determination of “Not Human Research” as a quality improvement activity.

Consent for publication

Not applicable.

Competing interests

The authors have no conflicts of interest to report.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

Reprints and permissions

About this article

Cite this article.

Tignanelli, C.J., Shah, S., Vock, D. et al. A pragmatic, stepped-wedge, hybrid type II trial of interoperable clinical decision support to improve venous thromboembolism prophylaxis for patients with traumatic brain injury. Implementation Sci 19 , 57 (2024). https://doi.org/10.1186/s13012-024-01386-4

Download citation

Received : 09 June 2024

Accepted : 14 July 2024

Published : 05 August 2024

DOI : https://doi.org/10.1186/s13012-024-01386-4

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Traumatic brain injury
  • Prophylaxis
  • Venous thromboembolism
  • Stepped wedge
  • Implementation science
  • Mixed methods
  • Clinical decision support
  • Randomized controlled trial
  • Learning health system
  • Health informatics

Implementation Science

ISSN: 1748-5908

  • Submission enquiries: Access here and click Contact Us
  • General enquiries: [email protected]

research design and framework

Grab your spot at the free arXiv Accessibility Forum

Help | Advanced Search

Computer Science > Computation and Language

Title: rag foundry: a framework for enhancing llms for retrieval augmented generation.

Abstract: Implementing Retrieval-Augmented Generation (RAG) systems is inherently complex, requiring deep understanding of data, use cases, and intricate design decisions. Additionally, evaluating these systems presents significant challenges, necessitating assessment of both retrieval accuracy and generative quality through a multi-faceted approach. We introduce RAG Foundry, an open-source framework for augmenting large language models for RAG use cases. RAG Foundry integrates data creation, training, inference and evaluation into a single workflow, facilitating the creation of data-augmented datasets for training and evaluating large language models in RAG settings. This integration enables rapid prototyping and experimentation with various RAG techniques, allowing users to easily generate datasets and train RAG models using internal or specialized knowledge sources. We demonstrate the framework effectiveness by augmenting and fine-tuning Llama-3 and Phi-3 models with diverse RAG configurations, showcasing consistent improvements across three knowledge-intensive datasets. Code is released as open-source in this https URL .
Comments: 10 pages
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Information Retrieval (cs.IR); Machine Learning (cs.LG)
Cite as: [cs.CL]
  (or [cs.CL] for this version)
  Focus to learn more arXiv-issued DOI via DataCite

Submission history

Access paper:.

  • Other Formats

license icon

References & Citations

  • Google Scholar
  • Semantic Scholar

BibTeX formatted citation

BibSonomy logo

Bibliographic and Citation Tools

Code, data and media associated with this article, recommenders and search tools.

  • Institution

arXivLabs: experimental projects with community collaborators

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs .

Tre damer foran et OsloMet bygg

PhD Fellowship position in enabling inclusiveness and well-being in cities through an analytical urban design framework

  • 4 September 2024
  • Fixed-term post / Full time
  • Apply for this job

Oslo Metropolitan University is Norway's third largest university with almost 22,000 students and over 2,500 employees. We have campuses in central Oslo and at Romerike. OsloMet educates students and conducts research that contributes to the sustainability of the Norwegian welfare state and the metropolitan region.

The Faculty of Technology, Art and Design (TKD) offers higher education and research and development (R&D) activities within technical subjects, arts and design. The Faculty has approximately 4.000 students and 400 staff members and is situated at Pilestredet Campus in downtown Oslo and at Kjeller Campus in Viken.

The Department of Built Environment offers bachelors and master’s degree programs in engineering, in addition to research and development activities. The department has approximately 60 staff members and 850 students.

PhD Fellowship position in enabling inclusiveness and well-being in cities through an analytical urban design framework: A comparative case study of Scandinavia and the Western Balkans

The Department of Built Environment has a vacant PhD Fellowship position. This PhD project focuses on innovation strategies for cities through the lens of inclusive, safe and economically vital cities for the well-being of people. Until now, the multi-dimensional well-being indicators for cities are underresearched. Thus, this PhD project aims for developing a novel mixed-method innovation framework that allows practitioners a tool at hand that allows planning and designing resilient future cities, develop and test scenarios for informed-decision making and to shape policies through a data-policy interaction approach.

Area of research

The idea of well-being has gained attention and become more important for people all over the world living in cities, towns, and rural areas. Well-being includes the physical, social and emotional state of a person. Inclusive, accessible, safe, economically vital cities with high quality environments are adding to a healthy, well-balanced lifestyle allowing for people to be physically active, socially interact and gain emotional wellness in a safe city. The notion of well-being is connected to the concepts of ‘Happy Cities’, ‘Healthy Cities’, and ‘Trauma-Informed Cities’. Thus, the idea of well-being in cities is a multi-dimensionional concept allowing to create future resilient cities.

This PhD study researches the interplay of socio-spatial, economic and aesthetic indicators creating an urban analytical design framework across scales enabling shaping policies to create safe, vital cities for the well-being of people.

The position is for a period of three years. The goal must be to complete the PhD program/degree within the decided time frame.

Qualification requirements and conditions

  • Master's degree within urban design, urban planning, spatial planning, geography or relevant disciplines. The degree must contain 120 credits (ECTS).
  • An academic profile that suits the department’s needs.
  • Good communication skills in English, oral and written.

The following grade requirements are a condition for employment in the position:

  • Minimum average grade B on subjects included in the master's degree.
  • Minimum grade B on the master's thesis.
  • Minimum average grade C on the subjects included in the bachelor's degree. If you have an integrated master's degree, the grades from the first three standard years of the degree will be assessed.

Admission to the doctoral program in Engineering Science at the Faculty of Technology, Art and Design within three months of employment is a prerequisite for the position. If you already have a doctorate in a related field, you will not qualify for the position.

In assessing the applicants, emphasis will be placed on the department's overall needs and the applicant's potential for research within the field.

General criteria for employment in academic positions are covered by the Regulations on the University and College Ac t.

Preferred selection criteria

  • Good collaboration and communication skills
  • Ability to work independently and in teams
  • Ability to self-motivate
  • Ability to build local and international research networks
  • Ability to use theoretical knowledge for practice-oriented solutions

Personal suitability will be emphasised.

It is important to OsloMet to reflect the population of our region, and all qualified candidates are welcome to apply. We make active endeavours to further develop OsloMet as an inclusive workplace and to make adaptations to the workplace where required. You are also welcome to apply for a position with us if you have had periods where you have not been in employment, education or training.

We can offer you

  • An exciting job opportunity at Norway’s third largest and most urban university
  • Opportunities for professional development
  • Beneficial pension arrangements with the Norwegian Public Service Pension Fund
  • Beneficial welfare schemes and a wide range of sports and cultural offers
  • Free Norwegian language classes to employees and their partners/spouses
  • Workplace in downtown Oslo with multiple cultural offers

Practical information about relocation to Oslo and living in Norway .

Application

To be considered for the position, you must upload the following documents by the application deadline:

  • Application letter describing your motivation and how your professional profile is relevant for this position.
  • Your master's degree must be completed at the time of application. If you have not received a diploma before the application deadline, you must attach a preliminary transcript in English or a Scandinavian language from your university by the application deadline, in addition to an official confirmation from the educational institution that all examinations for the master's degree, including the master's thesis, have been completed. Official diploma and transcript must be forwarded by the joining date.
  • Name and contact information of two references (name, relationship, e-mail and telephone number).
  • Scientific work that you want to be assessed and Master thesis.
  • Applicants from EU/EEA countries.
  • Applicants who have completed at least one year of study in Australia, Canada, Ireland, New Zealand, Great Britain or the United States. Applicants who hold an “International Baccalaureate (IB)” diploma.

The following language tests are approved documentation: TOEFL, IELTS, Cambridge Certificate in Advanced English (CAE) or Cambridge Certificate of Proficiency in English (CPE). In these tests, you must have achieved at least the following scores:

  • TOEFL: 600 (paper-based test), 92 (Internet-based test)
  • IELTS: 6.5, where none of the sections should have a lower score than 5.5 (only the Academic IELTS test is accepted).

We only process applications sent via our electronic recruitment system and all documents must be uploaded for your application to be processed. The documents must be in English or a Scandinavian language. Translations must be authorized. You must present originals at any interview. OsloMet checks documents, so that you as a candidate will get a real evaluation and fair competition.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. A background check may be conducted to verify information in submitted CVs and available documents. Background checks are not conducted without the consent of the applicant and relevant applicants will receive further information about this.

Other information

If you would like more information about the position, feel free to contact:

  • Head of the Department, Yonas Zewdu Ayele, telephone: +47 67 23 60 85; E-mail [email protected]
  • Professor, Claudia van der Laag, telephone: +47 67 23 62 25; E-mail: [email protected]

The position is paid according to the pay scale for Norwegian state employees, position code 1017 PhD fellow, NOK 532 200 per year.

If you have documents that cannot be uploaded electronically, please contact [email protected] .

If you would like to apply for the position, you must do so electronically through our recruitment system.

Deadline for applications: 04.09.2024

Ref.: 24/19730

OsloMet is a Charter & Code certified institution by the EU Commission holding the right to use the logo HR Excellence in Research (HRS4R). OsloMet is a member of the EURAXESS network supporting a positive work environment for researchers in motion.

OsloMet has signed The Declaration on Research Assessment (DORA) . DORA recognizes the need to improve the ways in which the outputs of scholarly research are evaluated.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who are seen to be in conflict with the criteria in the latter law will not be considered for the position.

Visit OsloMet at LinkedIn , Facebook , Instagram

  • Accessibility statement
  • Cookies policy
  • Employee directory
  • Employee website
  • Student website
  • Upcoming events
  • Work for us
  • Skip to main content
  • Skip to search
  • Skip to footer

Products and Services

2 persons sitting in front of laptop

Cisco Security

Master your goals. innovate. we'll tackle threats..

Get powerful security across all your networks, cloud, endpoints, and email to protect everything that matters, from anywhere.

If it's connected, you're protected

Hacker working at multiple computer screens

Cisco Security “The Hacker”

More connected users and devices creates more complexity. Cisco Security Cloud makes security easier for IT and safer for everyone anywhere security meets the network.

Deliver smarter, stronger security

Protect your organization across a multicloud environment, while simplifying security operations, improving scalability, and driving data-informed outcomes, powered by Cisco Talos.

Unlock better user experiences

Create a seamless experience that frustrates attackers, not users, by granting access from any device, anywhere, and adding more proactive security controls.

Deliver cost-effective defenses

Improve ROI by consolidating vendors, reducing complexity and integrating your security.

Strengthen security resilience

Unified, end-to-end protection maximizes value, minimizes risk, and closes security gaps everywhere to defend against evolving threats. Protect access, apps, and innovation across your network to secure your future.

research design and framework

Cisco Secure Firewall

Better visibility and actionable insights across networks, clouds, endpoints, and email allows users to respond confidently to the most sophisticated threats at machine scale.

Featured security products

Cisco hypershield.

A new groundbreaking security architecture that makes hyperscaler technology accessible to enterprises of all sizes and delivers AI-native security for modern data centers and cloud.

Cisco Secure Access (SSE)

A converged cybersecurity solution, grounded in zero trust, that radically reduces risk and delights both end users and IT staff by safely connecting anything to anywhere.

Detect the most sophisticated threats sooner across all vectors and prioritize by impact for faster responses.

Cisco Multicloud Defense

Gain multidirectional protection across clouds to stop inbound attacks, data exfiltration, and lateral movement.

Secure applications and enable frictionless access with strong MFA and more. Establish user and device trust, gain visibility into devices, and enable secure access to all apps.

Cisco Identity Services Engine (ISE)

Simplify highly secure network access control with software-defined access and automation.

Security Suites delivered by Cisco Security Cloud

User Protection Suite

Cisco User Protection Suite

Get secure access to any application, on any device, from anywhere. Defend against threats targeting users and deliver seamless access for hybrid work.

Cloud Protection Suite

Cisco Cloud Protection Suite

Secure your apps and data with a powerful, flexible framework for a hybrid and multicloud world.

Breach Protection Suite

Cisco Breach Protection Suite

Secure your business by investigating, prioritizing, and resolving incidents through unified defense and contextual insights from data-backed, AI-powered security.

Customer stories and insights

Global partnerships fight to end child exploitation together.

Marriott International

"Marriott has long championed human rights and human trafficking awareness. Combating CSAM is an important extension of that work. The IWF provided the level of rigor we needed in a URL list, and Cisco's security technology provided the means to easily apply it."

Abbe Horswill, Director, Human Rights and Social Impact

Company: Marriott International

The NFL relies on Cisco

NFL logo

"From securing stadiums, broadcasts, and fans to protecting the largest live sporting event in America, the right tools and the right team are key in making sure things run smoothly, avoiding disruptions to the game, and safeguarding the data and devices that make mission-critical gameday operations possible."

Add value to security solutions

Cisco Security Enterprise Agreement

Instant savings

Experience security software buying flexibility with one easy-to-manage agreement.

Services for security

Let the experts secure your business

Get more from your investments and enable constant vigilance to protect your organization.

Sharpen your security insights

Cisco Cybersecurity Viewpoints

Set your vision to a more secure future with Cisco Cybersecurity Viewpoints. With specialized content from podcasts to industry news, you'll walk away with a deeper understanding of the trends, research, and topics in our rapidly changing world.

IMAGES

  1. Research Design in Qualitative Research

    research design and framework

  2. Research Design Conceptual Framework

    research design and framework

  3. -Framework for Research Design

    research design and framework

  4. What is Research Design in Qualitative Research

    research design and framework

  5. Research Design Diagram

    research design and framework

  6. Types Of Qualitative Research Design With Examples

    research design and framework

COMMENTS

  1. What Is a Research Design

    Step 2: Choose a type of research design. Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research. Types of quantitative research designs. Quantitative designs can be split into four main types.

  2. Research Design

    Step 2: Choose a type of research design. Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research. Types of quantitative research designs. Quantitative designs can be split into four main types.

  3. Research Design

    The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection ...

  4. What Is Research Design? 8 Types + Examples

    Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimenta l designs. Research designs for qualitative studies include phenomenological ...

  5. What is a Research Design? Definition, Types, Methods and Examples

    Research design methods refer to the systematic approaches and techniques used to plan, structure, and conduct a research study. The choice of research design method depends on the research questions, objectives, and the nature of the study. Here are some key research design methods commonly used in various fields: 1.

  6. PDF CHAPTER CONCEPTUAL FRAMEWORKS IN RESEARCH distribute

    framework is a generative source of thinking, planning, conscious action, and reflection throughout the research process. A conceptual framework makes the case for why a study is significant and relevant and for how the study design (including data collection and analysis methods) appropri - ately and rigorously answers the research questions.

  7. Research Design

    Research Design Framework. "I use terms like 'canvas' and 'design' because research requires both analytical and creative knowledge, skills, and abilities. There is no one best way to conduct research, and the answer to ALL research methods questions is, 'it depends.'". ( Latham, 2022 ). While this framework provides structure ...

  8. What is Research Design? Types, Elements and Examples

    A research design is the plan or framework used to conduct a research study. It involves outlining the overall approach and methods that will be used to collect and analyze data in order to answer research questions or test hypotheses.

  9. (PDF) Basics of Research Design: A Guide to selecting appropriate

    for validity and reliability. Design is basically concerned with the aims, uses, purposes, intentions and plans within the. pr actical constraint of location, time, money and the researcher's ...

  10. Research Design and Methodology

    2. Research design. The research design is intended to provide an appropriate framework for a study. A very significant decision in research design process is the choice to be made regarding research approach since it determines how relevant information for a study will be obtained; however, the research design process involves many interrelated decisions [].

  11. Research design: the methodology for interdisciplinary research framework

    The first kind, "Research into design" studies the design product post hoc and the MIR framework suits the interdisciplinary study of such a product. In contrast, "Research for design" generates knowledge that feeds into the noun and the verb 'design', which means it precedes the design (ing).

  12. Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks

    Other studies have presented a research logic model or flowchart of the research design as a conceptual framework. These constructions can be quite valuable in helping readers understand the data-collection and analysis process. However, a model depicting the study design does not serve the same role as a conceptual framework.

  13. Research Design: What it is, Elements & Types

    Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success. Creating a research topic explains the type of research (experimental,survey research,correlational ...

  14. (Pdf) the Research Design

    If your design is poor, the results of the research also will not be promising.[ 2 ] Research design is defined as a framework of methods and techniques chosen by a researcher to combine various ...

  15. (PDF) CHAPTER FIVE RESEARCH DESIGN AND METHODOLOGY 5.1. Introduction

    Research Design A research design is the 'procedures for collecting, analyzing, interpreting and reporting data in research studies' (Creswell & Plano Clark 2007, p.58). ... the SCP framework ...

  16. Research Frameworks: Critical Components for Reporting Qualitative

    The Importance of Research Frameworks. Researchers may draw on several elements to frame their research. Generally, a framework is regarded as "a set of ideas that you use when you are forming your decisions and judgements" 13 or "a system of rules, ideas, or beliefs that is used to plan or decide something." 14 Research frameworks may consist of a single formal theory or part thereof ...

  17. What is a research framework and why do we need one?

    A research framework provides an underlying structure or model to support our collective research efforts. Up until now, we've referenced, referred to and occasionally approached research as more of an amalgamated set of activities. But as we know, research comes in many different shapes and sizes, is variable in scope, and can be used to ...

  18. Research Design vs. Research Methods

    Research design and research methods are two essential components of any research study. While research design provides the overall plan and structure, research methods are the practical tools used to collect and analyze data. Both have distinct attributes that contribute to the reliability, validity, and generalizability of research findings.

  19. Methodological Framework

    Definition: Methodological framework is a set of procedures, methods, and tools that guide the research process in a systematic and structured manner. It provides a structure for conducting research, collecting and analyzing data, and drawing conclusions. The framework outlines the steps to be taken in a research project, including the research ...

  20. PDF CHAPTER 4 RESEARCH DESIGN AND METHODOLOGY

    For Durrheim (2004:29), research design is a strategic framework for action that serves as a bridge between research questions and the execution, or implementation of the research strategy. 4.2.3 RESEARCH METHODOLOGY. Schwardt (2007:195) defines research methodology as a theory of how an inquiry should

  21. A Design Research Framework

    A Design Research Framework. Oct 25, 2022. Written By Erika Hall. Design Research Process Model (PDF) |. Alternative Style for Color Perception/B&W Printing (PDF) Recent discussions have been swirling around the phrase " democratization of research " concerning who should participate in what kind of research in design (product/service ...

  22. Theoretical Framework

    Theoretical Framework. Definition: Theoretical framework refers to a set of concepts, theories, ideas, and assumptions that serve as a foundation for understanding a particular phenomenon or problem.It provides a conceptual framework that helps researchers to design and conduct their research, as well as to analyze and interpret their findings.. In research, a theoretical framework explains ...

  23. PDF Chapter 3 Research framework and Design 3.1. Introduction

    3.1. Introduction. Chapter 3Research framework and Design3.1. IntroductionResearch m. thodology is the indispensable part of any research work. This guides the researcher about the flow of research and provides the. ramework through which the research is to be carried out. This chapter expounds the research paradigm, research approach, research ...

  24. Systems & Design Thinking: A Conceptual Framework for Their Integration

    Design Thinking approaches to problem resolution: Systems Thinking methodologies arose from the consideration of social systems. The stakeholders are the designers. Design Thinking methodologies arose from the consideration of products and artifacts. The problems are ultimately resolved by people identified as a designer by trade.

  25. A pragmatic, stepped-wedge, hybrid type II trial of interoperable

    Study aims and implementation framework. This trial consists of a stepped wedge hybrid effectiveness-implementation trial to scale the CDS system across 4 trauma systems and in parallel evaluate implementation strategy guided by the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework (Fig. 1a) [].We anticipate variability in CDS adoption across sites ...

  26. Flexible Metal-Organic Frameworks: From Local Structural Design to

    ConspectusFlexible metal-organic frameworks (MOFs), also known as soft porous crystals, exhibit dynamic behaviors in response to external physical and chemical stimuli such as light, heat, electric or magnetic field, or the presence of particular matters, on the premise of maintaining their crystalline state. The reversible structural transformation of flexible MOFs, a unique characteristic ...

  27. RAG Foundry: A Framework for Enhancing LLMs for Retrieval Augmented

    Implementing Retrieval-Augmented Generation (RAG) systems is inherently complex, requiring deep understanding of data, use cases, and intricate design decisions. Additionally, evaluating these systems presents significant challenges, necessitating assessment of both retrieval accuracy and generative quality through a multi-faceted approach. We introduce RAG Foundry, an open-source framework ...

  28. A Computation‐guided Design of Highly Defined and Dense Bimetallic

    A Computation-guided Design of Highly Defined and Dense Bimetallic Active Sites on a Two-dimensional Conductive Metal-organic Framework for Efficient H2O2 Electrosynthesis. Ji Liang, Corresponding Author. Ji Liang ... Institute of Metal Research Chinese Academy of Sciences, Shenyang National Laboratory for Materials Science, Institute of Metal ...

  29. PhD Fellowship position in enabling inclusiveness and well-being in

    The Faculty of Technology, Art and Design (TKD) offers higher education and research and development (R&D) activities within technical subjects, arts and design. The Faculty has approximately 4.000 students and 400 staff members and is situated at Pilestredet Campus in downtown Oslo and at Kjeller Campus in Viken.

  30. Cisco Security Products and Solutions

    "From securing stadiums, broadcasts, and fans to protecting the largest live sporting event in America, the right tools and the right team are key in making sure things run smoothly, avoiding disruptions to the game, and safeguarding the data and devices that make mission-critical gameday operations possible."