
What is a Research Hypothesis: How to Write it, Types, and Examples

Any research begins with a research question and a research hypothesis. A research question alone may not suffice to design the experiment(s) needed to answer it; a hypothesis is central to the scientific method. But what is a hypothesis? A hypothesis is a testable statement that proposes a possible explanation for a phenomenon, and it may include a prediction. Next, you may ask, what is a research hypothesis? Simply put, a research hypothesis is a prediction or educated guess about the relationship between the variables that you want to investigate.

It is important to be thorough when developing your research hypothesis. Shortcomings in the framing of a hypothesis can affect the study design and the results. A better understanding of the research hypothesis definition and the characteristics of a good hypothesis will make it easier for you to develop your own hypothesis for your research. Let’s dive in to learn more about the types of research hypothesis, how to write a research hypothesis, and some research hypothesis examples.

What is a hypothesis?

A hypothesis is based on the existing body of knowledge in a study area. Framed before the data are collected, a hypothesis states the tentative relationship between independent and dependent variables, along with a prediction of the outcome.  

What is a research hypothesis?

Young researchers starting out on their journey are usually brimming with questions like “What is a hypothesis?”, “What is a research hypothesis?”, and “How can I write a good research hypothesis?”

A research hypothesis is a statement that proposes a possible explanation for an observable phenomenon or pattern. It guides the direction of a study and predicts the outcome of the investigation. A research hypothesis is testable, i.e., it can be supported or disproven through experimentation or observation.     

Characteristics of a good hypothesis  

Here are the characteristics of a good hypothesis :  

  • Clearly formulated and free of language errors and ambiguity  
  • Concise and not unnecessarily verbose  
  • Has clearly defined variables  
  • Testable and stated in a way that allows for it to be disproven  
  • Can be tested using a research design that is feasible, ethical, and practical   
  • Specific and relevant to the research problem  
  • Rooted in a thorough literature search  
  • Can generate new knowledge or understanding.  

How to create an effective research hypothesis  

A study begins with the formulation of a research question. A researcher then performs background research. This background information forms the basis for building a good research hypothesis . The researcher then performs experiments, collects, and analyzes the data, interprets the findings, and ultimately, determines if the findings support or negate the original hypothesis.  

Let’s look at each step for creating an effective, testable, and good research hypothesis :  

  • Identify a research problem or question: Start by identifying a specific research problem.   
  • Review the literature: Conduct an in-depth review of the existing literature related to the research problem to grasp the current knowledge and gaps in the field.   
  • Formulate a clear and testable hypothesis : Based on the research question, use existing knowledge to form a clear and testable hypothesis . The hypothesis should state a predicted relationship between two or more variables that can be measured and manipulated. Improve the original draft till it is clear and meaningful.  
  • State the null hypothesis: The null hypothesis is a statement that there is no relationship between the variables you are studying.   
  • Define the population and sample: Clearly define the population you are studying and the sample you will be using for your research.  
  • Select appropriate methods for testing the hypothesis: Select appropriate research methods, such as experiments, surveys, or observational studies, which will allow you to test your research hypothesis .  

Remember that creating a research hypothesis is an iterative process, i.e., you might have to revise it based on the data you collect. You may need to test and reject several hypotheses before answering the research problem.  

How to write a research hypothesis  

When you start writing a research hypothesis , you use an “if–then” statement format, which states the predicted relationship between two or more variables. Clearly identify the independent variables (the variables being changed) and the dependent variables (the variables being measured), as well as the population you are studying. Review and revise your hypothesis as needed.  

An example of a research hypothesis in this format is as follows:  

“If [athletes] take [daily cold water showers], then their [endurance] increases.”

Population: athletes  

Independent variable: daily cold water showers  

Dependent variable: endurance  

You may have understood the characteristics of a good hypothesis . But note that a research hypothesis is not always confirmed; a researcher should be prepared to accept or reject the hypothesis based on the study findings.  

Research hypothesis checklist  

Following from above, here is a 10-point checklist for a good research hypothesis :  

  • Testable: A research hypothesis should be able to be tested via experimentation or observation.  
  • Specific: A research hypothesis should clearly state the relationship between the variables being studied.  
  • Based on prior research: A research hypothesis should be based on existing knowledge and previous research in the field.  
  • Falsifiable: A research hypothesis should be able to be disproven through testing.  
  • Clear and concise: A research hypothesis should be stated in a clear and concise manner.  
  • Logical: A research hypothesis should be logical and consistent with current understanding of the subject.  
  • Relevant: A research hypothesis should be relevant to the research question and objectives.  
  • Feasible: A research hypothesis should be feasible to test within the scope of the study.  
  • Reflects the population: A research hypothesis should consider the population or sample being studied.  
  • Uncomplicated: A good research hypothesis is written in a way that is easy for the target audience to understand.  

By following this research hypothesis checklist , you will be able to create a research hypothesis that is strong, well-constructed, and more likely to yield meaningful results.  

Types of research hypothesis  

Different types of research hypothesis are used in scientific research:  

1. Null hypothesis:

A null hypothesis states that there is no change in the dependent variable due to changes to the independent variable. This means that the results are due to chance and are not significant. A null hypothesis is denoted as H0 and is stated as the opposite of what the alternative hypothesis states.   

Example: “ The newly identified virus is not zoonotic .”  

2. Alternative hypothesis:

This states that there is a significant difference or relationship between the variables being studied. It is denoted as H1 or Ha; when the statistical evidence is strong enough, the null hypothesis is rejected in favor of the alternative hypothesis.

Example: “ The newly identified virus is zoonotic .”  

3. Directional hypothesis :

This specifies the direction of the relationship or difference between variables; therefore, it tends to use terms like increase, decrease, positive, negative, more, or less.   

Example: “ The inclusion of intervention X decreases infant mortality compared to the original treatment .”   

4. Non-directional hypothesis:

A non-directional hypothesis states that a relationship or difference exists between variables but does not predict its exact direction, nature, or magnitude. A non-directional hypothesis may be used when there is no underlying theory or when findings contradict previous research.

Example: “Cats and dogs differ in the amount of affection they express.”

5. Simple hypothesis :

A simple hypothesis predicts the relationship between a single independent variable and a single dependent variable.

Example: “ Applying sunscreen every day slows skin aging .”  

6. Complex hypothesis:

A complex hypothesis states the relationship or difference between two or more independent and dependent variables.   

Example: “Applying sunscreen every day slows skin aging, reduces sunburn, and reduces the chances of skin cancer.” (Here, the three dependent variables are slowing skin aging, reducing sunburn, and reducing the chances of skin cancer.)

7. Associative hypothesis:  

An associative hypothesis states that a change in one variable is accompanied by a change in the other variable; it describes an interdependency between the variables.

Example: “ There is a positive association between physical activity levels and overall health .”  

8. Causal hypothesis:

A causal hypothesis proposes a cause-and-effect interaction between variables.  

Example: “ Long-term alcohol use causes liver damage .”  

Note that some of the types of research hypothesis mentioned above might overlap. The types of hypothesis chosen will depend on the research question and the objective of the study.  

Research hypothesis examples  

Here are some good research hypothesis examples :  

“The use of a specific type of therapy will lead to a reduction in symptoms of depression in individuals with a history of major depressive disorder.”  

“Providing educational interventions on healthy eating habits will result in weight loss in overweight individuals.”  

“Plants that are exposed to certain types of music will grow taller than those that are not exposed to music.”  

“The use of the plant growth regulator X will lead to an increase in the number of flowers produced by plants.”  

Characteristics that make a research hypothesis weak are unclear variables, unoriginality, being too general or too vague, and being untestable. A weak hypothesis leads to weak research and improper methods.   

Some bad research hypothesis examples (and the reasons why they are “bad”) are as follows:  

“This study will show that treatment X is better than any other treatment . ” (This statement is not testable, too broad, and does not consider other treatments that may be effective.)  

“This study will prove that this type of therapy is effective for all mental disorders . ” (This statement is too broad and not testable as mental disorders are complex and different disorders may respond differently to different types of therapy.)  

“Plants can communicate with each other through telepathy . ” (This statement is not testable and lacks a scientific basis.)  

Importance of testable hypothesis  

If a research hypothesis is not testable, the results will not prove or disprove anything meaningful. The conclusions will be vague at best. A testable hypothesis helps a researcher focus on the study outcome and understand the implication of the question and the different variables involved. A testable hypothesis helps a researcher make precise predictions based on prior research.  

To be considered testable, there must be a way to prove that the hypothesis is true or false; further, the results of the hypothesis must be reproducible.  

Frequently Asked Questions (FAQs) on research hypothesis  

1. What is the difference between research question and research hypothesis ?  

A research question defines the problem and helps outline the study objective(s). It is an open-ended statement that is exploratory or probing in nature. Therefore, it does not make predictions or assumptions. It helps a researcher identify what information to collect. A research hypothesis , however, is a specific, testable prediction about the relationship between variables. Accordingly, it guides the study design and data analysis approach.

2. When to reject null hypothesis ?

A null hypothesis should be rejected when the evidence from a statistical test shows that it is unlikely to be true. This happens when the p-value obtained from the test is less than the predefined significance level (e.g., 0.05). Rejecting the null hypothesis does not necessarily mean that the alternative hypothesis is true; it simply means that the evidence found is not compatible with the null hypothesis.
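To make the decision rule concrete, here is a minimal sketch in Python (assuming SciPy is installed; the measurements below are purely hypothetical) that runs a two-sample t-test and rejects the null hypothesis only when the p-value falls below the chosen significance level.

```python
# Minimal sketch of the rejection rule: reject H0 when the p-value < alpha.
# The group measurements below are hypothetical, for illustration only.
from scipy import stats

treatment = [23.1, 25.4, 24.8, 26.0, 25.1, 24.3]  # hypothetical treatment group
control = [21.9, 22.5, 23.0, 21.4, 22.8, 22.1]    # hypothetical control group

alpha = 0.05  # significance level chosen before running the test
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```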

3. How can I be sure my hypothesis is testable?  

A testable hypothesis should be specific and measurable, and it should state a clear relationship between variables that can be tested with data. To ensure that your hypothesis is testable, consider the following:  

  • Clearly define the key variables in your hypothesis. You should be able to measure and manipulate these variables in a way that allows you to test the hypothesis.  
  • The hypothesis should predict a specific outcome or relationship between variables that can be measured or quantified.   
  • You should be able to collect the necessary data within the constraints of your study.  
  • It should be possible for other researchers to replicate your study, using the same methods and variables.   
  • Your hypothesis should be testable using appropriate statistical analysis techniques, so that you can draw conclusions and make inferences about the population from the sample data.
  • The hypothesis should be able to be disproven or rejected through the collection of data.  

4. How do I revise my research hypothesis if my data does not support it?  

If your data does not support your research hypothesis , you will need to revise it or develop a new one. You should examine your data carefully and identify any patterns or anomalies, re-examine your research question, and/or revisit your theory to look for any alternative explanations for your results. Based on your review of the data, literature, and theories, modify your research hypothesis to better align it with the results you obtained. Use your revised hypothesis to guide your research design and data collection. It is important to remain objective throughout the process.  

5. I am performing exploratory research. Do I need to formulate a research hypothesis?  

As opposed to “confirmatory” research, where a researcher has some idea about the relationship between the variables under investigation, exploratory research (or hypothesis-generating research) looks into a completely new topic about which limited information is available. Therefore, the researcher will not have any prior hypotheses. In such cases, the researcher develops a post-hoc hypothesis, i.e., a research hypothesis that is generated after the results are known.

6. How is a research hypothesis different from a research question?

A research question is an inquiry about a specific topic or phenomenon, typically expressed as a question. It seeks to explore and understand a particular aspect of the research subject. In contrast, a research hypothesis is a specific statement or prediction that suggests an expected relationship between variables. It is formulated based on existing knowledge or theories and guides the research design and data analysis.

7. Can a research hypothesis change during the research process?

Yes, research hypotheses can change during the research process. As researchers collect and analyze data, new insights and information may emerge that require modification or refinement of the initial hypotheses. This can be due to unexpected findings, limitations in the original hypotheses, or the need to explore additional dimensions of the research topic. Flexibility is crucial in research, allowing for adaptation and adjustment of hypotheses to align with the evolving understanding of the subject matter.

8. How many hypotheses should be included in a research study?

The number of research hypotheses in a research study varies depending on the nature and scope of the research. It is not necessary to have multiple hypotheses in every study. Some studies may have only one primary hypothesis, while others may have several related hypotheses. The number of hypotheses should be determined based on the research objectives, research questions, and the complexity of the research topic. It is important to ensure that the hypotheses are focused, testable, and directly related to the research aims.

9. Can research hypotheses be used in qualitative research?

Yes, research hypotheses can be used in qualitative research, although they are more commonly associated with quantitative research. In qualitative research, hypotheses may be formulated as tentative or exploratory statements that guide the investigation. Instead of testing hypotheses through statistical analysis, qualitative researchers may use the hypotheses to guide data collection and analysis, seeking to uncover patterns, themes, or relationships within the qualitative data. The emphasis in qualitative research is often on generating insights and understanding rather than confirming or rejecting specific research hypotheses through statistical testing.



Visually Hypothesising in Scientific Paper Writing: Confirming and Refuting Qualitative Research Hypotheses Using Diagrams

1. Introduction

2. Overview of Visual Communication and Post-Positivist Research

Visual Communication in Post-Positivist Qualitative Research

3. Understanding Qualitative Research (and Hypotheses): Types, Notions, Contestations and Epistemological Underpinnings

3.1. What Is Qualitative Research?

3.2. Types of Qualitative Research and Their Epistemological Underpinnings

3.3. What Is a Research Hypothesis? Can It Be Used in Qualitative Research?

3.4. Analogical Arguments in Support of Using Hypotheses in Qualitative Research

3.5. Can a Hypothesis Be “Tested” in Qualitative Research?

4. The Process of Developing and Using Hypotheses in Qualitative Research

“Begin a research study without having to test a hypothesis. Instead, it allows them to develop hypotheses by listening to what the research participants say. Because the method involves developing hypotheses after the data are collected, it is called hypothesis-generating research rather than hypothesis-testing research. The grounded theory method uses two basic principles: (1) questioning rather than measuring, and (2) generating hypotheses using theoretical coding.”

4.1. Formulating the Qualitative Research Hypothesis

  • The qualitative hypothesis should be based on a research problem derived from the research questions.
  • It should be supported by literature evidence (on the relationship or association between variables).
  • It should be informed by past research and observations.
  • It must be falsifiable or disprovable (see Popper [ 77 ]).
  • It should be analysable using data collected from the field or literature.
  • It has to be testable or verifiable, provable, nullifiable, refutable, confirmable or disprovable based on the results of analysing data collected from the field or literature.

4.2. Refuting or Verifying a Qualitative Research Hypothesis Diagrammatically (with Illustration)

“A core development concern in Nigeria is the magnitude of challenges rural people face. Inefficient infrastructures, lack of employment opportunities and poor social amenities are some of these challenges. These challenges persist mainly due to ineffective approaches used in tackling them. This research argues that an approach based on territorial development would produce better outcomes. The reason is that territorial development adopts integrated policies and actions with a focus on places as opposed to sectoral approaches. The research objectives were to evaluate rural development approaches and identify a specific approach capable of activating poverty reduction. It addressed questions bordering on past rural development approaches and how to improve urban-rural linkages in rural areas. It also addressed questions relating to ways that rural areas can reduce poverty through territorial development…” [ 16 ], p. 1
“Nigeria has legal and institutional opportunities for comprehensive improvement of rural areas through territorial development. However, due to the absence of a concrete rural development plan and area-based rural development strategies, this has not been materialized”.
  • Proposition 1: Legal and institutional opportunities that can lead to comprehensive improvement of rural areas through territorial development exist in Nigeria .
  • Proposition 2: However, due to the absence of a concrete rural development plan and area-based rural development strategies, this has not been materialized .
  • Independent variables: legal and institutional opportunities; incessant structural changes in its political history; and policy negligence.
  • Dependent variable: comprehensive rural improvements through territorial development.

5. Discussion and Conclusion

  • Friesen, J.; Van Stan, J.T.; Elleuche, S. Communicating Science through Comics: A Method. Publications 2018 , 6 , 38. [ Google Scholar ] [ CrossRef ]
  • Hammersley, M. The Dilemma of Qualitative Method: Herbert Blumer and the Chicago Tradition ; Routledge: London, UK, 1989; ISBN 0-415-01772-6. [ Google Scholar ]
  • Ulichny, P. The Role of Hypothesis Testing in Qualitative Research. A Researcher Comments. TESOL Q. 1991 , 25 , 200–202. [ Google Scholar ] [ CrossRef ]
  • Bluhm, D.J.; Harman, W.; Lee, T.W.; Mitchell, T.R. Qualitative research in management: A decade of progress. J. Manag. Stud. 2011 , 48 , 1866–1891. [ Google Scholar ] [ CrossRef ]
  • Maudsley, G. Mixing it but not mixed-up: Mixed methods research in medical education (a critical narrative review). Med. Teach. 2011 , 33 , e92–e104. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Agee, J. Developing qualitative research questions: A reflective process. Int. J. Qual. Stud. Educ. 2009 , 22 , 431–447. [ Google Scholar ] [ CrossRef ]
  • Yin, R.K. Qualitative Research from Start to Finish , 2nd ed.; Guilford Publications: New York, NY, USA, 2015; ISBN 9781462517978. [ Google Scholar ]
  • Alvesson, M.; Sköldberg, K. Reflexive Methodology: New Vistas for Qualitative Research ; Sage: London, UK, 2017; ISBN 9781473964242. [ Google Scholar ]
  • Geertz, C. The Interpretation of Cultures: Selected Essays ; Basic Books: New York, NY, USA, 1973; ISBN 465-03425-X. [ Google Scholar ]
  • John-Steiner, V.; Mahn, H. Sociocultural approaches to learning and development: A Vygotskian framework. Educ. Psychol. 1996 , 31 , 191–206. [ Google Scholar ] [ CrossRef ]
  • Zeidler, D.L. STEM education: A deficit framework for the twenty first century? A sociocultural socioscientific response. Cult. Stud. Sci. Educ. 2016 , 11 , 11–26. [ Google Scholar ] [ CrossRef ]
  • Koro-Ljungberg, M. Reconceptualizing Qualitative Research: Methodologies without Methodology ; Sage Publications: Los Angeles, CA, USA, 2015; ISBN 9781483351711. [ Google Scholar ]
  • Chigbu, U.E. The Application of Weyarn’s (Germany) Rural Development Approach to Isuikwuato (Nigeria): An Appraisal of the Possibilities and Limitations. Unpublished Master’s Thesis, Technical University of Munich, Munich, Germany, 2009. [ Google Scholar ]
  • Ntiador, A.M. Development of an Effective Land (Title) Registration System through Inter-Agency Data Integration as a Basis for Land Management. Master’s Thesis, Technical University of Munich, Munich, Germany, 2009. [ Google Scholar ]
  • Sakaria, P. Redistributive Land Reform: Towards Accelerating the National Resettlement Programme of Namibia. Master’s Thesis, Technical University of Munich, Munich, Germany, 2016. [ Google Scholar ]
  • Chigbu, U.E. Territorial Development: Suggestions for a New Approach to Rural Development in Nigeria. Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2013. [ Google Scholar ]
  • Miller, K. Communication Theories: Perspectives, Processes, and Contexts , 2nd ed.; Peking University Press: Beijing, China, 2007; ISBN 9787301124314. [ Google Scholar ]
  • Taylor, T.R.; Lindlof, B.C. Qualitative Communication Research Methods , 3rd ed.; SAGE: Thousand Oaks, CA, USA, 2011; ISBN 978-1412974738. [ Google Scholar ]
  • Bergman, M. Positivism. In The International Encyclopedia of Communication Theory and Philosophy ; Wiley and Sons: Hoboken, NJ, USA, 2016; ISBN 9781118766804. [ Google Scholar ]
  • Robson, C. Real World Research. A Resource for Social Scientists and Practitioner-Researchers , 2nd ed.; Blackwell: Malden, MA, USA, 2002; ISBN 978-0-631-21305-5. [ Google Scholar ]
  • Bohn, R.; Short, J. Measuring Consumer Information. Int. J. Commun. 2012 , 6 , 980–1000. [ Google Scholar ]
  • Zacks, J.; Levy, E.; Tversky, B.; Schinao, D. Graphs in Print, Diagrammatic Representation and Reasoning ; Springer: London, UK, 2002. [ Google Scholar ]
  • Merieb, E.N.; Hoehn, K. Human Anatomy & Physiology , 7th ed.; Pearson International: Cedar Rapids, IA, USA, 2007; ISBN 978-0132197991. [ Google Scholar ]
  • Semetko, H.; Scammell, M. The SAGE Handbook of Political Communication ; SAGE Publications: London, UK, 2012; ISBN 9781446201015. [ Google Scholar ]
  • Thorpe, S.; Fize, D.; Marlot, C. Speed of processing in the human visual system. Nature 1996 , 381 , 520–522. [ Google Scholar ] [ CrossRef ]
  • Holcomb, P.; Grainger, J. On the Time Course of Visual Word Recognition. J. Cognit. Neurosci. 2006 , 18 , 1631–1643. [ Google Scholar ] [ CrossRef ]
  • Messaris, P. Visual Communication: Theory and Research. J. Commun. 2003 , 53 , 551–556. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Mirzoeff, M. An Introduction to Visual Culture ; Routledge: New York, NY, USA, 1999. [ Google Scholar ]
  • Prosser, J. (Ed.) Image-Based Research: A Sourcebook for Qualitative Researchers ; Routledge: New York, NY, USA, 1998. [ Google Scholar ]
  • Howells, R. Visual Culture: An Introduction ; Polity Press: Cambridge, UK, 2002. [ Google Scholar ]
  • Thomson, P. (Ed.) Doing Visual Research with Children and Young People ; Routledge: London, UK, 2009. [ Google Scholar ]
  • Jensen, K.B. (Ed.) A Handbook of Media and Communication Research: Qualitative and Quantitative Methodologies ; Routledge: London, UK; New York, NY, USA, 2013. [ Google Scholar ]
  • Bestley, R.; Noble, I. Visual Research: An Introduction to Research Methods in Graphic Design , 3rd ed.; Bloomsbury Publishing: London, UK, 2016. [ Google Scholar ]
  • Wilke, R.; Hill, M. On New Forms of Science Communication and Communication in Science: A Videographic Approach to Visuality in Science Slams and Academic Group Talk. Qual. Inq. 2019 . [ Google Scholar ] [ CrossRef ]
  • Adam, F. Measuring National Innovation Performance: The Innovation Union Scoreboard Revisited. In SpringerBriefs in Economics ; Springer: London, UK, 2014; Volume 5, ISBN 978-3-642-39464-5. [ Google Scholar ]
  • Flick, U. An Introduction to Qualitative Research , 5th ed.; Sage: Thousand Oaks, CA, USA, 2006. [ Google Scholar ]
  • Creswell, J.W. Five qualitative approaches to inquiry. Qual. Inq. Res. Des. Choos. Five Approaches 2007 , 2 , 53–80. [ Google Scholar ]
  • Auerbach, C.F.; Silverstein, L.B. Qualitative Data: An Introduction to Coding and Analysis ; New York University Press: New York, NY, USA; London, UK, 2003; ISBN 9780814706954. [ Google Scholar ]
  • Adams, T.E. A review of narrative ethics. Qual. Inq. 2008 , 14 , 175–194. [ Google Scholar ] [ CrossRef ]
  • Hancock, B.; Ockleford, B.; Windridge, K. An Introduction to Qualitative Research ; NHS: Nottingham/Sheffield, UK, 2009. [ Google Scholar ]
  • Henwood, K. Qualitative research. Encycl. Crit. Psychol. 2014 , 1611–1614. [ Google Scholar ] [ CrossRef ]
  • Glaser, B.G.; Strauss, A.L. Discovery of Grounded Theory: Strategies for Qualitative Research ; Routledge: New York, NY, USA; London, UK, 2017; ISBN 978-0202302607. [ Google Scholar ]
  • Ary, D.; Jacobs, L.C.; Irvine, C.K.S.; Walker, D. Introduction to Research in Education ; Cengage Learning: Belmont, CA, USA, 2018; ISBN 978-1133596745. [ Google Scholar ]
  • Sullivan, G.; Sargeant, J. Qualities of Qualitative Research: Part I Editorial). J. Grad. Med. Educ. 2011 , 449–452. [ Google Scholar ] [ CrossRef ]
  • Bryman, A. Social Research Methods , 5th ed.; Oxford University Press: Oxford, UK, 2015; ISBN 978-0199689453. [ Google Scholar ]
  • Creswell, J.W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches , 4th ed.; Sage: Thousand Oaks, CA, USA, 2013; ISBN 978-1452226101. [ Google Scholar ]
  • Taylor, S.J.; Robert, B.; DeVault, M. Introduction to Qualitative Research Methods: A Guidebook and Resource ; John Wiley & Sons: Hoboken, NJ, USA, 2015; ISBN 978-1-118-76721-4. [ Google Scholar ]
  • Litosseliti, L. (Ed.) Research Methods in Linguistics ; Bloomsbury Publishing: New York, NY, USA, 2010; ISBN 9780826489937. [ Google Scholar ]
  • Berger, A.A. Media and Communication Research Methods: An Introduction to Qualitative and Quantitative Approaches , 4th ed.; Sage Publications: Los Angeles, CA, USA, 2019; ISBN 9781483377568. [ Google Scholar ]
  • Creswell, J.W. Qualitative Inquiry and Research Design: Choosing among Five Approaches , 3rd ed.; SAGE: Thousand Oaks, CA, USA, 2012; ISBN 978-1412995306. [ Google Scholar ]
  • Saunders, B.; Sim, J.; Kingstone, T.; Baker, S.; Waterfield, J.; Bartlam, B.; Burroughs, H.; Jinks, C. Saturation in qualitative research: Exploring its conceptualization and operationalization. Qual. Quant. 2018 , 52 , 1893–1907. [ Google Scholar ] [ CrossRef ]
  • Leonard, K. Six Types of Qualitative Research. Bizfluent , 22 January 2019. Available online: https://bizfluent.com/info-8580000-six-types-qualitative-research.html (accessed on 3 February 2019).
  • Campbell, D.T. Methodology and Epistemology for Social Science, Selected Papers ; University of Chicago Press: Chicago, IL, USA, 1998. [ Google Scholar ]
  • Crotty, M. The Foundations of Social Research: Meanings and Perspectives in the Research Process ; Sage Publications: London, UK, 1998. [ Google Scholar ]
  • Gilbert, N. Researching Social Life ; Sage Publications: London, UK, 2008. [ Google Scholar ]
  • Kerlinger, F.N. The attitude structure of the individual: A Q-study of the educational attitudes of professors and laymen. Genet. Psychol. Monogr. 1956 , 53 , 283–329. [ Google Scholar ]
  • Ary, D.; Jacobs, L.C.; Razavieh, A. Introduction to Research in Education ; Harcourt Brace College Publishers: Fort Worth, TX, USA, 1996; ISBN 0155009826. [ Google Scholar ]
  • Creswell, J.W. Research Design: Qualitative & Quantative Approaches ; Sage: Thousand Oaks, CA, USA, 1994; ISBN 978-0803952553. [ Google Scholar ]
  • Mourougan, S.; Sethuraman, K. Hypothesis Development and Testing. J. Bus. Manag. 2017 , 19 , 34–40. [ Google Scholar ] [ CrossRef ]
  • Green, J.L.; Camilli, G.; Elmore, P.B. (Eds.) Handbook of Complementary Methods in Education Research ; American Education Research Association: Washington, DC, USA, 2012; ISBN 978-0805859331. [ Google Scholar ]
  • Pyrczak, F. Writing Empirical Research Reports: A Basic Guide for Students of the Social and Behavioral Sciences , 8th ed.; Routledge: New York, NY, USA, 2016; ISBN 978-1936523368. [ Google Scholar ]
  • Gravetter, F.J.; Forzano, L.A.B. Research Methods for the Behavioral Sciences , 4th ed.; Wadsworth Cengage Learning: Belmont, CA, USA, 2018; ISBN 978-1111342258. [ Google Scholar ]
  • Malterud, K. Qualitative research: Standards, challenges, and guidelines. Lancet 2001 , 358 , 483–488. [ Google Scholar ] [ CrossRef ]
  • Malterud, K.; Hollnagel, H. Encouraging the strengths of women patients: A case study from general practice on empowering dialogues. Scand. J. Public Health 1999 , 27 , 254–259. [ Google Scholar ] [ CrossRef ]
  • Flyvbjerg, B. Five misunderstandings about case-study research. Qual. Inq. 2006 , 12 , 219–245. [ Google Scholar ] [ CrossRef ]
  • Mabikke, S. Improving Land and Water Governance in Uganda: The Role of Institutions in Secure Land and Water Rights in Lake Victoria Basin. Ph.D. Thesis, Technical University of Munich, Munich, Germany, 2014. [ Google Scholar ]
  • Sabatier, P.A. The advocacy coalition framework: Revisions and relevance for Europe. J. Eur. Public Policy 1998 , 5 , 98–130. [ Google Scholar ] [ CrossRef ]
  • Holloway, I.; Galvin, K. Qualitative Research in Nursing and Healthcare , 4th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2016; ISBN 978-1-118-87449-3. [ Google Scholar ]
  • Christensen, L.B.; Johnson, R.B.; Turner, L.A. Research Methods, Design and Analysis , 12th ed.; Pearson: London, UK; New York, NY, USA, 2011; ISBN 9780205961252. [ Google Scholar ]
  • Smith, B.; McGannon, K.R. Developing rigor in qualitative research: Problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 2018 , 11 , 101–121. [ Google Scholar ] [ CrossRef ]
  • Konijn, E.A.; van de Schoot, R.; Winter, S.D.; Ferguson, C.J. Possible solution to publication bias through Bayesian statistics, including proper null hypothesis testing. Commun. Methods Meas. 2015 , 9 , 280–302. [ Google Scholar ] [ CrossRef ]
  • Pfister, L.; Kirchner, J.W. Debates—Hypothesis testing in hydrology: Theory and practice. Water Resour. Res. 2017 , 53 , 1792–1798. [ Google Scholar ] [ CrossRef ]
  • Strotz, L.C.; Simões, M.; Girard, M.G.; Breitkreuz, L.; Kimmig, J.; Lieberman, B.S. Getting somewhere with the Red Queen: Chasing a biologically modern definition of the hypothesis. Biol. Lett. 2018 , 14 , 2017734. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Steger, M.F.; Owens, G.P.; Park, C.L. Violations of war: Testing the meaning-making model among Vietnam veterans. J. Clin. Psychol. 2015 , 71 , 105–116. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Garland, E.L.; Kiken, L.G.; Faurot, K.; Palsson, O.; Gaylord, S.A. Upward spirals of mindfulness and reappraisal: Testing the mindfulness-to-meaning theory with autoregressive latent trajectory modelling. Cognit. Ther. Res. 2017 , 41 , 381–392. [ Google Scholar ] [ CrossRef ]
  • Gentsch, K.; Loderer, K.; Soriano, C.; Fontaine, J.R.; Eid, M.; Pekrun, R.; Scherer, K.R. Effects of achievement contexts on the meaning structure of emotion words. Cognit. Emot. 2018 , 32 , 379–388. [ Google Scholar ] [ CrossRef ]
  • Popper, K. Conjectures and Refutations. In Readings in the Philosophy of Science ; Schick, T., Ed.; Routledge and Keagan Paul: London, UK, 1963; pp. 33–39. [ Google Scholar ]
  • Chilisa, B. Indigenous Research Methodologies ; Sage: Thousand Oaks, CA, USA, 2011; ISBN 9781412958820. [ Google Scholar ]
  • Wilson, S. Research as Ceremony: Indigenous Research Methods ; Fernwood Press: Black Point, NS, Canada, 2008; ISBN 9781552662816. [ Google Scholar ]
  • Chigbu, U.E. Concept and Approach to Land Management Interventions for Rural Development in Africa. In Geospatial Technologies for Effective Land Governance ; El-Ayachi, M., El Mansouri, L., Eds.; IGI Global: Hershey, PA, USA, 2019; pp. 1–14. [ Google Scholar ]
  • Kawulich, B.B. Gatekeeping: An ongoing adventure in research. Field Methods J. 2011 , 23 , 57–76. [ Google Scholar ] [ CrossRef ]
  • Ntihinyurwa, P.D. An Evaluation of the Role of Public Participation in Land Use Consolidation (LUC) Practices in Rwanda and Its Improvement. Master’s Thesis, Technical University of Munich, Munich, Germany, 2016. [ Google Scholar ]
  • Ntihinyurwa, P.D.; de Vries, W.T.; Chigbu, U.E.; Dukwiyimpuhwe, P.A. The positive impacts of farm land fragmentation in Rwanda. Land Use Policy 2019 , 81 , 565–581. [ Google Scholar ] [ CrossRef ]
  • Chigbu, U.E. Masculinity, men and patriarchal issues aside: How do women’s actions impede women’s access to land? Matters arising from a peri-rural community in Nigeria. Land Use Policy 2019 , 81 , 39–48. [ Google Scholar ] [ CrossRef ]
  • Gwaleba, M.J.; Masum, F. Participation of Informal Settlers in Participatory Land Use Planning Project in Pursuit of Tenure Security. Urban Forum 2018 , 29 , 169–184. [ Google Scholar ] [ CrossRef ]
  • Sait, M.A.; Chigbu, U.E.; Hamiduddin, I.; De Vries, W.T. Renewable Energy as an Underutilised Resource in Cities: Germany’s ‘Energiewende’ and Lessons for Post-Brexit Cities in the United Kingdom. Resources 2019 , 8 , 7. [ Google Scholar ] [ CrossRef ]
  • Chigbu, U.E.; Paradza, G.; Dachaga, W. Differentiations in Women’s Land Tenure Experiences: Implications for Women’s Land Access and Tenure Security in Sub-Saharan Africa. Land 2019 , 8 , 22. [ Google Scholar ] [ CrossRef ]
  • Chigbu, U.E.; Alemayehu, Z.; Dachaga, W. Uncovering land tenure insecurities: Tips for tenure responsive land-use planning in Ethiopia. Dev. Pract. 2019 . [ Google Scholar ] [ CrossRef ]
  • Handayani, W.; Rudianto, I.; Setyono, J.S.; Chigbu, U.E.; Sukmawati, A.N. Vulnerability assessment: A comparison of three different city sizes in the coastal area of Central Java, Indonesia. Adv. Clim. Chang. Res. 2017 , 8 , 286–296. [ Google Scholar ] [ CrossRef ]
  • Maxwell, J. Qualitative Research: An Interactive Design , 3rd ed.; Sage: Thousand Oaks, CA, USA, 2005; ISBN 9781412981194. [ Google Scholar ]
  • Banks, M. Using Visual Data in Qualitative Research , 2nd ed.; Sage: Thousand Oaks, CA, USA, 2018; Volume 5, ISBN 9781473913196. [ Google Scholar ]

Types of qualitative research, their approaches, data collection and analysis methods, forms in scientific writing, and epistemological foundations:

  • Narrative. Approach: explores situations, scenarios and processes. Data collection: interviews and documents. Data analysis: storytelling, content review and theme (meaning) development. Form in scientific writing: in-depth narration of events or situations.
  • Case study. Approach: examination of episodic events with a focus on answering “how” questions. Data collection: interviews, observations, document contents and physical inspections. Data analysis: detailed identification of themes and development of narratives. Form in scientific writing: in-depth study of possible lessons learned from a case or cases.
  • Grounded theory. Approach: investigates procedures. Data collection: interviews and questionnaires. Data analysis: data coding, categorisation of themes and description of implications. Form in scientific writing: theory and theoretical models.
  • Historical. Approach: description of past events. Data collection: interviews, surveys and documents. Data analysis: description of the development of events. Form in scientific writing: historical reports.
  • Phenomenological. Approach: understanding or explaining experiences. Data collection: interviews, surveys and observations. Data analysis: description of experiences, examination of meanings and theme development. Form in scientific writing: contextualisation and reporting of experience.
  • Ethnographic. Approach: describes and interprets social groupings or cultural situations. Data collection: interviews, observations and active participation. Data analysis: description and interpretation of data and theme development. Form in scientific writing: detailed reporting of interpreted data.

Epistemological foundations (across these types): objectivism, postmodernism, social constructionism, feminism and constructivism (including interpretive and reflexive), within positivist and post-positivist perspectives.

Share and Cite

Chigbu, U.E. Visually Hypothesising in Scientific Paper Writing: Confirming and Refuting Qualitative Research Hypotheses Using Diagrams. Publications 2019 , 7 , 22. https://doi.org/10.3390/publications7010022


Qualitative vs. Quantitative Research — Here’s What You Need to Know

Will Mellor, Director of Surveys, GLG

Qualitative vs. Quantitative — you’ve heard the terms before, but what do they mean? Here’s what you need to know on when to use them and how to apply them in your research projects.

Most research projects you undertake will likely require some combination of qualitative and quantitative data. How much of each you need will depend on what you are trying to accomplish. The two approaches are opposite in method, which is what makes them complementary in practice.

Qualitative vs. Quantitative Research

When Are They Applied?

Qualitative  

Qualitative research is used to formulate a hypothesis . If you need deeper information about a topic you know little about, qualitative research can help you uncover themes. For this reason, qualitative research often comes prior to quantitative. It allows you to get a baseline understanding of the topic and start to formulate hypotheses around correlation and causation.

Quantitative

Quantitative research is used to test or confirm a hypothesis . Qualitative research usually informs quantitative. You need to have enough understanding about a topic in order to develop a hypothesis you can test. Since quantitative research is highly structured, you first need to understand what the parameters are and how variable they are in practice. This allows you to create a research outline that is controlled in all the ways that will produce high-quality data.

In practice, the parameters are the factors you want to test against your hypothesis. If your hypothesis is that COVID is going to transform the way companies think about office space, some of your parameters might include the percent of your workforce working from home pre- and post-COVID, total square footage of office space held, and/or real-estate spend expectations by executive leadership. You would also want to know the variability of those parameters. In the COVID example, you will need to know standard ranges of square footage and real-estate expenditures so that you can create answer options that will capture relevant, high-quality, and easily actionable data.

Methods of Research

Often, qualitative research is conducted with a small sample size and includes many open-ended questions . The goal is to understand “Why?” and the thinking behind the decisions. The best way to facilitate this type of research is through one-on-one interviews, focus groups, and sometimes surveys. A major benefit of the interview and focus group formats is the ability to ask follow-up questions and dig deeper on answers that are particularly insightful.

Conversely, quantitative research is designed for larger sample sizes, which can capture perspectives across a wide spectrum of respondents. While not always necessary, sample sizes can sometimes be large enough for the results to be statistically significant. The best way to facilitate this type of research is through surveys or large-scale experiments.

Unsurprisingly, the two different approaches will generate different types of data that will need to be analyzed differently.

For qualitative data, you’ll end up with data that will be highly textual in nature. You’ll be reading through the data and looking for key themes that emerge over and over. This type of research is also great at producing quotes that can be used in presentations or reports. Quotes are a powerful tool for conveying sentiment and making a poignant point.

For quantitative data, you’ll end up with a data set that can be analyzed, often with statistical software such as Excel, R, or SPSS. You can ask many different types of questions that produce this quantitative data, including rating/ranking questions, single-select, multiselect, and matrix table questions. These question types will produce data that can be analyzed to find averages, ranges, growth rates, percentage changes, minimums/maximums, and even time-series data for longer-term trend analysis.
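As a rough illustration of this kind of analysis (shown here in Python rather than Excel, R, or SPSS, and with made-up survey ratings), the sketch below computes an average, a range, and a percentage change across two hypothetical survey waves.

```python
# Hypothetical 1-10 satisfaction ratings from two survey waves;
# the numbers are illustrative only.
import statistics

ratings_wave1 = [7, 8, 6, 9, 7, 8, 5, 7]
ratings_wave2 = [8, 9, 7, 9, 8, 8, 6, 9]

mean1 = statistics.mean(ratings_wave1)
mean2 = statistics.mean(ratings_wave2)
pct_change = (mean2 - mean1) / mean1 * 100

print(f"Wave 1 mean: {mean1:.2f} (range {min(ratings_wave1)}-{max(ratings_wave1)})")
print(f"Wave 2 mean: {mean2:.2f} (range {min(ratings_wave2)}-{max(ratings_wave2)})")
print(f"Change in mean rating: {pct_change:+.1f}%")
```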

Mixed Methods Approach

You aren’t limited to just one approach. If you need both quantitative and qualitative data, then collect both. You can even collect both quantitative and qualitative data within one type of research instrument. In a survey, you can ask both open-ended questions about “Why?” as well as closed-ended, data-related questions. Even in an unstructured format, like an interview or focus group, you can ask numerical questions to capture analyzable data.

Just be careful. While qualitative themes can be generalized, it can be dangerous to generalize quantitative data drawn from such a small sample. For instance, the reasons why companies like a certain software platform may fall into three to five key themes, but how much they spend on that platform can be highly variable.

The Takeaway

If you are unfamiliar with the topic you are researching, qualitative research is the best first approach. As you get deeper in your research, certain themes will emerge, and you’ll start to form hypotheses. From there, quantitative research can provide larger-scale data sets that can be analyzed to either confirm or deny the hypotheses you formulated earlier in your research. Most importantly, the two approaches are not mutually exclusive. You can have an eye for both themes and data throughout the research process. You’ll just be leaning more heavily to one or the other depending on where you are in your understanding of the topic.

Ready to get started? Get the actionable insights you need with the help of GLG’s qualitative and quantitative research methods.

About Will Mellor

Will Mellor leads a team of accomplished project managers who serve financial service firms across North America. His team manages end-to-end survey delivery from first draft to final deliverable. Will is an expert on GLG’s internal membership and consumer populations, as well as survey design and research. Before coming to GLG, he was the vice president of an economic consulting group, where he was responsible for designing economic impact models for clients in both the public sector and the private sector. Will has bachelor’s degrees in international business and finance and a master’s degree in applied economics.

For more information, read our articles: Three Ways to Apply Qualitative Research ,   Focusing on Focus Groups: Best Practices,   What Type of Survey Do You Need?, or The 6 Pillars of Successful Survey Design

You can also download our eBooks: GLG’s Guide to Effective Qualitative Research or Strategies for Successful Surveys


What Is A Research (Scientific) Hypothesis? A plain-language explainer + examples

By:  Derek Jansen (MBA)  | Reviewed By: Dr Eunice Rautenbach | June 2020

If you’re new to the world of research, or it’s your first time writing a dissertation or thesis, you’re probably noticing that the words “research hypothesis” and “scientific hypothesis” are used quite a bit, and you’re wondering what they mean in a research context .

“Hypothesis” is one of those words that people use loosely, thinking they understand what it means. However, it has a very specific meaning within academic research. So, it’s important to understand the exact meaning before you start hypothesizing. 

Research Hypothesis 101

  • What is a hypothesis ?
  • What is a research hypothesis (scientific hypothesis)?
  • Requirements for a research hypothesis
  • Definition of a research hypothesis
  • The null hypothesis

What is a hypothesis?

Let’s start with the general definition of a hypothesis (not a research hypothesis or scientific hypothesis), according to the Cambridge Dictionary:

Hypothesis: an idea or explanation for something that is based on known facts but has not yet been proved.

In other words, it’s a statement that provides an explanation for why or how something works, based on facts (or some reasonable assumptions), but that has not yet been specifically tested . For example, a hypothesis might look something like this:

Hypothesis: sleep impacts academic performance.

This statement predicts that academic performance will be influenced by the amount and/or quality of sleep a student engages in – sounds reasonable, right? It’s based on reasonable assumptions , underpinned by what we currently know about sleep and health (from the existing literature). So, loosely speaking, we could call it a hypothesis, at least by the dictionary definition.

But that’s not good enough…

Unfortunately, that’s not quite sophisticated enough to describe a research hypothesis (also sometimes called a scientific hypothesis), and it wouldn’t be acceptable in a dissertation, thesis or research paper . In the world of academic research, a statement needs a few more criteria to constitute a true research hypothesis .

What is a research hypothesis?

A research hypothesis (also called a scientific hypothesis) is a statement about the expected outcome of a study (for example, a dissertation or thesis). To constitute a quality hypothesis, the statement needs to have three attributes – specificity , clarity and testability .

Let’s take a look at these more closely.


Hypothesis Essential #1: Specificity & Clarity

A good research hypothesis needs to be extremely clear and articulate about both what’s being assessed (who or what variables are involved) and the expected outcome (for example, a difference between groups, a relationship between variables, etc.).

Let’s stick with our sleepy students example and look at how this statement could be more specific and clear.

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.

As you can see, the statement is very specific as it identifies the variables involved (sleep hours and test grades), the parties involved (two groups of students), as well as the predicted relationship type (a positive relationship). There’s no ambiguity or uncertainty about who or what is involved in the statement, and the expected outcome is clear.

Contrast that to the original hypothesis we looked at – “Sleep impacts academic performance” – and you can see the difference. “Sleep” and “academic performance” are both comparatively vague , and there’s no indication of what the expected relationship direction is (more sleep or less sleep). As you can see, specificity and clarity are key.

A good research hypothesis needs to be very clear about what’s being assessed and very specific about the expected outcome.

Hypothesis Essential #2: Testability (Provability)

A statement must be testable to qualify as a research hypothesis. In other words, there needs to be a way to prove (or disprove) the statement. If it’s not testable, it’s not a hypothesis – simple as that.

For example, consider the hypothesis we mentioned earlier:

Hypothesis: Students who sleep at least 8 hours per night will, on average, achieve higher grades in standardised tests than students who sleep less than 8 hours a night.  

We could test this statement by undertaking a quantitative study involving two groups of students, one that gets 8 or more hours of sleep per night for a fixed period, and one that gets less. We could then compare the standardised test results for both groups to see if there’s a statistically significant difference. 
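For readers who want to see what that comparison might look like in practice, here is a minimal sketch (in Python, with simulated rather than real test scores, and assuming NumPy and SciPy are available) of comparing the two sleep groups with a two-sample t-test.

```python
# Simulated (entirely hypothetical) standardised test scores for two groups:
# students sleeping 8+ hours per night vs. students sleeping less than 8 hours.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
scores_8h_plus = rng.normal(loc=72, scale=8, size=40)   # 8+ hours of sleep
scores_under_8h = rng.normal(loc=68, scale=8, size=40)  # under 8 hours of sleep

t_stat, p_value = stats.ttest_ind(scores_8h_plus, scores_under_8h)

print(f"Mean grade (8h+ sleep): {scores_8h_plus.mean():.1f}")
print(f"Mean grade (under 8h):  {scores_under_8h.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (e.g., 0.05) would indicate a
# statistically significant difference between the two groups' average grades.
```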

Again, if you compare this to the original hypothesis we looked at – “Sleep impacts academic performance” – you can see that it would be quite difficult to test that statement, primarily because it isn’t specific enough. How much sleep? By who? What type of academic performance?

So, remember the mantra – if you can’t test it, it’s not a hypothesis 🙂

A good research hypothesis must be testable. In other words, you must be able to collect observable data in a scientifically rigorous fashion to test it.

Defining A Research Hypothesis

You’re still with us? Great! Let’s recap and pin down a clear definition of a hypothesis.

A research hypothesis (or scientific hypothesis) is a statement about an expected relationship between variables, or explanation of an occurrence, that is clear, specific and testable.

So, when you write up hypotheses for your dissertation or thesis, make sure that they meet all these criteria. If you do, you’ll not only have rock-solid hypotheses but you’ll also ensure a clear focus for your entire research project.

What about the null hypothesis?

You may have also heard the terms null hypothesis, alternative hypothesis, or H-zero thrown around. At a simple level, the null hypothesis is the counter-proposal to the original hypothesis.

For example, if the hypothesis predicts that there is a relationship between two variables (for example, sleep and academic performance), the null hypothesis would predict that there is no relationship between those variables.

At a more technical level, the null hypothesis proposes that no statistical significance exists in a set of given observations and that any differences are due to chance alone.
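
If it helps to see that last point in action, here is a small, purely illustrative simulation (my own sketch, not part of the original explanation). Both groups are drawn from the same population, so the null hypothesis is true by construction; the differences that still show up between them are exactly the “due to chance alone” kind.

```python
# Illustrative sketch only: simulating a world where the null hypothesis is true.
import random
from scipy import stats

random.seed(42)  # fixed seed so the sketch is reproducible

trials = 1000
false_positives = 0
for _ in range(trials):
    # Both groups come from the same distribution, so any difference is pure chance.
    group_a = [random.gauss(75, 10) for _ in range(30)]
    group_b = [random.gauss(75, 10) for _ in range(30)]
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {trials} trials looked 'significant' by chance alone")
# Roughly 5% of trials should cross the p < 0.05 line - that is the false-positive
# rate the significance threshold is designed to cap when the null hypothesis holds.
```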

And there you have it – hypotheses in a nutshell. 

If you have any questions, be sure to leave a comment below and we’ll do our best to help you. If you need hands-on help developing and testing your hypotheses, consider our private coaching service, where we hold your hand through the research journey.






Chapter 1. Introduction

“Science is in danger, and for that reason it is becoming dangerous” -Pierre Bourdieu, Science of Science and Reflexivity

Why an Open Access Textbook on Qualitative Research Methods?

I have been teaching qualitative research methods to both undergraduates and graduate students for many years.  Although there are some excellent textbooks out there, they are often costly, and none of them, to my mind, properly introduces qualitative research methods to the beginning student (whether undergraduate or graduate student).  In contrast, this open-access textbook is designed as a (free) true introduction to the subject, with helpful, practical pointers on how to conduct research and how to access more advanced instruction.  

Textbooks are typically arranged in one of two ways: (1) by technique (each chapter covers one method used in qualitative research); or (2) by process (chapters advance from research design through publication).  But both of these approaches are necessary for the beginner student.  This textbook will have sections dedicated to the process as well as the techniques of qualitative research.  This is a true “comprehensive” book for the beginning student.  In addition to covering techniques of data collection and data analysis, it provides a road map of how to get started and how to keep going and where to go for advanced instruction.  It covers aspects of research design and research communication as well as methods employed.  Along the way, it includes examples from many different disciplines in the social sciences.

The primary goal has been to create a useful, accessible, engaging textbook for use across many disciplines.  And, let’s face it.  Textbooks can be boring.  I hope readers find this to be a little different.  I have tried to write in a practical and forthright manner, with many lively examples and references to good and intellectually creative qualitative research.  Woven throughout the text are short textual asides (in colored textboxes) by professional (academic) qualitative researchers in various disciplines.  These short accounts by practitioners should help inspire students.  So, let’s begin!

What is Research?

When we use the word research , what exactly do we mean by that?  This is one of those words that everyone thinks they understand, but it is worth beginning this textbook with a short explanation.  We use the term to refer to “empirical research,” which is actually a historically specific approach to understanding the world around us.  Think about how you know things about the world. [1] You might know your mother loves you because she’s told you she does.  Or because that is what “mothers” do by tradition.  Or you might know because you’ve looked for evidence that she does, like taking care of you when you are sick or reading to you in bed or working two jobs so you can have the things you need to do OK in life.  Maybe it seems churlish to look for evidence; you just take it “on faith” that you are loved.

Only one of the above comes close to what we mean by research.  Empirical research is research (investigation) based on evidence.  Conclusions can then be drawn from observable data.  This observable data can also be “tested” or checked.  If the data cannot be tested, that is a good indication that we are not doing research.  Note that we can never “prove” conclusively, through observable data, that our mothers love us.  We might have some “disconfirming evidence” (that time she didn’t show up to your graduation, for example) that could push you to question an original hypothesis , but no amount of “confirming evidence” will ever allow us to say with 100% certainty, “my mother loves me.”  Faith and tradition and authority work differently.  Our knowledge can be 100% certain using each of those alternative methods of knowledge, but our certainty in those cases will not be based on facts or evidence.

For many periods of history, those in power have been nervous about “science” because it uses evidence and facts as the primary source of understanding the world, and facts can be at odds with what power or authority or tradition want you to believe.  That is why I say that scientific empirical research is a historically specific approach to understanding the world.  You are in college or university now partly to learn how to engage in this historically specific approach.

In the sixteenth and seventeenth centuries in Europe, there was a newfound respect for empirical research, some of which was seriously challenging to the established church.  Using observations and testing them, scientists found that the earth was not at the center of the universe, for example, but rather that it was but one planet of many which circled the sun. [2]   For the next two centuries, the sciences of astronomy, physics, biology, and chemistry emerged and became disciplines taught in universities.  All used the scientific method of observation and testing to advance knowledge.  Knowledge about people and social institutions, however, was still left to faith, tradition, and authority.  Historians and philosophers and poets wrote about the human condition, but none of them used research to do so. [3]

It was not until the nineteenth century that “social science” really emerged, using the scientific method (empirical observation) to understand people and social institutions.  New fields of sociology, economics, political science, and anthropology emerged.  The first sociologists, people like Auguste Comte and Karl Marx, sought specifically to apply the scientific method of research to understand society, with Engels famously claiming that Marx had done for the social world what Darwin did for the natural world, tracing its laws of development.  Today we tend to take for granted the naturalness of science here, but it is actually a pretty recent and radical development.

To return to the question, “does your mother love you?”  Well, this is actually not really how a researcher would frame the question, as it is too specific to your case.  It doesn’t tell us much about the world at large, even if it does tell us something about you and your relationship with your mother.  A social science researcher might ask, “do mothers love their children?”  Or maybe they would be more interested in how this loving relationship might change over time (e.g., “do mothers love their children more now than they did in the 18th century when so many children died before reaching adulthood?”) or perhaps they might be interested in measuring quality of love across cultures or time periods, or even establishing “what love looks like” using the mother/child relationship as a site of exploration.  All of these make good research questions because we can use observable data to answer them.

What is Qualitative Research?

“All we know is how to learn. How to study, how to listen, how to talk, how to tell.  If we don’t tell the world, we don’t know the world.  We’re lost in it, we die.” -Ursula LeGuin, The Telling

At its simplest, qualitative research is research about the social world that does not use numbers in its analyses.  All those who fear statistics can breathe a sigh of relief – there are no mathematical formulae or regression models in this book! But this definition is less about what qualitative research can be and more about what it is not.  To be honest, any simple statement will fail to capture the power and depth of qualitative research.  One way of contrasting qualitative research to quantitative research is to note that the focus of qualitative research is less about explaining and predicting relationships between variables and more about understanding the social world.  To use our mother love example, the question about “what love looks like” is a good question for the qualitative researcher while all questions measuring love or comparing incidences of love (both of which require measurement) are good questions for quantitative researchers. Patton writes,

Qualitative data describe.  They take us, as readers, into the time and place of the observation so that we know what it was like to have been there.  They capture and communicate someone else’s experience of the world in his or her own words.  Qualitative data tell a story. (Patton 2002:47)

Qualitative researchers are asking different questions about the world than their quantitative colleagues.  Even when researchers are employed in “mixed methods” research (both quantitative and qualitative), they are using different methods to address different questions of the study.  I do a lot of research about first-generation and working-class college students.  Where a quantitative researcher might ask, how many first-generation college students graduate from college within four years? Or does first-generation college status predict high student debt loads?  A qualitative researcher might ask, how does the college experience differ for first-generation college students?  What is it like to carry a lot of debt, and how does this impact the ability to complete college on time?  Both sets of questions are important, but they can only be answered using specific tools tailored to those questions.  For the former, you need large numbers to make adequate comparisons.  For the latter, you need to talk to people, find out what they are thinking and feeling, and try to inhabit their shoes for a little while so you can make sense of their experiences and beliefs.

Examples of Qualitative Research

You have probably seen examples of qualitative research before, but you might not have paid particular attention to how they were produced or realized that the accounts you were reading were the result of hours, months, even years of research “in the field.”  A good qualitative researcher will present the product of their hours of work in such a way that it seems natural, even obvious, to the reader.  Because we are trying to convey “what it is like” answers, qualitative research is often presented as stories – stories about how people live their lives, go to work, raise their children, interact with one another.  In some ways, this can seem like reading particularly insightful novels.  But, unlike novels, there are very specific rules and guidelines that qualitative researchers follow to ensure that the “story” they are telling is accurate, a truthful rendition of what life is like for the people being studied.  Most of this textbook will be spent conveying those rules and guidelines.  Let’s take a look, first, however, at three examples of what the end product looks like.  I have chosen these three examples to showcase very different approaches to qualitative research, and I will return to them throughout the book.  They were all published as whole books (not chapters or articles), and they are worth the long read, if you have the time.  I will also provide some information on how these books came to be and the length of time it takes to get them into book version.  It is important you know about this process, and the rest of this textbook will help explain why it takes so long to conduct good qualitative research!

Example 1: The End Game (Ethnography + Interviews)

Corey Abramson is a sociologist who teaches at the University of Arizona.  In 2015 he published The End Game: How Inequality Shapes our Final Years (2015). This book was based on the research he did for his dissertation at the University of California-Berkeley.  The dissertation was completed in 2012, but the work that produced it took several years.  The dissertation was entitled “This is How We Live, This is How We Die: Social Stratification, Aging, and Health in Urban America” (2012).  You can see how the book version, which was written for a more general audience, has a more engaging sound to it, while the dissertation version, which is what academic faculty read and evaluate, has a more descriptive title.  You can read the title and know that this is a study about aging and health and that the focus is going to be inequality and that the context (place) is going to be “urban America.”  It’s a study about “how” people do something – in this case, how they deal with aging and death.  This is the very first sentence of the dissertation: “From our first breath in the hospital to the day we die, we live in a society characterized by unequal opportunities for maintaining health and taking care of ourselves when ill.  These disparities reflect persistent racial, socio-economic, and gender-based inequalities and contribute to their persistence over time” (1).  What follows is a truthful account of how that is so.

Corey Abramson spent three years conducting his research in four different urban neighborhoods.  We call the type of research he conducted “comparative ethnographic” because he designed his study to compare groups of seniors as they went about their everyday business.  It’s comparative because he is comparing different groups (based on race, class, gender) and ethnographic because he is studying the culture/way of life of a group. [4]   He had an educated guess, rooted in what previous research had shown and what social theory would suggest, that people’s experiences of aging differ by race, class, and gender.  So, he set up a research design that would allow him to observe differences.  He chose two primarily middle-class neighborhoods (one was racially diverse and the other was predominantly White) and two primarily poor neighborhoods (one was racially diverse and the other was predominantly African American).  He hung out in senior centers and other places seniors congregated, watched them as they took the bus to get prescriptions filled, sat in doctor’s offices with them, and listened to their conversations with each other.  He also conducted more formal conversations, what we call in-depth interviews, with sixty seniors from each of the four neighborhoods.  As with a lot of fieldwork, as he got closer to the people involved, he both expanded and deepened his reach –

By the end of the project, I expanded my pool of general observations to include various settings frequented by seniors: apartment building common rooms, doctors’ offices, emergency rooms, pharmacies, senior centers, bars, parks, corner stores, shopping centers, pool halls, hair salons, coffee shops, and discount stores. Over the course of the three years of fieldwork, I observed hundreds of elders, and developed close relationships with a number of them. (2012:10)

When Abramson rewrote the dissertation for a general audience and published his book in 2015, it got a lot of attention.  It is a beautifully written book and it provided insight into a common human experience that we surprisingly know very little about.  It won the Outstanding Publication Award by the American Sociological Association Section on Aging and the Life Course and was featured in the New York Times .  The book was about aging, and specifically how inequality shapes the aging process, but it was also about much more than that.  It helped show how inequality affects people’s everyday lives.  For example, by observing the difficulties the poor had in setting up appointments and getting to them using public transportation and then being made to wait to see a doctor, sometimes in standing-room-only situations, when they are unwell, and then being treated dismissively by hospital staff, Abramson allowed readers to feel the material reality of being poor in the US.  Comparing these examples with seniors with adequate supplemental insurance who have the resources to hire car services or have others assist them in arranging care when they need it, jolts the reader to understand and appreciate the difference money makes in the lives and circumstances of us all, and in a way that is different than simply reading a statistic (“80% of the poor do not keep regular doctor’s appointments”) does.  Qualitative research can reach into spaces and places that often go unexamined and then reports back to the rest of us what it is like in those spaces and places.

Example 2: Racing for Innocence (Interviews + Content Analysis + Fictional Stories)

Jennifer Pierce is a Professor of American Studies at the University of Minnesota.  Trained as a sociologist, she has written a number of books about gender, race, and power.  Her very first book, Gender Trials: Emotional Lives in Contemporary Law Firms, published in 1995, is a brilliant look at gender dynamics within two law firms.  Pierce was a participant observer, working as a paralegal, and she observed how female lawyers and female paralegals struggled to obtain parity with their male colleagues.

Fifteen years later, she reexamined the context of the law firm to include an examination of racial dynamics, particularly how elite white men working in these spaces created and maintained a culture that made it difficult for both female attorneys and attorneys of color to thrive. Her book, Racing for Innocence: Whiteness, Gender, and the Backlash Against Affirmative Action, published in 2012, is an interesting and creative blending of interviews with attorneys, content analyses of popular films during this period, and fictional accounts of racial discrimination and sexual harassment.  The law firm she chose to study had come under an affirmative action order and was in the process of implementing equitable policies and programs.  She wanted to understand how recipients of white privilege (the elite white male attorneys) come to deny the role they play in reproducing inequality.  Through interviews with attorneys who were present both before and during the affirmative action order, she created a historical record of the “bad behavior” that necessitated new policies and procedures, but also, and more importantly, probed the participants’ understanding of this behavior.  It should come as no surprise that most (but not all) of the white male attorneys saw little need for change, and that almost everyone else had accounts that were different if not sometimes downright harrowing.

I’ve used Pierce’s book in my qualitative research methods courses as an example of an interesting blend of techniques and presentation styles.  My students often have a very difficult time with the fictional accounts she includes.  But they serve an important communicative purpose here.  They are her attempts at presenting “both sides” to an objective reality – something happens (Pierce writes this something so it is very clear what it is), and the two participants to the thing that happened have very different understandings of what this means.  By including these stories, Pierce presents one of her key findings – people remember things differently and these different memories tend to support their own ideological positions.  I wonder what Pierce would have written had she studied the murder of George Floyd or the storming of the US Capitol on January 6 or any number of other historic events whose observers and participants record very different happenings.

This is not to say that qualitative researchers write fictional accounts.  In fact, the use of fiction in our work remains controversial.  When used, it must be clearly identified as a presentation device, as Pierce did.  I include Racing for Innocence here as an example of the multiple uses of methods and techniques and the way that these work together to produce better understandings by us, the readers, of what Pierce studied.  We readers come away with a better grasp of how and why advantaged people understate their own involvement in situations and structures that advantage them.  This is normal human behavior , in other words.  This case may have been about elite white men in law firms, but the general insights here can be transposed to other settings.  Indeed, Pierce argues that more research needs to be done about the role elites play in the reproduction of inequality in the workplace in general.

Example 3: Amplified Advantage (Mixed Methods: Survey + Interviews + Focus Groups + Archives)

The final example comes from my own work with college students, particularly the ways in which class background affects the experience of college and outcomes for graduates.  I include it here as an example of mixed methods, and for the use of supplementary archival research.  I’ve done a lot of research over the years on first-generation, low-income, and working-class college students.  I am curious (and skeptical) about the possibility of social mobility today, particularly with the rising cost of college and growing inequality in general.  As one of the few people in my family to go to college, I didn’t grow up with a lot of examples of what college was like or how to make the most of it.  And when I entered graduate school, I realized with dismay that there were very few people like me there.  I worried about becoming too different from my family and friends back home.  And I wasn’t at all sure that I would ever be able to pay back the huge load of debt I was taking on.  And so I wrote my dissertation and first two books about working-class college students.  These books focused on experiences in college and the difficulties of navigating between family and school (Hurst 2010a, 2012).  But even after all that research, I kept coming back to wondering if working-class students who made it through college had an equal chance at finding good jobs and happy lives.

What happens to students after college?  Do working-class students fare as well as their peers?  I knew from my own experience that barriers continued through graduate school and beyond, and that my debtload was higher than that of my peers, constraining some of the choices I made when I graduated.  To answer these questions, I designed a study of students attending small liberal arts colleges, the type of college that tried to equalize the experience of students by requiring all students to live on campus and offering small classes with lots of interaction with faculty.  These private colleges tend to have more money and resources so they can provide financial aid to low-income students.  They also attract some very wealthy students.  Because they enroll students across the class spectrum, I would be able to draw comparisons.  I ended up spending about four years collecting data, both a survey of more than 2000 students (which formed the basis for quantitative analyses) and qualitative data collection (interviews, focus groups, archival research, and participant observation).  This is what we call a “mixed methods” approach because we use both quantitative and qualitative data.  The survey gave me a large enough number of students that I could make comparisons of the “how many” kind and be able to say with some authority that there were in fact significant differences in experience and outcome by class (e.g., wealthier students earned more money and had little debt; working-class students often found jobs that were not in their chosen careers and were very affected by debt; upper-middle-class students were more likely to go to graduate school).  But the survey analyses could not explain why these differences existed.  For that, I needed to talk to people and ask them about their motivations and aspirations.  I needed to understand their perceptions of the world, and it is very hard to do this through a survey.

By interviewing students and recent graduates, I was able to discern particular patterns and pathways through college and beyond.  Specifically, I identified three versions of gameplay.  Upper-middle-class students, whose parents were themselves professionals (academics, lawyers, managers of non-profits), saw college as the first stage of their education and took classes and declared majors that would prepare them for graduate school.  They also spent a lot of time building their resumes, taking advantage of opportunities to help professors with their research, or study abroad.  This helped them gain admission to highly-ranked graduate schools and interesting jobs in the public sector.  In contrast, upper-class students, whose parents were wealthy and more likely to be engaged in business (as CEOs or other high-level directors), prioritized building social capital.  They did this by joining fraternities and sororities and playing club sports.  This helped them when they graduated as they called on friends and parents of friends to find them well-paying jobs.  Finally, low-income, first-generation, and working-class students were often adrift.  They took the classes that were recommended to them but without the knowledge of how to connect them to life beyond college.  They spent time working and studying rather than partying or building their resumes.  All three sets of students thought they were “doing college” the right way, the way that one was supposed to do college.   But these three versions of gameplay led to distinct outcomes that advantaged some students over others.  I titled my work “Amplified Advantage” to highlight this process.

These three examples, Corey Abramson’s The End Game, Jennifer Pierce’s Racing for Innocence, and my own Amplified Advantage, demonstrate the range of approaches and tools available to the qualitative researcher.  They also help explain why qualitative research is so important.  Numbers can tell us some things about the world, but they cannot get at the hearts and minds, motivations and beliefs of the people who make up the social worlds we inhabit.  For that, we need tools that allow us to listen and make sense of what people tell us and show us.  That is what good qualitative research offers us.

How Is This Book Organized?

This textbook is organized as a comprehensive introduction to the use of qualitative research methods.  The first half covers general topics (e.g., approaches to qualitative research, ethics) and research design (necessary steps for building a successful qualitative research study).  The second half reviews various data collection and data analysis techniques.  Of course, building a successful qualitative research study requires some knowledge of data collection and data analysis so the chapters in the first half and the chapters in the second half should be read in conversation with each other.  That said, each chapter can be read on its own for assistance with a particular narrow topic.  In addition to the chapters, a helpful glossary can be found in the back of the book.  Rummage around in the text as needed.

Chapter Descriptions

Chapter 2 provides an overview of the Research Design Process.  How does one begin a study? What is an appropriate research question?  How is the study to be done – with what methods?  Involving what people and sites?  Although qualitative research studies can and often do change and develop over the course of data collection, it is important to have a good idea of what the aims and goals of your study are at the outset and a good plan of how to achieve those aims and goals.  Chapter 2 provides a road map of the process.

Chapter 3 describes and explains various ways of knowing the (social) world.  What is it possible for us to know about how other people think or why they behave the way they do?  What does it mean to say something is a “fact” or that it is “well-known” and understood?  Qualitative researchers are particularly interested in these questions because of the types of research questions we are interested in answering (the how questions rather than the how many questions of quantitative research).  Qualitative researchers have adopted various epistemological approaches.  Chapter 3 will explore these approaches, highlighting interpretivist approaches that acknowledge the subjective aspect of reality – in other words, reality and knowledge are not objective but rather influenced by (interpreted through) people.

Chapter 4 focuses on the practical matter of developing a research question and finding the right approach to data collection.  In any given study (think of Corey Abramson’s study of aging, for example), there may be years of collected data, thousands of observations, hundreds of pages of notes to read and review and make sense of.  If all you had was a general interest area (“aging”), it would be very difficult, nearly impossible, to make sense of all of that data.  The research question provides a helpful lens to refine and clarify (and simplify) everything you find and collect.  For that reason, it is important to pull out that lens (articulate the research question) before you get started.  In the case of the aging study, Corey Abramson was interested in how inequalities affected understandings and responses to aging.  It is for this reason he designed a study that would allow him to compare different groups of seniors (some middle-class, some poor).  Inevitably, he saw much more in the three years in the field than what made it into his book (or dissertation), but he was able to narrow down the complexity of the social world to provide us with this rich account linked to the original research question.  Developing a good research question is thus crucial to effective design and a successful outcome.  Chapter 4 will provide pointers on how to do this.  Chapter 4 also provides an overview of general approaches taken to doing qualitative research and various “traditions of inquiry.”

Chapter 5 explores sampling.  After you have developed a research question and have a general idea of how you will collect data (Observations?  Interviews?), how do you go about actually finding people and sites to study?  Although there is no “correct number” of people to interview, the sample should follow the research question and research design.  Unlike quantitative research, qualitative research involves nonprobability sampling.  Chapter 5 explains why this is so and what qualities instead make a good sample for qualitative research.

Chapter 6 addresses the importance of reflexivity in qualitative research.  Related to epistemological issues of how we know anything about the social world, qualitative researchers understand that we the researchers can never be truly neutral or outside the study we are conducting.  As observers, we see things that make sense to us and may entirely miss what is either too obvious to note or too different to comprehend.  As interviewers, as much as we would like to ask questions neutrally and remain in the background, interviews are a form of conversation, and the persons we interview are responding to us .  Therefore, it is important to reflect upon our social positions and the knowledges and expectations we bring to our work and to work through any blind spots that we may have.  Chapter 6 provides some examples of reflexivity in practice and exercises for thinking through one’s own biases.

Chapter 7 is a very important chapter and should not be overlooked.  As a practical matter, it should also be read closely with chapters 6 and 8.  Because qualitative researchers deal with people and the social world, it is imperative they develop and adhere to a strong ethical code for conducting research in a way that does not harm.  There are legal requirements and guidelines for doing so (see chapter 8), but these requirements should not be considered synonymous with the ethical code required of us.   Each researcher must constantly interrogate every aspect of their research, from research question to design to sample through analysis and presentation, to ensure that a minimum of harm (ideally, zero harm) is caused.  Because each research project is unique, the standards of care for each study are unique.  Part of being a professional researcher is carrying this code in one’s heart, being constantly attentive to what is required under particular circumstances.  Chapter 7 provides various research scenarios and asks readers to weigh in on the suitability and appropriateness of the research.  If done in a class setting, it will become obvious fairly quickly that there are often no absolutely correct answers, as different people find different aspects of the scenarios of greatest importance.  Minimizing the harm in one area may require possible harm in another.  Being attentive to all the ethical aspects of one’s research and making the best judgments one can, clearly and consciously, is an integral part of being a good researcher.

Chapter 8 , best to be read in conjunction with chapter 7, explains the role and importance of Institutional Review Boards (IRBs) .  Under federal guidelines, an IRB is an appropriately constituted group that has been formally designated to review and monitor research involving human subjects .  Every institution that receives funding from the federal government has an IRB.  IRBs have the authority to approve, require modifications to (to secure approval), or disapprove research.  This group review serves an important role in the protection of the rights and welfare of human research subjects.  Chapter 8 reviews the history of IRBs and the work they do but also argues that IRBs’ review of qualitative research is often both over-inclusive and under-inclusive.  Some aspects of qualitative research are not well understood by IRBs, given that they were developed to prevent abuses in biomedical research.  Thus, it is important not to rely on IRBs to identify all the potential ethical issues that emerge in our research (see chapter 7).

Chapter 9 provides help for getting started on formulating a research question based on gaps in the pre-existing literature.  Research is conducted as part of a community, even if particular studies are done by single individuals (or small teams).  What any of us finds and reports back becomes part of a much larger body of knowledge.  Thus, it is important that we look at the larger body of knowledge before we actually start our bit to see how we can best contribute.  When I first began interviewing working-class college students, there was only one other similar study I could find, and it hadn’t been published (it was a dissertation of students from poor backgrounds).  But there had been a lot published by professors who had grown up working class and made it through college despite the odds.  These accounts by “working-class academics” became an important inspiration for my study and helped me frame the questions I asked the students I interviewed.  Chapter 9 will provide some pointers on how to search for relevant literature and how to use this to refine your research question.

Chapter 10 serves as a bridge between the two parts of the textbook, by introducing techniques of data collection.  Qualitative research is often characterized by the form of data collection – for example, an ethnographic study is one that employs primarily observational data collection for the purpose of documenting and presenting a particular culture or ethnos.  Techniques can be effectively combined, depending on the research question and the aims and goals of the study.   Chapter 10 provides a general overview of all the various techniques and how they can be combined.

The second part of the textbook moves into the doing part of qualitative research once the research question has been articulated and the study designed.  Chapters 11 through 17 cover various data collection techniques and approaches.  Chapters 18 and 19 provide a very simple overview of basic data analysis.  Chapter 20 covers communication of the data to various audiences, and in various formats.

Chapter 11 begins our overview of data collection techniques with a focus on interviewing, the true heart of qualitative research.  This technique can serve as the primary and exclusive form of data collection, or it can be used to supplement other forms (observation, archival).  An interview is distinct from a survey, where questions are asked in a specific order and often with a range of predetermined responses available.  Interviews can be conversational and unstructured or, more conventionally, semistructured, where a general set of interview questions “guides” the conversation.  Chapter 11 covers the basics of interviews: how to create interview guides, how many people to interview, where to conduct the interview, what to watch out for (how to prepare against things going wrong), and how to get the most out of your interviews.

Chapter 12 covers an important variant of interviewing, the focus group.  Focus groups are semistructured interviews with a group of people moderated by a facilitator (the researcher or researcher’s assistant).  Focus groups explicitly use group interaction to assist in the data collection.  They are best used to collect data on a specific topic that is non-personal and shared among the group.  For example, asking a group of college students about a common experience such as taking classes by remote delivery during the pandemic year of 2020.  Chapter 12 covers the basics of focus groups: when to use them, how to create interview guides for them, and how to run them effectively.

Chapter 13 moves away from interviewing to the second major form of data collection unique to qualitative researchers – observation .  Qualitative research that employs observation can best be understood as falling on a continuum of “fly on the wall” observation (e.g., observing how strangers interact in a doctor’s waiting room) to “participant” observation, where the researcher is also an active participant of the activity being observed.  For example, an activist in the Black Lives Matter movement might want to study the movement, using her inside position to gain access to observe key meetings and interactions.  Chapter  13 covers the basics of participant observation studies: advantages and disadvantages, gaining access, ethical concerns related to insider/outsider status and entanglement, and recording techniques.

Chapter 14 takes a closer look at “deep ethnography” – immersion in the field of a particularly long duration for the purpose of gaining a deeper understanding and appreciation of a particular culture or social world.  Clifford Geertz called this “deep hanging out.”  Whereas participant observation is often combined with semistructured interview techniques, deep ethnography’s commitment to “living the life” or experiencing the situation as it really is demands more conversational and natural interactions with people.  These interactions and conversations may take place over months or even years.  As can be expected, there are some costs to this technique, as well as some very large rewards when done competently.  Chapter 14 provides some examples of deep ethnographies that will inspire some beginning researchers and intimidate others.

Chapter 15 moves in the opposite direction from deep ethnography, the least positivist of the techniques discussed here, to mixed methods, a set of techniques that is arguably the most positivist.  A mixed methods approach combines both qualitative data collection and quantitative data collection, commonly by combining a survey that is analyzed statistically (e.g., cross-tabs or regression analyses of large-number probability samples) with semi-structured interviews.  Although it is somewhat unconventional to discuss mixed methods in textbooks on qualitative research, I think it is important to recognize this often-employed approach here.  There are several advantages and some disadvantages to taking this route.  Chapter 15 will describe those advantages and disadvantages and provide some particular guidance on how to design a mixed methods study for maximum effectiveness.

Chapter 16 covers data collection that does not involve live human subjects at all – archival and historical research (chapter 17 will also cover data that does not involve interacting with human subjects).  Sometimes people are unavailable to us, either because they do not wish to be interviewed or observed (as is the case with many “elites”) or because they are too far away, in both place and time.  Fortunately, humans leave many traces and we can often answer questions we have by examining those traces.  Special collections and archives can be goldmines for social science research.  This chapter will explain how to access these places, for what purposes, and how to begin to make sense of what you find.

Chapter 17 covers another data collection area that does not involve face-to-face interaction with humans: content analysis .  Although content analysis may be understood more properly as a data analysis technique, the term is often used for the entire approach, which will be the case here.  Content analysis involves interpreting meaning from a body of text.  This body of text might be something found in historical records (see chapter 16) or something collected by the researcher, as in the case of comment posts on a popular blog post.  I once used the stories told by student loan debtors on the website studentloanjustice.org as the content I analyzed.  Content analysis is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest.  In other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue.  This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis.

Where chapter 17 has pushed us towards data analysis, chapters 18 and 19 are all about what to do with the data collected, whether that data be in the form of interview transcripts or fieldnotes from observations.  Chapter 18 introduces the basics of coding , the iterative process of assigning meaning to the data in order to both simplify and identify patterns.  What is a code and how does it work?  What are the different ways of coding data, and when should you use them?  What is a codebook, and why do you need one?  What does the process of data analysis look like?

Chapter 19 goes further into detail on codes and how to use them, particularly the later stages of coding in which our codes are refined, simplified, combined, and organized.  These later rounds of coding are essential to getting the most out of the data we’ve collected.  As students are often overwhelmed with the amount of data (a corpus of interview transcripts typically runs into the hundreds of pages; fieldnotes can easily top that), this chapter will also address time management and provide suggestions for dealing with chaos and reminders that feeling overwhelmed at the analysis stage is part of the process.  By the end of the chapter, you should understand how “findings” are actually found.

The book concludes with a chapter dedicated to the effective presentation of data results.  Chapter 20 covers the many ways that researchers communicate their studies to various audiences (academic, personal, political), what elements must be included in these various publications, and the hallmarks of excellent qualitative research that various audiences will be expecting.  Because qualitative researchers are motivated by understanding and conveying meaning , effective communication is not only an essential skill but a fundamental facet of the entire research project.  Ethnographers must be able to convey a certain sense of verisimilitude , the appearance of true reality.  Those employing interviews must faithfully depict the key meanings of the people they interviewed in a way that rings true to those people, even if the end result surprises them.  And all researchers must strive for clarity in their publications so that various audiences can understand what was found and why it is important.

The book concludes with a short chapter ( chapter 21 ) discussing the value of qualitative research. At the very end of this book, you will find a glossary of terms. I recommend you make frequent use of the glossary and add to each entry as you find examples. Although the entries are meant to be simple and clear, you may also want to paraphrase the definition—make it “make sense” to you, in other words. In addition to the standard reference list (all works cited here), you will find various recommendations for further reading at the end of many chapters. Some of these recommendations will be examples of excellent qualitative research, indicated with an asterisk (*) at the end of the entry. As they say, a picture is worth a thousand words. A good example of qualitative research can teach you more about conducting research than any textbook can (this one included). I highly recommend you select one to three examples from these lists and read them along with the textbook.

A final note on the choice of examples – you will note that many of the examples used in the text come from research on college students.  This is for two reasons.  First, as most of my research falls in this area, I am most familiar with this literature and have contacts with those who do research here and can call upon them to share their stories with you.  Second, and more importantly, my hope is that this textbook reaches a wide audience of beginning researchers who study widely and deeply across the range of what can be known about the social world (from marine resources management to public policy to nursing to political science to sexuality studies and beyond).  It is sometimes difficult to find examples that speak to all those research interests, however. A focus on college students is something that all readers can understand and, hopefully, appreciate, as we are all now or have been at some point a college student.

Recommended Reading: Other Qualitative Research Textbooks

I’ve included a brief list of some of my favorite qualitative research textbooks and guidebooks if you need more than what you will find in this introductory text.  For each, I’ve also indicated if these are for “beginning” or “advanced” (graduate-level) readers.  Many of these books have several editions that do not significantly vary; the edition recommended is simply the one I have used in teaching and the one to whose page numbers any specific references in the text correspond.

Barbour, Rosaline. 2014. Introducing Qualitative Research: A Student’s Guide. Thousand Oaks, CA: SAGE.  A good introduction to qualitative research, with abundant examples (often from the discipline of health care) and clear definitions.  Includes quick summaries at the ends of each chapter.  However, some US students might find the British context distracting, and the book can be a bit advanced in places.  Beginning.

Bloomberg, Linda Dale, and Marie F. Volpe. 2012. Completing Your Qualitative Dissertation. 2nd ed. Thousand Oaks, CA: SAGE.  Specifically designed to guide graduate students through the research process. Advanced.

Creswell, John W., and Cheryl Poth. 2018. Qualitative Inquiry and Research Design: Choosing among Five Traditions. 4th ed. Thousand Oaks, CA: SAGE.  This is a classic and one of the go-to books I used myself as a graduate student.  One of the best things about this text is its clear presentation of five distinct traditions in qualitative research.  Despite the title, this reasonably sized book is about more than research design, including both data analysis and how to write about qualitative research.  Advanced.

Lareau, Annette. 2021. Listening to People: A Practical Guide to Interviewing, Participant Observation, Data Analysis, and Writing It All Up. Chicago: University of Chicago Press. A readable and personal account of conducting qualitative research by an eminent sociologist, with a heavy emphasis on the kinds of participant-observation research conducted by the author.  Despite its reader-friendliness, this is really a book targeted to graduate students learning the craft.  Advanced.

Lune, Howard, and Bruce L. Berg. 2018. Qualitative Research Methods for the Social Sciences. 9th ed. Pearson.  Although a good introduction to qualitative methods, the authors favor symbolic interactionist and dramaturgical approaches, which limits the appeal primarily to sociologists.  Beginning.

Marshall, Catherine, and Gretchen B. Rossman. 2016. Designing Qualitative Research. 6th ed. Thousand Oaks, CA: SAGE.  Very readable and accessible guide to research design by two educational scholars.  Although the presentation is sometimes fairly dry, personal vignettes and illustrations enliven the text.  Beginning.

Maxwell, Joseph A. 2013. Qualitative Research Design: An Interactive Approach. 3rd ed. Thousand Oaks, CA: SAGE. A short and accessible introduction to qualitative research design, particularly helpful for graduate students contemplating theses and dissertations. This has been a standard textbook in my graduate-level courses for years.  Advanced.

Patton, Michael Quinn. 2002. Qualitative Research and Evaluation Methods. Thousand Oaks, CA: SAGE.  This is a comprehensive text that served as my “go-to” reference when I was a graduate student.  It is particularly helpful for those involved in program evaluation and other forms of evaluation studies and uses examples from a wide range of disciplines.  Advanced.

Rubin, Ashley T. 2021. Rocking Qualitative Social Science: An Irreverent Guide to Rigorous Research. Stanford: Stanford University Press.  A delightful and personal read.  Rubin uses rock climbing as an extended metaphor for learning how to conduct qualitative research.  A bit slanted toward ethnographic and archival methods of data collection, with frequent examples from her own studies in criminology. Beginning.

Weis, Lois, and Michelle Fine. 2000. Speed Bumps: A Student-Friendly Guide to Qualitative Research. New York: Teachers College Press.  Readable and accessibly written in a quasi-conversational style.  Particularly strong in its discussion of ethical issues throughout the qualitative research process.  Not comprehensive, however, and very much tied to ethnographic research.  Although designed for graduate students, this is a recommended read for students of all levels.  Beginning.

Patton’s Ten Suggestions for Doing Qualitative Research

The following ten suggestions were made by Michael Quinn Patton in his massive textbook Qualitative Research and Evaluation Methods. This book is highly recommended for those of you who want more than an introduction to qualitative methods. It is the book I relied on heavily when I was a graduate student, although it is much easier to “dip into” when necessary than to read through as a whole. In it, Patton is asked for “just one bit of advice” for a graduate student considering using qualitative research methods for their dissertation.  Here are his top ten responses, in short form, heavily paraphrased, and with additional comments and emphases from me:

  • Make sure that a qualitative approach fits the research question. The following are the kinds of questions that call out for qualitative methods or where qualitative methods are particularly appropriate: questions about people’s experiences or how they make sense of those experiences; studying a person in their natural environment; researching a phenomenon so unknown that it would be impossible to study it with standardized instruments or other forms of quantitative data collection.
  • Study qualitative research by going to the original sources for the design and analysis appropriate to the particular approach you want to take (e.g., read Glaser and Strauss if you are using grounded theory).
  • Find a dissertation adviser who understands or at least who will support your use of qualitative research methods. You are asking for trouble if your entire committee is populated by quantitative researchers, even if they are all very knowledgeable about the subject or focus of your study (maybe even more so if they are!)
  • Really work on design. Doing qualitative research effectively takes a lot of planning.  Even if things are more flexible than in quantitative research, a good design is absolutely essential when starting out.
  • Practice data collection techniques, particularly interviewing and observing. There is definitely a set of learned skills here!  Do not expect your first interview to be perfect.  You will continue to grow as a researcher the more interviews you conduct, and you will probably come to understand yourself a bit more in the process, too.  This is not easy, despite what others who don’t work with qualitative methods may assume (and tell you!)
  • Have a plan for analysis before you begin data collection. This is often a requirement in IRB protocols, although you can get away with writing something fairly simple.  And even if you are taking an approach, such as grounded theory, that pushes you to remain fairly open-minded during the data collection process, you still want to know what you will be doing with all the data collected – creating a codebook? Writing analytical memos? Comparing cases?  Having a plan in hand will also help prevent you from collecting too much extraneous data.
  • Be prepared to confront controversies both within the qualitative research community and between qualitative research and quantitative research. Don’t be naïve about this – qualitative research, particularly some approaches, will be derided by many more “positivist” researchers and audiences.  For example, is an “n” of 1 really sufficient?  Yes!  But not everyone will agree.
  • Do not make the mistake of using qualitative research methods because someone told you it was easier, or because you are intimidated by the math required of statistical analyses. Qualitative research is difficult in its own way (and many would claim much more time-consuming than quantitative research).  Do it because you are convinced it is right for your goals, aims, and research questions.
  • Find a good support network. This could be a research mentor, or it could be a group of friends or colleagues who are also using qualitative research, or it could be just someone who will listen to you work through all of the issues you will confront out in the field and during the writing process.  Even though qualitative research often involves human subjects, it can be pretty lonely.  A lot of times you will feel like you are working without a net.  You have to create one for yourself.  Take care of yourself.
  • And, finally, in the words of Patton, “Prepare to be changed. Looking deeply at other people’s lives will force you to look deeply at yourself.”
  • We will actually spend an entire chapter ( chapter 3 ) looking at this question in much more detail!
  • Note that this might have been news to Europeans at the time, but many other societies around the world had also come to this conclusion through observation.  There is often a tendency to equate “the scientific revolution” with the European world in which it took place, but this is somewhat misleading.
  • Historians are a special case here.  Historians have scrupulously and rigorously investigated the social world, but not for the purpose of understanding general laws about how things work, which is the point of scientific empirical research.  History is often referred to as an idiographic field of study, meaning that it studies things that happened or are happening in themselves and not for general observations or conclusions.
  • Don’t worry, we’ll spend more time later in this book unpacking the meaning of ethnography and other terms that are important here.  Note the available glossary.

An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter.  This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them.  Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives." ( Denzin and Lincoln 2005:2 ). Contrast with quantitative research .

In contrast to methodology, methods are more simply the practices and tools used to collect and analyze data.  Examples of common methods in qualitative research are interviews , observations , and documentary analysis .  One’s methodology should connect to one’s choice of methods, of course, but they are distinguishable terms.  See also methodology .

A proposed explanation for an observation, phenomenon, or scientific problem that can be tested by further investigation.  The positing of a hypothesis is often the first step in quantitative research but not in qualitative research.  Even when qualitative researchers offer possible explanations in advance of conducting research, they tend not to use the word “hypothesis,” as it conjures up the kind of positivist research they are not conducting.

The foundational question to be addressed by the research study.  This will form the anchor of the research design, collection, and analysis.  Note that in qualitative research, the research question may, and probably will, alter or develop during the course of the research.

An approach to research that collects and analyzes numerical data for the purpose of finding patterns and averages, making predictions, testing causal relationships, and generalizing results to wider populations.  Contrast with qualitative research .

Data collection that takes place in real-world settings, referred to as “the field;” a key component of much Grounded Theory and ethnographic research.  Patton ( 2002 ) calls fieldwork “the central activity of qualitative inquiry” where “‘going into the field’ means having direct and personal contact with people under study in their own environments – getting close to people and situations being studied to personally understand the realities of minutiae of daily life” (48).

The people who are the subjects of a qualitative study.  In interview-based studies, they may be the respondents to the interviewer; for purposes of IRBs, they are often referred to as the human subjects of the research.

The branch of philosophy concerned with knowledge.  For researchers, it is important to recognize and adopt one of the many distinguishing epistemological perspectives as part of our understanding of what questions research can address or fully answer.  See, e.g., constructivism , subjectivism, and  objectivism .

An approach that refutes the possibility of neutrality in social science research.  All research is “guided by a set of beliefs and feelings about the world and how it should be understood and studied” (Denzin and Lincoln 2005: 13).  In contrast to positivism , interpretivism recognizes the social constructedness of reality, and researchers adopting this approach focus on capturing interpretations and understandings people have about the world rather than “the world” as it is (which is a chimera).

The cluster of data-collection tools and techniques that involve observing interactions between people, the behaviors and practices of individuals (sometimes in contrast to what they say about how they act and behave), and cultures in context.  Observational methods are the key tools employed by ethnographers and Grounded Theory researchers.

Research based on data collected and analyzed by the researcher (in contrast to secondary “library” research).

The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative.  In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.
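To make that contrast concrete, here is a small illustrative sketch (not from the glossary author) of random versus purposive selection; the population records and the selection criterion are entirely invented.

```python
# Illustrative sketch: random (statistically representative) vs. purposive sampling.
import random

# Invented "population" of 200 hospital staff records.
population = [
    {"id": i, "role": "nurse" if i % 4 == 0 else "physician", "years": i % 30}
    for i in range(1, 201)
]

# Quantitative-style sampling: a simple random sample meant to stand in
# statistically for the whole population.
random_sample = random.sample(population, k=20)

# Qualitative-style purposive sampling: deliberately select cases likely to
# generate insight about the topic, e.g., highly experienced nurses.
purposive_sample = [p for p in population if p["role"] == "nurse" and p["years"] >= 20]

print(len(random_sample), "randomly sampled;", len(purposive_sample), "purposively selected")
```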

A method of data collection in which the researcher asks the participant questions; the answers to these questions are often recorded and transcribed verbatim. There are many different kinds of interviews - see also semistructured interview , structured interview , and unstructured interview .

The specific group of individuals that you will collect data from.  Contrast with population.

The practice of being conscious of and reflective upon one’s own social location and presence when conducting research.  Because qualitative research often requires interaction with live humans, failing to take into account how one’s presence, prior expectations, and social location affect the data collected and how they are analyzed may limit the reliability of the findings.  This remains true even when dealing with historical archives and other content.  Who we are matters when asking questions about how people experience the world because we, too, are a part of that world.

The science and practice of right conduct; in research, it is also the delineation of moral obligations towards research participants, communities to which we belong, and communities in which we conduct our research.

An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Research, according to US federal guidelines, that involves “a living individual about whom an investigator (whether professional or student) conducting research:  (1) Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or  (2) Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.”

One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography. 

A form of interview that follows a standard guide of questions asked, although the order of the questions may change to match the particular needs of each individual interview subject, and probing “follow-up” questions are often added during the course of the interview.  The semi-structured interview is the primary form of interviewing used by qualitative researchers in the social sciences.  It is sometimes referred to as an “in-depth” interview.  See also interview and  interview guide .

A method of observational data collection taking place in a natural setting; a form of fieldwork .  The term encompasses a continuum of relative participation by the researcher (from full participant to “fly-on-the-wall” observer).  This is also sometimes referred to as ethnography , although the latter is characterized by a greater focus on the culture under observation.

A research design that employs both quantitative and qualitative methods, as in the case of a survey supplemented by interviews.

An epistemological perspective that posits the existence of reality through sensory experience similar to empiricism but goes further in denying any non-sensory basis of thought or consciousness.  In the social sciences, the term has roots in the proto-sociologist Auguste Comte, who believed he could discern “laws” of society similar to the laws of natural science (e.g., gravity).  The term has come to mean the kinds of measurable and verifiable science conducted by quantitative researchers and is thus used pejoratively by some qualitative researchers interested in interpretation, consciousness, and human understanding.  Calling someone a “positivist” is often intended as an insult.  See also empiricism and objectivism.

A place or collection containing records, documents, or other materials of historical interest; most universities have an archive of material related to the university’s history, as well as other “special collections” that may be of interest to members of the community.

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns and assumptions.  Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.    

A word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data (Saldaña 2021:5).

Usually a verbatim written record of an interview or focus group discussion.

The primary form of data for fieldwork , participant observation , and ethnography .  These notes, taken by the researcher either during the course of fieldwork or at day’s end, should include as many details as possible on what was observed and what was said.  They should include clear identifiers of date, time, setting, and names (or identifying characteristics) of participants.

The process of labeling and organizing qualitative data to identify different themes and the relationships between them; a way of simplifying data to allow better management and retrieval of key themes and illustrative passages.  See coding frame and  codebook.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

A detailed description of any proposed research that involves human subjects for review by IRB.  The protocol serves as the recipe for the conduct of the research activity.  It includes the scientific rationale to justify the conduct of the study, the information necessary to conduct the study, the plan for managing and analyzing the data, and a discussion of the research ethical issues relevant to the research.  Protocols for qualitative research often include interview guides, all documents related to recruitment, informed consent forms, very clear guidelines on the safekeeping of materials collected, and plans for de-identifying transcripts or other data that include personal identifying information.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Quantitative vs. Qualitative Research in Psychology


In psychology and other social sciences, researchers are faced with an unresolved question: Can we measure concepts like love or racism the same way we can measure temperature or the weight of a star? Social phenomena⁠—things that happen because of and through human behavior⁠—are especially difficult to grasp with typical scientific models.

At a Glance

Psychologists rely on both quantitative and qualitative research to better understand human thought and behavior.

  • Qualitative research involves collecting and evaluating non-numerical data in order to understand concepts or subjective opinions.
  • Quantitative research involves collecting and evaluating numerical data. 

This article discusses what qualitative and quantitative research are, how they are different, and how they are used in psychology research.

Qualitative Research vs. Quantitative Research

In order to understand qualitative and quantitative psychology research, it can be helpful to look at the methods that are used and when each type is most appropriate.

Psychologists rely on a few methods to measure behavior, attitudes, and feelings. These include:

  • Self-reports , like surveys or questionnaires
  • Observation (often used in experiments or fieldwork)
  • Implicit attitude tests that measure timing in responding to prompts

Most of these are quantitative methods. The result is a number that can be used to assess differences between groups.

However, most of these methods are static, inflexible (you can't change a question because a participant doesn't understand it), and provide a "what" answer rather than a "why" answer.

Sometimes, researchers are more interested in the "why" and the "how." That's where qualitative methods come in.

Qualitative research is about speaking to people directly and hearing their words. It is grounded in the philosophy that the social world is ultimately unmeasurable, that no measure is truly ever "objective," and that how humans make meaning is just as important as how much they score on a standardized test.

Qualitative research:

  • Used to develop theories
  • Takes a broad, complex approach
  • Answers "why" and "how" questions
  • Explores patterns and themes

Quantitative research:

  • Used to test theories
  • Takes a narrow, specific approach
  • Answers "what" questions
  • Explores statistical relationships

Quantitative methods have existed ever since people have been able to count things. But it is only with the positivist philosophy of Auguste Comte (which maintains that factual knowledge obtained by observation is trustworthy) that it became a "scientific method."

The scientific method follows this general process. A researcher must:

  • Generate a theory or hypothesis (i.e., predict what might happen in an experiment) and determine the variables needed to answer their question
  • Develop instruments to measure the phenomenon (such as a survey, a thermometer, etc.)
  • Develop experiments to manipulate the variables
  • Collect empirical (measured) data
  • Analyze data

Quantitative methods are about measuring phenomena, not explaining them.

Quantitative research typically compares two or more groups of people. There are all sorts of variables you could measure, and many kinds of experiments to run using quantitative methods.

These comparisons are generally explained using graphs, pie charts, and other visual representations that give the researcher a sense of how the various data points relate to one another.
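As a minimal illustration of that kind of visual comparison, the sketch below plots the means of two invented groups as a bar chart with error bars; the data, group labels, and axis names are made up purely for demonstration.

```python
# Minimal sketch: visualizing a two-group comparison (invented data).
import numpy as np
import matplotlib.pyplot as plt

group_a = np.array([72, 75, 71, 78, 74, 69, 77])   # e.g., scores under condition A
group_b = np.array([81, 79, 85, 83, 80, 84, 78])   # e.g., scores under condition B

means = [group_a.mean(), group_b.mean()]
errors = [group_a.std(ddof=1), group_b.std(ddof=1)]  # sample standard deviations

plt.bar(["Group A", "Group B"], means, yerr=errors, capsize=5)
plt.ylabel("Mean score")
plt.title("Comparison of two groups (illustrative data)")
plt.show()
```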

Basic Assumptions

Quantitative methods assume:

  • That the world is measurable
  • That humans can observe objectively
  • That we can know things for certain about the world from observation

In some fields, these assumptions hold true. Whether you measure the size of the sun 2000 years ago or now, it will always be the same. But when it comes to human behavior, it is not so simple.

As decades of cultural and social research have shown, people behave differently (and even think differently) based on historical context, cultural context, social context, and even identity-based contexts like gender , social class, or sexual orientation .

Therefore, quantitative methods applied to human behavior (as used in psychology and some areas of sociology) should always be rooted in their particular context. In other words: there are no, or very few, human universals.

Statistical information is the primary form of quantitative data used in human and social quantitative research. Statistics provide lots of information about tendencies across large groups of people, but they can never describe every case or every experience. In other words, there are always outliers.

Correlation and Causation

A basic principle of statistics is that correlation is not causation. Researchers can only claim a cause-and-effect relationship under certain conditions:

  • The study was a true experiment.
  • The independent variable can be manipulated (for example, researchers cannot manipulate gender, but they can change the primer a study subject sees, such as a picture of nature or of a building).
  • The dependent variable can be measured through a ratio or a scale.

So when you read a report that "gender was linked to" something (like a behavior or an attitude), remember that gender is NOT a cause of the behavior or attitude. There is an apparent relationship, but the true cause of the difference is hidden.
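To see how a strong correlation can appear without any causal link, consider this small simulation (not from the original article) in which an unmeasured third variable drives both measured variables; all names and numbers are invented.

```python
# Illustrative simulation: correlation without causation via a confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hidden confounder (e.g., an unmeasured contextual factor).
confounder = rng.normal(size=n)

# Neither x nor y causes the other; both are driven by the confounder plus noise.
x = 2.0 * confounder + rng.normal(scale=0.5, size=n)
y = -1.5 * confounder + rng.normal(scale=0.5, size=n)

r = np.corrcoef(x, y)[0, 1]
print(f"Correlation between x and y: {r:.2f}")  # strongly negative, yet not causal
```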

Pitfalls of Quantitative Research

Quantitative methods are one way to approach the measurement and understanding of human and social phenomena. But what's missing from this picture?

As noted above, statistics do not tell us about personal, individual experiences and meanings. While surveys can give a general idea, respondents have to choose between only a few responses. This can make it difficult to understand the subtleties of different experiences.

Quantitative methods can be helpful when making objective comparisons between groups or when looking for relationships between variables. They can be analyzed statistically, which can be helpful when looking for patterns and relationships.

Qualitative data are not made out of numbers but rather of descriptions, metaphors, symbols, quotes, analysis, concepts, and characteristics. This approach uses interviews, written texts, art, photos, and other materials to make sense of human experiences and to understand what these experiences mean to people.

While quantitative methods ask "what" and "how much," qualitative methods ask "why" and "how."

Qualitative methods are about describing and analyzing phenomena from a human perspective. There are many different philosophical views on qualitative methods, but in general, they agree that some questions are too complex or impossible to answer with standardized instruments.

These methods also accept that it is impossible to be completely objective in observing phenomena. Researchers have their own thoughts, attitudes, experiences, and beliefs, and these always color how people interpret results.

Qualitative Approaches

There are many different approaches to qualitative research, with their own philosophical bases. Different approaches are best for different kinds of projects. For example:

  • Case studies and narrative studies are best for single individuals. These involve studying every aspect of a person's life in great depth.
  • Phenomenology aims to explain experiences. This type of work aims to describe and explore different events as they are consciously and subjectively experienced.
  • Grounded theory develops models and describes processes. This approach allows researchers to construct a theory based on data that is collected, analyzed, and compared to reach new discoveries.
  • Ethnography describes cultural groups. In this approach, researchers immerse themselves in a community or group in order to observe behavior.

Qualitative researchers must be aware of several different methods and know each thoroughly enough to produce valuable research.

Some researchers specialize in a single method, but others specialize in a topic or content area and use many different methods to explore the topic, providing different information and a variety of points of view.

There is not a single model or method that can be used for every qualitative project. Depending on the research question, the people participating, and the kind of information they want to produce, researchers will choose the appropriate approach.

Interpretation

Qualitative research does not look into causal relationships between variables, but rather into themes, values, interpretations, and meanings. As a rule, then, qualitative research is not generalizable (cannot be applied to people outside the research participants).

The insights gained from qualitative research can extend to other groups with proper attention to specific historical and social contexts.

Relationship Between Qualitative and Quantitative Research

It might sound like quantitative and qualitative research do not play well together. They have different philosophies, different data, and different outputs. However, this could not be further from the truth.

These two general methods complement each other. By using both, researchers can gain a fuller, more comprehensive understanding of a phenomenon.

For example, a psychologist wanting to develop a new survey instrument about sexuality might first ask a few dozen people open-ended questions about their sexual experiences (this is qualitative research). This gives the researcher some information to begin developing questions for their survey (which is a quantitative method).

After the survey, the same or other researchers might want to dig deeper into issues brought up by its data. Follow-up questions like "how does it feel when...?" or "what does this mean to you?" or "how did you experience this?" can only be answered by qualitative research.

By using both quantitative and qualitative data, researchers have a more holistic, well-rounded understanding of a particular topic or phenomenon.

Qualitative and quantitative methods both play an important role in psychology. Where quantitative methods can help answer questions about what is happening in a group and to what degree, qualitative methods can dig deeper into the reasons behind why it is happening. By using both strategies, psychology researchers can learn more about human thought and behavior.


By Anabelle Bernard Fournier, a researcher of sexual and reproductive health at the University of Victoria as well as a freelance writer on various health topics.


7.4 Qualitative Research

Learning Objectives

  • List several ways in which qualitative research differs from quantitative research in psychology.
  • Describe the strengths and weaknesses of qualitative research in psychology compared with quantitative research.
  • Give examples of qualitative research in psychology.

What Is Qualitative Research?

This book is primarily about quantitative research . Quantitative researchers typically start with a focused research question or hypothesis, collect a small amount of data from each of a large number of individuals, describe the resulting data using statistical techniques, and draw general conclusions about some large population. Although this is by far the most common approach to conducting empirical research in psychology, there is an important alternative called qualitative research. Qualitative research originated in the disciplines of anthropology and sociology but is now used to study many psychological topics as well. Qualitative researchers generally begin with a less focused research question, collect large amounts of relatively “unfiltered” data from a relatively small number of individuals, and describe their data using nonstatistical techniques. They are usually less concerned with drawing general conclusions about human behavior than with understanding in detail the experience of their research participants.

Consider, for example, a study by researcher Per Lindqvist and his colleagues, who wanted to learn how the families of teenage suicide victims cope with their loss (Lindqvist, Johansson, & Karlsson, 2008). They did not have a specific research question or hypothesis, such as, What percentage of family members join suicide support groups? Instead, they wanted to understand the variety of reactions that families had, with a focus on what it is like from their perspectives. To do this, they interviewed the families of 10 teenage suicide victims in their homes in rural Sweden. The interviews were relatively unstructured, beginning with a general request for the families to talk about the victim and ending with an invitation to talk about anything else that they wanted to tell the interviewer. One of the most important themes that emerged from these interviews was that even as life returned to “normal,” the families continued to struggle with the question of why their loved one committed suicide. This struggle appeared to be especially difficult for families in which the suicide was most unexpected.

The Purpose of Qualitative Research

Again, this book is primarily about quantitative research in psychology. The strength of quantitative research is its ability to provide precise answers to specific research questions and to draw general conclusions about human behavior. This is how we know that people have a strong tendency to obey authority figures, for example, or that female college students are not substantially more talkative than male college students. But while quantitative research is good at providing precise answers to specific research questions, it is not nearly as good at generating novel and interesting research questions. Likewise, while quantitative research is good at drawing general conclusions about human behavior, it is not nearly as good at providing detailed descriptions of the behavior of particular groups in particular situations. And it is not very good at all at communicating what it is actually like to be a member of a particular group in a particular situation.

But the relative weaknesses of quantitative research are the relative strengths of qualitative research. Qualitative research can help researchers to generate new and interesting research questions and hypotheses. The research of Lindqvist and colleagues, for example, suggests that there may be a general relationship between how unexpected a suicide is and how consumed the family is with trying to understand why the teen committed suicide. This relationship can now be explored using quantitative research. But it is unclear whether this question would have arisen at all without the researchers sitting down with the families and listening to what they themselves wanted to say about their experience. Qualitative research can also provide rich and detailed descriptions of human behavior in the real-world contexts in which it occurs. Among qualitative researchers, this is often referred to as “thick description” (Geertz, 1973). Similarly, qualitative research can convey a sense of what it is actually like to be a member of a particular group or in a particular situation—what qualitative researchers often refer to as the “lived experience” of the research participants. Lindqvist and colleagues, for example, describe how all the families spontaneously offered to show the interviewer the victim’s bedroom or the place where the suicide occurred—revealing the importance of these physical locations to the families. It seems unlikely that a quantitative study would have discovered this.

Data Collection and Analysis in Qualitative Research

As with correlational research, data collection approaches in qualitative research are quite varied and can involve naturalistic observation, archival data, artwork, and many other things. But one of the most common approaches, especially for psychological research, is to conduct interviews . Interviews in qualitative research tend to be unstructured—consisting of a small number of general questions or prompts that allow participants to talk about what is of interest to them. The researcher can follow up by asking more detailed questions about the topics that do come up. Such interviews can be lengthy and detailed, but they are usually conducted with a relatively small sample. This was essentially the approach used by Lindqvist and colleagues in their research on the families of suicide survivors. Small groups of people who participate together in interviews focused on a particular topic or issue are often referred to as focus groups . The interaction among participants in a focus group can sometimes bring out more information than can be learned in a one-on-one interview. The use of focus groups has become a standard technique in business and industry among those who want to understand consumer tastes and preferences. The content of all focus group interviews is usually recorded and transcribed to facilitate later analyses.

Another approach to data collection in qualitative research is participant observation. In participant observation , researchers become active participants in the group or situation they are studying. The data they collect can include interviews (usually unstructured), their own notes based on their observations and interactions, documents, photographs, and other artifacts. The basic rationale for participant observation is that there may be important information that is only accessible to, or can be interpreted only by, someone who is an active participant in the group or situation. An example of participant observation comes from a study by sociologist Amy Wilkins (published in Social Psychology Quarterly ) on a college-based religious organization that emphasized how happy its members were (Wilkins, 2008). Wilkins spent 12 months attending and participating in the group’s meetings and social events, and she interviewed several group members. In her study, Wilkins identified several ways in which the group “enforced” happiness—for example, by continually talking about happiness, discouraging the expression of negative emotions, and using happiness as a way to distinguish themselves from other groups.

Data Analysis in Qualitative Research

Although quantitative and qualitative research generally differ along several important dimensions (e.g., the specificity of the research question, the type of data collected), it is the method of data analysis that distinguishes them more clearly than anything else. To illustrate this idea, imagine a team of researchers that conducts a series of unstructured interviews with recovering alcoholics to learn about the role of their religious faith in their recovery. Although this sounds like qualitative research, imagine further that once they collect the data, they code the data in terms of how often each participant mentions God (or a “higher power”), and they then use descriptive and inferential statistics to find out whether those who mention God more often are more successful in abstaining from alcohol. Now it sounds like quantitative research. In other words, the quantitative-qualitative distinction depends more on what researchers do with the data they have collected than with why or how they collected the data.
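A rough, purely hypothetical sketch of that "quantitizing" workflow might look like the following; the transcripts, abstinence outcomes, and keyword rule are all invented for illustration.

```python
# Hedged sketch of coding transcripts into counts and testing a group difference
# (invented example data; tiny sample purely for illustration).
import re
from scipy import stats

# Each record: an interview transcript plus whether the participant abstained.
interviews = [
    {"transcript": "God has been central to my recovery...", "abstained": True},
    {"transcript": "I lean on my higher power every day. God helps.", "abstained": True},
    {"transcript": "Mostly I rely on my sponsor and routine.", "abstained": False},
    {"transcript": "Faith in God and my family keeps me going.", "abstained": True},
    {"transcript": "I just take it one day at a time.", "abstained": False},
]

def mention_count(text):
    """Count mentions of 'God' or 'higher power' in a transcript."""
    return len(re.findall(r"\b(god|higher power)\b", text, flags=re.IGNORECASE))

counts_abstained = [mention_count(i["transcript"]) for i in interviews if i["abstained"]]
counts_relapsed = [mention_count(i["transcript"]) for i in interviews if not i["abstained"]]

# Inferential step: compare mention counts between the two groups.
t_stat, p_value = stats.ttest_ind(counts_abstained, counts_relapsed, equal_var=False)
print(t_stat, p_value)
```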

But what does qualitative data analysis look like? Just as there are many ways to collect data in qualitative research, there are many ways to analyze data. Here we focus on one general approach called grounded theory (Glaser & Strauss, 1967). This approach was developed within the field of sociology in the 1960s and has gradually gained popularity in psychology. Remember that in quantitative research, it is typical for the researcher to start with a theory, derive a hypothesis from that theory, and then collect data to test that specific hypothesis. In qualitative research using grounded theory, researchers start with the data and develop a theory or an interpretation that is “grounded in” those data. They do this in stages. First, they identify ideas that are repeated throughout the data. Then they organize these ideas into a smaller number of broader themes. Finally, they write a theoretical narrative —an interpretation—of the data in terms of the themes that they have identified. This theoretical narrative focuses on the subjective experience of the participants and is usually supported by many direct quotations from the participants themselves.

As an example, consider a study by researchers Laura Abrams and Laura Curran, who used the grounded theory approach to study the experience of postpartum depression symptoms among low-income mothers (Abrams & Curran, 2009). Their data were the result of unstructured interviews with 19 participants. Table 7.1 “Themes and Repeating Ideas in a Study of Postpartum Depression Among Low-Income Mothers” shows the five broad themes the researchers identified and the more specific repeating ideas that made up each of those themes. In their research report, they provide numerous quotations from their participants, such as this one from “Destiny:”

Well, just recently my apartment was broken into and the fact that his Medicaid for some reason was cancelled so a lot of things was happening within the last two weeks all at one time. So that in itself I don’t want to say almost drove me mad but it put me in a funk.…Like I really was depressed. (p. 357)

Their theoretical narrative focused on the participants’ experience of their symptoms not as an abstract “affective disorder” but as closely tied to the daily struggle of raising children alone under often difficult circumstances.

Table 7.1 Themes and Repeating Ideas in a Study of Postpartum Depression Among Low-Income Mothers

  • Ambivalence: “I wasn’t prepared for this baby,” “I didn’t want to have any more children.”
  • Caregiving overload: “Please stop crying,” “I need a break,” “I can’t do this anymore.”
  • Juggling: “No time to breathe,” “Everyone depends on me,” “Navigating the maze.”
  • Mothering alone: “I really don’t have any help,” “My baby has no father.”
  • Real-life worry: “I don’t have any money,” “Will my baby be OK?” “It’s not safe here.”
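As a loose illustration of the staged process described above, the toy sketch below groups a few of the repeating ideas from Table 7.1 into their broader themes and tallies the supporting excerpts; it is only a schematic illustration, not the authors' actual analysis.

```python
# Toy sketch: from coded excerpts (repeating ideas) to broader themes (illustrative pairing).
from collections import defaultdict

# Stage 1: excerpts tagged with repeating ideas (codes) during initial coding.
coded_excerpts = [
    ("I wasn't prepared for this baby", "ambivalence"),
    ("I didn't want to have any more children", "ambivalence"),
    ("I need a break", "caregiving overload"),
    ("I can't do this anymore", "caregiving overload"),
    ("I really don't have any help", "mothering alone"),
    ("Will my baby be OK?", "real-life worry"),
]

# Stage 2: organize the repeating ideas into a smaller number of broader themes.
theme_of_code = {
    "ambivalence": "Ambivalence",
    "caregiving overload": "Caregiving overload",
    "mothering alone": "Mothering alone",
    "real-life worry": "Real-life worry",
}

excerpts_by_theme = defaultdict(list)
for excerpt, code in coded_excerpts:
    excerpts_by_theme[theme_of_code[code]].append(excerpt)

# Stage 3 (the theoretical narrative) is interpretive writing by the researcher;
# here we simply report how many excerpts support each theme.
for theme, excerpts in excerpts_by_theme.items():
    print(f"{theme}: {len(excerpts)} supporting excerpt(s)")
```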

The Quantitative-Qualitative “Debate”

Given their differences, it may come as no surprise that quantitative and qualitative research in psychology and related fields do not coexist in complete harmony. Some quantitative researchers criticize qualitative methods on the grounds that they lack objectivity, are difficult to evaluate in terms of reliability and validity, and do not allow generalization to people or situations other than those actually studied. At the same time, some qualitative researchers criticize quantitative methods on the grounds that they overlook the richness of human behavior and experience and instead answer simple questions about easily quantifiable variables.

In general, however, qualitative researchers are well aware of the issues of objectivity, reliability, validity, and generalizability. In fact, they have developed a number of frameworks for addressing these issues (which are beyond the scope of our discussion). And in general, quantitative researchers are well aware of the issue of oversimplification. They do not believe that all human behavior and experience can be adequately described in terms of a small number of variables and the statistical relationships among them. Instead, they use simplification as a strategy for uncovering general principles of human behavior.

Many researchers from both the quantitative and qualitative camps now agree that the two approaches can and should be combined into what has come to be called mixed-methods research (Todd, Nerlich, McKeown, & Clarke, 2004). (In fact, the studies by Lindqvist and colleagues and by Abrams and Curran both combined quantitative and qualitative approaches.) One approach to combining quantitative and qualitative research is to use qualitative research for hypothesis generation and quantitative research for hypothesis testing. Again, while a qualitative study might suggest that families who experience an unexpected suicide have more difficulty resolving the question of why, a well-designed quantitative study could test a hypothesis by measuring these specific variables for a large sample. A second approach to combining quantitative and qualitative research is referred to as triangulation . The idea is to use both quantitative and qualitative methods simultaneously to study the same general questions and to compare the results. If the results of the quantitative and qualitative methods converge on the same general conclusion, they reinforce and enrich each other. If the results diverge, then they suggest an interesting new question: Why do the results diverge and how can they be reconciled?

Key Takeaways

  • Qualitative research is an important alternative to quantitative research in psychology. It generally involves asking broader research questions, collecting more detailed data (e.g., interviews), and using nonstatistical analyses.
  • Many researchers conceptualize quantitative and qualitative research as complementary and advocate combining them. For example, qualitative research can be used to generate hypotheses and quantitative research to test them.
  • Discussion: What are some ways in which a qualitative study of girls who play youth baseball would be likely to differ from a quantitative study on the same topic?

Abrams, L. S., & Curran, L. (2009). “And you’re telling me not to stress?” A grounded theory study of postpartum depression symptoms among low-income mothers. Psychology of Women Quarterly, 33 , 351–362.

Geertz, C. (1973). The interpretation of cultures . New York, NY: Basic Books.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research . Chicago, IL: Aldine.

Lindqvist, P., Johansson, L., & Karlsson, U. (2008). In the aftermath of teenage suicide: A qualitative study of the psychosocial consequences for the surviving family members. BMC Psychiatry, 8 , 26. Retrieved from http://www.biomedcentral.com/1471-244X/8/26 .

Todd, Z., Nerlich, B., McKeown, S., & Clarke, D. D. (2004) Mixing methods in psychology: The integration of qualitative and quantitative methods in theory and practice . London, UK: Psychology Press.

Wilkins, A. (2008). “Happier than Non-Christians”: Collective emotions and symbolic boundaries among evangelical Christians. Social Psychology Quarterly, 71 , 281–301.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Qualitative Study

Affiliations

  • 1 University of Nebraska Medical Center
  • 2 GDB Research and Statistical Consulting
  • 3 GDB Research and Statistical Consulting/McLaren Macomb Hospital
  • PMID: 29262162
  • Bookshelf ID: NBK470395

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. Instead of collecting numerical data points or intervening or introducing treatments as in quantitative research, qualitative research helps generate hypotheses to further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a standalone study, purely relying on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces the readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as "how" and "why." Due to the open-ended nature of the research questions, qualitative research design is often not linear like quantitative design. One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. Phenomena such as experiences, attitudes, and behaviors can be complex to capture accurately and quantitatively. In contrast, a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a particular time or during an event of interest. Quantifying qualitative data certainly is possible, but at its core, qualitative analysis looks for themes and patterns that can be difficult to quantify, and it is essential to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

However, while qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches and their associated philosophical paradigms must compete, they are neither opposites nor incompatible. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined a correlation between length of stay and level of patient satisfaction; qualitative work, such as interviews with patients, could then explore why that correlation exists. This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.
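A hedged sketch of that dual-focus scenario, with invented numbers and interview prompts, might look like this:

```python
# Illustrative sketch of a mixed quantitative/qualitative workflow (invented data).
import numpy as np
from scipy import stats

length_of_stay_days = np.array([2, 3, 5, 7, 4, 6, 8, 3, 9, 5])
satisfaction_score = np.array([9, 8, 7, 5, 8, 6, 4, 9, 3, 7])

# Quantitative step: is there a correlation between stay length and satisfaction?
r, p = stats.pearsonr(length_of_stay_days, satisfaction_score)
print(f"r = {r:.2f}, p = {p:.3f}")

# Qualitative step: if a correlation exists, open-ended interviews explore why, e.g.:
follow_up_prompts = [
    "Tell me about your experience during your hospital stay.",
    "What changed for you as the stay went on?",
    "What mattered most to your sense of being well cared for?",
]
```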

Copyright © 2024, StatPearls Publishing LLC.


Conflict of interest statement

Disclosure: Steven Tenny declares no relevant financial relationships with ineligible companies.

Disclosure: Janelle Brannan declares no relevant financial relationships with ineligible companies.

Disclosure: Grace Brannan declares no relevant financial relationships with ineligible companies.



How to Write a Strong Hypothesis | Steps & Examples

Published on May 6, 2022 by Shona McCombes . Revised on November 20, 2023.

A hypothesis is a statement that can be tested by scientific research. If you want to test a relationship between two or more variables, you need to write hypotheses before you start your experiment or data collection .

Example: Hypothesis

Daily apple consumption leads to fewer doctor’s visits.


A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess – it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Variables in hypotheses

Hypotheses propose a relationship between two or more types of variables .

  • An independent variable is something the researcher changes or controls.
  • A dependent variable is something the researcher observes and measures.

If there are any control variables , extraneous variables , or confounding variables , be sure to jot those down as you go to minimize the chances that research bias  will affect your results.

For example, in the hypothesis “spending more time in the sun leads to higher levels of happiness,” the independent variable is exposure to the sun – the assumed cause . The dependent variable is the level of happiness – the assumed effect .


Developing a hypothesis (with example)

Step 1. Ask a question

Writing a hypothesis begins with a research question that you want to answer. The question should be focused, specific, and researchable within the constraints of your project.

Step 2. Do some preliminary research

Your initial answer to the question should be based on what is already known about the topic. Look for theories and previous studies to help you form educated assumptions about what your research will find.

At this stage, you might construct a conceptual framework to ensure that you’re embarking on a relevant topic . This can also help you identify which variables you will study and what you think the relationships are between them. Sometimes, you’ll have to operationalize more complex constructs.

Step 3. Formulate your hypothesis

Now you should have some idea of what you expect to find. Write your initial answer to the question in a clear, concise sentence.

Step 4. Refine your hypothesis

You need to make sure your hypothesis is specific and testable. There are various ways of phrasing a hypothesis, but all the terms you use should have clear definitions, and the hypothesis should contain:

  • The relevant variables
  • The specific group being studied
  • The predicted outcome of the experiment or analysis

Step 5. Phrase your hypothesis in three ways

To identify the variables, you can write a simple prediction in  if…then form. The first part of the sentence states the independent variable and the second part states the dependent variable.

In academic research, hypotheses are more commonly phrased in terms of correlations or effects, where you directly state the predicted relationship between variables.

If you are comparing two groups, the hypothesis can state what difference you expect to find between them.

Step 6. Write a null hypothesis

If your research involves statistical hypothesis testing , you will also have to write a null hypothesis . The null hypothesis is the default position that there is no association between the variables. The null hypothesis is written as H 0 , while the alternative hypothesis is H 1 or H a .

  • H 0 : The number of lectures attended by first-year students has no effect on their final exam scores.
  • H 1 : The number of lectures attended by first-year students has a positive effect on their final exam scores.
Hypothesis examples

  • Research question: What are the health benefits of eating an apple a day?
    Hypothesis: Increasing apple consumption in over-60s will result in decreasing frequency of doctor’s visits.
    Null hypothesis: Increasing apple consumption in over-60s will have no effect on frequency of doctor’s visits.
  • Research question: Which airlines have the most delays?
    Hypothesis: Low-cost airlines are more likely to have delays than premium airlines.
    Null hypothesis: Low-cost and premium airlines are equally likely to have delays.
  • Research question: Can flexible work arrangements improve job satisfaction?
    Hypothesis: Employees who have flexible working hours will report greater job satisfaction than employees who work fixed hours.
    Null hypothesis: There is no relationship between working hour flexibility and job satisfaction.
  • Research question: How effective is high school sex education at reducing teen pregnancies?
    Hypothesis: Teenagers who received sex education lessons throughout high school will have lower rates of unplanned pregnancy than teenagers who did not receive any sex education.
    Null hypothesis: High school sex education has no effect on teen pregnancy rates.
  • Research question: What effect does daily use of social media have on the attention span of under-16s?
    Hypothesis: There is a negative correlation between time spent on social media and attention span in under-16s.
    Null hypothesis: There is no relationship between social media use and attention span in under-16s.


Frequently asked questions about writing hypotheses

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.


Qualitative Research

What is qualitative research?

Qualitative research is a methodology focused on collecting and analyzing descriptive, non-numerical data to understand complex human behavior, experiences, and social phenomena. This approach utilizes techniques such as interviews, focus groups, and observations to explore the underlying reasons, motivations, and meanings behind actions and decisions. Unlike quantitative research, which focuses on measuring and quantifying data, qualitative research delves into the 'why' and 'how' of human behavior, providing rich, contextual insights that reveal deeper patterns and relationships.

The Basic Idea


Ever heard of the saying “quality over quantity”? Well, some researchers feel the same way!

Imagine you are conducting a study looking at consumer behavior for buying potato chips. You’re interested in seeing which factors influence a customer’s choice between purchasing Doritos and Pringles. While you could conduct quantitative research and measure the number of bags purchased, this data alone wouldn’t explain why consumers choose one chip brand over the other; it would just tell you what they are purchasing. To gather more meaningful data, you may conduct interviews or surveys, asking people about their chip preferences and what draws them to one brand over another. Is it the taste of the chips? The font or color of the bag? This qualitative approach dives deeper to uncover why one potato chip is more popular than the other and can help companies make the adjustments that count.

Qualitative research, as seen in the example above, can provide greater insight into behavior, going beyond numbers to understand people’s experiences, attitudes, and perceptions. It helps us to grasp the meaning behind decisions, rather than just describing them. As human behavior is often difficult to qualify, qualitative research is a useful tool for solving complex problems or as a starting point to generate new ideas for research. Qualitative methods are used across all types of research—from consumer behavior to education, healthcare, behavioral science, and everywhere in between!

At its core, qualitative research is exploratory—rather than coming up with a hypothesis and gathering numerical data to support it, qualitative research begins with open-ended questions. Instead of asking “Which chip brand do consumers buy more frequently?”, qualitative research asks “Why do consumers choose one chip brand over another?”. Common methods to obtain qualitative data include focus groups, unstructured interviews, and surveys. From the data gathered, researchers then can make hypotheses and move on to investigating them. 

It’s important to note that qualitative and quantitative research are not two opposing methods, but rather two halves of a whole. Most of the best studies leverage both kinds of research by collecting objective, quantitative data, and using qualitative research to gain greater insight into what the numbers reveal.

You may have heard the world is made up of atoms and molecules, but it’s really made up of stories. When you sit with an individual that’s been here, you can give quantitative data a qualitative overlay. – William Turner, 16th century British scientist 1

Quantitative Research: A research method that involves collecting and analyzing numerical data to test hypotheses, identify patterns, and predict outcomes.

Exploratory Research: An initial study used to investigate a problem that is not clearly defined, helping to clarify concepts and improve research design.

Positivism: A scientific approach that emphasizes empirical evidence and objectivity, often involving the testing of hypotheses based on observable data. 2 

Phenomenology: A research approach that emphasizes the first-person point of view, placing importance on how people perceive, experience, and interpret the world around them. 3

Symbolic Interaction Theory: A theoretical perspective that people make sense of their social worlds by the exchange of meaning through language and symbols. 4

Critical Theory: A worldview that there is no unitary or objective “truth” about people that can be discovered, as human experience is shaped by social, cultural, and historical contexts that influence reality and society. 5

Empirical research: A method of gaining knowledge through direct observation and experimentation, relying on real-world data to test theories. 

Paradigm shift: A fundamental change in the basic assumptions and methodologies of a scientific discipline, leading to the adoption of a new framework. 2

Interpretive/descriptive approach: A methodology that focuses on understanding the meanings people assign to their experiences, often using qualitative methods.

Unstructured interviews: A free-flowing conversation between researcher and participant without predetermined questions that must be asked to all participants. Instead, the researcher poses questions depending on the flow of the interview. 6

Focus Group: Group interviews where a researcher asks questions to guide a conversation between participants who are encouraged to share their ideas and information, leading to detailed insights and diverse perspectives on a specific topic.

Grounded theory : A qualitative methodology that generates a theory directly from data collected through iterative analysis.

When social sciences started to emerge in the 17th and 18th centuries, researchers wanted to apply the same quantitative approach that was used in the natural sciences. At this time, there was a predominant belief that human behavior could be numerically analyzed to find objective patterns and would be generalizable to similar people and situations. Using scientific means to understand society is known as a positivist approach. However, in the early 20th century, both natural and social scientists started to criticize this traditional view of research as being too reductive. 2  

In his book, The Structure of Scientific Revolutions, American philosopher Thomas Kuhn identified that a major paradigm shift was starting to occur. Earlier methods of science were being questioned and replaced with new ways of approaching research which suggested that true objectivity was not possible when studying human behavior. Rather, the importance of context meant research on one group could not be generalized to all groups. 2 Numbers alone were deemed insufficient for understanding the environment surrounding human behavior which was now seen as a crucial piece of the puzzle. Along with this paradigm shift, Western scholars began to take an interest in ethnography , wanting to understand the customs, practices, and behaviors of other cultures. 

Qualitative research became more prominent throughout the 20th century, expanding beyond anthropology and ethnography to being applied across all forms of research; in science, psychology, marketing—the list goes on. Paul Felix Lazarsfeld, an Austrian-American sociologist and mathematician often known as the father of qualitative research, popularized new methods such as unstructured interviews and group discussions. 7 During the 1940s, Lazarsfeld brought attention to the fact that humans are not always rational decision-makers, making them difficult to understand through numerical data alone.

The 1920s saw the invention of symbolic interaction theory, developed by George Herbert Mead. Symbolic interaction theory posits society as the product of shared symbols such as language. People attach meanings to these symbols which impacts the way they understand and communicate with the world around them, helping to create and maintain a society. 4 Critical theory was also developed in the 1920s at the University of Frankfurt Institute for Social Research. Following the challenge of positivism, critical theory is a worldview that there is no unitary or objective “truth” about people that can be discovered, as human experience is shaped by social, cultural, and historical contexts. By shedding light on the human experience, it hopes to highlight the role of power, ideology, and social structures in shaping humans, and using this knowledge to create change. 5

Other formalized theories were proposed during the 20th century, such as grounded theory , where researchers started gathering data to form a hypothesis, rather than the other way around. This represented a stark contrast to positivist approaches that had dominated the 17th and 18th centuries.

The 1950s marked a shift toward a more interpretive and descriptive approach which factored in how people make sense of their subjective reality and attach meaning to it. 2 Researchers began to recognize that the why of human behavior was just as important as the what . Max Weber, a German sociologist, laid the foundation of the interpretive approach through the concept of Verstehen (which in English translates to understanding), emphasizing the importance of interpreting the significance people attach to their behavior. 8 With the shift to an interpretive and descriptive approach came the rise of phenomenology, which emphasizes first-person experiences by studying how individuals perceive, experience, and interpret the world around them. 

Today, in the age of big data, qualitative research has boomed, as advancements in digital tools allow researchers to gather vast amounts of data (both qualitative and quantitative), helping us better understand complex social phenomena. Social media patterns can be analyzed to understand public sentiment, consumer behavior, and cultural trends to grasp how people attach subjective meaning to their reality. There is even an emerging field of digital ethnography which is entirely focused on how humans interact and communicate in virtual environments!

Thomas Kuhn

American philosopher who suggested that science does not evolve through merely an addition of knowledge by compiling new learnings onto existing theories, but instead undergoes paradigm shifts where new theories and methodologies replace old ones. In this way, Kuhn suggested that science is a reflection of a community at a particular point in time. 9

Paul Felix Lazarsfeld

Often referred to as the father of qualitative research, Austrian-American sociologist and mathematician Paul Lazarsfeld helped to develop modern empirical methods of conducting research in the social sciences such as surveys, opinion polling, and panel studies. Lazarsfeld was best known for combining qualitative and quantitative research to explore America’s voting habits and behaviors related to mass communication, such as newspapers, magazines, and radio. 10

Max Weber

German sociologist and political economist known for his sociological approach of “Verstehen,” which emphasized the need to understand individuals or groups by exploring the meanings that people attach to their decisions. While previously, qualitative researchers in ethnography acted like outside observers explaining behavior from their own point of view, Weber believed that an empathetic understanding of behavior, one that explored both intent and context, was crucial to truly understanding it. 11

George Herbert Mead

Widely recognized as the father of symbolic interaction theory, Mead was an American philosopher and sociologist who took an interest in how spoken language and symbols contribute to one’s idea of self, and to society at large. 4

Consequences

Humans are incredibly complex beings, whose behaviors cannot always be reduced to mere numbers and statistics. Qualitative research acknowledges this inherent complexity and can be used to better capture the diversity of human and social realities. 

Qualitative research is also more flexible—it allows researchers to pivot as they uncover new insights. Instead of approaching the study with predetermined hypotheses, oftentimes, researchers let the data speak for itself and are not limited by a set of predefined questions. It can highlight new areas that a researcher hadn’t even thought of exploring. 

By providing a deeper explanation of not only what we do, but why we do it, qualitative research can be used to inform policy-making, educational practices, healthcare approaches, and marketing tactics. For instance, while quantitative research tells us how many people are smokers, qualitative research explores what, exactly, is driving them to smoke in the first place. If the research reveals that it is because they are unaware of the gravity of the consequences, efforts can be made to emphasize the risks, such as by placing warnings on cigarette cartons. 

Finally, qualitative research helps to amplify the voices of marginalized or underrepresented groups. Researchers who embrace a true “Verstehen” mentality resist applying their own worldview to the subjects they study, but instead seek to understand the meaning people attach to their own behaviors. In bringing forward other worldviews, qualitative research can help to shift perceptions and increase awareness of social issues. For example, while quantitative research may show that mental health conditions are more prevalent for a certain group, along with the access they have to mental health resources, qualitative research is able to explain the lived experiences of these individuals and uncover what barriers they are facing to getting help. This qualitative approach can support governments and health organizations to better design mental health services tailored to the communities they exist in.

Controversies

Qualitative research aims to understand an individual’s lived experience, which, although it provides deeper insights, makes it hard to generalize findings to a larger population. While someone in a focus group could say they pick Doritos over Pringles because they prefer the packaging, it’s difficult for a researcher to know if this is universally applicable, or just one person’s preference. 12 This challenge makes it difficult to replicate qualitative research because it involves context-specific findings and subjective interpretation.

Moreover, there can be bias in sample selection when conducting qualitative research. Individuals who put themselves forward to be part of a focus group or interview may hold strong opinions they want to share, making the insights gathered from their answers not necessarily reflective of the general population. 13 People may also give answers that they think researchers are looking for, leading to skewed results, which is a common example of the observer expectancy effect.

However, the bias in this interaction can go both ways. While researchers are encouraged to embrace “Verstehen,” there is a possibility that they project their own views onto their participants. For example, if an American researcher is studying eating habits in China and observes someone burping, they may attribute this behavior to rudeness—when in fact, burping can be a sign that you have enjoyed your meal and it is a compliment to the chef. One way to mitigate this risk is through thick description , noting a great amount of contextual detail in their observations. Another way to minimize the researcher’s bias on their observations is through member checking , returning results to participants to check if they feel they accurately capture their experience.

Another drawback of qualitative research is that it is time-consuming. Focus groups and unstructured interviews take longer and are more difficult to logistically arrange, and the data gathered is harder to analyze as it goes beyond numerical data. While advances in technology alleviate some of these labor-intensive processes, they still require more resources. 

Many of these drawbacks can be mitigated through a mixed-method approach, combining both qualitative and quantitative research. Qualitative research can be a good starting point, giving depth and contextual understanding to a behavior, before turning to quantitative data to see if the results are generalizable. Or, the opposite direction can be used—quantitative research can show us the “what,” identifying patterns and correlations, and researchers can then better understand the “why” behind behavior by leveraging qualitative methods. Triangulation —using multiple datasets, methods, or theories—is another way to help researchers avoid bias. 

Linking Adult Behaviors to Childhood Experiences

In the mid-1980s, an obesity program at the KP San Diego Department of Preventive Medicine had a high dropout rate. What was interesting is that a majority of the dropouts were successfully losing weight, posing the question of why they were leaving the program in the first place. In this instance, greater investigation was required to understand the why behind their behaviors.

Researchers conducted in-depth interviews with almost 200 dropouts, finding that many of them had experienced childhood abuse that had led to obesity. In this unfortunate scenario, obesity was a consequence of another problem, rather than the root problem itself. This led Dr. Vincent J. Felitti, who was working for the department, to launch the Adverse Childhood Experiences (ACE) Study, aimed at exploring how childhood experiences impact adult health status. 

Felitti and the Department of Preventive Medicine studied over 17,000 adult health plan members and found a strong relationship between emotional experiences as children and negative health behaviors as adults, such as obesity, smoking, and intravenous drug use. This study demonstrates the importance of qualitative research in uncovering correlations that would not be discovered by merely looking at numerical data. 14

Understanding Voter Turnout

Voting is usually considered an important part of political participation in a democracy. However, voter turnout is an issue in many countries, including the US. While quantitative research can tell us how many people vote, it does not provide insights into why people choose to vote or not.

With this in mind, Dawn Merdelin Johnson, a PhD student in philosophy at Walden University, explored how public corruption has impacted voter turnout in Cook County, Illinois. Johnson conducted semi-structured telephone interviews to understand the factors that contribute to low voter turnout and the impact of public corruption on voting behaviors. Johnson found that public corruption leads voters to believe that public officials prioritize their own well-being over the good of the people, creating distrust in candidates and the overall political system and making people less likely to vote. Other themes revealed that to increase voter turnout, voting should be made more convenient and more information about the candidates should be provided to help people make informed decisions.

From these findings, Johnson suggested that the County could experience greater voter turnout through the development of an anti-corruption agency, improved voter registration and maintenance, and enhanced voting accessibility. These initiatives would boost voting engagement and positively impact democratic participation. 15

Related TDL Content

Applying behavioral science in an organization.

At its core, behavioral science is about uncovering the reasons behind why people do what they do. That means that the role of a behavioral scientist can be quite broad, but has many important applications. In this article, Preeti Kotamarthi explains how behavioral science supports different facets of the organization, providing valuable insights for user design, data science, and product marketing. 

Increasing HPV Vaccination in Rural Kenya

While HPV vaccines are an effective method of preventing cervical cancer, there is low intake in low and middle-income countries worldwide. Qualitative research can uncover the social and behavioral barriers to increasing HPV vaccination, revealing that misinformation, skepticism, and fear prevent people from getting the vaccine. In this article, our writer Annika Steele explores how qualitative insights can inform a two-part intervention strategy to increase HPV vaccination rates.

  • Versta Research. (n.d.). Bridging the quantitative-qualitative gap . Versta Research. Retrieved August 17, 2024, from https://verstaresearch.com/newsletters/bridging-the-quantitative-qualitative-gap/
  • Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation (4th ed.). Jossey-Bass.
  • Smith, D. W. (2018). Phenomenology. In E. N. Zalta (Ed.), Stanford Encyclopedia of Philosophy . Retrieved from https://plato.stanford.edu/entries/phenomenology/#HistVariPhen
  • Nickerson, C. (2023, October 16). Symbolic interaction theory . Simply Psychology. https://www.simplypsychology.org/symbolic-interaction-theory.html
  • DePoy, E., & Gitlin, L. N. (2016). Introduction to research (5th ed.). Elsevier.
  • ATLAS.ti. (n.d.). Unstructured interviews . ATLAS.ti. Retrieved August 17, 2024, from https://atlasti.com/research-hub/unstructured-interviews
  • O'Connor, O. (2020, August 14). The history of qualitative research . Medium. https://oliconner.medium.com/the-history-of-qualitative-research-f6e07c58e439
  • Sociology Institute. (n.d.). Max Weber: Interpretive sociology & legacy . Sociology Institute. Retrieved August 18, 2024, from https://sociology.institute/introduction-to-sociology/max-weber-interpretive-sociology-legacy
  • Kuhn, T. S. (2012). The structure of scientific revolutions (4th ed.). University of Chicago Press.
  • Encyclopaedia Britannica. (n.d.). Paul Felix Lazarsfeld . Encyclopaedia Britannica. Retrieved August 17, 2024, from https://www.britannica.com/biography/Paul-Felix-Lazarsfeld
  • Nickerson, C. (2019). Verstehen in Sociology: Empathetic Understanding . Simply Psychology. Retrieved August 18, 2024, from: https://www.simplypsychology.org/verstehen.html
  • Omniconvert. (2021, October 4). Qualitative research: Definition, methodology, limitations, and examples . Omniconvert. https://www.omniconvert.com/blog/qualitative-research-definition-methodology-limitation-examples/
  • Vaughan, T. (2021, August 5). 10 advantages and disadvantages of qualitative research . Poppulo. https://www.poppulo.com/blog/10-advantages-and-disadvantages-of-qualitative-research
  • Felitti, V. J. (2002). The relation between adverse childhood experiences and adult health: Turning gold into lead. The Permanente Journal, 6 (1), 44–47. https://www.thepermanentejournal.org/doi/10.7812/TPP/02.994
  • Johnson, D. M. (2024). Voters' perception of public corruption and low voter turnout: A qualitative case study of Cook County (Doctoral dissertation). Walden University.

About the Author

Emilie Rose Jones

Emilie currently works in Marketing & Communications for a non-profit organization based in Toronto, Ontario. She completed her Masters of English Literature at UBC in 2021, where she focused on Indigenous and Canadian Literature. Emilie has a passion for writing and behavioural psychology and is always looking for opportunities to make knowledge more accessible. 


Your Ultimate Guide to Qualitative Research (with Methods and Examples)

What is qualitative research?

You may already be using qualitative research and want to check your understanding, or you may be starting from the beginning. Learn about qualitative research methods and how you can best use them for maximum effect.

Qualitative research is a research method that collects non-numerical data. Typically, it goes beyond the information that quantitative research provides (which we will cover below) because it is used to gain an understanding of underlying reasons, opinions, and motivations.

Qualitative research methods focus on the thoughts, feelings, reasons, motivations, and values of a participant, to understand why people act in the way they do .

In this way, qualitative research can be described as naturalistic research, looking at naturally-occurring social events within natural settings. So, qualitative researchers would describe their part in social research as the ‘vehicle’ for collecting the qualitative research data.

Qualitative researchers gather this understanding by looking at primary and secondary sources where data is represented in non-numerical form. This can include collecting qualitative research data types like quotes, symbols, images, and written testimonials.

These data types give qualitative researchers subjective information. While these aren’t facts in themselves, conclusions can be drawn from qualitative data that help to provide valuable context.

Because of this, qualitative research is typically viewed as explanatory in nature and is often used in social research, as this gives a window into the behavior and actions of people.

It can be a good research approach for health services research or clinical research projects.


In order to compare qualitative and quantitative research methods, let’s explore what quantitative research is first, before exploring how it differs from qualitative research.

Quantitative research

Quantitative research is the research method of collecting quantitative research data – data that can be converted into numbers or numerical data, which can be easily quantified, compared, and analyzed .

Quantitative research methods deal with primary and secondary sources where data is represented in numerical form. This can include closed-question poll results, statistics, and census information or demographic data.

Quantitative research data tends to be used when researchers are interested in understanding a particular moment in time and examining data sets over time to find trends and patterns.

The difference between quantitative and qualitative research methodology

While qualitative research is defined as data that supplies non-numerical information, quantitative research focuses on numerical data.

In general, if you’re interested in measuring something or testing a hypothesis, use quantitative research methods. If you want to explore ideas, thoughts, and meanings, use qualitative research methods.


Where qualitative and quantitative methods are not used together, researchers will often find that using one without the other leaves them with incomplete answers.

For example, if a retail company wants to understand whether a new product line of shoes will perform well in the target market:

  • Qualitative research methods could be used with a sample of target customers, which would provide subjective reasons why they’d be likely to purchase or not purchase the shoes, while
  • Quantitative research methods into the historical customer sales information on shoe-related products would provide insights into the sales performance, and likely future performance of the new product range.

There are five approaches to qualitative research methods:

  • Grounded theory: Grounded theory relates to where qualitative researchers come to a stronger hypothesis through induction, all throughout the process of collecting qualitative research data and forming connections. After an initial question to get started, qualitative researchers delve into information that is grouped into ideas or codes, which grow and develop into larger categories, as the qualitative research goes on. At the end of the qualitative research, the researcher may have a completely different hypothesis, based on evidence and inquiry, as well as the initial question.
  • Ethnographic research : Ethnographic research is where researchers embed themselves into the environment of the participant or group in order to understand the culture and context of activities and behavior. This is dependent on the involvement of the researcher, and can be subject to researcher interpretation bias and participant observer bias . However, it remains a great way to allow researchers to experience a different ‘world’.
  • Action research: With the action research process, both researchers and participants work together to make a change. This can be through taking action, researching and reflecting on the outcomes. Through collaboration, the collective comes to a result, though the way both groups interact and how they affect each other gives insights into their critical thinking skills.
  • Phenomenological research: Researchers seek to understand the meaning of an event or behavior phenomenon by describing and interpreting participants’ life experiences. This qualitative research process understands that people create their own structured reality (‘the social construction of reality’), based on their past experiences. So, by viewing the way people intentionally live their lives, we’re able to see the experiential meaning behind why they live as they do.
  • Narrative research: Narrative research, or narrative inquiry, is where researchers examine the way stories are told by participants, and how they explain their experiences, as a way of explaining the meaning behind their life choices and events. This qualitative research can arise from using journals, conversational stories, autobiographies or letters, as a few narrative research examples. The narrative is subjective to the participant, so we’re able to understand their views from what they’ve documented/spoken.


Qualitative research methods can use structured research instruments for data collection, like:

Surveys for individual views

A survey is a simple-to-create and easy-to-distribute qualitative research method, which helps gather information from large groups of participants quickly. Traditionally paper-based, surveys can now be run online, so costs can stay quite low.

Qualitative research questions tend to be open questions that ask for more information and provide a text box to allow for unconstrained comments.

Examples include:

  • Asking participants to keep a written or a video diary for a period of time to document their feelings and thoughts
  • In-Home-Usage tests: Buyers use your product for a period of time and report their experience

Surveys for group consensus (Delphi survey)

A Delphi survey may be used as a way to bring together participants and gain a consensus view over several rounds of questions. It differs from traditional surveys where results go to the researcher only. Instead, results go to participants as well, so they can reflect and consider all responses before another round of questions is submitted.

This can be useful to do as it can help researchers see what variance is among the group of participants and see the process of how consensus was reached.

  • Asking participants to act as a fake jury for a trial and revealing parts of the case over several rounds to see how opinions change. At the end, the fake jury must make a unanimous decision about the defendant on trial.
  • Asking participants to comment on the versions of a product being developed, as the changes are made and their feedback is taken onboard. At the end, participants must decide whether the product is ready to launch.

Semi-structured interviews

Interviews are a great way to connect with participants, though they require time from the research team to set up and conduct, especially if they’re done face-to-face.

Researchers may also have issues connecting with participants in different geographical regions. The researcher uses a set of predefined open-ended questions, though more ad-hoc questions can be asked depending on participant answers.

  • Conducting a phone interview with participants to run through their feedback on a product. During the conversation, researchers can go ‘off-script’ and ask more probing questions for clarification or build on the insights.

Focus groups

Participants are brought together into a group, where a particular topic is discussed. It is researcher-led and usually occurs in-person in a mutually accessible location, to allow for easy communication between participants in focus groups.

In focus groups , the researcher uses a set of predefined open-ended questions, though more ad-hoc questions can be asked depending on participant answers.

  • Asking participants to do UX tests, which are interface usability tests to show how easily users can complete certain tasks

Direct observation

This is a form of ethnographic research where researchers will observe participants’ behavior in a naturalistic environment. This can be great for understanding the actions in the culture and context of a participant’s setting.

This qualitative research method is prone to researcher bias as it is the researcher that must interpret the actions and reactions of participants. Their findings can be impacted by their own beliefs, values, and inferences.

  • Embedding yourself in the location of your buyers to understand how a product would perform against the values and norms of that society

One-to-one interviews

One-to-one interviews are one of the most commonly used data collection instruments for qualitative research questions, mainly because of their approach. The interviewer or the researcher collects data directly from the interviewee one-to-one. The interview method may be informal and unstructured – conversational. The open-ended questions are mostly asked spontaneously, with the interviewer letting the interview flow dictate the questions to be asked.

Record keeping

This method uses existing reliable documents and similar sources of information as the data source. This data can be used in new research. It is similar to going to a library. There, one can go over books and other reference material to collect relevant data that can be used in the research.

Process of observation

In this data collection method, the researcher immerses themselves in the setting where their respondents are, keeps a keen eye on the participants, and takes notes. This is known as the process of observation.

Besides taking notes, other documentation methods, such as video and audio recording, photography, and similar methods, can be used.

Longitudinal studies

This data collection method is repeatedly performed on the same data source over an extended period. It is an observational research method that goes on for a few years and sometimes can go on for even decades. Such data collection methods aim to find correlations through empirical studies of subjects with common traits.

Case studies

This method gathers data from an in-depth analysis of case studies. The versatility of this method is demonstrated in how this method can be used to analyze both simple and complex subjects. The strength of this method is how judiciously it uses a combination of one or more qualitative methods to draw inferences.

What is data coding in qualitative research?

Data coding in qualitative research involves a systematic process of organizing and interpreting collected data. This process is crucial for identifying patterns and themes within complex data sets. Here’s how it works:

  • Data Collection : Initially, researchers gather data through various methods such as interviews, focus groups, and observations. The raw data often includes transcriptions of conversations, notes, or multimedia recordings.
  • Initial Coding : Once data is collected, researchers begin the initial coding phase. They break down the data into manageable segments and assign codes—short phrases or words that summarize each piece of information. This step is often referred to as open coding.
  • Categorization : Next, researchers categorize the codes into broader themes or concepts. This helps in organizing the data and identifying major patterns. These themes can be linked to theoretical frameworks or emerging patterns from the data itself.
  • Review and Refinement : The coding process is iterative, meaning researchers continuously review and refine their codes and categories. They may merge similar codes, adjust categories, or add new codes as deeper understanding develops.
  • Thematic Analysis : Finally, researchers perform a thematic analysis to draw meaningful conclusions from the data. They explore how the identified themes relate to the research questions and objectives, providing insights and answering key queries.

Methods and tools for coding

  • Manual Coding : Involves using highlighters, sticky notes, and physical organization methods.
  • Software Tools : Programs like NVivo, ATLAS.ti, and MAXQDA streamline the coding process, allowing researchers to handle large volumes of data efficiently.

Data coding transforms raw qualitative data into structured information, making it essential for deriving actionable insights and achieving research objectives.
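
As a concrete illustration of the segment-to-code-to-theme workflow described above, the short Python sketch below walks through open coding, categorization, and a very simple thematic summary. The interview snippets, codes, and themes are all invented for the example; dedicated tools such as NVivo or ATLAS.ti would handle the same steps at much larger scale.

```python
# Minimal sketch of qualitative data coding: segment -> code -> theme.
# All snippets, codes, and themes below are invented for illustration.
from collections import defaultdict

# 1. Raw segments from hypothetical interview transcripts
segments = [
    "I only buy the brand my parents always bought.",
    "The bright packaging caught my eye at the store.",
    "Honestly, whichever bag is cheapest that week.",
    "I trust the brand because I grew up with it.",
]

# 2. Initial (open) coding: each segment gets one or more short codes
coded = {
    segments[0]: ["habit", "family influence"],
    segments[1]: ["packaging"],
    segments[2]: ["price sensitivity"],
    segments[3]: ["brand trust", "family influence"],
}

# 3. Categorization: codes are grouped into broader themes
themes = {
    "Loyalty and habit": {"habit", "family influence", "brand trust"},
    "Point-of-sale factors": {"packaging", "price sensitivity"},
}

# 4. Thematic summary: collect segments under each theme
by_theme = defaultdict(list)
for segment, codes in coded.items():
    for theme, theme_codes in themes.items():
        if theme_codes & set(codes):
            by_theme[theme].append(segment)

for theme, quotes in by_theme.items():
    print(f"{theme}: {len(quotes)} segment(s)")
    for quote in quotes:
        print(f"  - {quote}")
```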

Qualitative research methods often deliver information in the following qualitative research data types:

  • Quotes
  • Symbols
  • Images
  • Written testimonials

Through contextual analysis of the information, researchers can assign participants to category types:

  • Social class
  • Political alignment
  • Most likely to purchase a product
  • Their preferred training learning style

Why is qualitative data important?

Qualitative data plays a pivotal role in understanding the nuances of human behavior and emotions. Unlike quantitative data, which deals with numbers and hard statistics, qualitative data captures the vivid tapestry of opinions, experiences, and motivations.

Understanding emotions and perceptions

One primary reason qualitative data is crucial is its ability to reveal the emotions and perceptions of individuals. This type of data goes beyond mere numbers to provide insights into how people feel and think. For example, understanding consumer sentiments can help businesses tailor their products and services to meet customer needs more effectively.

Rich context and insights

Qualitative analysis dives deep into textual data, uncovering rich context and subtle patterns that might be missed with quantitative methods alone. This kind of data provides comprehensive insights by examining the intricate details of user feedback, interviews, or focus group discussions. For instance, companies like IBM and Nielsen use qualitative data to gain a deeper understanding of market trends and consumer preferences.

Forming research parameters

Researchers use qualitative data to establish parameters for broader studies. By identifying recurring themes and traits, they can design more targeted and effective surveys and experiments. This initial qualitative phase is essential in ensuring that subsequent quantitative research is grounded in real-world observations.

Solving complex problems

In market research, qualitative data is invaluable for solving complex problems. It enables researchers to decode the language of their consumers, identifying pain points and areas for improvement. Brands like Coca-Cola and P&G frequently rely on qualitative insights to refine their marketing strategies and enhance customer satisfaction.

In sum, qualitative data is essential for its ability to capture the depth and complexity of human experiences. It provides the contextual groundwork needed to make informed decisions, understand consumer behavior, and ultimately drive successful outcomes in various fields.

How do you organize qualitative data?

Organizing qualitative data is crucial to extract meaningful insights efficiently. Here’s a step-by-step guide to help you streamline the process:

1. Align with research objectives

Start by revisiting your research objectives. Clarifying the core questions you aim to answer can guide you in structuring your data. Create a table or spreadsheet where these objectives are clearly laid out.

2. Categorize the data

Sort your data based on themes or categories relevant to your research objectives. Use different coding techniques to label each piece of information. Tools like NVivo or Atlas.ti can help in coding and categorizing qualitative data effectively.

3. Use visual aids

Visualizing data can make patterns more apparent. Consider using charts, graphs, or mind maps to represent your categorized data. Applications like Microsoft Excel or Tableau are excellent for creating visual representations.

4. Develop an index system

Create an index system to keep track of where each piece of information fits within your categories. This can be as simple as a detailed index in a Word document or a more complex system within your data analysis software.

5. Summary tables

Develop summary tables that distill large amounts of information into key points. These tables should reflect the core themes and subthemes you’ve identified, making it easier to draw conclusions.

6. Avoid unnecessary data

Don’t fall into the trap of hoarding unorganized or irrelevant information. Regularly review your data to ensure it aligns with your research goals. Trim any redundant or extraneous data to maintain clarity and focus.

By following these steps, you can turn your raw qualitative data into an organized, insightful resource that directly supports your research objectives.
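
To make steps 2 and 5 more tangible, the sketch below (assuming Python with the pandas library and an invented set of coded excerpts) builds a small summary table showing how often each theme appears and how many participants mention it.

```python
# Minimal sketch: turning coded excerpts into a summary table of themes.
# Participants, themes, and excerpts are invented for illustration.
import pandas as pd

excerpts = pd.DataFrame(
    {
        "participant": ["P1", "P1", "P2", "P3", "P3", "P4"],
        "theme": [
            "Price sensitivity", "Brand trust", "Packaging",
            "Price sensitivity", "Packaging", "Brand trust",
        ],
        "excerpt": [
            "I buy whatever is on sale.",
            "I stick with names I know.",
            "The bag design stood out.",
            "Cost matters most to me.",
            "I liked the colors on the shelf.",
            "I've trusted them for years.",
        ],
    }
)

# Summary table: number of excerpts and distinct participants per theme
summary = excerpts.groupby("theme").agg(
    excerpts=("excerpt", "count"),
    participants=("participant", "nunique"),
)
print(summary)
```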

Advantages of qualitative research

  • Useful for complex situations: Qualitative research is great on its own when dealing with complex issues; in these cases, quantitative research alone may not be enough, though providing background context using quantitative facts can give a richer and wider understanding of a topic.
  • A window into the ‘why’: Qualitative research can give you a window into the deeper meaning behind a participant’s answer. It can help you uncover the larger ‘why’ that can’t always be seen by analyzing numerical data.
  • Can help improve customer experiences: In service industries where customers are crucial, like in private health services, gaining information about a customer’s experience through health research studies can indicate areas where services can be improved.

Disadvantages of qualitative research

  • You need to ask the right question: Doing qualitative research may require you to consider what the right question is to uncover the underlying thinking behind a behavior. This may need probing questions to go further, which may suit a focus group or face-to-face interview setting better.
  • Results are interpreted: As qualitative research data is written, spoken, and often nuanced, interpreting the data results can be difficult as they come in non-numerical formats. This might make it harder to know if you can accept or reject your hypothesis.
  • More bias: There are lower levels of control to qualitative research methods, as they can be subject to biases like confirmation bias, researcher bias, and observation bias. This can have a knock-on effect on the validity and truthfulness of the qualitative research data results.

Qualitative methods help improve your products and marketing in many different ways:

  • Understand the emotional connections to your brand
  • Identify obstacles to purchase
  • Uncover doubts and confusion about your messaging
  • Find missing product features
  • Improve the usability of your website, app, or chatbot experience
  • Learn about how consumers talk about your product
  • See how buyers compare your brand to others in the competitive set
  • Learn how an organization’s employees evaluate and select vendors

Businesses can benefit from qualitative research by using it to understand the meaning behind data types. There are several steps to this:

  • Define your problem or interest area: What do you observe is happening and is it frequent? Identify the data type/s you’re observing.
  • Create a hypothesis: Ask yourself what could be the causes for the situation with those qualitative research data types.
  • Plan your qualitative research: Use structured qualitative research instruments like surveys, focus groups, or interviews to ask questions that test your hypothesis.
  • Data Collection: Collect qualitative research data and understand what your data types are telling you. Once data is collected on different types over long time periods, you can analyze it and give insights into changing attitudes and language patterns.
  • Data analysis: Does your information support your hypothesis? (You may need to redo the qualitative research with other variables to see if the results improve)
  • Effectively present the qualitative research data: Communicate the results in a clear and concise way to help other people understand the findings.

Transcribing and organizing your qualitative data is crucial for robust analysis. Follow these steps to ensure your data is systematically arranged and ready for interpretation.

1. Transcribe your data

Converting your gathered information into a textual format is the first step. This involves:

  • Listening to audio recordings: Jot down every nuance and detail.
  • Reading through notes: Ensure all handwritten or typed notes are coherent and complete.

2. Choose a suitable format

Once transcribed, your data needs to be formatted for ease of analysis. You have several options:

  • Spreadsheets: Tools like Microsoft Excel or Google Sheets allow for easy sorting and categorization.
  • Specialized software: Consider using computer-assisted qualitative data analysis software (CAQDAS) such as NVivo, ATLAS.ti, or MAXQDA to handle large volumes of data efficiently.

3. Organize by themes

Begin to identify patterns or themes in your data. This method, often called coding, involves:

  • Highlighting Key Points: Use different colors or symbols to mark recurring ideas.
  • Creating Categories: Group similar themes together to form a coherent structure.

4. Label and store

Finally, label and store your data meticulously to ensure easy retrieval and reference. Label:

  • Files and Documents: With clear titles and dates.
  • Sections within Documents: With headings and subheadings to distinguish different themes and patterns.

By following these systematic steps, you can convert raw qualitative data into a structured format ready for comprehensive analysis.
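
As one lightweight way to carry out the "label and store" step, the sketch below writes each transcript to a clearly labeled text file and keeps a small JSON index for retrieval, using only the Python standard library. The folder name, labels, and transcript text are invented for illustration.

```python
# Minimal sketch: labeling and storing transcripts with clear file names
# plus a small JSON index for retrieval. All names and contents are invented.
import json
from pathlib import Path

transcripts = {
    "focus_group_A_2024-08-30": "Moderator: What made you pick this brand? ...",
    "interview_P07_2024-08-30": "Interviewer: Tell me about your last purchase. ...",
}

out_dir = Path("qualitative_data")
out_dir.mkdir(parents=True, exist_ok=True)

index = []
for label, text in transcripts.items():
    path = out_dir / f"{label}.txt"
    path.write_text(text, encoding="utf-8")  # one clearly named file per transcript
    index.append({"label": label, "file": str(path)})

# The index makes it easy to find a transcript later by its label
(out_dir / "index.json").write_text(json.dumps(index, indent=2), encoding="utf-8")
print(f"Stored {len(index)} transcript(s) in {out_dir}/")
```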

Evaluating qualitative research can be tough when there are several analytics platforms to manage and lots of subjective data sources to compare.

Qualtrics provides a number of qualitative research analysis tools. Text iQ, powered by Qualtrics iQ, uses powerful machine learning and natural language processing to help you discover patterns and trends in text.

This also provides you with:

  • Sentiment analysis — a technique to help identify the underlying sentiment (say positive, neutral, and/or negative) in qualitative research text responses
  • Topic detection/categorization — this technique is the grouping or bucketing of similar themes that are relevant to the business and the industry (e.g., ‘Food quality,’ ‘Staff efficiency,’ or ‘Product availability’)
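
Text iQ does this with machine learning; purely to illustrate what sentiment analysis and topic bucketing mean, the toy Python sketch below uses simple keyword matching. The keyword lists, topics, and responses are invented, and this is not how Qualtrics' own tooling works internally.

```python
# Toy illustration of sentiment scoring and topic bucketing via keyword matching.
# This is NOT how Text iQ works; words, topics, and responses are invented.
POSITIVE = {"great", "friendly", "fresh", "quick"}
NEGATIVE = {"slow", "stale", "rude", "expensive"}
TOPICS = {
    "Food quality": {"fresh", "stale", "taste"},
    "Staff efficiency": {"slow", "quick", "friendly", "rude"},
}

responses = [
    "The staff were friendly but the service was slow.",
    "Everything tasted fresh and the checkout was quick.",
    "The bread was stale and expensive.",
]

for text in responses:
    words = set(text.lower().replace(".", "").replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    topics = [name for name, keywords in TOPICS.items() if words & keywords]
    print(f"{sentiment:>8} | topics: {topics or ['(none)']} | {text}")
```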

Validating your qualitative data

Validating data is one of the crucial steps of qualitative data analysis for successful research. Since data is quintessential for research, ensuring that the data is not flawed is imperative. Please note that data validation is not just one step in this analysis; it is a recurring step that needs to be followed throughout the research process.

There are two sides to validating data:

  • Ensuring that the methods used are designed to produce accurate data.
  • The extent to which the methods consistently produce accurate data over time.

Incorporating these validation steps ensures that the qualitative data you gather through tools like Text iQ is both reliable and accurate, providing a solid foundation for your research conclusions.
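
A common way to check the second point, that your methods produce consistent results over time, is to have two researchers code the same segments and measure how often they agree. The sketch below computes simple percent agreement in plain Python with invented code assignments; more formal statistics such as Cohen's kappa are also widely used.

```python
# Minimal sketch: percent agreement between two coders on the same segments.
# The code assignments are invented for illustration.
coder_a = ["price", "trust", "packaging", "price", "trust", "habit"]
coder_b = ["price", "trust", "packaging", "habit", "trust", "habit"]

matches = sum(a == b for a, b in zip(coder_a, coder_b))
agreement = matches / len(coder_a)
print(f"Agreement on {len(coder_a)} segments: {agreement:.0%}")
```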

What are the approaches to qualitative data analysis?

Qualitative data analysis can be tackled using two main approaches: the deductive approach and the inductive approach. Each method offers unique benefits and caters to different research needs.

Deductive approach

The deductive approach involves analyzing qualitative data within a pre-established framework. Typically, researchers use predefined questions to guide their analysis, making it a structured and straightforward process. This method is particularly useful when researchers have a clear hypothesis or a reasonable expectation of the data they will gather.

Advantages :

  • Quick and efficient
  • Suitable for studies with known variables

Disadvantages :

  • Limited flexibility
  • May not uncover unexpected insights

Inductive approach

Contrastingly, the inductive approach is characterized by its flexibility and open-ended nature. Rather than starting with a set structure, researchers use this approach to let patterns and themes emerge naturally from the data. This method is time-consuming but thorough, making it ideal for exploratory research where little is known about the phenomenon under study.

Advantages :

  • High flexibility
  • Uncovers insights that may not be immediately obvious

Disadvantages :

  • Time-intensive
  • Requires rigorous interpretation skills

Both approaches have their merits and can be chosen based on the objectives of your research. By understanding the key differences between the deductive and inductive methods, you can select the approach that best suits your analytical needs.

What is the inductive approach to qualitative data analysis?

The inductive approach to qualitative data analysis is a flexible and explorative method. Unlike approaches that follow a fixed framework, the inductive approach builds theories and patterns from the data itself. Here’s a closer look:

  • No fixed framework: This method does not rely on predetermined structures or strict guidelines. Instead, it allows patterns and themes to naturally emerge from the data.
  • Exploratory nature: Often used when little is known about the research phenomenon, this approach helps researchers unearth new insights without preconceptions.
  • Time-consuming but thorough: Due to its comprehensive nature, the inductive approach can be more time-intensive. Researchers meticulously examine data to uncover meaningful connections and build a deep understanding of the subject matter.
  • Flexible and adaptive: This approach is particularly useful in dynamic research environments where the subject matter is complex or not well understood.

In essence, the inductive approach is about letting the data lead the research, allowing for the discovery of unexpected insights and a more nuanced understanding of the studied phenomena.

What is the deductive approach to qualitative data analysis?

The deductive approach to qualitative data analysis is a method where researchers begin with a predefined structure or framework to guide their examination of data. Essentially, this means they start with specific questions or hypotheses in mind, which helps in directing the analysis process.

Key elements of the deductive approach:

  • Researchers have a clear idea of what they are looking for based on prior knowledge or theories.
  • This structured framework acts as a guide throughout the analysis.
  • Specific questions are developed beforehand.
  • These questions help in filtering and categorizing the data effectively.
  • The deductive method is typically faster and more straightforward.
  • It is particularly useful when researchers anticipate certain types of responses or patterns from their sample population.

In summary, the deductive approach involves using existing theories and structured queries to systematically analyze qualitative data, making the process efficient and focused.
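
To show the contrast in practice, here is a minimal sketch of deductive coding: the codebook is fixed before the data are examined, and each segment is simply checked against it (in an inductive analysis, the codes would instead emerge from the data). The codebook keywords and segments are invented for illustration.

```python
# Minimal sketch of deductive coding: a codebook is fixed *before* analysis,
# and each segment is checked against it. Codebook and segments are invented.
CODEBOOK = {
    "barrier_cost": ["expensive", "afford", "price"],
    "barrier_access": ["far", "travel", "appointment"],
    "facilitator_trust": ["trust", "recommended", "doctor said"],
}

segments = [
    "It is just too expensive for our family.",
    "The clinic is far and getting an appointment takes weeks.",
    "I went because my doctor said it was important.",
]

for segment in segments:
    lowered = segment.lower()
    codes = [code for code, keywords in CODEBOOK.items()
             if any(keyword in lowered for keyword in keywords)]
    print(f"{codes or ['(uncoded)']}: {segment}")
```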

How to conclude the qualitative data analysis process

Concluding your qualitative data analysis involves presenting your findings in a structured report that stakeholders can readily understand and utilize.

Start by describing your methodology . Detail the specific methods you employed during your research, including how you gathered and analyzed data. This helps readers appreciate the rigor of your process.

Next, highlight both the strengths and limitations of your study. Discuss what worked well and areas that posed challenges, providing a balanced view that showcases the robustness of your research while acknowledging potential shortcomings.

Following this, present your key findings and insights . Summarize the main conclusions drawn from your data, ensuring clarity and conciseness. Use bullet points or numbered lists to enhance readability where appropriate.

Moreover, offer suggestions or inferences based on your findings. Identify actionable recommendations or indicate future research areas that emerged from your study.

Finally, emphasize the importance of the synergy between analytics and reporting . Analytics uncover valuable insights, but it’s the reporting that effectively communicates these insights to stakeholders, enabling informed decision-making.

Even in today’s data-obsessed marketplace, qualitative data is valuable – maybe even more so because it helps you establish an authentic human connection to your customers. If qualitative research doesn’t play a role in informing your product and marketing strategy, your decisions aren’t as effective as they could be.

The Qualtrics XM system gives you an all-in-one, integrated solution to help you all the way through conducting qualitative research. From survey creation and data collection to textual analysis and data reporting, it can help all your internal teams gain insights from your subjective and categorical data.

Qualitative methods are supported through templates and advanced survey designs. While you can manually collect data and conduct data analysis in a spreadsheet program, this solution helps you automate the process of qualitative research, saving you time and administrative work.

Using computational techniques helps you avoid human error, and participant results are incorporated into the analysis in real time as they come in.

Our key tools, Text IQ™ and Driver IQ™ make analyzing subjective and categorical data easy and simple. Choose to highlight key findings based on topic, sentiment, or frequency. The choice is yours.

For example, the workspace lets you use drag and drop to create data visualizations quickly.



The Importance of Qualitative Data Analysis in Research: A Comprehensive Guide

August 29th, 2024

Qualitative data analysis, in essence, is the systematic examination of non-numerical information to uncover patterns, themes, and insights.

This process is crucial in various fields, from product development to business process improvement.

Key Highlights

  • Defining qualitative data analysis and its importance
  • Comparing qualitative and quantitative research methods
  • Exploring key approaches: thematic, grounded theory, content analysis
  • Understanding the qualitative data analysis process
  • Reviewing CAQDAS tools for efficient analysis
  • Ensuring rigor through triangulation and member checking
  • Addressing challenges and ethical considerations
  • Examining future trends in qualitative research

Introduction to Qualitative Data Analysis

Qualitative data analysis is a sophisticated process of examining non-numerical information to extract meaningful insights.

It’s not just about reading through text; it’s about diving deep into the nuances of human experiences, opinions, and behaviors.

This analytical approach is crucial in various fields, from product development to process improvement , and even in understanding complex social phenomena.

Image: Qualitative Data Analysis

Importance of Qualitative Research Methods

The importance of qualitative research methods cannot be overstated. In my work with companies like 3M , Dell , and Intel , I’ve seen how qualitative analysis can uncover insights that numbers alone simply can’t reveal.

These methods allow us to understand the ‘why’ behind the ‘what’, providing context and depth to our understanding of complex issues.

Whether it’s improving a manufacturing process or developing a new product, qualitative research methods offer a rich, nuanced perspective that’s invaluable for informed decision-making.

Comparing Qualitative vs Quantitative Analysis

While both qualitative and quantitative analyses are essential tools in a researcher’s arsenal, they serve different purposes.

Quantitative analysis, which I’ve extensively used in Six Sigma projects, deals with numerical data and statistical methods.

It’s excellent for measuring, ranking, and categorizing phenomena. On the other hand, qualitative analysis focuses on the rich, contextual data that can’t be easily quantified.

It’s about understanding meanings, experiences, and perspectives.

Image: Qualitative and Quantitative Analysis

Key Approaches in Qualitative Data Analysis

Explore essential techniques like thematic analysis, grounded theory, content analysis, and discourse analysis.

Understand how each approach offers unique insights into qualitative data interpretation and theory building.

Thematic Analysis Techniques

Thematic analysis is a cornerstone of qualitative data analysis. It involves identifying patterns or themes within qualitative data.

In my workshops on Statistical Thinking and Business Process Charting , I often emphasize the power of thematic analysis in uncovering underlying patterns in complex datasets.

This approach is particularly useful when dealing with interview transcripts or open-ended survey responses.

The key is to immerse yourself in the data, coding it systematically, and then stepping back to see the broader themes emerge.

Grounded Theory Methodology

Grounded theory is another powerful approach in qualitative data analysis. Unlike methods that start with a hypothesis, grounded theory allows theories to emerge from the data itself.

I’ve found this particularly useful in projects where we’re exploring new territory without preconceived notions.

It’s a systematic yet flexible approach that can lead to fresh insights and innovative solutions.

The iterative nature of grounded theory, with its constant comparison of data, aligns well with the continuous improvement philosophy of Six Sigma .

Content Analysis Strategies

Content analysis is a versatile method that can be both qualitative and quantitative.

In my experience working with diverse industries, content analysis has been invaluable in making sense of large volumes of textual data.

Whether it’s analyzing customer feedback or reviewing technical documentation, content analysis provides a structured way to categorize and quantify qualitative information.

The key is to develop a robust coding framework that captures the essence of your research questions.
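For illustration only, the Python sketch below shows one very simple way a coding framework can be used to quantify qualitative text: counting how often each category's terms occur in a handful of hypothetical feedback comments. Real content analysis would use a validated codebook and human coders rather than raw term matching.

    from collections import Counter

    # Hypothetical content-analysis categories and feedback comments.
    categories = {
        "delivery": ["late", "delay", "shipping"],
        "quality": ["broken", "defect", "sturdy"],
        "service": ["rude", "helpful", "refund"],
    }

    feedback = [
        "Shipping was late and the box arrived broken.",
        "Staff were helpful and the refund was quick.",
        "Sturdy product, no defects, but a slight delay.",
    ]

    # Count how often each category's terms appear across the comments.
    counts = Counter()
    for comment in feedback:
        text = comment.lower()
        for category, terms in categories.items():
            counts[category] += sum(text.count(term) for term in terms)

    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category}: {n} mention(s), {n / total:.0%} of coded mentions")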

Discourse Analysis Approaches

Discourse analysis takes a deeper look at language use and communication practices.

It’s not just about what is said, but how it’s said and in what context. In my work on improving communication processes within organizations , discourse analysis has been a powerful tool.

It helps uncover underlying assumptions, power dynamics, and cultural nuances that might otherwise go unnoticed.

This approach is particularly useful when dealing with complex organizational issues or when trying to understand stakeholder perspectives in depth.

Image: Integrations of Different Qualitative Data Analysis Approaches

The Qualitative Data Analysis Process

Navigate through data collection, coding techniques, theme development, and interpretation. Learn how to transform raw qualitative data into meaningful insights through systematic analysis.

Data collection methods (interviews, focus groups, observation)

The foundation of any good qualitative analysis lies in robust data collection. In my experience, a mix of methods often yields the best results.

In-depth interviews provide individual perspectives, focus groups offer insights into group dynamics, and observation allows us to see behaviors in their natural context.

When working on process improvement projects , I often combine these methods to get a comprehensive view of the situation.

The key is to align your data collection methods with your research questions and the nature of the information you’re seeking.

Qualitative Data Coding Techniques

Coding is the heart of qualitative data analysis. It’s the process of labeling and organizing your qualitative data to identify different themes and the relationships between them.

In my workshops, I emphasize the importance of developing a clear, consistent coding system.

This might involve open coding to identify initial concepts, axial coding to make connections between categories, and selective coding to integrate and refine the theory.

The goal is to transform raw data into meaningful, analyzable units.
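As a rough, simplified illustration of how these coding stages build on one another, the Python sketch below uses hypothetical codes from an imagined set of customer-service interviews; the data structures are illustrative only, not a prescribed workflow.

    # Hypothetical illustration of open, axial, and selective coding stages.

    # Open coding: initial labels attached to interview segments.
    open_codes = [
        "long wait on phone", "agent was friendly", "confusing invoice",
        "unexpected fee", "quick email reply", "had to repeat my issue",
    ]

    # Axial coding: connecting open codes into broader categories.
    axial_categories = {
        "responsiveness": ["long wait on phone", "quick email reply"],
        "billing clarity": ["confusing invoice", "unexpected fee"],
        "interaction quality": ["agent was friendly", "had to repeat my issue"],
    }

    # Selective coding: integrating the categories around one core category.
    core_category = {"customer effort": list(axial_categories.keys())}

    # Every axial code should trace back to an open code.
    assert all(c in open_codes for codes in axial_categories.values() for c in codes)

    for category, codes in axial_categories.items():
        print(f"{category}: {codes}")
    print("Core category:", core_category)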

Developing Themes and Patterns

Once your data is coded, the next step is to look for overarching themes and patterns. This is where the analytical magic happens.

It’s about stepping back from the details and seeing the bigger picture. In my work with companies like Motorola and HP, I’ve found that visual tools like mind maps or thematic networks can be incredibly helpful in this process.

They allow you to see connections and hierarchies within your data that might not be immediately apparent in text form.

Data Interpretation and Theory Building

The final step in the qualitative data analysis process is interpretation and theory building.

This is where you bring together your themes and patterns to construct a coherent narrative or theory that answers your research questions.

It’s crucial to remain grounded in your data while also being open to new insights. In my experience, the best interpretations often challenge our initial assumptions and lead to innovative solutions.

Tools and Software for Qualitative Analysis

Discover the power of CAQDAS in streamlining qualitative data analysis workflows. Explore popular tools like NVivo, ATLAS.ti, and MAXQDA for efficient data management and analysis .

Overview of CAQDAS (Computer Assisted Qualitative Data Analysis Software)

Computer Assisted Qualitative Data Analysis Software (CAQDAS) has revolutionized the way we approach qualitative analysis.

These tools streamline the coding process, help manage large datasets, and offer sophisticated visualization options.

As someone who’s seen the evolution of these tools over the past two decades, I can attest to their transformative power.

They allow researchers to handle much larger datasets and perform more complex analyses than ever before.

Popular Tools: NVivo, ATLAS.ti, MAXQDA

Among the most popular CAQDAS tools are NVivo, ATLAS.ti, and MAXQDA.

Each has its strengths, and the choice often depends on your specific needs and preferences. NVivo , for instance, offers robust coding capabilities and is excellent for managing multimedia data.

ATLAS.ti is known for its intuitive interface and powerful network view feature. MAXQDA stands out for its mixed methods capabilities, blending qualitative and quantitative approaches seamlessly.

Ensuring Rigor in Qualitative Data Analysis

Implement strategies like data triangulation, member checking, and audit trails to enhance credibility. Understand the importance of reflexivity in maintaining objectivity throughout the research process.

Data triangulation methods

Ensuring rigor in qualitative analysis is crucial for producing trustworthy results.

Data triangulation is a powerful method for enhancing the credibility of your findings. It involves using multiple data sources, methods, or investigators to corroborate your results.

In my Six Sigma projects, I often employ methodological triangulation, combining interviews, observations, and document analysis to get a comprehensive view of a process or problem.
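One simple way to make this concrete is a theme-by-source matrix. The Python sketch below is a hypothetical example (the sources and themes are invented) that flags a theme as corroborated only when it appears in more than one data source.

    # Hypothetical triangulation matrix: themes observed in each data source.
    themes_by_source = {
        "interviews":   {"bottleneck at handover", "unclear ownership"},
        "observations": {"bottleneck at handover", "informal workarounds"},
        "documents":    {"unclear ownership", "bottleneck at handover"},
    }

    all_themes = set().union(*themes_by_source.values())
    for theme in sorted(all_themes):
        sources = [s for s, themes in themes_by_source.items() if theme in themes]
        status = "corroborated" if len(sources) > 1 else "single-source"
        print(f"{theme}: {status} ({', '.join(sources)})")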

Member Checking for Validity

Member checking is another important technique for ensuring the validity of your qualitative analysis.

This involves taking your findings back to your participants to confirm that they accurately represent their experiences and perspectives.

In my work with various organizations, I’ve found that this not only enhances the credibility of the research but also often leads to new insights as participants reflect on the findings.

Creating an Audit Trail

An audit trail is essential for demonstrating the rigor of your qualitative analysis.

It’s a detailed record of your research process, including your raw data, analysis notes, and the evolution of your coding scheme.

Practicing Reflexivity

Reflexivity is about acknowledging and critically examining your own role in the research process. As researchers, we bring our own biases and assumptions to our work.

Practicing reflexivity involves constantly questioning these assumptions and considering how they might be influencing our analysis.

Challenges and Best Practices in Qualitative Data Analysis

Address common hurdles such as data saturation , researcher bias, and ethical considerations. Learn best practices for conducting rigorous and ethical qualitative research in various contexts.

Dealing with data saturation

One of the challenges in qualitative research is knowing when you’ve reached data saturation – the point at which new data no longer brings new insights.

In my experience, this requires a balance of systematic analysis and intuition. It’s important to continuously review and compare your data as you collect it.

In projects I’ve led, we often use data matrices or summary tables to track emerging themes and identify when we’re no longer seeing new patterns emerge.
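A lightweight way to support that judgement is to track how many previously unseen codes each additional interview contributes. The sketch below (Python, with invented code sets) is an illustration, not a formal stopping rule.

    # Hypothetical saturation check: new codes contributed by each interview.
    interviews = [
        {"cost", "delays", "training"},      # interview 1
        {"delays", "staffing", "training"},  # interview 2
        {"cost", "staffing"},                # interview 3
        {"delays", "cost"},                  # interview 4
    ]

    seen = set()
    for i, codes in enumerate(interviews, start=1):
        new = codes - seen
        seen |= codes
        print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")
    # Several consecutive interviews adding no new codes suggests saturation is near.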

Overcoming Researcher Bias

Researcher bias is an ever-present challenge in qualitative analysis. Our own experiences and preconceptions can inadvertently influence how we interpret data.

To overcome this, I advocate for a combination of strategies. Regular peer debriefing sessions , where you discuss your analysis with colleagues, can help uncover blind spots.

Additionally, actively seeking out negative cases or contradictory evidence can help challenge your assumptions and lead to more robust findings.

Ethical Considerations in Qualitative Research

Ethical considerations are paramount in qualitative research, given the often personal and sensitive nature of the data.

Protecting participant confidentiality, ensuring informed consent, and being transparent about the research process are all crucial.

In my work across various industries and cultures, I’ve learned the importance of being sensitive to cultural differences and power dynamics.

It’s also vital to consider the potential impact of your research on participants and communities.

Ethical qualitative research is not just about following guidelines, but about constantly reflecting on the implications of your work.

The Future of Qualitative Data Analysis

As we look to the future of qualitative data analysis, several exciting trends are emerging.

The increasing use of artificial intelligence and machine learning in qualitative analysis tools promises to revolutionize how we handle large datasets.

We’re also seeing a growing interest in visual and sensory methods of data collection and analysis, expanding our understanding of qualitative data beyond text.

In conclusion, mastering qualitative data analysis is an ongoing journey. It requires a combination of rigorous methods, creative thinking, and ethical awareness.

As we move forward, the field will undoubtedly continue to evolve, but its fundamental importance in research and decision-making will remain constant.

For those willing to dive deep into the complexities of qualitative data, the rewards in terms of insights and understanding are immense.



What Is Qualitative Research? | Methods & Examples

Published on 4 April 2022 by Pritha Bhandari . Revised on 30 January 2023.

Qualitative research involves collecting and analysing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analysing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, and history.

Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organisation?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Frequently asked questions about qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasise different aims and perspectives.

Qualitative research approaches:

  • Grounded theory: Researchers collect rich data on a topic of interest and develop theories.
  • Ethnography: Researchers immerse themselves in groups or organisations to understand their cultures.
  • Action research: Researchers and participants collaboratively link theory to practice to drive social change.
  • Phenomenological research: Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
  • Narrative research: Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.


Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.

For example, to research the culture of a company, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves ‘instruments’ in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analysing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organise your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorise your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes (a minimal sketch of these last two steps follows below).
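As a minimal sketch of the last two steps (the participant IDs, codes, and themes below are hypothetical), the Python snippet tallies the codes assigned to each response and rolls them up into overarching themes:

    from collections import Counter

    # Hypothetical coded survey responses: each entry is one participant's
    # answer with the codes a researcher assigned to it (step 4).
    coded_responses = [
        {"participant": "P01", "codes": ["work-life balance", "remote work"]},
        {"participant": "P02", "codes": ["salary", "career growth"]},
        {"participant": "P03", "codes": ["remote work", "career growth"]},
        {"participant": "P04", "codes": ["work-life balance", "salary"]},
    ]

    # Step 5: link related codes into overarching themes and count mentions.
    themes = {
        "flexibility": ["work-life balance", "remote work"],
        "progression": ["career growth", "salary"],
    }

    code_counts = Counter(code for row in coded_responses for code in row["codes"])
    for theme, codes in themes.items():
        print(theme, "-", sum(code_counts[c] for c in codes), "mentions")
    # flexibility - 4 mentions
    # progression - 4 mentions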

There are several specific approaches to analysing qualitative data. Although these methods share similar processes, they emphasise different concepts.

Qualitative data analysis approaches:

  • Content analysis: used to describe and categorise common words, phrases, and ideas in qualitative data. A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
  • Thematic analysis: used to identify and interpret patterns and themes in qualitative data. A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
  • Textual analysis: used to examine the content, structure, and design of texts. A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
  • Discourse analysis: used to study communication and how language is used to achieve effects in specific contexts. A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Researchers must consider practical and theoretical limitations in analysing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analysing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalisability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalisable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labour-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organisation to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organisations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organise your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .



What is Qualitative in Qualitative Research

Patrik Aspers

1 Department of Sociology, Uppsala University, Uppsala, Sweden

2 Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

3 Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway

What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.

If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).

Another central aspect derived through the common-to-all-research interplay between induction and deduction (Becker 2017 ), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and which eventually impact his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963 :235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors, and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017 ).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g. Hammersley 2018 ; Fine and Hancock 2017 ) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research” and outside of academia, including, for example, New Public Management. Hammersley ( 2018 ), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine ( 2003 ), and before him Gans ( 1999 ), he writes that ethnography has acquired a range of meanings, and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” ( 2018 :1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017 ); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s ( 1922 ) Argonauts of the Western Pacific , although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982 ).

If we look through major sociology journals like the American Sociological Review , American Journal of Sociology , or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study ( 1963 ), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995 ). Our point is that before a disciplinary self-awareness the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005 ). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018 ), which was underpinned by a scientifically oriented and value free philosophy of science. As a result the attempts and practice of integrating qualitative and quantitative sociology at Chicago lost ground to sociology that was more oriented to surveys and quantitative work at Columbia under Merton-Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951 ) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: A definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966 ). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is :

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches ( 2007 :2–4; 2002 :6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995 ), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either eluded or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982 :239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012 ) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach and looking at the attempts to codify these practices in the form of a definition, as well as to a minor extent take Becker’s study as an exemplar of what qualitative researchers actually do, and what the characteristic of being ‘qualitative’ denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009 ) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009 ). Additionally, quantitative researchers sometimes unfairly criticize the first based on their own quality criteria. Scholars like Goertz and Mahoney ( 2012 ) have successfully focused on the different norms and practices beyond what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker ( 2017 ) who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982 :239). A third consequence is providing an opening for critiques by scholars operating within different traditions (Valsiner 2000 :101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012 :9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative” while being implicitly acknowledged, was discussed only briefly (for example, Best 2004 ).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014 ; Biernacki 2014 ; Glaeser 2014 ; Lamont and Swidler 2014 ; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy means that researchers, “qualitative” or “quantitative” they may be, in their actual practice may combine qualitative work and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoom in on this time frame because we would expect that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation .

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology , in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to get a manageable sample for the analysis.

The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993 :22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. If the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature which indicate what is the core of qualitative research. We have labeled them: “distinctions”, “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how they differently discuss qualitative research, as well as their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011 :19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006 :219). Qualitative methodologies have raised concerns over the limitations of quantitative templates (Brady et al. 2004 :4). Scholars such as King et al. ( 1994 ), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker ( 1966 :59; 1970 :42–43) have asserted that, if conducted properly, qualitative research and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular, survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz ( 2008 :79) claim that the boundaries between the two approaches are becoming blurred, and Small ( 2009 ) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994 :4). Ragin ( 1994 ), and similarly also Becker, ( 1996 :53), Marchel and Owens ( 2007 :303) think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994 :xii). Some claim that quantitative data can be utilized to discover associations, but in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009 :35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009 :35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014 ; Lichterman and Isaac Reed 2014 ; Katz 2015 ).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995 ; Denzin and Lincoln 2005 ), or whether it should develop falsifiable statements (Best 2004 ).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008 ); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016 ). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study ( 1963 ), we see that he questioned pre-dispositional theories of deviant behavior working with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marijuana consumption by developing “variables” as part of the investigation. Thereby he presented new variables, or as we would say today, theoretical concepts, but which are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003; 2005), citing Nelson et al. (1992:4), argue that because qualitative research is "interdisciplinary, transdisciplinary, and sometimes counterdisciplinary" it is difficult to derive one single definition of it (Jovanović 2011:3). According to them, in fact, "the field" is "many things at the same time," involving contradictions and tensions over its focus, its methods, and how to derive interpretations and findings (2003:11). Similarly, others, such as Flick (2007:ix–x), contend that agreeing on an accepted definition has become increasingly problematic, and that qualitative research may have developed different identities. However, Best holds that "the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions" (2004:54). Atkinson's position is more definite: "the current state of qualitative research and research methods is confused" (2005:3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005:2; Creswell 2009), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick (2007), referring to Denzin and Lincoln (2005), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin (1994:85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility that can lead to better, but also to worse, results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn't this comment also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partly because they tend to collect a large amount of data. While his analysis is correct on some points – "It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data" (Strauss 2003:10) – much of his analysis concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. But even in this instance it would be a weak case to argue that these are strictly the defining features of qualitative research. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material that is being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describe the defining characteristics of this approach.

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork – i.e., research in which the researcher spends considerable time in the field being studied and uses the knowledge gained as data – is seen as emblematic of, or even identical to, qualitative research. Because fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within this literature discussions on the meaning of "qualitative." But, again, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988 :1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018 ). This may be accomplished by direct observation and also participation (Neuman 2007 :276), although others, such as Denzin ( 1970 :185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views, arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), and how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988 :2). We also retrieved descriptions of the goals it attends in relation to how it is taught (understanding subjective meanings of the people studied, primarily develop theory, or contribute to social change) (see for example, Corte and Irwin 2017 ; Frankfort-Nachmias and Nachmias 1996 :281; Trier-Bieniek 2012 :639) by collecting the richest possible data (Lofland et al. 2006 ) to derive “thick descriptions” (Geertz 1973 ), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988 ; Fine 2003 ). We have identified guidelines on how to evaluate it (for example Becker 1996 ; Lamont 2004 ) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006 ). For instance, analysis should take place while the data gathering unfolds (Emerson 1988 ; Hammersley and Atkinson 2007 ; Lofland et al. 2006 ), observations should be of long duration (Becker 1970 :54; Goffman 1989 ), and data should be of high quantity (Becker 1970 :52–53), as well as other questionable distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself . This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995 :5)

But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, such as sequential interviewing (for example, Small 2009 )? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience from fields in which marihuana was used, based on which he did interviews with actual users in several fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, "getting close" and making distinctions. These notions capture aspects of what we think is "qualitative." However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements emerges. In other words, no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales, such as ordinal, but frequently ratio and absolute scales; the assignment of numbers to the variables, i.e., the justification for attaching numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. It clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-analytic techniques such as factor analysis may result in new variables. "Quality" refers essentially to distinctions, as already pointed out by Aristotle. He discusses the term "qualitative," commenting: "By a quality I mean that in virtue of which things are said to be qualified somehow" (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge – a point, as we will see, that also has implications for "quantitative research." The notion of being "significant" is paramount. New distinctions by themselves are not enough; just adding concepts only increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidsson [1988] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: "Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data" (1994:79). When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke. This is a classic example of how the strategy of "getting close" to the material – for example the text, people or pictures that are subject to analysis – may enable researchers to obtain deeper insight and new knowledge by making distinctions, in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that "qualitative coding is a way of opening up avenues of inquiry," meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995:151). Goodwin and Horowitz highlight the making of distinctions in relation to theory-building, writing: "Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data" (2002:37). In ideal-typical quantitative research only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many "quantitative" researchers also make new distinctions?

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, between analysis and generating material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot-study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straightjackets (Becker 1970 :43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine tune it, and test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis ( 1963 :44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means to move between “theory” and “material” but also to deal with negative cases, and Becker ( 1998 ) describes how discovering these negative cases impacted his research design and ultimately its findings.

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data or iteration between data, evidence, hypotheses, empirical work, and theory. The data, quantified variables, are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not mean new data. New hypotheses are frequently tested, but the "raw data" are often the same. Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman (1989) put it, ought to, get closer to the phenomenon being studied and to their data than quantitative researchers (for example, Silverman 2009:85). Put differently, essentially because of their methods, qualitative researchers get into direct, close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started out his interview study, as we noted, by talking to those he knew in the field of music to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.

Additionally, ethnographers' designs enable researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as an additional source, and, for ethnography, to situations not necessarily previously thought of as instrumental – what Mannay and Morgan (2015) term the "waiting field"), serendipity (Merton and Barber 2004; Åkerström 2013), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker's finding of what medical students call "crocks" (Becker et al. 1961:317), and Geertz's (1973) study of "deep play" in Balinese society.

By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly, as well as incorrectly (Fine 2003; Becker 1970). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers involved with ethnographic methods, in particular, have a fairly unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and of hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can quite literally make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.

Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).

Improved Understanding

While distinction, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept of what we call the outcome of the process, when research has made use of all the other elements that were integrated in the research. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation and occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to the knowing of how to do things (Heidegger [1927] 2001). Conceptually, the term hermeneutics is used to account for this process. Heidegger (1988) ties hermeneutics to human being and holds that it cannot be separated from the understanding of being. Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects of the approach – for example, "objectivity" and "reflexivity" – are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. Pre-understanding, even in the form of prejudices, is questioned in the qualitative research process, which we see as iterative, and changes gradually or suddenly through the iteration of data, evidence and concepts. Qualitative research thus generates understanding in this iterative process, as the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data that changes the evidence, and, ultimately, the findings. Questioning – asking questions, and putting what one assumes, prejudices and presumptions, into question – is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To be qualified as "scientific," the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough in order to speak of "scientific understanding." Though we as a collective can increase understanding of everything in virtually all potential directions as a result also of qualitative work, we refrain from this "objective" way of understanding, which has no means of discriminating between what we gain in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand. Understanding, in order to represent an improvement, must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber's famous note on objectivity, namely that scientific work aims at truths "which … can claim, even for a Chinese, the validity appropriate to an empirical analysis" ([1904] 1949:59). By qualifying "improved understanding" we argue that it is a general defining characteristic of qualitative research. Becker's (1966) study and other research on deviant behavior increased our understanding of the social learning processes through which individuals take up a behavior. It also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make as large a contribution as Becker's, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which is a hallmark of qualitative research, we argue, requires meaning and this meaning is derived from the context, and above all the data being analyzed. The ideal-typical quantitative research operates with given variables with different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed, is understood. This leads us to say that there are indeed many explanations that meet the criteria of understanding, for example the explanation of how one becomes a marihuana smoker presented by Becker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or better correlations, that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The "raw data" that quantitative research – as an ideal-typical activity – refers to is not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016:138). If the researcher is to do "more" or "change" something, this will be done by conjectures based on theoretical knowledge or on the researcher's lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that "a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques)." We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, is consequently a combination of two criteria: (i) how to do things – namely, generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions; and (ii) the outcome – improved understanding novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion "qualitative research" is the outcome of an iterative process in which both deduction and induction were involved and in which we identified the categories that we analyzed. We thus claim to meet the first criterion, "how to do things." The second criterion can be judged by us only in a partial way, namely whether the "outcome" – in concrete form, the definition – improves the understanding of others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning the pre-given (taken for granted) variables, but it is thus also about making new distinctions of any type of phenomenon, for example, by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material, previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary in terms of dimensions, such as frequency or number. Age is an example; as a variable it can take on different numbers. In relation to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process comes to a halt, the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have discussed above the components of our definition in relation to quantitative research. Our conclusion is that in the research that is called quantitative there are frequent and necessary qualitative elements.

Further, comparative empirical research on researchers primarily working with "quantitative" approaches and those working with "qualitative" approaches would, we propose, perhaps show that there are many similarities in the practices of these two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two different strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, differences may be exaggerated (e.g., Becker 1996:53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important result is the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either "quantitative" or "qualitative." By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and shows how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled "qualitative," such as interviews or participant observation, are inherently "qualitative." What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that those researchers working with what in the literature and in jargon is often called "quantitative research" are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many "quantitative" researchers, at least to some extent, are engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between "qualitative" and "quantitative," or start out as "qualitative" and later move into a "quantitative" phase (a distinct strategy that is not the same as "mixed methods" or simply combining induction and deduction). More generally speaking, the categories of "qualitative" and "quantitative" unfortunately often cover up practices, and this may lead to "camps" of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to "quantitative" or "qualitative" research, the role of theory is neglected (cf. Swedberg 2017). Our results open the way for an interaction characterized not by differences, but by different emphases and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative. Franzosi ( 2010 ) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood by what is taken for granted, and what is part of the lifeworld (Schütz 1962 ). Franzosi shows how one can go about using qualitative and quantitative methods and data to address scientific questions analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers ( 2006 ) studied the meaning of fashion photographers. He uses an empirical phenomenological approach, and establishes meaning at the level of actors. In a second step this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are tested using quantitative data from a database; in the first phase to verify the different ideal-types, in the second phase to use these types to establish new knowledge about the types. In both of these cases—and more examples can be found—authors move from qualitative data and try to keep the meaning established when using the quantitative data.

A second main result of our study is that a definition, and we provided one, offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used to evaluate the results, given that it is a standard of evaluation (cf. Hammersley 2007), to see whether new distinctions are made and whether this improves our understanding of what is researched, in addition to the evaluation of how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and it thereby becomes much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate "good" from "not so good" qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the "internal" issue of what qualitative research is and the "external" critiques that make it harder to do qualitative research, to which both pressure from quantitative methods and general changes in society contribute.

Acknowledgements

Financial support for this research was provided by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from the many useful comments by the three reviewers and the editor, comments by members of the Uppsala Laboratory of Economic Sociology, as well as Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Biographies

is professor of sociology at the Department of Sociology, Uppsala University and Universität St. Gallen. His main focus is economic sociology, and in particular, markets. He has published numerous articles and books, including Orderly Fashion (Princeton University Press 2010), Markets (Polity Press 2011) and Re-Imagining Economic Sociology (edited with N. Dodd, Oxford University Press 2015). His book Ethnographic Methods (in Swedish) has already gone through several editions.

is associate professor of sociology at the Department of Media and Social Sciences, University of Stavanger. His research has been published in journals such as Social Psychology Quarterly, Sociological Theory, Teaching Sociology, and Music and Arts in Action. As an ethnographer he is working on a book on the social world of big-wave surfing.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Patrik Aspers, Email: [email protected] .

Ugo Corte, Email: [email protected] .

  • Åkerström M. Curiosity and serendipity in qualitative research. Qualitative Sociology Review. 2013; 9 (2):10–18. [ Google Scholar ]
  • Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.
  • Alvesson M, Kärreman D. Qualitative research and theory development . Mystery as method . London: SAGE Publications; 2011. [ Google Scholar ]
  • Aspers, Patrik. 2006. Markets in Fashion, A Phenomenological Approach. London Routledge.
  • Atkinson P. Qualitative research. Unity and diversity. Forum: Qualitative Social Research. 2005; 6 (3):1–15. [ Google Scholar ]
  • Becker HS. Outsiders. Studies in the sociology of deviance . New York: The Free Press; 1963. [ Google Scholar ]
  • Becker HS. Whose side are we on? Social Problems. 1966; 14 (3):239–247. [ Google Scholar ]
  • Becker HS. Sociological work. Method and substance. New Brunswick: Transaction Books; 1970. [ Google Scholar ]
  • Becker HS. The epistemology of qualitative research. In: Jessor R, Colby A, Shweder RA, editors. Ethnography and human development. Context and meaning in social inquiry. Chicago: University of Chicago Press; 1996. pp. 53–71. [ Google Scholar ]
  • Becker HS. Tricks of the trade. How to think about your research while you're doing it. Chicago: University of Chicago Press; 1998. [ Google Scholar ]
  • Becker, Howard S. 2017. Evidence. Chicago: University of Chicago Press.
  • Becker H, Geer B, Hughes E, Strauss A. Boys in White, student culture in medical school. New Brunswick: Transaction Publishers; 1961. [ Google Scholar ]
  • Berezin M. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology. 2014; 37 (2):141–151. [ Google Scholar ]
  • Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , eds . Charles, Ragin, Joanne, Nagel, and Patricia White, 53-54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .
  • Biernacki R. Humanist interpretation versus coding text samples. Qualitative Sociology. 2014; 37 (2):173–188. [ Google Scholar ]
  • Blumer H. Symbolic interactionism: Perspective and method. Berkeley: University of California Press; 1969. [ Google Scholar ]
  • Brady H, Collier D, Seawright J. Refocusing the discussion of methodology. In: Henry B, David C, editors. Rethinking social inquiry. Diverse tools, shared standards. Lanham: Rowman and Littlefield; 2004. pp. 3–22. [ Google Scholar ]
  • Brown AP. Qualitative method and compromise in applied social research. Qualitative Research. 2010; 10 (2):229–248. [ Google Scholar ]
  • Charmaz K. Constructing grounded theory. London: Sage; 2006. [ Google Scholar ]
  • Corte, Ugo, and Katherine Irwin. 2017. “The Form and Flow of Teaching Ethnographic Knowledge: Hands-on Approaches for Learning Epistemology” Teaching Sociology 45(3): 209-219.
  • Creswell JW. Research design. Qualitative, quantitative, and mixed method approaches. 3. Thousand Oaks: SAGE Publications; 2009. [ Google Scholar ]
  • Davidsson D. The myth of the subjective. In: Davidsson D, editor. Subjective, intersubjective, objective. Oxford: Oxford University Press; 1988. pp. 39–52. [ Google Scholar ]
  • Denzin NK. The research act: A theoretical introduction to sociological methods. Chicago: Aldine Publishing Company; 1970. [ Google Scholar ]
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. Collecting and interpreting qualitative materials. Thousand Oaks: SAGE Publications; 2003. pp. 1–45. [ Google Scholar ]
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. The Sage handbook of qualitative research. Thousand Oaks: SAGE Publications; 2005. pp. 1–32. [ Google Scholar ]
  • Emerson RM, editor. Contemporary field research. A collection of readings. Prospect Heights: Waveland Press; 1988. [ Google Scholar ]
  • Emerson RM, Fretz RI, Shaw LL. Writing ethnographic fieldnotes. Chicago: University of Chicago Press; 1995. [ Google Scholar ]
  • Esterberg KG. Qualitative methods in social research. Boston: McGraw-Hill; 2002. [ Google Scholar ]
  • Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.
  • Fine, Gary Alan. 2003. “ Toward a Peopled Ethnography: Developing Theory from Group Life.” Ethnography . 4(1):41-60.
  • Fine GA, Hancock BH. The new ethnographer at work. Qualitative Research. 2017; 17 (2):260–268. [ Google Scholar ]
  • Fine GA, Hallett T. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography. 2014; 3 (2):188–203. [ Google Scholar ]
  • Flick U. Qualitative research. State of the art. Social Science Information. 2002; 41 (1):5–24. [ Google Scholar ]
  • Flick U. Designing qualitative research. London: SAGE Publications; 2007. [ Google Scholar ]
  • Frankfort-Nachmias C, Nachmias D. Research methods in the social sciences. 5. London: Edward Arnold; 1996. [ Google Scholar ]
  • Franzosi R. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society. 2010; 39 (6):593–629. [ Google Scholar ]
  • Franzosi R. From method and measurement to narrative and number. International journal of social research methodology. 2016; 19 (1):137–141. [ Google Scholar ]
  • Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.
  • Gans H. Participant Observation in an Age of “Ethnography” Journal of Contemporary Ethnography. 1999; 28 (5):540–548. [ Google Scholar ]
  • Geertz C. The interpretation of cultures. New York: Basic Books; 1973. [ Google Scholar ]
  • Gilbert N. Researching social life. 3. London: SAGE Publications; 2009. [ Google Scholar ]
  • Glaeser A. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology. 2014; 37 :207–241. [ Google Scholar ]
  • Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.
  • Goertz G, Mahoney J. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton: Princeton University Press; 2012. [ Google Scholar ]
  • Goffman E. On fieldwork. Journal of Contemporary Ethnography. 1989; 18 (2):123–132. [ Google Scholar ]
  • Goodwin J, Horowitz R. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology. 2002; 25 (1):33–47. [ Google Scholar ]
  • Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.
  • Hammersley M. The issue of quality in qualitative research. International Journal of Research & Method in Education. 2007; 30 (3):287–305. [ Google Scholar ]
  • Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.
  • Hammersley M. What is ethnography? Can it survive should it? Ethnography and Education. 2018; 13 (1):1–17. [ Google Scholar ]
  • Hammersley M, Atkinson P. Ethnography . Principles in practice . London: Tavistock Publications; 2007. [ Google Scholar ]
  • Heidegger M. Sein und Zeit. Tübingen: Max Niemeyer Verlag; 2001. [ Google Scholar ]
  • Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität. Gesamtausgabe II. Abteilung: Vorlesungen 1919-1944, Band 63. Frankfurt am Main: Vittorio Klostermann.
  • Hempel CG. Philosophy of the natural sciences. Upper Saddle River: Prentice Hall; 1966. [ Google Scholar ]
  • Hood JC. Teaching against the text. The case of qualitative methods. Teaching Sociology. 2006; 34 (3):207–223. [ Google Scholar ]
  • James W. Pragmatism. New York: Meridian Books; [1907] 1955. [ Google Scholar ]
  • Jovanović G. Toward a social history of qualitative research. History of the Human Sciences. 2011; 24 (2):1–27. [ Google Scholar ]
  • Kalof L, Dan A, Dietz T. Essentials of social research. London: Open University Press; 2008. [ Google Scholar ]
  • Katz J. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research. 2015; 44 (1):108–144. [ Google Scholar ]
  • King G, Keohane RO, Verba S. Designing social inquiry. Scientific inference in qualitative research. Princeton: Princeton University Press; 1994. [ Google Scholar ]
  • Lamont M. Evaluating qualitative research: Some empirical findings and an agenda. In: Lamont M, White P, editors. Report from workshop on interdisciplinary standards for systematic qualitative research. Washington, DC: National Science Foundation; 2004. pp. 91–95. [ Google Scholar ]
  • Lamont M, Swidler A. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology. 2014; 37 (2):153–171. [ Google Scholar ]
  • Lazarsfeld P, Barton A. Some functions of qualitative analysis in social research. In: Kendall P, editor. The varied sociology of Paul Lazarsfeld. New York: Columbia University Press; 1982. pp. 239–285. [ Google Scholar ]
  • Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research. Prepublished 27 October 2014. 10.1177/0049124114554458.
  • Lofland J, Lofland L. Analyzing social settings. A guide to qualitative observation and analysis. 3. Belmont: Wadsworth; 1995. [ Google Scholar ]
  • Lofland J, Snow DA, Anderson L, Lofland LH. Analyzing social settings. A guide to qualitative observation and analysis. 4. Belmont: Wadsworth/Thomson Learning; 2006. [ Google Scholar ]
  • Long AF, Godfrey M. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology. 2004; 7 (2):181–196. [ Google Scholar ]
  • Lundberg G. Social research: A study in methods of gathering data. New York: Longmans, Green and Co.; 1951. [ Google Scholar ]
  • Malinowski B. Argonauts of the Western Pacific: An account of native Enterprise and adventure in the archipelagoes of Melanesian New Guinea. London: Routledge; 1922. [ Google Scholar ]
  • Manicas P. A realist philosophy of science: Explanation and understanding. Cambridge: Cambridge University Press; 2006. [ Google Scholar ]
  • Marchel C, Owens S. Qualitative research in psychology. Could William James get a job? History of Psychology. 2007; 10 (4):301–324. [ PubMed ] [ Google Scholar ]
  • McIntyre LJ. Need to know. Social science research methods. Boston: McGraw-Hill; 2005. [ Google Scholar ]
  • Merton RK, Barber E. The travels and adventures of serendipity . A Study in Sociological Semantics and the Sociology of Science. Princeton: Princeton University Press; 2004. [ Google Scholar ]
  • Mannay D, Morgan M. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field‘ Qualitative Research. 2015; 15 (2):166–182. [ Google Scholar ]
  • Neuman LW. Basics of social research. Qualitative and quantitative approaches. 2. Boston: Pearson Education; 2007. [ Google Scholar ]
  • Ragin CC. Constructing social research. The unity and diversity of method. Thousand Oaks: Pine Forge Press; 1994. [ Google Scholar ]
  • Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , 22, ed. Charles C. Ragin, Joane Nagel, Patricia White. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf
  • Rawls, Anne. 2018. The Wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science,’ European Journal of Social Theory (Online first).
  • Schütz A. Collected papers I: The problem of social reality. The Hague: Nijhoff; 1962. [ Google Scholar ]
  • Seiffert H. Einführung in die Hermeneutik. Tübingen: Franke; 1992. [ Google Scholar ]
  • Silverman D. Doing qualitative research. A practical handbook. 2. London: SAGE Publications; 2005. [ Google Scholar ]
  • Silverman D. A very short, fairly interesting and reasonably cheap book about qualitative research. London: SAGE Publications; 2009. [ Google Scholar ]
  • Silverman D. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review. 2013; 9 (2):48–55. [ Google Scholar ]
  • Small ML. “How many cases do I need?” on science and the logic of case selection in field-based research. Ethnography. 2009; 10 (1):5–38. [ Google Scholar ]
  • Small, Mario L 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research, ed in Michelle Lamont, and Patricia White, 165–171. Washington, DC: National Science Foundation.
  • Snow DA, Anderson L. Down on their luck: A study of homeless street people. Berkeley: University of California Press; 1993. [ Google Scholar ]
  • Snow DA, Morrill C. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography. 1995; 24 (3):341–349. [ Google Scholar ]
  • Strauss AL. Qualitative analysis for social scientists. 14th printing. Cambridge: Cambridge University Press; 2003. [ Google Scholar ]
  • Strauss AL, Corbin JM. Basics of qualitative research. Techniques and procedures for developing grounded theory. 2. Thousand Oaks: Sage Publications; 1998. [ Google Scholar ]
  • Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.
  • Swedberg R. The new 'Battle of Methods'. Challenge January–February. 1990; 3 (1):33–38. [ Google Scholar ]
  • Timmermans S, Tavory I. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory. 2012; 30 (3):167–186. [ Google Scholar ]
  • Trier-Bieniek A. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research. 2012; 12 (6):630–644. [ Google Scholar ]
  • Valsiner J. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information. 2000; 39 (1):99–113. [ Google Scholar ]
  • Weber, Max. [1904] 1949. 'Objectivity' in social science and social policy. Ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.

Day Two: Placebo Workshop: Translational Research Domains and Key Questions

July 11, 2024

July 12, 2024

Day 1 Recap and Day 2 Overview

ERIN KING: All right. It is 12:01 so we'll go ahead and get started. And so on behalf of the Co-Chairs and the NIMH Planning Committee, I'd like to welcome you back to day two of the NIMH Placebo Workshop, Translational Research Domains and Key Questions. Before we begin, I will just go over our housekeeping items again. So attendees have been entered into the workshop in listen-only mode with cameras disabled. You can submit your questions via the Q&A box at any time during the presentation. And be sure to address your question to the speaker that you would like to respond.

For more information on today's speakers, their biographies can be found on the event registration website. If you have technical difficulties hearing or viewing the workshop, please note these in the Q&A box and our technicians will work to fix the problem. And you can also send an e-mail to [email protected]. And we'll put that e-mail address in the chat box for you. This workshop will be recorded and posted to the NIMH event web page for later viewing.

Now I would like to turn it over to our workshop Co-Chair, Dr. Cristina Cusin, for today's introduction.

CRISTINA CUSIN: Thank you so much, Erin. Welcome, everybody. It's very exciting to be here for this event.

My job is to provide you a brief recap of day one and to introduce you to the speakers of day two. Let me share my slides.

Again, thank you to the amazing Planning Committee. Thanks to their effort, we think this is going to be a success. I learned a lot of new information and a lot of ideas for research proposals and research projects from day one. Very briefly, please go and watch the videos. They are going to be uploaded in a couple of weeks if you missed them.

But we had an introduction from Tor, my Co-Chair. We had an historic perspective on clinical trials from the industry regulatory perspective. We had the current state from the FDA on placebo.

We had an overview of how hard it is to sham, to provide the right sham for device-based trials, and the challenges for TMS. We have seen some new data on the current state of placebo in psychosocial trials and what is the equivalent of a placebo pill for psychosocial trials. And some social neuroscience approach to placebo analgesia. We have come a long way from snake oil and we are trying to figure out what is placebo.

Tor, my Co-Chair, presented some data on the neurocircuitry underlying the placebo effect and raised the question of how the placebo response is a mixture of different elements, including regression to the mean, sampling bias, selective attrition in human studies, the natural history of illness, and the placebo effect per se, which can be related to expectations, context, learning and interpretation.

We saw a little of the impact on clinical trial design and of how we know that something really works, whatever this "it" is. And why does the placebo effect even exist? It's a fascinating idea that placebo exists as a form of predictive control: to anticipate threats and the opportunity to respond in advance, and to provide causal inference, a constructed perception to infer the underlying state of the body and of the world.

We saw a historical perspective, and Ni Aye Khin and Mike Detke provided an overview of 25 years of randomized controlled trials, drawing on data mining of major depressive disorder and schizophrenia trials, and the lessons we have learned.

We saw some strategies, both historical and novel, to decrease placebo response in clinical trials, and their results: starting from trial design -- SPCD, lead-in and placebo phases, flexible dosing -- to the use of different scales, statistical approaches like last observation carried forward or MMRM, and centralized ratings, self-ratings and computer ratings for different assessments. And there are further issues in clinical trials related to patient selection and professional patients.

Last, but not least, the dream of finding biomarkers for psychiatric conditions and tying clinical response to biomarkers. And we have seen how difficult it is to compare more recent studies with studies that were started in the '90s.

We had the FDA perspective from Tiffany Farchione, with placebo being a huge issue for the FDA. The discussion, especially towards the end of the day, was on how to blind psychedelics.

We have seen an increasing placebo response rate in randomized controlled trials, including in adolescents, and the FDA's consideration of novel design models in collaboration with industry. We had examples of drugs approved for other, non-psychiatric disorders, which made me realize how little we know about the true pathophysiology of psychiatric disorders, which are likely also heterogeneous conditions.

It made me very jealous of other fields because they have objective measures. They have biology, they have histology, they have imaging, they have lab values. We are far behind, and we are not really able to explain to our patients why our medications are supposed to work or how they really work.

We heard from Holly Lisanby and Zhi-De Deng about sham, and the difficulty of producing the right sham for each type of device, because most devices have auxiliary effects that are separate from the clinical effect, like the noise or the scalp stimulation of TMS.

And it's critical to obtain true blinding and to separate sham from verum. We have seen how, in clinical trials for devices, expectancy from the patient, the high-tech environment and prolonged contact with clinicians and staff may play a role. And we have seen how difficult it is to develop the best possible sham for TMS and tDCS studies. It's really complicated, and it's also difficult to compare published studies in meta-analyses because they have used very different types of sham. Not all shams are created equal, and some of them could have been biologically active, thereby invalidating the results or making the study uninformative.

Then we moved on to another fascinating topic with Dr. Rief and Dr. Atlas: what is the impact of psychological factors when you're studying a psychological intervention -- expectations, specific or nonspecific factors in clinical trials -- and what is the interaction between those factors?

We also learned about the potential nocebo effect of standard medical care or being on a wait list versus being in the active arm of a psychotherapy trial, and the fact that we are not accurately measuring the side effects of psychotherapy itself. And we heard a fascinating talk about the neurocircuitry mediating placebo effects -- salience, affective value, cognitive control -- and how perception of the provider, perception of his or her warmth and competence, and other social factors can affect response and placebo response and induce bias in the evaluation of the acute pain of others. Another very interesting field of study.

From a clinician's perspective, and from someone who conducts clinical trials, all this was extremely informative, because in many cases, no matter how good the treatment is, our patients have severe psychosocial stressors. They have difficulty accessing food, treatment or transportation, or they live in an extremely stressful environment. So disentangling these psychosocial factors from the treatment and from the biology is going to be critical to figuring out how best to treat our patients.

And there is so much more work to do. Each of us approaches the placebo topic from a different research perspective. And like the blind men trying to understand what an elephant is, we have to talk to each other, we have to collaborate and understand better the underlying biology, understand the different aspects of the placebo phenomenon.

And this leads us to the overview for day two. We are going to hear more about other topics that are just as exciting: the placebo and nocebo effects and other predictive factors in laboratory settings. We are going to hear about the genetics of the placebo response in clinical trials, and more about physiological, psychological and neural mechanisms of analgesia. And after a brief break around 1:30, we are going to hear about novel biological and behavioral approaches to the placebo effect.

We are going to hear about brain mapping, about other findings from imaging, and about preclinical modeling; there were some questions yesterday about animal models of placebo. And last, but not least, please stay around, because in the panel discussion we are going to tackle some of your questions. We are going to have two wonderful moderators, Ted Kaptchuk and Matthew Rudorfer. So please stay with us and ask questions. We love to see more challenges for our speakers. And all of the panelists from yesterday and today are going to be present. Thank you so much.

Now we're going to move on to our first speaker of the day. If I am correct according to the last -- Luana.

Measuring & Mitigating the Placebo Effect

LUANA COLLOCA: Thank you very much, Cristina. First, I would love to thank the organizers. This is a very exciting opportunity to raise our awareness of this important phenomenon for clinical trials and clinical practice.

And today, I wish to give you a very brief overview of the psychoneurobiological mechanisms of placebo and nocebo, a description of some pharmacological studies, and a little bit of information on social learning, a topic that was mentioned briefly yesterday. And finally, the translational part: can we translate what we learn from mechanistic approaches to placebo and nocebo to disease and symptomatology, and eventually to predictors? That is the bigger question.

So we learned yesterday that placebo effects are generated by verbal suggestions ("this medication has strong antidepressant effects"); by prior therapeutic experience, merely taking a medication weeks or days before it is substituted with a placebo or sham treatment; by observation of a benefit in other people; by contextual and treatment cues; and by interpersonal interactions.

Especially in the field of pain, where we can simulate nociception, a painful experience, in the laboratory setting, we have learned a lot about the modulation related to placebo. In particular, expectation can activate parts of the brain like the frontal areas, nucleus accumbens and ventral striatum, and this kind of mechanism can generate descending modulation that makes the painful nociceptive stimulus less intense.

The experience of analgesia at the level of pain mechanisms translates into a reduction of pain intensity but, most important, of pain unpleasantness and the affective components of pain. Today I will show some information about the psychological factors, the demographic factors, as well as the genetic factors that can be predictive of placebo effects in the context of pain.

On the other hand, there is growing interest in nocebo effects, the negative counterpart of this phenomenon. When we talk about nocebo effects, we refer to an increase in or worsening of outcomes and symptoms related to negative expectations, prior negative therapeutic experience, observing a negative outcome in others, or even mass psychogenic modeling, such as some nocebo-related responses during the pandemic. Treatment leaflets describing all the side effects related to a medication, patient-clinician communication, the informed consent where we list all of the side effects of a procedure or medication, as well as contextual cues in clinical encounters, all contribute.

And importantly, internal factors like emotion, mood, maladaptive cognitive appraisal, negative valence, personality traits, somatosensory features and omics can be predictive of worsening of symptoms and outcomes related to placebo and nocebo effects. In terms of nocebo, very briefly, there is again a lot of attention on brain imaging, with beautiful data showing that the brainstem, the spinal cord and the hippocampus play a critical role during nocebo hyperalgesic effects.

And importantly, we have learned about placebo and nocebo through different approaches, including brain imaging, as we saw yesterday, but also pharmacological approaches. We started by realizing that placebo effects are really neurobiological effects through the use of agonists and antagonists.

In other words, we can use a drug to mimic the action of that drug when we replace the drug with a saline solution, for example. In the cartoon here, you can see a brief pharmacological conditioning with apomorphine. Apomorphine is a dopamine agonist. And after three days of administration, apomorphine was replaced with saline solution in the intraoperative room to allow us to understand if we can mimic at the level of neuronal response the effects of apomorphine.

So in brief, these are patients undergoing implantation of deep brain stimulation electrodes in the subthalamic nucleus. You can see here the trajectory reaching the subthalamic nucleus: after crossing the thalamus, the zona incerta, the STN and the substantia nigra, the surgeon localizes the area of stimulation. Because we have two subthalamic nuclei, we can use one as control and the other as target to study, in this case, the effects of a saline solution given after three days of apomorphine.

What we found was that in those people who responded, there was a consistent reduction of clinical symptoms. As you can see here, there was improvement on the UPDRS, a common scale used to measure rigidity in Parkinson's disease, in the frequency of discharge at the level of single neurons, and in self-perception, with patients saying things like "I feel like after levodopa, I feel good." This feeling good translated into less rigidity and less tremor in the surgical room.

On the other hand, some participants didn't respond. Consistently, we found no clinical improvement, no difference at the level of single-unit activity, and no self-perception of a benefit. This kind of effect started to trigger the question of why some people respond to placebo and pharmacological conditioning and some other people don't. I will try to address this question in the second part of my talk.

On the other hand, we have learned a lot about the endogenous modulation of pain through placebo effects by using, in this case, an antagonist. The goal in this experiment was to create a painful sensation with a tourniquet. Week one, no treatment. Week two, we pre-injected healthy participants with morphine. Week three, the same morphine. And week four, we replaced the morphine with placebo.

And you can see that the placebo increased pain tolerance, measured in minutes. And this was not a carryover effect; in fact, the control at week five showed no differences. Some of the participants were pre-injected with the antagonist naloxone, which at high doses blocks the delta and kappa opioid receptors as well. You can see that by pre-injecting naloxone there is a blockage of placebo analgesia, and I would say of this morphine-like effect related to placebo given after morphine.

We then started to consider this phenomenon: is this a way of tapering opioids? We called this sort of drug-like effect the dose-extending placebo. The idea is that if we use a pharmacological treatment -- morphine or apomorphine, as I showed you -- and then replace the treatment with a placebo, we can create a pharmacological memory, and this can translate into a clinical benefit. Therefore, dose-extending placebos can be used to extend the benefit of the drug, but also to reduce side effects related to the active drug.

In particular, for placebo given after morphine, you can see on this graph that the effect is similarly strong whether the morphine administrations are repeated one day apart or one week apart. Interestingly, this is the best model to use in animal research.

Here at the University of Maryland, in collaboration with Todd Degotte, we created a model of anhedonia in mice and conditioned the animals with ketamine. The goal was to replace ketamine with a placebo. There are several controls, as you can see, but what is important for us is that we conditioned the animals with ketamine at weeks one, three and five, and then substituted ketamine with saline along with the CS; the conditioned stimulus was a light, a low light. And we wanted to compare this with an injection of ketamine given at week seven.

So as you can see here, of course ketamine induced a benefit as compared to saline. But interestingly, when we compared ketamine at week seven with saline replacing ketamine, we found no difference, suggesting that even in animals, in mice, we were able to create drug-like effects -- in this case, a ketamine antidepressant-like placebo effect. These effects were also dimorphic, in the sense that we observed them in males but not in females.

Another approach to using an agonist, like I mentioned for apomorphine in Parkinson's patients, was to use vasopressin and oxytocin to increase placebo effects. In this case, we used verbal suggestion, which in our experience, especially with healthy participants, tends to produce very small placebo analgesic effects. We knew from the literature that there are dimorphic effects for these hormones. So we treated people with intranasal vasopressin, saline, low-dose oxytocin, or no treatment. You can see there was a drug effect in women, whereby vasopressin boosted placebo analgesic effects, but not in men, where we found an effect of the manipulation but no drug effect.

Importantly, vasopressin affect dispositional anxiety as well as cortisol. And there is a negative correlation between anxiety and cortisol in relationship to vasopressin-induced placebo analgesia.

Another question was: can we use medication to study placebo in the laboratory setting, or can we study placebo and nocebo without any medication? One example is to manipulate the intensity of the painful stimulation. We used thermal stimulation tailored to three different levels: 80 out of 100 on a visual analog scale, 50, or 20, as you can see from the thermometer.

We also paired each level of pain with a face. So first, to emphasize that there are three levels of pain, participants see an anticipatory cue just before the thermal stimulation, then ten seconds of thermal stimulation, to provide the experience of analgesia with the green cue and hyperalgesia with the red cue as compared to the control, the yellow condition.

Then, the next day, we moved into the fMRI scanner. The goal was to understand to what extent expectation is relevant to placebo and nocebo effects. We mismatched what they anticipated and had learned the day before. But also, as you can see, we tailored the intensity to the same identical level, 50, for each participant.

We found that when expectations matched the cues, the anticipatory cue and the face, there were strong nocebo and placebo effects. You can see in red that even though the levels of pain were identical, participants perceived the red-related stimuli as higher in intensity and the green-related stimuli as lower when compared to the control. By mismatching what they expected with what they saw, we blocked placebo effects completely, and yet nocebo effects persisted.

So then I showed to you that we can use conditioning in animals and in humans to create placebo effects. But also by suggestion, the example of vasopressin. Another important model to study placebo effects in laboratory setting is social observation. We see something in other people, we are not told what we are seeing and we don't experience the thermal stimulation. That is the setting. A demonstrator receiving painful or no painful stimulation and someone observing this stimulation.

When we tested the observers, you can see the level of pain were tailored at the same identical intensity. And these were the effects. In 2009, when we first launched this line of research, this was quite surprising. We didn't anticipate that merely observing someone else could boost the expectations and probably creating this long-lasting analgesic effect. This drove our attention to the brain mechanism of what is so important during this transfer of placebo analgesia.

So this time we scanned participants while they were observing a video of a demonstrator receiving the control and placebo creams. We counterbalanced the colors and controlled for many variables. During the observation of another person -- when the observers themselves were not stimulated and did not receive the cream -- there was activation of the left and right temporoparietal junction, a different activation of the amygdala for the two creams, and, importantly, activation of the periaqueductal gray, which, as I showed you, is critical in modulating placebo analgesia.

Afterwards we applied both placebo creams, with the two different colors, and tailored the level of pain to the identical intensity. And we saw how placebo effects are generated through observation: they create strongly different expectations and anxiety. Importantly, we found that the functional connectivity between the dorsolateral prefrontal cortex and the temporoparietal junction that was active during observation mediated the behavioral results, suggesting that there is a mechanism here that may be relevant to exploit in clinical trials and clinical practice.

From this, I wish to switch to a more translational approach: can we replicate these results, observed in healthy participants with experimental nociception, in people suffering from chronic pain? We chose a population with facial pain, an orphan disease for which there is no consensus on how to treat it, and which also affects the young, including children.

So participants came to the lab, and as you can see we used the same identical thermal stimulation, the same electrodes, the same conditioning that I showed you. We measured expectations before and after the manipulation. The very first question was whether we could achieve a similar magnitude and distribution of placebo analgesia in people suffering chronically from pain and comorbidities. You can see that we found no difference between temporomandibular disorder (TMD) patients and controls. We also observed that some people responded to the placebo manipulation with hyperalgesia; we call this a nocebo effect.

Importantly, these effects are smaller than the benefit, which can sometimes be extremely strong, in both healthy controls and TMD patients. Because we run experiments in a very ecological environment where we are diverse -- the lab, the experimenters, as well as the population we recruit -- we have a very good distribution of race and ethnicity.

So the very first thought was that we needed to control for this factor. And this turned out to be a beautiful model to study race and ethnicity in the lab. When chronic pain patients were studied by an experimenter of the same race (dark blue), we observed a larger placebo effect. And this tells us something about disparities in medicine. In fact, we didn't see these effects in our controls.

In chronic pain patients, we also saw an influence of sex concordance, but in the opposite direction: in women studied by a male experimenter, placebo effects were larger. Such an effect was not seen in men.

The other question we had was about the contribution of psychological factors. At that stage, there were many different surveys used by different labs, based on different domains, and the trends were inconsistent: in some studies an effect of a given trait, such as more positive or negative affect, was observed, and in others it was not. So instead of relying on any single survey -- and there is now a meta-analysis suggesting that no single survey is predictive of placebo effects -- we took a different approach.

We used the RDoC framework suggested by the NIMH, and with a more sophisticated approach we were able to combine these measures into four factors: emotional distress, reward seeking, pain-related fear and catastrophizing, and empathy and openness. These four valences were then used together to predict placebo effects. And you can see that emotional distress is associated with a lower magnitude of placebo effects, with effects that extinguish over time, and with a lower proportion of placebo responders.

People who tend to catastrophize also display a lower magnitude of placebo effects. In terms of expectation, it is also interesting that patients expect to benefit; they have this desire for a reward. Those who are more open and characterized by empathy tend to have larger expectations, but this doesn't necessarily translate into larger placebo effects, hinting that the two phenomena are not necessarily linked.
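
As a rough illustration of this kind of dimension reduction, the sketch below collapses several questionnaire scores into four latent factors and relates them to placebo analgesia. All file and column names are invented for illustration; this is not the speaker's actual pipeline.

```python
# Hypothetical sketch: reduce several survey scores to four latent factors
# (e.g., emotional distress, reward seeking, pain-related fear/catastrophizing,
# empathy/openness) and regress placebo analgesia on the factor scores.
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("placebo_lab_cohort.csv")   # one row per participant (invented)
survey_cols = ["anxiety", "depression", "catastrophizing", "fear_of_pain",
               "reward_seeking", "empathy", "openness"]

z = StandardScaler().fit_transform(data[survey_cols])        # z-score the surveys
factors = FactorAnalysis(n_components=4, random_state=0).fit_transform(z)

X = sm.add_constant(factors)                                 # 4 factor scores + intercept
fit = sm.OLS(data["placebo_analgesia"], X).fit()
print(fit.summary())   # negative coefficients would mirror the distress finding
```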

Because we study chronic pain patients, they come with their own baggage of disease comorbidities. Dr. Wang in the department looked at insomnia. People suffering from insomnia tend to have lower placebo analgesic effects, along with those who have poor sleep patterns, suggesting that clinical factors can be relevant when we wish to predict placebo effects.

Another question we addressed was how simple SNPs -- single nucleotide polymorphism variants in three regions that have been published -- can be predictive of placebo effects. In particular, I'm referring to OPRM1, linked to endogenous opioids; COMT, linked to endogenous dopamine; and FAAH, linked to endogenous cannabinoids. And we will learn more about that in the next talk.

And you can see that there is a prediction. These are ROC curves, which can be interesting. We modeled all participants, those with verbal suggestion alone and those with conditioning. There isn't really a huge difference between using one SNP versus two or three. What truly had an impact, and was stronger in terms of prediction, was accounting for the procedure we used to induce placebo effects: suggestion alone versus conditioning. When we added the manipulation, the prediction became stronger.
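
A minimal sketch of the kind of ROC analysis being described, assuming a binary responder label, three candidate SNPs coded as allele counts, and an indicator for conditioning versus suggestion alone (the specific rs numbers and column names here are illustrative assumptions, not taken from the talk):

```python
# Hypothetical sketch: compare ROC AUC for SNPs alone versus SNPs plus the
# placebo-induction procedure (conditioning vs. suggestion alone).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

df = pd.read_csv("placebo_responders.csv")          # invented file name
y = df["responder"]                                 # 1 = placebo responder

snps = ["OPRM1_rs1799971", "COMT_rs4680", "FAAH_rs324420"]   # coded 0/1/2
for label, features in [("SNPs only", snps),
                        ("SNPs + procedure", snps + ["conditioning"])]:
    clf = LogisticRegression(max_iter=1000)
    prob = cross_val_predict(clf, df[features], y, cv=5,
                             method="predict_proba")[:, 1]
    print(label, "AUC =", round(roc_auc_score(y, prob), 2))
```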

More recently, we started looking at gene expression, the transcriptomic profile associated with placebo effects. From the 402 participants we randomly selected 54 and extracted their transcriptomic profiles. We also selected a validation cohort to see if we could replicate what we discovered in terms of mRNA sequencing. We found over 600 genes associated with placebo effects in the discovery cohort. In blue are the genes downregulated and in red the genes upregulated.
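
For readers who want to picture the discovery step, here is a generic differential-expression sketch under the assumption of a genes-by-samples expression matrix and a binary responder label (invented file names; an actual mRNA-seq analysis would typically use a dedicated pipeline such as DESeq2 or edgeR):

```python
# Hypothetical sketch: per-gene t-tests between placebo responders and
# non-responders, with FDR correction, to nominate candidate genes.
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

expr = pd.read_csv("discovery_expression.csv", index_col=0)   # genes x samples
labels = pd.read_csv("discovery_labels.csv")["responder"]     # 0/1 per sample

mask = labels.values == 1
t, p = stats.ttest_ind(expr.loc[:, mask], expr.loc[:, ~mask], axis=1)
reject, q, _, _ = multipletests(p, method="fdr_bh")

hits = pd.DataFrame({"gene": expr.index, "t": t, "q": q})[reject]
print(len(hits), "genes pass FDR in the discovery cohort")
print(hits.sort_values("q").head(20))   # top candidates to carry into validation
```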

We chose the top 20 genes and did a PCA to validate them. And we found that six of them replicated; they include all these genes that you see here. SELENOM was particularly interesting for us, as well as PI3, CCDC85B, FBXL15, HAGHL and TNFRSF4. So with this --

LUANA COLLOCA: Yes, I'm done. With this, the goal is probably one day, with AI and other approaches, to combine clinical, psychological, brain imaging and other characteristics and behaviors to predict the level of response to placebo. That may guide us in clinical trials and in the clinical path to tailor the treatment. So the placebo and nocebo biological responses can be to some extent predicted, and identifying those who respond to placebo can help tailor drug development and symptom management.

Thank you to my lab. All of you, the funding agencies. And finally, for those who like to read more about placebo, this book is available for free to be downloaded. And they include many of the speakers from this two-day event as contributors to this book. Thank you very much.

CRISTINA CUSIN: Thank you so much, Luana. It was a wonderful presentation. We have one question in the Q&A.

Elegant studies demonstrating powerful phenomena. Two questions. Is it possible to extend or sustain placebo-boosting effect? And what is the dose response relationship with placebo or nocebo?

LUANA COLLOCA: Great questions. The goal is to boost placebo effects, and one way, as I showed, is, for example, using intranasal vasopressin. In terms of extending the effect, we know that we need a minimum of three or four administrations before boosting this sort of pharmacological memory. And the longer the administration of the active drug before we replace it with placebo, the larger the placebo effect.

For nocebo, we show similar relationship with the collaborators. So again, the longer we condition, the stronger the placebo or nocebo effects. Thank you so much.

CRISTINA CUSIN: I wanted to ask, do you have any theory or interpretation about the potential for transmitting a placebo response from person to person, between the demonstrator and the observer? Do you have any interpretation of this phenomenon?

LUANA COLLOCA: It is not completely new in the literature. There is a lot of studies show that we can transfer pain in both animal models and humans.

So the transfer of analgesia is a natural continuation of that line of research. And the fact that we mimic things that we see in other people is the most basic form of learning as we grow up. But also, from an evolutionary point of view, it protects us from predators, and for us as human beings, observing is a very good mechanism to boost behaviors and, in this case, placebo effects. Thank you.

CRISTINA CUSIN: Okay. We will have more time to ask questions.

We are going to move on to the next speaker. Dr. Kathryn Hall.

KATHRYN HALL: Thank you. Can you see my screen okay? Great.

So I'm going to build on Dr. Colloca's talk to really kind of give us a deeper dive into the genetics of the placebo response in clinical trials.

So I have no disclosures. So as we heard and as we have been hearing over the last two days, there is -- there are physiological drivers of placebo effects, whether they are opioid signaling or dopamine signaling. And these are potentiated by the administration or can be potentiated by saline pills, saline injections, sugar pills. And what's really interesting here, I think, is this discussion about how drugs impact the drivers of placebo response. In particular we heard about Naloxone yesterday and proglumide.

What I really want to do today is think about the next layer. Like how do the genes that shape our biology and really drive or influence that -- those physiological drivers of placebo response, how do the genes, A, modify our placebo response? But also, how are they modifying the effect of the drugs and the placebos on this basic -- this network?

And if you think about it, we really don't know much about all of the many interactions that are happening here. And I would actually argue that it goes even beyond genetic variation to other factors that lead to heterogeneity in clinical trials. Today I'm going to really focus on genes and variations in the genome.

So let's go back so we have the same terminology. I'm going to be talking about the placebo response in clinical trials. We saw this graph, or a version of this graph, yesterday: in clinical trials, when we want to assess the effect of a drug, we subtract the outcomes in the placebo arm from the outcomes in the drug treatment arm. And there is a basic assumption here that the placebo response is additive to the drug response.

And what I want to do today is to really challenge that assumption. I want to challenge that expectation. Because I think we have enough literature and enough studies that have already been done that demonstrate that things are not as simple as that and that we might be missing a lot from this basic averaging and subtracting that we are doing.

So the placebo response is the bold lines there, which include the placebo effects we have been focusing on here. But it also includes the natural history of the disease or condition, and phenomena such as statistical regression to the mean, blinding and bias, and Hawthorne effects. So we lump all of those together in the placebo arm of the trial and subtract the placebo response from the drug response to really understand the drug effect.

So one way to ask about, well, how do genes affect this is to look at candidate genes. And as Dr. Colloca pointed out and has done some very elegant studies in this area, genes like COMT, opioid receptors, genes like OPRM1, the FAAH endocannabinoid signaling genes are all candidate genes that we can look at in clinical trials and ask did these genes modify what we see in the placebo arm of trials?

We did some studies on COMT. And I want to just show you those so you can get a sense of how genes can influence placebo outcomes. So COMT is catechol-O-methyltransferase. It's a protein, an enzyme, that metabolizes dopamine, which as you saw is important in mediating the placebo response. COMT also metabolizes epinephrine, norepinephrine and catechol estrogens. So the fact that COMT might be involved in the placebo response is really interesting because it might be doing more than just metabolizing dopamine.

So we asked the question: what happens if we look at COMT genetic variation in clinical trials of irritable bowel syndrome? And working with Ted Kaptchuk and Tony Lembo at Beth Israel Deaconess Medical Center, we did just that. We looked at COMT effects in a randomized clinical trial of irritable bowel syndrome. And what we did see was that for the polymorphism rs4680, people who had the weak version of the COMT enzyme actually had more placebo response. These are the met/met people, shown here by this arrow. And the people who had less dopamine, because their version of the enzyme metabolizes it more efficiently, had less of a placebo response in one of the treatment arms. We would later replicate this finding in another clinical trial that was recently concluded, in 2021.
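
A minimal sketch of how such a genotype-by-arm analysis can be set up, assuming a tidy trial data set with an improvement score, an arm label, and a met-allele count per participant (all names invented; this is not the published model):

```python
# Hypothetical sketch: does COMT genotype (met allele count) modify symptom
# improvement differently across trial arms?
import pandas as pd
import statsmodels.formula.api as smf

trial = pd.read_csv("ibs_trial.csv")   # columns: improvement, arm, met_alleles

# The C(arm):met_alleles interaction terms test whether the genotype effect
# differs between, e.g., waitlist, limited, and augmented placebo arms.
fit = smf.ols("improvement ~ C(arm) * met_alleles", data=trial).fit()
print(fit.summary())
```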

So to get a sense, as you can see, we are somewhat -- we started off being somewhat limited by what was available in the literature. And so we wanted to expand on that to say more about genes that might be associated with placebo response. So we went back, and we found 48 studies in the literature where there was a gene that was looked at that modified the placebo response.

And when we mapped those to the interactome, which is this constellation of all gene products and their interactions, their physical interactions, we saw that the placebome or the placebo module had certain very interesting characteristics. Two of those characteristics that I think are relevant here today are that they overlapped with the targets of drugs, whether they were analgesics, antidepressive drugs, anti-Parkinson's agents, placebo genes putatively overlapped with drug treatment genes or targets.

They also overlapped with disease-related genes. And so what that suggests is that when we were looking at the outcomes of clinical trial there might be a lot more going on that we are missing.

And let's just think about that for a minute. On the left is what we expect. We expect that we are going to see an effect in the drug, it's going to be greater than the effect of the placebo and that difference is what we want, that drug effect. But what we often see is on the right here where there is really no difference between drug and placebo. And so we are left to scratch our heads. Many companies go out of business. Many sections of companies close. And, quite frankly, patients are left in need. Money is left on the table because we can't discern between drug and placebo.

And I think what is interesting is that's been a theme that's kind of arisen since yesterday where oh, if only we had better physiological markers or better genes that targeted physiology then maybe we could see a difference and we can, you know, move forward with our clinical trials.

But what I'm going to argue today is actually what we need to do is to think about what is happening in the placebo arm, what is contributing to the heterogeneity in the placebo arm, and I'm going to argue that when we start to look at that compared to what is happening in the drug treatment arm, oftentimes -- and I'm going to give you demonstration after demonstration. And believe me, this is just the tip of the iceberg.

What we are seeing is there are differential effects by genotype in the drug treatment arm and the placebo treatment arm such that if you average out what's happening in these -- in these drug and placebo arms, you would basically see that there is no difference. But actually there's some people that are benefiting from the drug but not placebo. And conversely, benefiting from placebo but not drug. Average out to no difference.
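
To make the averaging point concrete, here is a toy numerical illustration with made-up numbers (not data from any trial): two genotype subgroups with opposite drug and placebo responses yield identical arm means, so the overall drug-minus-placebo difference is zero even though each subgroup has a real effect.

```python
# Toy illustration: opposite subgroup effects cancel when arms are averaged.
import numpy as np

# Mean improvement by (genotype, arm); made-up numbers.
improvement = {
    ("A", "drug"): 8.0, ("A", "placebo"): 2.0,   # genotype A: drug helps
    ("B", "drug"): 2.0, ("B", "placebo"): 8.0,   # genotype B: placebo helps
}

drug_mean = np.mean([improvement[("A", "drug")], improvement[("B", "drug")]])
placebo_mean = np.mean([improvement[("A", "placebo")], improvement[("B", "placebo")]])

print("Overall drug - placebo:", drug_mean - placebo_mean)            # 0.0
print("Genotype A drug - placebo:",
      improvement[("A", "drug")] - improvement[("A", "placebo")])     # +6.0
print("Genotype B drug - placebo:",
      improvement[("B", "drug")] - improvement[("B", "placebo")])     # -6.0
```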

Let me give you some examples. We had this hypothesis and we started to look around to see if we could find partners who had already done clinical trials and happened to have genotyped COMT. And what we saw in this clinical trial of chronic fatigue syndrome, where adolescents were treated with clonidine, was that in the placebo arm the val/val patients -- this is the COMT genotype; the low activity -- sorry, the high activity genotype -- had the largest increase in the number of steps they were taking per week. In contrast, the met/met people, the people with the weaker COMT, had almost no change in the number of steps they were taking per week.

So you would look at this and you would say, oh, the val/val people were the placebo responders and the met/met people didn't respond to placebo. But what we saw when we looked into the drug treatment arm was very surprising. We saw that clonidine literally erased the effect that we were seeing in placebo for the val/val participants in this trial. And clonidine basically was having no effect on the heterozygotes, the val/mets or on the met/mets. And so this trial rightly concluded that there was no benefit for clonidine.

But if they hadn't taken this deeper look at what was happening, they would have missed that clonidine may potentially be harmful to people with chronic fatigue in this particular situation. What we really need to do I think is look not just in the placebo or not just in the drug treatment arm but in both arms to understand what is happening there.

And I'm going to give you another example. And, like I said, the literature is replete with these examples. On the left is an example from a drug that was tested on cognitive scales, tolcapone, which actually targets COMT. And what you can see here again on the left is differential outcomes in the placebo arm and in the drug treatment arm; if you were to just average these two, you would not see the differences.

On the right is a really interesting study looking at percent drinking days among people with alcohol use disorder. And they looked at both COMT and OPRM1. And this is what Dr. Colloca was just talking about: there seem to be not just gene-placebo-drug interactions but gene-gene-drug-placebo interactions. This is a complicated space. And I know we like things to be very simple. But I think what these data are showing is we need to pay more attention.

So let me give you another example because these -- you know, you could argue, okay, those are objective outcomes -- sorry, subjective outcomes. Let's take a look at the Women's Health Study. Arguably, one of the largest studies on aspirin versus placebo in history. 30,000 women were randomized to aspirin or placebo. And lo and behold, after 10 years of following them the p value was nonsignificant. There was no difference between drug and placebo.

So we went to this team, and we asked them, could we look at COMT, because we had a hypothesis that COMT might modify the outcomes in the placebo arm and potentially differentially modify the treatment effect in the drug treatment arm. You might be saying that can't have anything to do with the placebo effect, and we completely agree. If we did find something, it would suggest that there might be something related to the placebo response that is driven by natural history. And I'm going to show you what we found.

So when we compared the outcomes in the placebo arm to the aspirin arm, what we found was the met/met women randomized to placebo had the highest of everybody rates of cardiovascular disease. Which means the highest rates of myocardial infarction, stroke, revascularization and death from a cardiovascular disease cause. In contrast, the met/met women on aspirin had benefit, had a statistically significant reduction in these rates.

Conversely, the val/val women on placebo did the best, but the val/val women on aspirin had the highest rates, had significantly higher rates than the val/val women on placebo. What does this tell us? Well, we can't argue that this is a placebo effect because we don't have the control for placebo effects, which is a no treatment control.

But we can say that these are striking differences that, like I said before, if you don't pay attention to them, you miss the point that there are subpopulations for benefit or harm because of differential outcomes in the drug and placebo arms of the trial.

And so I'm going to keep going. There are other examples of this. We also partnered with a group at Brigham and Women's Hospital that had done the CAMP study, the Childhood Asthma Management Program. And in this study, they randomized patients to placebo, budesonide or nedocromil for five years and studied asthma outcomes.

Now, what I was showing you previously were candidate gene analyses. This was a GWAS. We wanted to be agnostic and ask: are there genes that modify the placebo outcomes, and are these outcomes different when we look in the drug treatment arm? That little inset is a picture of all of the genes that were looked at in the GWAS. And we had a borderline genome-wide significant hit called BBS9. When we looked at BBS9 in the placebo arm, the white boxes at the top are the baseline levels of coughing and wheezing among these children, and in gray are their levels of coughing and wheezing at the end of treatment.

And what you can see here is that participants with the AA genotype were the ones that benefited from placebo, whereas for the patients with the GG genotype there was really no significant change.

Now, when we looked in the drug treatment arms, we were surprised to see that the outcomes were the same, of course, at baseline. There is no -- everybody is kind of the same. But you can see the differential responses depending on the genotype. And so, again, not paying attention to these gene drug/placebo interactions we miss another story that is happening here among our patients.
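
For contrast with the candidate-gene examples above, a GWAS-style scan of the placebo arm can be sketched roughly as below: regress the outcome on each SNP in turn and screen the p-values against the conventional genome-wide threshold (file names, covariates and coding are illustrative assumptions, not the CAMP analysis).

```python
# Hypothetical sketch of an agnostic, GWAS-style scan of placebo-arm outcomes.
import pandas as pd
import statsmodels.api as sm

pheno = pd.read_csv("placebo_arm_phenotypes.csv")    # symptom_change, age, sex (numeric)
geno = pd.read_csv("genotypes.csv", index_col=0)     # samples x SNPs, coded 0/1/2

covars = sm.add_constant(pheno[["age", "sex"]])
results = []
for snp in geno.columns:
    X = covars.assign(dose=geno[snp].values)         # assumes matching sample order
    fit = sm.OLS(pheno["symptom_change"], X).fit()
    results.append((snp, fit.pvalues["dose"]))

scan = pd.DataFrame(results, columns=["snp", "p"]).sort_values("p")
print(scan[scan["p"] < 5e-8])    # genome-wide significant hits, if any
print(scan.head())               # borderline signals, like the BBS9 example
```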

Now, I just want to -- I added this one because it is important just to realize that this is not just about gene-drug placebo. But these are also about epigenetic effects. And so here is the same study that I showed earlier on alcohol use disorder. They didn't just stop at looking at the polymorphisms or the genetic variants. This team also went so far as to look at methylation of OPRM1 and COMT.

So methylation is basically when the promoter region of a gene is blocked because it has a methyl group on some of the nucleotides in that region, so you can't make the protein as efficiently. And if you look on the right, you can see the three models that they looked at; they also looked at other genes, including SLC6A3, which is involved in dopamine transport. And what you can see here is that there are significant gene-by-group-by-time interactions for all three of these candidate genes.

And even more fascinating are the gene-by-gene interactions. Basically it is saying that you cannot say what the outcome is going to be unless you know the patient's or the participant's COMT or OPRM1 genotype and also how methylated the promoter regions of these genes are. So this makes for a very complicated story. And I know we like very simple stories.
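
One way to picture the kind of model behind a "gene by group by time" result, extended with methylation, is a longitudinal regression like the sketch below (invented variable names and a deliberately simplified specification; the published models are richer than this):

```python
# Hypothetical sketch: longitudinal outcome modeled with arm x time x methylation
# interactions, genotype as a covariate, and random intercepts per subject.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("alcohol_trial_long.csv")
# columns: subject, week, arm, oprm1_genotype (0/1/2),
#          oprm1_methylation (0-1), pct_drinking_days

md = smf.mixedlm(
    "pct_drinking_days ~ C(arm) * week * oprm1_methylation + oprm1_genotype",
    data=long_df,
    groups=long_df["subject"],
)
print(md.fit().summary())   # inspect the interaction terms
```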

But I want to say that I'm just adding to that picture that we had before to say that it's not just in terms of the gene's polymorphisms, but as Dr. Colloca just elegantly showed it is transcription as well as methylation that might be modifying what is happening in the drug treatment arm and the placebo treatment arm. And to add to this it might also be about the natural history of the condition.

So BBS9 is actually a gene that is involved in the cilia, the activity of the formation of the cilia which is really important in breathing in the nasal canal. And so, you can see that it is not just about what's happening in the moment when you are doing the placebo or drug or the clinical trial, it also might -- the genes might also be modifying where the patient starts out and how the patient might develop over time. So, in essence, we have a very complicated playground here.

But I think I have shown you that genetic variation, whether it is polymorphisms in the gene, gene-gene interactions or epigenetics or all of the above can modify the outcomes in placebo arms of clinical trials. And that this might be due to the genetic effects on placebo effects or the genetic effects on natural history. And this is something I think we need to understand and really pay attention to.

And I also think I've showed you, and these are just a few examples, there are many more. But genetic variation can differentially modify drugs and placebos and that these potential interactive effects really challenge this basic assumption of additivity that I would argue we have had for far too long and we really need to rethink.

TED KAPTCHUK: (Laughing) Very cool.

KATHRYN HALL: Hi, Ted.

TED KAPTCHUK: Oh, I didn't know I was on.

KATHRYN HALL: Yeah, that was great. That's great.

So in summary, can we use these gene-placebo drug interactions to improve clinical trials. Can we change our expectations about what is happening. And perhaps as we have been saying for the last two days, we don't need new drugs with clear physiological effects, what we need is to understand drug and placebo interactions and how they impact subpopulations and can reveal who benefits or is harmed by therapies.

And finally, as we started to talk about in the last talk, can we use drugs to boost placebo responses? Perhaps some drugs already do. Conversely, can we use drugs to block placebo responses? And perhaps some drugs already do.

So I just want to thank my collaborators. There was Ted Kaptchuk, one of my very close mentors and collaborators. And really, thank you for your time.

CRISTINA CUSIN: Thank you so much. It was a terrific presentation. And Ted's laugh that got captured was one of the best spontaneous laughs.

We have a couple of questions coming through the chat. One is about the heterogeneity of response in placebo arms: it is not uncommon to see quite a dispersion of responses in trials. As a thought experiment, if one looks at the fraction of high responders in the placebo arm, would one expect to see enrichment for some of the genetic markers of placebo response?

KATHRYN HALL: I absolutely think so. We haven't done that. And I would argue that, you know, we have been having kind of a quiet conversation here about naloxone, because I think, as was said yesterday, the findings with naloxone are variable. Sometimes it looks like naloxone is blocking the placebo response and sometimes it isn't.

We need to know more about who is in that trial, right? Is this -- I could have gone on and showed you that there is differences by gender, right. And so this heterogeneity that is coming into clinical trials is not just coming from the genetics. It's coming from race, ethnicity, gender, population. Like are you in Russia or are you in China or are you in the U.S. when you're conducting your clinical trial? We really need to start unpacking this and paying attention to it. I think because we are not paying attention to it, we are wasting a lot of money.

CRISTINA CUSIN: And epigenetics is another way to consider traumatic experiences and adverse event learning. That is another component that we are not tracking accurately in clinical trials; I don't think it is one of the elements routinely collected. Especially in antidepressant clinical trials it is just now coming to the surface.

KATHRYN HALL: Thank you.

CRISTINA CUSIN: Another question that came in is about the different approaches: GWAS versus the candidate gene approach.

How do you start to think about genes that have a potential implication in neurophysiological pathways and choose candidates to test, versus a more agnostic GWAS approach?

KATHRYN HALL: I believe you have to do both because you don't know what you're going to find if you do a GWAS and it's important to know what is there.

At the same time, I think it's also good to test our assumptions and to replicate our findings, right? So once you do the GWAS and you have a finding -- for instance, our BBS9 finding would be amazing to replicate or to try and test in another cohort. But, of course, it is really difficult to do a whole clinical trial again. These are very expensive, and they last many years.

And so, you know, I think replication is something that is tough to do in this space, but it is really important. And I would do both.

CRISTINA CUSIN: Thank you. We got a little short on time. We are going to move on to the next speaker. Thank you so much.

FADEL ZEIDAN: Good morning. It's me, I imagine. Or good afternoon.

Let me share my screen. Yeah, so good morning. This is going to be a tough act to follow. Dr. Colloca and Dr. Hall's presentations were really elegant. So manage your expectations for mine. And, Ted, please feel free to unmute yourself because I think your laugh is incredibly contagious, and I think we were all were laughing as well.

So my name is Fadel Zeidan, I'm at UC San Diego. And I'll be discussing mostly unpublished data that we have that's under review examining if and how mindfulness meditation assuages pain and if the mechanism supporting mindfulness meditation-based analgesia are distinct from placebo.

And so, you know, this is kind of like a household slide that we all are here because we all appreciate how much of an epidemic chronic pain is and, you know, how significant it is, how much it impacts our society and the world. And it is considered a silent epidemic because of the catastrophic and staggering cost to our society. And that is largely due to the fact that the subjective experience of pain is modulated and constructed by a constellation of interactions between sensory, cognitive, emotional dimensions, genetics, I mean I can -- the list can go on.

And so what we've been really focused on for the last 20 years or so is to appreciate if there is a non-pharmacological approach, a self-regulated approach that can be used to directly assuage the experience of pain to acutely modify exacerbated pain.

And to that extent, we've been studying meditation, mindfulness-based meditation. And mindfulness is a very nebulous construct. If you go from one lab to another lab to another lab, you are going to get a different definition of what it is. But obviously my lab's definition is the correct one. And so the way that we define it is awareness of arising sensory events without reaction, without judgment.

And we could develop this construct, this disposition by practicing mindfulness-based meditation, which I'll talk about here in a minute. And we've seen a lot of -- and this is an old slide -- a lot of new evidence, converging evidence demonstrating that eight weeks of manualized mindfulness-based interventions can produce pretty robust improvements in chronic pain and opiate misuse. These are mindfulness-based stress reduction programs, mindfulness-oriented recovery enhancement, mindfulness-based cognitive therapy which are about eight weeks long, two hours of formalized didactics a week, 45 minutes a day of homework.

There is yoga, there is mental imagery, breathing meditation, walking meditation, a silent retreat and about a $600 tab. Which may not be -- I mean although they are incredibly effective, may not be targeting demographics and folks that may not have the time and resources to participate in such an intense program.

And to that extent and, you know, as an immigrant to this country I've noticed that we are kind of like this drive-thru society where, you know, we have a tendency to eat our lunches and our dinners in our cars. We're attracted to really brief interventions for exercise or anything really, pharmaceuticals, like ":08 Abs" and "Buns of Steel." And we even have things called like the military diet that promise that you'll lose ten pounds in three days without dying.

So we seemingly are attracted to these fast-acting interventions. And so to this extent we've worked for quite some time to develop a very user friendly, very brief mindfulness-based intervention. So this is an intervention that is about four sessions, 20 minutes each session. And participants are -- we remove all religious aspects, all spiritual aspects. And we really don't even call it meditation, we call it mindfulness-based mental training.

And our participants are taught to sit in a straight posture, close their eyes, and to focus on the changing sensations of the breath as they arise. And what we've seen is this repetitive practice enhances cognitive flexibility and the ability to -- flexibility and the ability to sustain attention. And when individual's minds drift away from focusing on the breath, they are taught to acknowledge distractive thoughts, feelings, emotions without judging themselves or the experience. Doing so by returning their attention back to the breath.

So there is really a one-two punch here where, A, you're focusing on the breath and enhancing cognitive flexibility; and, B, you're training yourself to not judge discursive events, and that, we believe, enhances emotion regulation. So, much like physical training, we would call this mental training. And now that we have the advent of imaging, we can actually see that there are changes in the brain related to this.

But as many of you know, mindfulness is kind of like a household term now. It's all over our mainstream media. You know, we have, you know, Lebron meditating courtside. Oprah meditating with her Oprah blanket. Anderson Cooper is meditating on TV. And Time Magazine puts, you know, people on the cover meditating. And it's just all over the place.

And so these types of images and these types of, I guess, insinuations could elicit nonspecific effects related to meditation. And for quite some time I've been trying to really appreciate not is meditation more effective than placebo, although that's interesting, but does mindfulness meditation engage mechanisms that also are shared by placebo? So beliefs that you are meditating could elicit analgesic responses.

The majority of the manualized interventions in their manuals they use terms like the power of meditation, which I guarantee you is analgesic. To focus on the breath, we need to slow the breath down. Not implicit -- not explicitly, but it just happens naturally. And slow breathing can also reduce pain. Facilitator attention, social support, conditioning, all factors that are shared with other therapies and interventions but in particular are also part of meditation training.

So the question is because of all this, is mindfulness meditation merely -- or not merely after these two rich days of dialogue -- but is mindfulness meditation engaging processes that are also shared by placebo.

So if I apply a placebo cream to someone's calf and then throw them in the scanner versus asking someone to meditate, the chances are very high that the brain processes are going to be distinct. So we wanted to create a -- and validate an operationally matched mindfulness meditation intervention that we coined as sham mindfulness meditation. It's not sham meditation because it is meditation. It's a type of meditative practice called Pranayama.

But here in this intervention we randomize folks, we tell folks that they've been randomized to a genuine mindfulness meditation intervention. Straight posture, eyes closed. And every two to three minutes they are instructed to, quote-unquote, take a deep breath as we sit here in mindfulness meditation. We even match the time giving instructions between the genuine and the sham mindfulness meditation intervention.

So the only difference between the sham mindfulness and the genuine mindfulness is that the genuine mindfulness is taught to explicitly focus on the changing sensations of the breath without judgment. The sham mindfulness group is just taking repetitive deep, slow breaths. So if the magic part of mindfulness, if the active component of mindfulness is this nonjudgmental awareness, then we should be able to see disparate mechanisms between these.

And we also use a third arm, a book listening control group using "The Natural History of Selborne," which is a very boring, arguably emotionally pain-evoking book, for four days. And this is meant to control for facilitator attention and the time elapsed in the other groups' interventions.

So we use a very high level of noxious heat to the back of the calf. And we do so because imaging is quite expensive, and we want to ensure that we can see pain-related processing within the brain. Here and across all of our studies, we use ten 12-second plateaus of 49 degrees to the calf, which is pretty painful.

And then we assess pain intensity and pain unpleasantness using a visual analog scale, where the participants just see red -- the more they pull on the algometer, the more pain they are reporting. But on the back, the numbers fluoresce, where 0 is no pain and 10 is the worst pain imaginable.

So pain intensity can be considered the sensory dimension of pain, and pain unpleasantness is more like -- I don't want to say pain affect -- the bothersome component of pain. So what we did was combine all of our studies that have used the mindfulness, sham mindfulness, and book listening control conditions to see whether mindfulness meditation is more effective than sham mindfulness meditation at reducing pain.

We also combined two different fMRI techniques: blood oxygen level dependent (BOLD) imaging, which gives us higher temporal resolution and signal-to-noise ratio than, say, a perfusion imaging technique, and allows us to look at connectivity. However, meditation is also predicated on changes in respiration rate, which can elicit pretty dramatic breathing-related artifacts in the brain signal, specifically related to CO2 output.

So using a perfusion-based fMRI technique like arterial spin labeling is really advantageous as well; although it's not as temporally resolved as BOLD, it provides a direct, quantifiable measurement of cerebral blood flow.

So straight to the results. On the Y axis we have the pain ratings, and on the X axis are the book listening controls, sham mindfulness meditation, and mindfulness meditation. These are large sample sizes. Blue is intensity and red is unpleasantness. These are the post-intervention fMRI scans: from the first half of the scan to the second half, our control participants are simply resting, and pain just increases because of pain sensitization and being in a claustrophobic MRI environment.

And you can see here that sham mindfulness meditation does produce a pretty significant reduction in pain intensity and unpleasantness, more than the control book. But mindfulness meditation is more effective than sham mindfulness and the controls at reducing pain intensity and pain unpleasantness.

There does seem to be some kind of additive component to the genuine intervention, although the sham technique is a really easy practice.

So for folks that have maybe fatigue or cognitive deficits or just aren't into doing mindfulness technique, I highly recommend this technique, which is just a slow breathing approach, and it's dead easy to do.

Anyone that's practiced mindfulness for the first time or a few times can tell you that it can be quite difficult and -- what's the word? -- involving, right?

So what happened in the brain? These are our CBF maps from two studies, from 2011 and 2015, where we replicated the finding that higher activity -- higher CBF -- in the right anterior insula, which is ipsilateral to the stimulation site, and in the rostral anterior cingulate cortex/subgenual ACC was associated with greater relief of pain intensity. In the context of pain unpleasantness, higher orbitofrontal cortical activity was associated with lower pain, and this is very reproducible; we also see that greater thalamic deactivation predicts greater analgesia on the unpleasantness side.

These areas -- obviously the right anterior insula, in conjunction with other areas, is associated with interoceptive processing, awareness of somatic sensations. The ACC and the OFC are associated with higher-order cognitive flexibility and emotion regulation processes. And the thalamus is really the gatekeeper from the body to the brain: nothing can enter the brain unless it goes through the thalamus, except the sense of smell.

So it's really like this gatekeeper of arising nociceptive information.

So the take-home here is that mindfulness is engaging multiple neural processes to assuage pain. It's not just one singular pathway.

Our BOLD studies were also pretty insightful. Here we ran a PPI analysis -- a psychophysiological interaction analysis -- at the whole-brain level, to see what brain regions are associated with pain relief in the context of the BOLD technique. And we find that greater ventromedial prefrontal cortical deactivation is associated with lower pain. The vmPFC is a highly evolved area associated with higher-order processes relating to self; it's one of the central nodes of the so-called default mode network, a network supporting self-referential processing. But in the context of the vmPFC, I like the way that Tor and Mathieu frame the vmPFC as being more related to affective meaning, and they have a really nice paper showing that the vmPFC is uniquely involved in, quote-unquote, self-ownership or subjective value. That's particularly interesting in the context of pain, because pain is a very personal experience that's directly related to the interpretation of arising sensations and what they mean to us.

And seemingly -- I apologize for the reverse inferencing here -- but seemingly mindfulness meditation, based on our qualitative assessments as well, is reducing the ownership or the intrinsic, contextual value of those painful sensations; i.e., the pain is there, but it doesn't bother our participants as much, which is quite interesting as a manipulation.

We also ran a connectivity analysis between the contralateral thalamus and the whole brain, and we found that greater decoupling between the contralateral thalamus and the precuneus, another central node of the default mode network, predicted greater analgesia.

Taken together, I think this is a really cool mechanism: two separate analyses are indicating that the default mode network could be an analgesic system, which we haven't seen before. We have seen the DMN involved in chronic pain and pain-related exacerbations, but I don't think we've seen it as part of a pain-relieving mechanism. Interestingly, the thalamus and precuneus together are the first two nodes to go offline when we lose consciousness, and they're the first two nodes to come back online when we recover consciousness, suggesting that the thalamus and precuneus are involved in self-referential awareness, consciousness of self, things of this nature.

Again, multiple processes are involved in meditation-based pain relief, which may explain why we are seeing consistently that meditation can elicit long-lasting improvements in pain unpleasantness in particular, as compared to sensory pain -- although it does that as well.

And also the data gods were quite kind on this, because these mechanisms are quite consistent with the primary premise of Buddhist and contemplative scriptures: that your experiences are not you.

Not that there is no self, but that the processes that arise in our moment-to-moment experience are merely reflections, interpretations, and judgments, and that may not be the true inherent nature of mind.

And so before I get into more philosophical discourse, I'm going to keep going for the sake of time. Okay.

So what happened with the sham mindfulness meditation intervention?

We did not find any neural processes that significantly predicted analgesia during sham mindfulness meditation. What did predict analgesia during sham mindfulness was slower breathing rate, which we've never seen with mindfulness -- we've never seen a significant, or even close to significant, relationship between mindfulness-based analgesia and slow breathing. But over and over we see that sham mindfulness-based analgesia is related to slower breathing, which provides this really cool distinction: mindfulness is engaging higher-order, top-down processes to assuage pain, while sham mindfulness may be engaging a more bottom-up response to assuage pain.

I'm going to move on to some other new work, and this is in great collaboration with the lovely Tor Wager, who has developed, with Marta and Woo, these wonderful machine-learned multivariate pattern signatures that are remarkably accurate at predicting pain -- I think around 98 to 99 percent.

His seminal paper on the Neurologic Pain Signature, published in the New England Journal of Medicine, showed that these signatures can predict nociceptive-specific -- in this case, thermal heat -- pain with incredible accuracy.

And it's not modulated by placebo or affective components, per se. And then the SIIPS is a machine learned signature that is, as they put it, associated with cerebral contributions to pain. But if you look at it closely, these are markers that are highly responsive to the placebo response.

So the SIIPS can be used -- he has this beautiful preprint out showing that it responds with incredible accuracy to placebo, to varieties of placebo.

So we used these MVPA signatures to see if meditation engages the signatures supporting placebo responses.

And then Marta Ceko's latest paper with Tor, published in Nature Neuroscience, found that the negative affect signature predicts pain responses above and beyond nociceptive-related processes. So this is pain related to negative affect, which again contributes to the multimodal processing of pain and to how we can now use these elegant signatures to disentangle which components of pain meditation and other techniques assuage. Here's the design.

We combined two studies, one with BOLD and one with ASL. So this would be the first ASL study with these MVPA signatures.

And we had the mindfulness interventions that I described before, the book listening intervention I described before, and a placebo cream intervention, which I'll describe now, all in response to 49-degree thermal stimuli.

So again, across all of our studies we use the same methods. And the placebo group -- I'll try to be quick about this -- is kind of a combination of Luana Colloca's, Don Price's, and Tor's placebo conditioning interventions. We administer 49 degrees and tell our participants that we're testing a new form of lidocaine, and that the reason it's new is that the more applications of this cream, the stronger the analgesia.

And so in the first conditioning session, they come in, we administer 49 degrees, we apply and remove this cream -- which is just petroleum jelly -- after 10 minutes, and then we covertly reduce the temperature to 48.

And then they come back in for sessions two and three, where, after 49 degrees and removing the cream, we lower the temperature to 47. And then on the last conditioning session, after we remove the cream, we lower the temperature to 46.5, which is a qualitatively completely different experience than 49.

And we do this to lead our participants to believe that the cream is actually working.

And then in a post-intervention MRI session, after we remove the cream, we don't modulate the temperature -- we just keep it at 49 -- and that's how we measured placebo in these studies. John Dean and Gabe are co-leading this project.
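To make the conditioning logic concrete, here is a minimal sketch of the covert temperature schedule as I read it from the description above (the exact number and labeling of sessions is my assumption, not study code). The key point is that analgesia at the unchanged 49 degrees in the post-intervention scan is what gets scored as the placebo effect.

```python
# Hypothetical sketch of the covert placebo-conditioning schedule described above.
# Temperatures in degrees Celsius; session labels are illustrative assumptions.

conditioning_schedule = {
    "conditioning_1": {"told_temp": 49.0, "delivered_after_cream": 48.0},
    "conditioning_2": {"told_temp": 49.0, "delivered_after_cream": 47.0},
    "conditioning_3": {"told_temp": 49.0, "delivered_after_cream": 47.0},
    "conditioning_4": {"told_temp": 49.0, "delivered_after_cream": 46.5},
    # Post-intervention MRI: cream applied and removed, temperature NOT lowered.
    # Any drop in reported pain at the unchanged 49 C is the placebo effect.
    "post_mri":       {"told_temp": 49.0, "delivered_after_cream": 49.0},
}

for session, temps in conditioning_schedule.items():
    covert_drop = temps["told_temp"] - temps["delivered_after_cream"]
    print(f"{session}: delivered {temps['delivered_after_cream']} C "
          f"(covert reduction of {covert_drop:.1f} C)")
```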

Here, pain intensity is on this axis and pain unpleasantness on that axis; the controls significantly go up in pain from the beginning of the scan to the end of the scan.

Placebo cream was effective at reducing intensity and unpleasantness, but we see that mindfulness meditation was more effective than all the conditions at reducing pain. For the signatures, we see that on the nociceptive-specific signature, the controls go up in pain here.

There is no change in the placebo group, and mindfulness meditation, you can see here, produces a pretty dramatic reduction in the nociceptive-specific signature.

The same is true for the negative affective pain signature. Mindfulness meditation uniquely modifies this signature as well, and I believe this is one of the first studies to show something like this.

But it does not modulate the placebo signature. What does modulate the placebo signature is our placebo cream, which is a really nice manipulation check for these signatures.

So taken together, we show that mindfulness meditation, again, is engaging multiple processes and is reducing pain by directly assuaging nociceptive-specific markers as well as markers supporting negative affect, while not modulating placebo-related signatures -- providing further credence that it's not a placebo-type response. We're also demonstrating this granularity between a placebo mechanism and an active mechanism that doesn't share it. While we all assume that active therapies and techniques use a shared subset of mechanisms or processes with placebo, here we're providing accruing evidence that mindfulness is separate from placebo.

I'll try to be very quick on this last part. This is not technically related to placebo, but I would love to hear everyone's thoughts on these new data we have.

So we've seen elegantly that pain relief by placebo, distraction, acupuncture, transcranial magnetic stimulation, and prayer is largely driven by endogenous opioidergic release. And, yes, there are other systems -- a prime example is the (indiscernible) system, the serotonergic system, dopamine; the list can go on. But it's considered by most of us that the endogenous opioidergic system is the central pain modulatory system.

And the way we test this is by antagonizing endogenous opioids with an incredibly high dosage of naloxone.

And I think this wonderful paper by Ciril Etnes's (phonetic) group provides a nice primer on the appropriate dosages of naloxone to antagonize opiates. And I think a lot of the discussions here where we see differences in naloxone responses actually reflect differences in naloxone dosage.

Naloxone metabolizes so quickly that I would highly recommend a super large bolus with a maintenance IV infusion.

And we've seen this to be a quite effective way to block endogenous opioids. And across four studies now, we've seen that mindfulness-based pain relief is not mediated by endogenous opioids. It's something else -- we don't know what that something else is, but we don't think it's endogenous opioids. But what if sex differences could be driving these opioidergic versus non-opioidergic differences?

We've seen that females exhibit higher rates of chronic pain than males. They are prescribed opiates at a higher rate than men. And when you control for weight, they require higher dosages than men. Why?

Well, there's excellent literature in rodent and other preclinical models demonstrating that male rodents engage endogenous opioids to reduce pain but female rodents do not.

And this is a wonderful study by Ann Murphy that basically shows that male rodents, in response to morphine, have a greater paw-withdrawal latency -- more analgesia -- and not so much the females.

But when you add naloxone to the picture with morphine, the latency goes down -- it basically blocks the analgesia in male rodents but enhances analgesia in female rodents.

So we basically asked -- Michaela, an undergraduate student doing an odyssey thesis, asked this question: Are male and female humans engaging distinct systems to assuage pain?

She really took off with this, and here's the design. We had noxious heat at baseline.

CRISTINA CUSIN: Doctor, you have one minute left. Can you wrap up?

FADEL ZEIDAN: Yep. Basically we asked, are there sex differences between males and females during meditation in response to noxious heat? And there are.

At baseline, this is just the change in pain. Green is saline; red is naloxone. You can see that with naloxone onboard, there's greater analgesia in females, whereas in males we reversed the analgesia: there's largely no difference between baseline and naloxone in males, while the males do reduce pain during saline.

We believe this is the first study to show something like this in humans. Super exciting. Naloxone also blocked the stress reduction response in males but not so much in females. Let me just acknowledge our funders and some of our team. And I apologize for the fast presentation. Thank you.

CRISTINA CUSIN: Thank you so much. That was awesome.

We're a little bit short on time.

I suggest we go into a short break, ten minutes, until 1:40. Please continue to add your questions in the Q&A. Our speakers are going to answer, or we'll bring some of those questions directly to the discussion panel at the end of the session today. Thank you so much.

Measuring & Mitigating the Placebo Effect (continued)

CRISTINA CUSIN: Hello, welcome back. I'm really honored to introduce our next speaker, Dr. Marta Pecina. And she's going to talk about mapping expectancy-mood interactions in antidepressant placebo effects. Thank you so much.

MARTA PECINA: Thank you, Cristina. It is my great pleasure to be here. And just I'm going to switch gears a little bit to talk about antidepressant placebo effects. And in particular, I'm going to talk about the relationship between acute expectancy-mood neural dynamics and long-term antidepressant placebo effects.

We all know that depression is a very prevalent disorder: just in 2020, Major Depressive Disorder affected 21 million adults in the U.S. and 280 million adults worldwide. And current projections indicate that by the year 2030 it will be the leading cause of disease burden globally.

Now, response rates to first-line antidepressant treatments are approximately 50%, and complete remission is only achieved in 30 to 35% of individuals. Also, depression tends to be a chronic disorder, with 50% of those recovering from a first episode having an additional episode, and 80% of those with two or more episodes having another recurrence.

And for patients who are nonresponsive to two interventions, remission rates with subsequent therapy drop significantly, to 10 to 25%. So, in summary, we're facing a disorder that is very resistant or becomes resistant very easily. And in this context, one would expect that antidepressant placebo effects would actually be low. But we all know that this is not the case: the response rate to placebos is approximately 40%, compared to 50% response rates to antidepressants. And obviously this varies across studies.

But what we do know, and learned yesterday as well, is that response rates to placebos have increased approximately 7% over the last 40 years. And these high rates of placebo response in depression have significantly contributed to the current psychopharmacology crisis, where large pharma companies have reduced, at least by half, the number of clinical trials devoted to CNS disorders.

Now, antidepressant placebo response rates among individuals with depression are higher than in any other psychiatric condition, and this was recently shown again in a meta-analysis of approximately 10,000 psychiatric patients. Other disorders where placebo response rates are also prevalent are generalized anxiety disorder, panic disorder, ADHD, and PTSD, and, less frequently although still present, schizophrenia and OCD.

Now, importantly, placebo effects appear not only in response to pills but also to surgical interventions or devices, as was also mentioned yesterday. And this is particularly important today, when there is a very large development of device-based interventions for psychiatric conditions. So, for example, in this study of deep brain stimulation that was also mentioned yesterday, patients with resistant depression were assigned to six months of either active or sham DBS, and this was followed by open-label DBS.

As you can see here in this table, patients from both groups improved significantly compared to baseline, but there were no significant differences between the two groups. And for this reason, DBS has not yet been approved by the FDA for depression, even though it has been approved for OCD and Parkinson's disease, as we all know.

Now, what is a placebo effect -- that's one of the main questions of this workshop -- and how does it work from a clinical neuroscience perspective? Well, as has been mentioned already, most of what we know about the placebo effect comes from the field of placebo analgesia. In summary, classical theories of the placebo effect have consistently argued that placebo effects result either from positive expectancies regarding the potential beneficial effects of a drug, or from classical conditioning, where the pairing of a neutral stimulus (in this case the placebo pill) with an unconditioned stimulus (in this case the active drug) results in a conditioned response.

More recently, theories of the placebo effect have used computational models to predict placebo effects. These theories posit that individuals update their expectancies as new sensory evidence is accumulated, by signaling the discrepancy between what is expected and what is perceived, and this information is then used to refine future expectancies. These conceptual models have been incorporated into trial-by-trial manipulations of both expectancies of pain relief and the pain sensory experience, and this has rapidly advanced our understanding of the neural and molecular mechanisms of placebo analgesia.

And so, for example, meta-analytic studies of these experiments have revealed two distinct patterns of activation: decreases in brain activity in regions involved in pain processing, such as the dorsomedial prefrontal cortex, the amygdala, and the thalamus; and increases in brain activity in regions involved in affective appraisal, such as the vmPFC, the nucleus accumbens, and the PAG.

Now, what happens in depression? Well, in the field of antidepressant placebo effects, the long-term dynamics of mood and antidepressant responses have not allowed us to have such trial-by-trial manipulations of expectancies. Instead, researchers have measured broad brain changes in the context of a randomized controlled trial or a placebo lead-in phase, which has, to some extent, limited the progress of the field.

Despite the methodological limitations of these studies, they provide important insights about the neural correlates of antidepressant placebo effects. In particular, from two early studies we can see that placebo was associated with increased activations broadly in cortical regions and decreased activations in subcortical regions. And these deactivations in subcortical regions were actually larger in patients who were assigned to an SSRI drug treatment.

We also demonstrated that, similar to pain, antidepressant placebo effects were associated with enhanced endogenous opioid release during placebo administration, predicting the response to open-label treatment after ten weeks. And we and others have demonstrated that increased connectivity between the salience network and the rostral anterior cingulate during antidepressant placebo administration can predict short-term and long-term placebo effects.

Now, an important limitation, as I already mentioned, is the delayed mechanism of action of common antidepressants and the slow dynamics of mood, which really limit the possibility of actively manipulating antidepressant expectancies.

So to address this important gap, we developed a trial-by-trial manipulation of antidepressant expectancies to be used inside of the scanner. And the purpose was really to be able to further dissociate expectancy and mood dynamics during antidepressant placebo effects.

The basic structure of this task involves an expectancy condition, where subjects are presented with a four-second infusion cue followed by an expectancy rating cue, and a reinforcement condition, which consists of 20 seconds of sham neurofeedback followed by a mood rating cue. The expectancy and reinforcement conditions map onto the classical theories of the placebo effect that I explained earlier.

During the expectancy condition, the antidepressant infusions are compared to periods of calibration where no drug is administered. During the reinforcement condition, on the other hand, sham neurofeedback of positive sign 80% of the time is compared to sham neurofeedback of baseline sign 80% of the time. And so this two-by-two study design results in four different conditions: the antidepressant reinforced, the antidepressant not reinforced, the calibration reinforced, and the calibration not reinforced.
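As a rough illustration of that two-by-two structure -- not the authors' task code -- the following sketch enumerates the four conditions and draws the sham neurofeedback outcome for each trial. The 80/20 split for the non-reinforced condition and all names are assumptions inferred from the description above.

```python
# Illustrative sketch of the 2x2 antidepressant placebo task structure
# (infusion cue: antidepressant vs. calibration) x (sham neurofeedback: reinforced vs. not).
import random

P_POSITIVE = {"reinforced": 0.8, "not_reinforced": 0.2}  # assumed probabilities of "positive" feedback

def make_trial(infusion, reinforcement):
    """Return one simulated trial: 4-s infusion cue, expectancy rating,
    20 s of sham neurofeedback, then a mood rating."""
    feedback = "positive" if random.random() < P_POSITIVE[reinforcement] else "baseline"
    return {"infusion_cue": infusion,
            "reinforcement": reinforcement,
            "neurofeedback": feedback}

conditions = [(inf, reinf)
              for inf in ("antidepressant", "calibration")
              for reinf in ("reinforced", "not_reinforced")]

trials = [make_trial(inf, reinf) for inf, reinf in conditions for _ in range(10)]
print(trials[0])
```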

The cover story is that we tell participants that we are testing the effects of a new fast-acting antidepressant compared to a conventional antidepressant, but in reality both are saline. We tell them that they will receive multiple infusions of these drugs inside the scanner while we record their brain activity, which we call neurofeedback. Patients then learn that positive neurofeedback, compared to baseline, is more likely to cause mood improvement. But they are not told that the neurofeedback is simulated.

Then we place an intravenous line for the administration of the saline infusion, and we bring them inside the scanner. For these experiments we recruit individuals who are 18 through 55, with or without anxiety disorders, who have a HAM-D depression rating scale score greater than 16, consistent with moderate depression. They're antidepressant-medication free for at least 21 days, and we use consenting procedures that involve authorized deception.

Now, as expected, behavioral results during this task consistently show that antidepressant expectancies are higher during the antidepressant infusions compared to the calibration, especially when they are reinforced by positive sham neurofeedback. Mood responses are also significantly higher during positive sham neurofeedback compared to baseline, and this is further enhanced during the administration of the antidepressant infusions.

Interestingly, these effects are moderated by depression severity, such that the effects of the task conditions on expectancy and mood ratings are weaker in more severe depression, even though overall expectancies are higher and overall mood is lower.

Now, at a neural level, what we see is that the presentation of the infusion cue is associated with increased activation in the occipital cortex and the dorsal attention network, suggesting greater attentional processing engaged during the presentation of the treatment cue. Similarly, the reinforcement condition revealed increased activations in the dorsal attention network, with additional responses in the ventral striatum, suggesting that individuals processed the sham positive neurofeedback cue as rewarding.

An important question for us was: now that we can manipulate acute antidepressant placebo responses, can we use this experiment to understand the mechanisms implicated in short-term and long-term antidepressant placebo effects? As I mentioned earlier, there was emerging evidence suggesting that placebo analgesia could be explained by computational models, in particular reinforcement learning.

And so we tested the hypothesis that antidepressant placebo effects could be explained by similar models. As you know, under these theories, learning occurs when an experienced outcome differs from what is expected; that difference is called the prediction error. The expected value of the next possible outcome is then updated with a portion of this prediction error, as reflected in the learning rule.

In the context of our experiment, model-predicted expectancies for each of the four trial conditions would be updated every time the antidepressant or the calibration infusion cue is presented and an outcome -- positive or baseline neurofeedback -- is observed, based on a similar learning rule.
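To make the kind of learning rule being described concrete, here is a minimal delta-rule sketch -- not the authors' actual model. The outcome coding (1 for positive sham neurofeedback, 0 for baseline), the initial values, and the variable names are assumptions for illustration.

```python
# Minimal delta-rule sketch of trial-by-trial expectancy updating.
# One expectancy value V per trial condition; alpha is the learning rate.

def update_expectancies(trials, alpha=0.2):
    """trials: list of (condition, outcome) pairs; returns trial-wise expectancies."""
    V = {"antidep_reinforced": 0.5, "antidep_not_reinforced": 0.5,
         "calib_reinforced": 0.5, "calib_not_reinforced": 0.5}
    history = []
    for condition, outcome in trials:
        prediction_error = outcome - V[condition]   # surprise: observed minus expected
        V[condition] += alpha * prediction_error    # update with a fraction of the error
        history.append((condition, round(V[condition], 3)))
    return history

example = [("antidep_reinforced", 1), ("antidep_reinforced", 1),
           ("calib_not_reinforced", 0), ("antidep_reinforced", 0)]
print(update_expectancies(example))
```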

This basic model was then compared against two alternative models: one that included differential learning rates, to account for the possibility that learning depends on whether participants were updating expectancies for the placebo or the calibration; and an additional model to account for the possibility that subjects were incorporating positive mood responses as rewards.

And finally, we constructed an additional model to allow for the combination of models two and three. Using Bayesian model comparison, we found that the fourth model -- which included placebo-biased learning and reinforcement by mood -- dominated all the other alternatives after correction for the Bayesian omnibus risk.

We then mapped the expected value and reward prediction error signals from our reinforcement learning models onto our raw data. What we found was that expected value signals mapped onto salience network responses, whereas reward prediction errors mapped onto dorsal attention network responses. So altogether, the combination of our model-free and model-based results reveals that processing of the antidepressant infusion cue increases activation in the dorsal attention network, whereas the encoding of the expectancies takes place in the salience network once salience has been attributed to the cue.

Furthermore, we demonstrated that the reinforcement-learning-predicted expectancies encoded in the salience network triggered mood changes that are perceived as reward signals. These mood reward signals further reinforce antidepressant expectancies through the formation of expectancy-mood dynamics defined by models of reinforcement learning -- an idea that could possibly contribute to the formation of long-lasting antidepressant placebo effects.

And so the second aim was to look at how we can use behavioral and neural markers of acute placebo effects to predict long-term placebo effects in the context of a clinical trial. Our hypothesis was that during placebo administration, greater salience attribution to the contextual cue in the salience network would transfer to regions involved in mood regulation to induce mood changes. In particular, we hypothesized that the DMN would play a key role in belief-induced mood regulation.

And why the DMN? Well, we knew that activity in the rostral anterior cingulate, which is a key node of the DMN, is a robust predictor of mood responses to both active antidepressants and placebos, implying its involvement in nonspecific treatment response mechanisms. We also knew that the rostral anterior cingulate is a robust predictor of placebo analgesia, consistent with its role in cognitive appraisals, predictions, and evaluation. And we also had evidence that SN-to-DMN functional connectivity appears to be a predictor of placebo and antidepressant responses over ten weeks of treatment.

And so in our clinical trial, which you can see in the cartoon diagram here, we randomized individuals to placebo or escitalopram 20 milligrams. This table is just to say there were no significant differences between the two groups in regard to gender, race, age, or depression severity. But what we found interesting is that there were also no significant differences in treatment assignment belief, with approximately 60% of subjects in each group guessing that they were receiving escitalopram.

As you can see here, participants showed lower MADRS scores at eight weeks in both groups, but there were no significant differences between the two groups. However, when the two groups were split by their last drug-assignment belief, subjects who believed they were on the drug improved significantly compared to those who believed they were on placebo.

And so the next question was: can we use neuroimaging to predict these responses? What we found, at a neural level, was that during expectancy processing the salience network had an increased pattern of functional connectivity with the DMN as well as with other regions, including the brainstem and the thalamus. We also found that increased SN-to-DMN functional connectivity predicted expectancy ratings during the antidepressant placebo fMRI task, such that higher connectivity was associated with greater modulation of expectancy ratings by the task conditions.

We also found that enhanced functional connectivity between the SN and the DMN predicted the response to eight weeks of treatment, especially in individuals who believed that they were in the antidepressant group. These data support the idea that during placebo administration, greater salience attribution to the contextual cue is encoded in the salience network, whereas belief-induced mood regulation is associated with increased functional connectivity between the SN and the DMN. Altogether, these data suggest that enhanced SN-to-DMN connectivity enables the switch from salience attribution to the treatment cue to DMN-mediated mood regulation.

And so finally -- and this is going to be brief -- the next question for us was: can we modulate these networks to actually enhance placebo-related activity? In particular, we decided to use theta burst stimulation, which can potentiate or depotentiate brain activity in response to brief periods of stimulation. So in this study participants undergo three counterbalanced sessions of TBS -- continuous, intermittent, or sham -- known to depotentiate, potentiate, and have no effect, respectively.

Each TBS session is followed by an fMRI session with the antidepressant placebo task, which happens approximately an hour after stimulation. The inclusion criteria are very similar to all of our other studies. And our pattern of stimulation is pretty straightforward: we do two blocks of TBS. During the first block, stimulation intensity is gradually escalated in 5% increments in order to enhance tolerability, and during the second block the stimulation is maintained constant at 80% of the motor threshold.

We use a modified cTBS protocol consisting of bursts of three stimuli applied at 30 Hz, with bursts repeated at 6 Hz, for a total of 600 stimuli in a continuous train of 33.3 seconds. The iTBS session consists of bursts of three stimuli applied at 50 Hz, with bursts repeated at 5 Hz, for a total of 600 stimuli over 192 seconds. We also use a sham condition, where 50% of subjects are assigned to sham TBS simulating the iTBS pattern and 50% to sham TBS simulating the cTBS pattern.
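As a rough check on the stimulation arithmetic, the sketch below reproduces the 33.3-second and 192-second figures from the burst parameters. The 2-seconds-on / 8-seconds-off train structure assumed for iTBS is the conventional protocol and is my assumption, not something stated in the talk.

```python
# Rough arithmetic check of the TBS protocols described above (not study code).

PULSES_PER_BURST = 3
TOTAL_PULSES = 600

# Modified cTBS: 3-pulse bursts at 6 Hz delivered continuously.
ctbs_burst_rate_hz = 6
ctbs_duration_s = TOTAL_PULSES / (PULSES_PER_BURST * ctbs_burst_rate_hz)
print(f"cTBS: {ctbs_duration_s:.1f} s continuous train")        # ~33.3 s

# iTBS: 3-pulse bursts at 5 Hz, assuming conventional 2 s on / 8 s off trains.
itbs_burst_rate_hz = 5
pulses_per_train = PULSES_PER_BURST * itbs_burst_rate_hz * 2     # 30 pulses per 2-s train
n_trains = TOTAL_PULSES // pulses_per_train                      # 20 trains
itbs_duration_s = n_trains * 10 - 8                              # last off-period omitted
print(f"iTBS: {n_trains} trains, ~{itbs_duration_s} s total")    # ~192 s
```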

Now, our target is the dmPFC, which is the cortical target for the DMN, and we chose this target based on the results from the antidepressant placebo fMRI task.

And so this target corresponds to a Neurosynth-derived scalp location, 30% of the nasion-to-inion distance forward from the vertex and 5% to the left, which corresponds to the EEG location F1. The connectivity map of this region results in activation of the DMN. And we can also show here the E-field map of this target, which supports nice coverage of the DMN.

And what we found here is that iTBS, compared to sham and cTBS, enhances the effect of the reinforcement condition on mood responses. We also found, at a neural level, that iTBS compared to cTBS shows significantly greater BOLD responses during expectancy processing within the DMN, with sham responses in the middle but not significantly different from iTBS. In addition, increased BOLD responses in the ventromedial prefrontal cortex were associated with a greater effect of the task conditions on mood responses.

So altogether, our results suggest, first, that trial-by-trial modulation of antidepressant expectancies effectively dissociates expectancy-mood dynamics. Antidepressant expectancies are predicted by models of reinforcement learning and are encoded in the salience network. We also showed that enhanced SN-to-DMN connectivity enables the switch from salience attribution to treatment cues to DMN-mediated mood regulation, contributing to the formation of acute expectancy-mood interactions and long-term antidepressant placebo effects. And iTBS potentiation of the DMN enhances placebo-induced mood responses and expectancy processing.

With this, I would just like to thank my collaborators that started this work with me at the University of Michigan and mostly the people in my lab and collaborators at the University of Pittsburgh as well as the funding agencies.

CRISTINA CUSIN: Wonderful presentation. Really terrific way of trying to untangle different mechanisms of placebo response in depression, which is not an easy feat.

There are no specific questions in the Q&A. I would encourage everybody attending the workshop to please post your question to the Q&A. Every panelist can answer in writing. And then we will answer more questions during the discussion, but please don't hesitate.

I think I will move on to the next speaker. We have only a couple of minutes so we're just going to move on to Dr. Schmidt. Thank you so much. We can see your slides. We cannot hear you.

LIANE SCHMIDT: Can you hear me now?

CRISTINA CUSIN: Yes, thank you.

LIANE SCHMIDT: Thank you. So I'm Liane Schmidt. I'm an INSERM researcher and team leader at the Paris Brain Institute. And I'm working on placebo effects but understanding the appetitive side of placebo effects. And what I mean by that I will try to explain to you in the next couple of slides.

NIMH Staff: Can you turn on your video?

LIANE SCHMIDT: Sorry?

NIMH Staff: Can you please turn on your video, Dr. Schmidt?

LIANE SCHMIDT: Yeah, yes, yes, sorry about that.

So it's about the appetitive side of placebo effects -- placebo effects on cognitive processes such as motivation and biases in belief updating -- because these processes also play a role when patients respond to treatment and when we measure placebo effects, basically when placebo effects matter in the clinical setting.

And this is done at the Paris Brain Institute. And I'm working also in collaboration with the Pitie-Salpetriere Hospital Psychiatry department to get access to patients with depression, for example.

So my talk will be organized around three parts. In the first part, I will show you some data about appetitive placebo effects on taste pleasantness, hunger sensations, and reward learning. And this will make the bridge to the second part, where I will show you some evidence for asymmetrical learning biases that are tied to reward learning and that can emerge after fast-acting antidepressant treatment in depression.

And why is this important? I will try to link these two parts in the third part, to elaborate some perspectives on the synergies between expectations, expectation updating through learning mechanisms, motivational processes, and drug experiences -- synergies which we might harness by using computational models such as, for example, Rescorla-Wagner models, as Marta just showed you in her work.

The appetitive side of placebo effects is actually known very well from the fields of consumer psychology and marketing research, where price labels, for example, or quality labels can affect decision-making processes and also experiences like taste pleasantness. And since we are in France, one of the most salient examples of these kinds of effects comes from wine tasting: many studies have shown that the price of wine can influence how pleasant it tastes.

And we and others have shown that this is mediated by activation in what is called the brain valuation system -- regions that encode expected and experienced reward. One of the most prominent hubs in this brain valuation system is the ventromedial prefrontal cortex, which you see here on the SPM on the slide, and which basically translates these price label effects into taste pleasantness. What is also interesting is its sensitivity to monetary reward -- for example, obtaining a monetary reward by surprise: the vmPFC activates when you obtain such a reward unexpectedly.

And the participants whose vmPFC activated more to these kinds of positive surprises were also the participants in whom the vmPFC encoded more strongly the difference between expensive and cheap wines. This makes a nice parallel to what we know from placebo analgesia, where it has also been shown that the sensitivity of the brain's reward system can moderate placebo analgesia, with participants with higher reward sensitivity in the ventral striatum, for example -- another region of this system -- showing stronger placebo analgesia.

So I hope this lets you appreciate that these effects nicely parallel what we know from placebo effects in pain and in disease. We then went beyond just taste liking, which is basically experiencing a reward such as wine: could placebos also affect motivational processes per se -- when we, for example, want something more?

And one way to study this is to study a basic motivation such as, for example, hunger. Eating behavior has long been conceptualized as driven by homeostatic, hormonal markers such as ghrelin and leptin that signal satiety and energy stores; as a function of these hormonal markers in our blood, we go and look for food and eat. But we also know from the placebo effects on taste pleasantness that our higher-order beliefs about our internal states -- not just our hormones -- may influence whether we want to eat, whether we engage in these very basic motivations. And we tested that, as have other people; this is a replication.

In this study we gave healthy participants, who came into the lab in a fasted state, a glass of water. We told one group that water can stimulate hunger by stimulating the receptors in your mouth, and another group that you can drink a glass of water to kill your hunger. And a third, control group was given a glass of water and told it's just water, it does nothing to hunger. Then we asked them to rate how hungry they felt over the course of the experiment. It's a three-hour experiment; everybody has fasted, and they have to do this food choice task in an fMRI scanner, so everybody gets hungry over these three hours.

But what was interesting, and what you see here on this raincloud plot, is that participants who drank the water suggested to be a hunger killer increased in their hunger ratings less than participants who believed the water would enhance their hunger. So this is a nice replication of what we already know from the field; other people have shown this, too.

And the interesting thing is that it also affected food wanting -- this motivational process of how much you want to eat food. When people lay in the fMRI scanner, they saw different food items and were asked whether they wanted to eat them or not, for real, at the end of the experiment. So it's incentive compatible. And what you see here is basically what we call stimulus value: how much you want to eat this food.

And the hunger sensation ratings that I showed you before parallel what we find here: people in the decreased hunger suggestion group wanted to eat the food less than those in the increased hunger suggestion group. This shows that it is not only an effect on subjective self-reports of your body's hunger signals; it's also about what you actually prefer -- your subjective food preference is influenced by the placebo manipulation. And it also influences how your brain valuation system encodes the value underlying your food preference, which is what you see on this slide.

On this slide you see the ventromedial prefrontal cortex: the more yellow the voxels, the more strongly they correlate with food wanting. And you see on the right side, with the temporal time courses of the vmPFC, that the encoding of food wanting is stronger in the increased hunger suggestion group than in the decreased hunger suggestion group.

So basically what I've shown you here are three placebo effects: placebo effects on subjective hunger ratings, placebo effects on food choices, and placebo effects on how the brain encodes food preferences and food choices. And you could wonder -- these are readouts, behavioral readouts and neural readouts -- what is the mechanism behind them? What sits between the placebo intervention and the behavioral and neural readouts of this effect?

And one snippet of the answer to this question comes when you look at the expectation ratings. Expectations have long been shown to be one of the cognitive mediators of placebo effects across domains, and that's what we see here, too -- especially in the hunger killer suggestion group. The participants who believed more strongly that the drink would kill their hunger were also those whose hunger increased less over the course of the experiment.

And this moderated activity in the region that you see here, the medial prefrontal cortex, which activated when people saw food on the screen and thought about whether they wanted to eat it or not. That activity was positively moderated by the strength of the expectancy that the glass of water would decrease their hunger: the more you expect that the water will decrease your hunger, the more the mPFC activates when you see food on the screen.

It's an interesting brain region because it sits right between the ventromedial prefrontal cortex, which encodes the value, the food preference, and the dorsolateral prefrontal cortex. And it has been shown by past research to connect to the vmPFC when participants exert self-control, especially during food decision-making paradigms.

But another way to answer the question about the mechanism -- how the placebo intervention can produce these behavioral and neural effects -- is to use computational modeling to better understand preference formation. One approach is drift diffusion modeling. These drift diffusion models come from research on perception and have recently also been used to better understand preference formation. They assume that your preference for a yes or no food choice, for example, arises from a noisy accumulation of evidence.

And there are two types of evidence you accumulate in these decision-making paradigms: how tasty and how healthy the food is -- how much you like the taste, how much you consider the health. And this can influence the slope of your evidence accumulation, how rapidly you reach a threshold toward yes or no.

It could be that the placebo manipulation influences this slope. But the model lets us test several other hypotheses. It could be that the placebo intervention affected the threshold, which reflects how carefully you make the decision toward a yes or no choice. It could be your initial bias -- whether you were initially biased toward a yes or a no response. Or it could be the non-decision time, which reflects sensory and motor integration.
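To make these model components concrete, here is a toy drift diffusion simulation -- purely illustrative, with made-up parameter values, not the fitted model from the study -- in which the drift rate is a weighted sum of taste and health evidence, and the placebo suggestion could in principle act on those weights, the threshold, the starting bias, or the non-decision time.

```python
# Toy drift diffusion simulation of a yes/no food choice (illustrative only).
import random

def simulate_choice(taste, health, w_taste=1.0, w_health=0.3,
                    threshold=1.0, start_bias=0.0, non_decision_time=0.3,
                    dt=0.001, noise_sd=1.0):
    """Accumulate noisy evidence until the +/- threshold is crossed.

    taste, health: evidence values (e.g., z-scored ratings).
    w_taste, w_health: attention weights the placebo suggestion might shift.
    start_bias: initial offset toward 'yes' (>0) or 'no' (<0).
    Returns ('yes'|'no', reaction time in seconds).
    """
    drift = w_taste * taste + w_health * health
    evidence, t = start_bias, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + random.gauss(0, noise_sd) * dt ** 0.5
        t += dt
    return ("yes" if evidence > 0 else "no", round(t + non_decision_time, 3))

# "Hunger killer" suggestion: more weight on health, less on taste (assumed direction).
print(simulate_choice(taste=0.8, health=-0.5, w_taste=0.4, w_health=0.9))
```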

And the answer is that three parameters were influenced by the placebo manipulation: how much you integrated healthiness, how much you integrated tastiness, and your initial starting bias. So you paid more attention to healthiness when you believed you had drunk a hunger killer, and more to tastiness when you believed you had drunk a hunger enhancer. And similarly, participants were initially biased toward accepting food more when they believed they were on a hunger enhancer than on a hunger killer.

Interestingly, this shows that the decision-making process is biased by the placebo intervention, including how much you filter the information that is most relevant. When you are hungry, taste is very relevant for your choices. When you believe you are less hungry, you have more space -- you pay less attention to taste, and you can pay more attention to the healthiness of food.

And the evidence that this might be a filtering of expectation-relevant information comes from psychophysiological interaction analyses that look at brain connectivity with the vmPFC as our seed region: where in the brain does it connect when participants see food on a computer screen and have to think about whether they want to eat this food or not?

And what we observed is that it connects to the dlPFC, the dorsolateral prefrontal cortex -- a region of interest that we first localized, in a separate Stroop localizer task, to be sure it is actually a region that activates during interference resolution, basically when we have to filter the information that is most relevant to a task.

So the vmPFC connects more strongly to this dlPFC interference resolution region, and this is moderated, especially in the decreased hunger suggestion group, by how much participants weighed the healthiness against the tastiness of food.

To wrap this part up: we replicated findings from previous studies on appetitive placebo effects by showing that expectancies about the efficacy of a drink can affect hunger sensations, how participants form their food preferences and make food choices, and value encoding in the ventromedial prefrontal cortex.

But we also provided evidence for underlying neurocognitive mechanisms: the medial prefrontal cortex is moderated by the strength of the hunger expectation; food choice formation is biased, in the form of an attention-filtering mechanism, toward expectancy-congruent information -- taste for the increased hunger suggestion group, and healthiness for the decreased hunger suggestion group; and this is implemented by regions linked to interference resolution and to valuation and preference encoding.

And so why should we care? In the real world, it is not very relevant to provide people with deceptive information about hunger-influencing ingredients of drinks. But studies like this one provide insights into the cognitive mechanisms of beliefs about internal states, and how these beliefs can affect interoceptive sensations and also associated motivations, such as economic choices, for example.

And this can also give us insights into the synergy between drug experiences and outcome expectations, which could be harnessed -- translated, basically -- via motivational processes, and thereby maybe lead us to better understand susceptibility to active treatment.

And I'm going to elaborate on this in the next part of the talk, where I'm going a little bit far afield -- I won't be talking about or showing evidence for placebo effects per se. But before that, some background.

Links to these motivational processes have long been suggested to be part of placebo mechanisms as well. This is called the placebo-reward hypothesis, and it's based on findings in Parkinson's disease showing that when you give Parkinson's patients a placebo but tell them it's a dopaminergic drug, you can measure dopamine in the brain: the marker for dopamine, its binding potential, decreases, which is what you see here in these PET scan results.

And that suggests that the brain must have released endogenous dopamine. Dopamine is very important for expectations and for learning, basically learning from reward. And clinical benefit is the kind of reward that patients expect. So it is possible that when a patient expects the reward of clinical benefit, their brain releases dopamine in reward-related regions such as the vmPFC or the ventral striatum.

And we have shown in the past that the behavioral consequence of such an endogenous dopamine release under placebo could indeed be linked to reward learning. We know, for example, that Parkinson's patients have a deficit in learning from reward when they are off dopaminergic medication, but this normalizes when they are on active dopaminergic medication.

So we wondered: if, as the PET studies suggest, the brain releases dopamine under placebo, does this also have behavioral consequences for reward learning ability? And that is what you see here on the right side of the screen: Parkinson's patients tested on placebo show similar reward learning abilities as under active drug.

And this again was underpinned by an increased correlation of the ventromedial prefrontal cortex -- again, this hub of the brain valuation system -- with the learned reward value, which was stronger in the placebo and active drug conditions compared to the off-drug baseline condition.

And I now want to make a link to another type of disease where motivation is also deficient, which is depression. Depression is hypothesized to be maintained by this triad of very negative beliefs about the world, the future, and one's self, which is very insensitive to belief-disconfirming information, especially if the disconfirming information is positive, favorable. Cognitive neuroscience studies have shown this to be reflected in a lack of the good news/bad news bias, or optimism bias, in belief updating in depression. This good news/bad news bias is basically a bias healthy people have to weigh favorable information that contradicts initial negative beliefs more heavily than negative information.

And this is healthy because it keeps us from revising our beliefs toward the negative. It also involves a motivational process, because good news has motivational salience: it should be more motivating to update beliefs about the future, especially negative beliefs, when we receive information showing that our beliefs are way too negative. But depressed patients lack this good news/bad news bias. So we wondered what happens when patients respond to antidepressant treatments that give immediate sensory evidence of being on an antidepressant.

These new fast-acting antidepressants such as ketamine are treatments where patients know right away whether they got the drug, through the dissociative experiences. So could this affect the cognitive model of depression? That was the main question of the study. And then we wondered again what the computational mechanism is: is it linked, as in the previous studies, to reward learning mechanisms -- biased updating of beliefs? And is it linked to clinical antidepressant effects and also potentially to outcome expectations, which makes the link to placebo effects?

So patients performed a belief updating task three times: before receiving ketamine infusions, after the first infusion, and one week after the third infusion. At each testing time we measured depression with the Montgomery-Asberg Depression Rating Scale. In the belief updating task, patients were presented with different negative life events -- like, for example, getting a disease or, here, losing a wallet.

And they were asked to estimate their probability of experiencing this life event in the near future. Then they were presented with evidence about the frequency of this event in the general population, what we call the base rate. And then they had the possibility to update their belief knowing the base rate.

This is, for example, a good news trial, where participants initially overestimated the chance of losing a wallet and then learn it's much less frequent than they initially thought; they update by, for example, 15%. In a bad news trial, you initially underestimated your probability of experiencing the adverse life event. And if you have a good news/bad news bias, you're going to consider this information to a lesser degree than in a good news trial.

And that's exactly what happens in the healthy controls that you see on the leftmost part of the screen. I don't know whether you can see this well, but basically we have belief updating on the Y axis, and these are healthy controls age-matched to the patients. You can see updating after good news and updating after bad news. We tested the participants more than two times within a week, and you can see the bias; it decreases a little bit with more sequential testing in the healthy controls. But importantly, in the patients the bias is barely there before ketamine treatment.

But it becomes much stronger after ketamine treatment; it basically emerges. So patients become more optimistically biased after ketamine treatment. And this correlates with the MADRS scores: patients who improve more with treatment are also those who show a stronger good news/bad news bias after one week of treatment.

And we wondered again about the computational mechanisms. One way to get at this is using a Rescorla-Wagner-type reinforcement learning model, which basically assumes that updating is proportional to your surprise, which is called the estimation error.

That is the difference between the initial estimate and the base rate, and it is weighted by a learning rate. The important thing here is that the learning rate has two components, a scaling parameter and an asymmetry parameter. The asymmetry parameter basically weighs how much the learning rate differs after good news, so after positive estimation errors, compared to after negative estimation errors.
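
A minimal sketch of this asymmetric update rule is shown below. The exact parameterization used in the study is not spelled out in the talk, so splitting the learning rate into a scaling and an asymmetry term in this way, and the sign convention for good versus bad news, are illustrative assumptions.

```python
# Asymmetric Rescorla-Wagner-style belief update (illustrative sketch).

def update_belief(estimate: float, base_rate: float,
                  alpha_scale: float, alpha_asym: float) -> float:
    """Return the new probability estimate after seeing the base rate.

    estimation_error < 0 is good news (the risk is lower than thought),
    estimation_error > 0 is bad news (the risk is higher than thought).
    alpha_asym > 0 weights good news more, i.e., an optimism bias.
    """
    estimation_error = base_rate - estimate
    if estimation_error < 0:                      # good news
        learning_rate = alpha_scale * (1.0 + alpha_asym)
    else:                                         # bad news
        learning_rate = alpha_scale * (1.0 - alpha_asym)
    return estimate + learning_rate * estimation_error

# With an optimism bias (alpha_asym = 0.4), a same-sized error moves the
# estimate further after good news than after bad news.
print(update_belief(estimate=40, base_rate=25, alpha_scale=0.5, alpha_asym=0.4))  # 29.5
print(update_belief(estimate=10, base_rate=25, alpha_scale=0.5, alpha_asym=0.4))  # 14.5
```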

And what we can see is that in healthy controls, there is a higher learning rate for positive estimation errors and a lower one for negative estimation errors, translating this good news/bad news bias. It's basically an asymmetric learning mechanism. And in the patients, learning is non-asymmetric before ketamine treatment, and it becomes asymmetric, as reflected in the learning rates, after ketamine treatment.

So what we take from that is basically that ketamine induced an optimism bias. But an interesting question is what comes first. Is it the improvement in depression that we measured with the Montgomery-Asberg Depression Rating Scale, or is it the optimism bias that emerged and then triggered the improvement? Since it's a correlation, we don't know what comes first.

And an interesting aside that we put in the supplement was that in 16 patients -- a very low sample size -- the expectancy about getting better also correlated with the clinical improvement after ketamine treatment. We had two expectancy ratings here, one about the efficacy of ketamine and one about how intense patients expected their depression to be after ketamine treatment.

So that suggested that the clinical benefit seems to interact, in part or synergistically, with the drug experience and the optimism bias it generates. And to test this more, we continued data collection just on the expectancy ratings, and basically wondered how the clinical improvement after the first infusion links to the clinical improvement after the third infusion.

And we know from this that patients who improve after the first infusion are also those who improve after the third infusion. But is this mediated by their expectancy about the ketamine treatment? That's indeed what we found: the more patients expected to get better, the more they got better after one week of treatment, and expectancy mediated this link between the first drug experience and the later drug experiences. This suggests there might not be an additive effect, as other panel members already put forward today; it might be a synergistic link.

And one way to get at these synergies is, again, to use computational models. This idea came up yesterday as well: there could be self-fulfilling prophecies that contribute to treatment responsiveness and treatment adherence. These self-fulfilling prophecies are asymmetrically biased learning mechanisms that are more biased when you have positive initial treatment experiences, and then might contribute to how you adhere to the treatment in the long term and also how much you benefit from it in the long term. So it's both drug experience and expectancy.

And so this is unpublished work where we played with this idea using a reinforcement learning model. This is also very much inspired by what we know from placebo analgesia: Tor and Leonie Koban have a paper showing that self-fulfilling prophecies can be captured with biased reinforcement learning models. The idea of these models is that there are two learning rates, alpha plus and alpha minus, and these learning rates weigh differently into the updating of your expectation after a drug experience.

LIANE SCHMIDT: Okay, yeah, I'm almost done.

So the learning rates weigh these drug experiences and expectations differently as a function of whether the initial experience was congruent with your expectations, so a positive experience versus a negative one. And here are some simulations of this model. I'm showing basically that your expectation gets updated more when you are positively biased than when you are negatively biased. And these are some predictions of the model concerning depression improvement.
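
Below is a minimal simulation sketch of this kind of two-learning-rate expectation update. Since the work is unpublished, the functional form, the parameter values, and the variable names are all assumptions made for illustration, not the authors' model code.

```python
# Self-fulfilling-prophecy-style expectation updating (illustrative sketch).

def simulate_expectation(drug_experiences, expectation0=0.3,
                         alpha_plus=0.6, alpha_minus=0.2):
    """Track the expected treatment benefit across repeated drug experiences.

    drug_experiences: experienced benefit on each occasion, scaled to [0, 1].
    alpha_plus weights better-than-expected experiences, alpha_minus weights
    worse-than-expected ones; alpha_plus > alpha_minus gives the positive bias.
    """
    expectation = expectation0
    trajectory = [expectation]
    for experienced in drug_experiences:
        prediction_error = experienced - expectation
        alpha = alpha_plus if prediction_error > 0 else alpha_minus
        expectation = expectation + alpha * prediction_error
        trajectory.append(expectation)
    return trajectory

# A strong first experience followed by average ones keeps the expectation
# elevated, whereas with equal learning rates it would settle at the
# experienced value.
print(simulate_expectation([0.8, 0.5, 0.5, 0.5]))
```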

To wrap this up, the conclusion is that there seems to be asymmetric learning that can capture self-fulfilling prophecies and could be a mechanism that translates expectations and drug experiences, potentially across domains, from placebo hypoalgesia to antidepressant treatment responsiveness. The open question is obviously to challenge the predictions of these models more with empirical data, in pain but also in mood disorders, as Marta does and as we also do currently at Cypitria, where we test the mechanisms of belief updating biases in depression with fMRI and these mathematical models.

And this has a direct clinical implication, because it could help us better understand how these fast-acting antidepressants work and what makes patients adhere to them and respond to them. Thank you for your attention. We are the control-interoception-attention team. And thank you to all the funders.

CRISTINA CUSIN: Fantastic presentation. Thank you so much. Without further ado, let's move on to the next speaker. Dr. Greg Corder.

GREG CORDER: Did that work? Is it showing?

GREG CORDER: Awesome, all right. One second. Let me just move this other screen. Perfect. All right.

Hi, everyone. My name is Greg Corder. I'm an Assistant Professor at the University of Pennsylvania. I guess I get to be the final scientific speaker in this session over what has been an amazing two-day event. So thank you to the organizers for also having me get the honor of representing the entire field of preclinical placebo research as well.

And so I'm going to give a bit of an overview of work from some of my friends and colleagues over the last few years, and then tell you a bit about how we're leveraging a lot of current neuroscience technologies to really identify the cell types and circuits, building from the human fMRI literature that has really honed in on these key circuits for expectations and belief systems, as well as endogenous antinociceptive systems, in particular opioid cell types.

So the work I'm going to show from my lab has really been driven by these two amazing scientists. Dr. Blake Kimmey, an amazing post-doc in the lab. As well as Lindsay Ejoh, who recently last week just received her D-SPAN F99/K00 on placebo circuitry. And we think this might be one of the first NIH-funded animal projects on placebo. So congratulations, Lindsay, if you are listening.

Okay. So why use animals, right? We've heard an amazing set of stories really nailing down the specific circuits in humans, leveraging MRI, fMRI, EEG and PET imaging, that give us this really nice roadmap and idea of how beliefs in analgesia might be encoded within different brain circuits and how those might change over time with different types of patient modeling or updating of different experiences.

And we love this literature. In the lab we read it in as much depth as we can, and we use it as a roadmap in our animal studies, because we can take advantage of animal models that really allow us to dive deep into these very specific circuits using techniques like those on the screen here, from RNA sequencing to electrophysiology, really showing that those functional connections measured with fMRI truly exist as axons projecting from one region to another.

And then we can manipulate those connections and projections using things like optogenetics and chemogenetics that allow us really tight temporal coupling to turn cells on and off. And we can see the effects of that intervention in real time on animal behavior. And that's really the tricky part is we don't get to ask the animals do you feel pain? Do you feel less pain? It's hard to give verbal suggestions to animals.

And so we have to rely on a lot of different tricks and really get into the heads of what it's like to be a small prey animal existing in a world with a lot of large monster human beings around it. So we really have to be very careful about how we design our experiments. And it's hard. Placebo in animals is not an easy subject to get into. And this is reflected in the fact that, as far as we can tell, there are only 24 published studies to date on placebo analgesia in animal models.

However, I think this is an excellent opportunity now to really take advantage of what has been a golden age of neuroscience technologies exploding in the last 10-15 years to revisit a lot of these open questions: when are opioids released, are they released at all? Can animals have expectations? Can they have something like a belief structure, and violations of those expectations that lead to different types of prediction errors that can be encoded in different neural circuits? So we have a chance to really do that.

But I think the most critical first thing is how do we begin to behaviorally model placebo in these preclinical models. So I want to touch on a couple of things from some of my colleagues. So on the left here, this is a graph that has been shown on several different presentations over the past two days from Benedetti using these tourniquet pain models where you can provide pharmacological conditioning with an analgesic drug like morphine to increase this pain tolerance.

And then if it is covertly switched out for saline, you can see that there is an elevation in that pain tolerance, reflective of something like a placebo analgesic response overall. And this is sensitive to naloxone, the mu opioid receptor antagonist, suggesting endogenous opioids are indeed involved in this type of placebo-like response.

And my colleague, Dr. Matt Banghart, at UCSD has basically done a fantastic job of recapitulating this exact model in mice where you can basically use morphine and other analgesics to condition them. And so if I just kind of dive in a little bit into Matt's model here.

You can have a mouse that will sit on a noxious hot plate. You know, it's an environment that's unpleasant. You can have contextual cues like different types of patterns on the wall. And you can test the pain behavior responses like how much does the animal flick and flinch and lick and bite and protect itself to the noxious hot plate.

And then you can switch the contextual cues, provide an analgesic drug like morphine, and see reductions in those pain behaviors. Then, just as in the Benedetti studies, you switch out the morphine for saline, but you keep the contextual cues. So the animal has effectively formed a belief: when I am in this environment, when I'm in this doctor's office, I'm going to receive something that is going to reduce my perception of pain.

And, indeed, Matt sees a quite robust effect here, where this sort of placebo response shows an elevated paw withdrawal latency, indicating that there is endogenous antinociception occurring with this protocol. And it happens, again, pretty robustly. Most of the animals going through this conditioning protocol demonstrate this type of antinociceptive behavioral response. This is a perfect example of how we can leverage what we learn from human studies into rodent studies for acute pain.

And this is also really great for probing the effects of placebo in chronic neuropathic pain models. So here, this is Dr. Damien Boorman, who was with Professor Kevin Keay in Australia and is now with Lauren Martin in Toronto.

And here Damien really amped up the contextual cues. So this is an animal that has had an injury to the sciatic nerve, this chronic constriction injury, so now the animal is experiencing something like a tonic, chronic neuropathic pain state. And then, once you let the pain develop, you can have the animals enter this sort of placebo pharmacological conditioning paradigm, where animals go onto these thermal plates, either hot or cool, in rooms that have a large number of visual, tactile, as well as odorant cues, and these are paired with either morphine or a control saline.

Again, the morphine is switched for saline on that last day. And what Damien has observed is that in a subset of the animals, about 30%, you can have these responder populations that show decreased pain behavior which we interpret as something like analgesia overall. So overall you can use these types of pharmacological conditionings for both acute and chronic pain.

So now what we're going to do in our lab is a bit different. And I'm really curious to hear the field's thoughts because all -- everything I'm about to show is completely unpublished. Here we're going to use an experimenter-free, drug-free paradigm of instrumental conditioning to instill something like a placebo effect.

And so this is what Blake and Lindsay have been working on since about 2020. And this is our setup in one of our behavior rooms here. Our apparatus is this tiny little device down here. And everything else are all the computers and optogenetics and calcium imaging techniques that we use to record the activity of what's going on inside the mouse's brain.

But simply, this is just two hot plates that we can control the temperature of. And we allow a mouse to freely explore this apparatus. And we can with a series of cameras and tracking devices plot the place preference of an animal within the apparatus. And we can also record with high speed videography these highly conserved sort of protective recuperative pain-like behaviors that we think are indicative of the negative affect of pain.

So let me walk you through our little model here real quick. Okay. So we call this the placebo analgesia conditioning assay or PAC assay. So here is our two-plate apparatus here. So plate number one, plate number two. And the animal can always explore whichever plate it wants. It's never restricted to one side. And so we have a habituation day, let the animal familiarize itself. Like oh, this is a nice office, I don't know what's about to happen.

And then we have a pretest. And in this pretest, importantly, we make both of these plates, both environments a noxious 45-degree centigrade. So this will allow the animal to form an initial expectation that the entire environment is noxious and it's going to hurt. So both sides are noxious. Then for our conditioning, this is where we actually make one side of the chamber non-noxious. So it's just room temperature. But we keep one side noxious. So now there is a new expectation for the animal that it learns that it can instrumentally move its body from one side to the other side to avoid and escape feeling pain.

And so we'll do this over three days, twice per day. And then on our post test, or placebo, day we make both environments hot again. So now we'll start the animal off over here, and the animals get to freely choose: do they want to go to the side that they expect should be non-noxious? Or what happens? So what happens?
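
For reference, here is a schematic of the PAC assay timeline as just described, written as a small Python structure. The habituation plate temperatures and the exact room temperature value are assumptions; only the 45 degrees Celsius pretest/post test temperature and the three-days, twice-per-day conditioning schedule come from the talk.

```python
# Schematic timeline of the placebo analgesia conditioning (PAC) assay
# as described in the talk; values marked "assumed" are illustrative.

NOXIOUS_C = 45.0   # both-plates-hot temperature from the talk
ROOM_C = 23.0      # "room temperature" during conditioning (exact value assumed)

pac_assay = [
    {"phase": "habituation",  "sessions": 1, "plate_1": ROOM_C,    "plate_2": ROOM_C},     # temps assumed
    {"phase": "pretest",      "sessions": 1, "plate_1": NOXIOUS_C, "plate_2": NOXIOUS_C},  # both sides noxious
    {"phase": "conditioning", "sessions": 6, "plate_1": NOXIOUS_C, "plate_2": ROOM_C},     # 3 days x 2 per day
    {"phase": "post_test",    "sessions": 1, "plate_1": NOXIOUS_C, "plate_2": NOXIOUS_C},  # placebo test day
]

for phase in pac_assay:
    print(f"{phase['phase']:>12}: {phase['sessions']} session(s), "
          f"plate 1 at {phase['plate_1']} C, plate 2 at {phase['plate_2']} C")
```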

Actually, if you just look at the place preference for this, over the course of conditioning we can see that the animals will, unsurprisingly, choose the environment that is non-noxious. And they spend 100% of their time there basically. But when we flip the plates or flip the conditions such that everything is noxious on the post test day, the animals will still spend a significant amount of time on the expected analgesia side. So I'm going to show you some videos here now and you are all going to become mouse pain behavior experts by the end of this.

So what I'm going to show you are both side by side examples of conditioned and unconditioned animals. And try to follow along with me as you can see what the effect looks like. So on this post test day. Oh, gosh, let's see if this is going to -- here we go. All right. So on the top we have the control animal running back and forth. The bottom is our conditioned animal.

And you'll notice we start the animal over here and it's going to go to the side that it expects it to not hurt. Notice the posture of the animals. This animal is sitting very calm. It's putting its entire body down on the hot plate. This animal, posture up, tail up. It's running around a little bit frantically. You'll notice it start to lick and bite and shake its paws. This animal down here might have a couple of flinches so it's letting you know that some nociception is getting into the nervous system overall.

But over the course of this three-minute test, the animals will rightly choose to spend more time over here. And if we start to quantify these types of behaviors that the animals are doing in both conditions, what we find is that there is actually a pretty significant reduction in these nociceptive behaviors. But it's not across the entire duration of this placebo day or post test day.

So this trial is three minutes long. And what we see is that this antinociceptive and preference choice only exists for about the first 90 seconds of this assay. So this is when the video I just showed, the animal goes to the placebo side, it spends a lot of its time there, does not seem to be displaying pain-like behaviors.

And then around 90 seconds, the animal -- it's like -- it's almost like the belief or the expectation breaks. And at some point, the animal realizes oh, no, this is actually quite hot. It starts to then run around and starts to show some of the more typical nociceptive-like behaviors. And we really like this design because this is really, really amenable to doing different types of calcium imaging, electrophysiology, optogenetics because now we have a really tight timeline that we can observe the changing of neural dynamics at speeds that we can correlate with some type of behavior.

Okay. So what are those circuits that we're interested in overall that could be related to this form of placebo? Again, we like to use the human findings as a wonderful roadmap. And Tor has demonstrated, and many other people have demonstrated this interconnected distributed network involving prefrontal cortex, nucleus accumbens, insula, thalamus, as well as the periaqueductal gray.

And so today I'm going to talk about just the periaqueductal gray. Because there is evidence that there is also release of endogenous opioids within this system here. And so we tend to think that the placebo process and the encoding, whatever that is, the placebo itself is likely not encoded in the PAG. The PAG is kind of the end of the road. It's the thing that gets turned on during placebo and we think is driving the antinociceptive or analgesic effects of the placebo itself.

So the PAG, for anyone who's not as familiar, we like it because it's conserved across species. We look at in a mouse. There's one in a human. So potentially it's really good for translational studies as well. It has a very storied past where it's been demonstrated that the PAG subarchitecture has these beautiful anterior to posterior columns that if you electrically stimulate different parts of PAG, you can produce active versus passive coping mechanisms as well as analgesia that's dependent on opioids as well as endocannabinoids.

And then the PAG is highly connected, both to ascending nociception from the spinal cord and to descending control systems from the prefrontal cortex as well as the amygdala. So, with regard to opioid analgesia: if you microinfuse morphine into the posterior part of the PAG, you can produce an analgesic effect in rodents that extends across the entire body. So it's super robust analgesia from this very specific part of the PAG.

If you look at the PAG back there and you use some of these techniques to look for histological indications that the mu opioid receptor is there, it is indeed there. There is a large amount of the mu opioid receptor, OPRM1, and it's largely on glutamatergic neurons, so the excitatory cells, not the inhibitory cells, though it is on some of those too.

And as far as E-phys data go as well, we can see that the mu opioid receptor is there. With DAMGO, a mu opioid agonist, we can see activation of inhibitory GIRK currents in those cells. So the system is wired up for placebo analgesia to happen in that location. Okay. So how are we actually going to start to tease this out? By finding these cells, seeing where they go throughout the brain, and then understanding their dynamics during placebo analgesia.

So last year we teamed up with Karl Deisseroth's lab at Stanford to develop a new toolkit that leverages the genetics of the opioid system, in particular the promoter for the mu opioid receptor. And we were able to take the genetic sequence for this promoter and package it into adeno-associated viruses along with a range of different tools that allow us to turn cells on or off or record their activity. And so we can use this mu opioid receptor promoter to gain genetic access throughout the brain or the nervous system to wherever the mu opioid receptors are. And we can do so with high fidelity.

This is just an example of our mu opioid virus in the central amygdala, which is a highly mu opioid specific area. Blake used this tool, using the promoter to drive a range of different transgenes within the periaqueductal gray. And right here, this is GCaMP, a calcium indicator that allows us to assess in real time the calcium activity of PAG mu opioid cells.

And so what Blake did was he took a mouse, and he recorded the nociceptive responses within that cell type and found that the mu opioid cell types are actually nociceptive. They respond to pain, and they do so with increasing activity to stronger and stronger and more salient and intense noxious stimuli. So these cells are actually nociceptive.

And if we look at a ramping hot plate, we can see that those same mu opioid cell types in the PAG increase the activity as this temperature on this hot plate increases. Those cells can decrease that activity if we infuse morphine.

Unsurprisingly, they express the mu opioid receptor and they're indeed sensitive to morphine. If we give naltrexone to block the mu opioid receptors, we can see greater activity to the noxious stimuli, suggesting that there could be an opioid tone or some type of endogenous opioid system that's keeping this system in check, that's repressing its activity. So when we block it, we actually enhance that activity. And this is going to be really important here: the activity of these mu opioid PAG cells correlates with affective measures of pain.

When animals are licking, shaking, biting, when it wants to escape away from noxious stimuli, that's when we see activity within those cells. So this is just correlating different types of behavior when we see peak amplitudes within those cell types. So let me skip that real quick.

Okay. So we have this ability to look and peek into the activity of mu opioid cell types. Let's go back to that placebo assay, our PAC assay I mentioned before. If we record from the PAG on that post test day in an animal that has not undergone conditioning, when the plates are super hot, we see a lot of nociceptive activity in these cells here. They're bouncing up and down.

But if we look at the activity of nociception in an animal undergoing placebo, what we see is that there's a suppression of neural activity within that first 90 seconds. And this actually does seem to extinguish within the latter 90 seconds. So it kind of tracks along with the behavior of those animals: when they're showing antinociceptive behavior, that's when those cells are quiet.

When the pain behavior comes back, that's when those cell types are ramping up. But what about the opioids themselves? The mu opioid receptor cell types are decreasing their activity, but what about the opioids? The way to do this in animals has been to use microdialysis, a fantastic technique, but it's got some limitations. This is a way of sampling peptides in real time and then using liquid chromatography to tell whether the peptide was present. However, the sampling rate is about 10 minutes.

And in terms of brain processing, 10 minutes might as well be an eternity if we're talking about milliseconds here. But we want to know what these cells here, these red dots, are doing. These are the enkephalinergic cells in the PAG. We needed a revolution in technologies. One of those came several years ago from Dr. Lin Tian, who developed some of the first sensors for dopamine. Some of you may have heard of it; it's called dLight.

This is a version of dLight, but it's actually an enkephalin opioid sensor. What Lin did to genetically engineer this was to take the delta opioid receptor, which is highly selective for enkephalin, and link it with this GFP molecule here, such that when enkephalin binds to the sensor it will fluoresce.

We can capture that fluorescence with microscopes that we implant over the PAG, and we can see when enkephalin is being released with subsecond resolution. And so what we did with that was to ask whether enkephalin is indeed being released onto those mu opioid receptor expressing, pain encoding neurons in the PAG. What I showed you before is that those PAG neurons ramp up their activity as nociception increases, with a mouse standing on a hot plate. We see nociception ramp up. What do you all think happened with the opioids?

It wasn't what we expected. It actually drops. So what we can tell is that there's a basal opioid tone within the PAG, but that as nociception increases, acute nociception, we see a decrease, a suppression, of opioid peptide release.

We think this has to do with stuff that Tor has published on previously that the PAG is more likely involved in updating prediction errors. And this acute pain phenomenon we think is reflective of the need to experience pain to update your priors about feeling pain and to bias the selection of the appropriate behaviors, like affect related things to avoid pain. However, what happens in our placebo assay?

We actually see the opposite. So if we condition animals to expect pain relief within that PAC assay, we actually see an increase in the enkephalin sensor signal, suggesting that there is an increase in enkephalin release post conditioning. So there can be differential control of the opioid system within this brain region. So this next part is the fun thing you can do with animals: what if we just bypassed the need to do the placebo assay?

If we know that we just need to cause release of enkephalin within the PAG to produce pain relief, we could just do that directly with optogenetics. So we used an animal that allows us to put a red light sensitive opsin protein into the enkephalinergic interneurons in the PAG.

When we shine red light on top of these cells, they turn on and they start to release their neurotransmitters. These are GABAergic and enkephalinergic, so they're dumping out GABA and now also dumping out enkephalin into the PAG. We can visualize that using the enkephalin sensor from Lin Tian.

So here is an example of optogenetically released enkephalin within the PAG over 10 minutes. The weird thing that we still don't fully understand is that this signal continues after the optogenetic stimulation. So can we harness the placebo effect in mice? At least it seems we can. If we turn on these cells strongly, cause them to release enkephalin, and put animals back on these ramping hot plate tests, we don't see any changes in the latency to detect pain, but we see specific ablation or reduction of these affective-motivational pain-like behaviors overall.

MODERATOR: You have one minute remaining.

GREG CORDER: Cool. In this last minute: people are skeptical. Can we actually test these higher order cognitive processes in animals? For anyone who is not a preclinical behavioral neuroscientist, you might not be aware that there's an absolute revolution happening in behavior with the use of deep learning models that can precisely and accurately quantify animal behavior. So this is an example of a deep learning tracking system.

We've built the Light Automated Pain Evaluator, which can capture a range of different pain-related behaviors, fully automated without human intervention whatsoever, and which can be paired with brain recording techniques like calcium imaging. That allows us to fit a lot of different computational models to understand what the activity of single neurons might be doing, let's say, in the cingulate cortex that might be driving that placebo response.

We can now really start to tie in, at single cell resolution, the activity of prefrontal cortex driving these placebo effects and see if that alters antinociceptive behavior endogenously. I'll stop there and thank all the amazing people, Blake, Greg, and Lindsay, who did this work, as well as all of our funders and the numerous collaborators who have helped us do this. So thank you.

CRISTINA CUSIN: Terrific talk. Thank you so much. We're blown away. I'll leave the discussion to our two moderators. They're going to gather some of the questions from the chat and some of their own questions for all the presenters from today and from yesterday as well.

TED KAPTCHUK: Matt, you start gathering questions. I got permission to say a few moments of comments. I wanted to say this is fantastic. I actually learned an amazing amount of things. The amount of light that was brought forward about what we know about placebos and how we can possibly control placebo effects, how we can possibly harness placebo effects.

There was so much light and new information. What I want to do in my four minutes of comments is look to the future. What I mean by that is -- I want to give my comments and you can take them or leave them but I've got a few minutes.

What I want to say is we got the light, but we didn't put it together. There's no way we could have; we needed to be more in the same room. How does this fit in with your model? It's hard to do. What I mean by putting things together is, I'll give you an example, in terms of how we control placebo effects in clinical trials. I not infrequently get asked by the pharmaceutical industry, when they look at their placebo data: we just blew it, placebo was as good as, or almost as good as, the drug.

And the first thing I say is I want to talk to experts in that disease. I want to know the natural history. I want to know how you made your entry criteria so I can understand regression to the mean.

I want to know what's the relationship of the objective markers and subjective markers so I can begin to think about how much is the placebo response. I always tell them I don't know. If I knew how to reduce -- increase the difference between drug and placebo I'd be a rich man, I wouldn't be an academic. What I usually wind up saying is, get a new drug. And they pay me pretty well for that. And the reason is that they don't know anything about natural history. We're trying to harness something, and I just want to say -- I've done a lot of natural history controls, and that's more interesting than the rest of the experiments because they're unbelievable, the amount of improvement people show entering the trial without any treatment.

I just want to say we need to look at other things besides the placebo effect if we want to control the placebo response in a randomized controlled trial. I want to say that going forward. But I also want to say that we need a little bit of darkness. We need to be able to say, you know, I disagree with you, I think this other data says otherwise. One of the things I've learned doing placebo research is that there's a paper that contradicts your paper real quickly, and there's lots of contradictory information. It's very easy to say you're wrong, and we don't say it enough.

I want to take one example -- please forgive me -- I know it could be said of my research too, Ted, you're wrong. But I just want to say something. Consistently over the two days of talks, everyone talks about the increase of the placebo response over time. No one refers to the article published in 2022 in BMJ; the first author was Mark Stone and the senior author was Irving Kirsch. They analyzed all the FDA data -- Mark Stone is in the Division of Psychiatry at CDER at the FDA -- from placebo-controlled trials in major depressive disorder. They had over 230 trials, way more than 70,000 patients, and they analyzed the trend over time, from 1979 to the publication. There was no increase in the placebo effect.

Are they right or are other people right? Nothing is one hundred percent clear right now and we need to be able to contradict each other when we get together personally and say, I don't think that's right, maybe that's right. I think that would help us. And the last thing I want to say is that some things were missing from the conference that we need to include in the future. We need to have ethics. Placebo is about ethics. If you're a placebo researcher running placebo-controlled trials, that's an important question:

What are we talking about in terms of compromising ethics? There was no discussion of that -- we didn't have time -- but in the future, let's do that.

And the last thing I would say is, we need to ask patients what their experience is. I've got to say I've been around for a long time. But the first time I started asking patients what their experiences were, they were in double blind placebo or open label placebo trials; I did it way after they finished the trial, the trial was over, and I actually took notes and went back and talked to people. They told me things I didn't even know about. And we need to have that in conferences. What I want to say, along those lines, is I feel so much healthier because I'm an older person and this younger crowd here is significantly younger than me.

Maybe Matt and I are the same age, I don't know, but I think this is really one of the best conferences I ever went to. It was real clear data. We need to do lots of other things in the future. So with that, Matt, feed me some questions.

MATTHEW RUDORFER: Okay. Thanks. I didn't realize you were also 35. But okay. [LAUGHTER].

MATTHEW RUDORFER: I'll start off with a question of mine. The recent emergence of intravenous ketamine for resistant depression has introduced an interesting methodologic approach that we have not seen in a long time, and that is the active placebo. Where the early trials just used saline, more recently we have seen the benzodiazepine midazolam, which, while not really mimicking the full dissociative effect that many people get from ketamine, the idea is for people to feel something, some kind of buzz, so that they might believe that they're on some active compound and not just saline. And I wonder if the panel has any thoughts about the merits of using an active placebo and whether that is something the field should be looking into more?

TED KAPTCHUK: I'm going to say something. Irving Kirsch published a meta-analysis of studies that used atropine as a control in depression studies. He felt that it made it difficult to detect a placebo-drug difference. But another meta-analysis said that was not true. That was common in the '80s; people started thinking about that. But I have no idea how to answer your question.

MICHAEL DETKE: I think that's a great question. And I think in the presentations yesterday about devices, Dr. Lisanby was talking about the ideal sham. And I think it's very similar: the ideal active placebo would have none of the efficacy of the drug in question, but would have exactly the same side effects and all other features. Of course that's attractive, but of course we would probably never have a drug that's exactly like that. I think midazolam was a great thing to try with ketamine. It's still not exactly the same. But I'd also add that it's not black and white. It's not like we need to do this with ketamine and ignore it for all of our other drugs. All of our drugs have side effects.

Arguably, if you look at really big chunks, like classes of relatively modern antidepressants, antipsychotics and psychostimulants, those are in order of bigger effect sizes in clinical trials, psychostimulants versus antipsychotics versus -- and they're also roughly in the order, I would argue, of unblinding, of functional unblinding. In terms of magnitude, Zyprexa will make you hungry. And also speed of onset of some of the adverse effects: stimulants and some of the Type II, the second generation and beyond, antipsychotics have pretty noticeable side effects for many subjects, and relatively rapidly. So I think those are all important features to consider.

CRISTINA CUSIN: Dr. Schmidt?

LIANE SCHMIDT: I think using midazolam could give some sensory sensations, so the patients actually can say there's some immediate effect on the body. But this actually raises the question of whether the dissociations we observe in some patients during ketamine infusions play a role in the antidepressant response. It's still an open question, so I don't have the answer to it. And midazolam doesn't really induce dissociations. I don't know, maybe you can isolate the dissociations you get on ketamine. But patients might also be educated to expect dissociative experiences, and when they don't have them, they make the midazolam experience something negative. So yeah, self-fulfilling prophecies might come into play.

CRISTINA CUSIN: I want to add for five seconds, because I ran a large ketamine clinic. We know very little about the role of placebo in maintaining an antidepressant response, while the dissociation often wears off over time; it's completely separate from the antidepressant effect. We don't have long term placebo studies. The studies are extremely short and we study the acute effect. We don't know how to sustain or maintain a response, or what the role of the placebo effect is in long term treatments. So that's another field that is really open to investigation. Dr. Rief.

WINFRIED RIEF: Following up on the issue of active placebos. I just want to mention that we did a study comparing active placebos to passive placebos and showing that active placebos are really more powerful. And I think the really disappointing part of this news is that it questions the blinding of our typical RCTs comparing antidepressants versus placebos, because many patients who are in the active group, the drug group, perceive these onset effects, and this will further boost the placebo mechanisms in the drug group that do not exist in the passive placebo group. This is a challenge that further questions the validity of our typical RCTs.

CRISTINA CUSIN: Marta.

MARTA PECINA: Just a quick follow up to what Cristina was saying, too: we need to clarify whether we want to find an active control for the dissociative effects or for the antidepressive effects. I think the approach will be very different. And this applies to ketamine but also to psychedelics, because we're having this discussion as well. So when thinking about how to control or how to blind, these treatments are very complicated; they have multiple effects. We just need to have the discussion of what we are trying to blind, because the mechanism of action of the blinding drug will be very different.

TED KAPTCHUK: Can I say something about blinding? Hróbjartsson, who is the author of the 1970 -- no -- 1993 New England Journal paper saying that the placebo effect is a myth.

In 2022, he published in BMJ the largest -- he called it a mega meta-analysis -- on blinding. He took 144 randomized controlled trials that included nonblinded evidence on the drug versus blinded evidence of the drug. I'm not going to tell you the conclusion because it's unbelievable. But you should read it because it really influences -- it would influence what we think about blinding. That study was just recently replicated on a different set of patients with procedures, in JAMA Surgery, three months ago. And blinding, like placebo, is more complicated than we think. That's what I wanted to say.

MATTHEW RUDORFER: Another clinical factor that's come up during our discussion has been the relationship of the patient to the provider that we saw data showing that a warm relationship seemed to enhance therapeutic response, I believe, to most interventions. And I wonder what the panel thinks about the rise on the one hand of shortened clinical visits now that, for example, antidepressants are mostly given by busy primary care physicians and not specialists and the so called med check is a really, kind of, quickie visit, and especially since the pandemic, the rise of telehealth where a person might not ever even meet their provider in person, and is it possible we're on our way to where a clinical trial could involve, say, mailing medication every week to a patient, having them do their weekly ratings online and eliminating a provider altogether and just looking at the pharmacologic effect?

I mean, that probably isn't how we want to actually treat people clinically, but in terms of research, say, early phase efficacy, is there merit to that kind of approach?

LUANA COLLOCA: I'll comment on this, Dr. Rudorfer. We're very interested to see how the telemedicine or virtual reality can affect placebo effects, and we're modeling in the lab placebo effects induced via, you know, in person interaction.

There's an avatar in virtual reality. And actually we found placebo effects in both settings. But when we look at empathy, the avatar doesn't elicit any empathy in the relationship; we truly need the in-person connection to have empathy. So that suggests that some outcomes are affected by having in-person versus telemedicine or remote interactions, but the placebo effects persist in both settings. Empathy is modulated differently, and, interestingly, in our data empathy mediated placebo effects only in the in-person interactions. So there is still value in telemedicine, through effects that bypass empathy completely.

MATTHEW RUDORFER: Dr. Hall.

KATHRYN HALL: Several of the large studies, like the Women's Health Study, the Physicians' Health Study and, more recently, VITAL, did exactly that, where they mailed these pill packs. And the population, obviously, is clinicians, so they are very well trained and well behaved. And they follow them for years, but there's very little contact with the providers, and you still have these giant -- I don't know if you can call them placebo effects -- but certainly in many of these trials the drugs they're studying have not proven to be more effective than placebo.

MATTHEW RUDORFER: Dr. Atlas.

LAUREN ATLAS: I wanted to chime in briefly on this important question. I think the data presented yesterday on first impressions of providers is relevant here, because it suggests that even when we use things like Zocdoc to select physicians and we have headshots, we're really making these decisions about who to see based on these kinds of first impressions and facial features, and having the actual interactions with providers is critical for getting beyond the kind of factor that may drive selection. So I think if we have situations where there's a reduced chance to interact, first of all, people are bringing expectations to the table based on what they know about the provider, and then you don't really have the chance to build on that without the actual therapeutic alliance. That's why I think, even though our study was done in an artificial setting, it really does show how we make these choices when there are bios and photos for physicians available for patients to select from. I think there's a really important expectation being brought to the table before the treatment even occurs.

MATTHEW RUDORFER: Thanks. Dr. Lisanby.

SARAH “HOLLY” LISANBY: Thanks for raising this great question, Matt. I have a little bit of a different take on it. Equity in access to mental health care is a challenge, and the more we can leverage technology to provide and extend the reach of mental health care, the better. And so telemedicine and telepsychiatry -- we've been thrust into this era by the pandemic, but it existed before the pandemic as well. And it's not just about telepsychotherapy or teleprescribing and remote monitoring of pharmacotherapy; digital remote neuromodulation is also a thing now. There are neuromodulation interventions that can be done at home that are being studied, and so there have been trials of transcranial direct current stimulation at home with remote monitoring. There are challenges in those studies differentiating between active and sham. But I think you're right that we may have to rethink how we control remote studies when the intensity of the clinician contact is very different. But I do think we should explore these technologies so that we can extend the reach and extend access to research and to care for people who are not able to come into the research lab setting.

TED KAPTCHUK: May I add something on this? It's also criticizing myself. In 2008, I did this very nice study showing you could increase the doctor/patient relationship. And as you increase it, the placebo effect got bigger and bigger, like a dose response. A team in Korea that I worked with replicated that. I just published that replication.

The replication came out with the exact opposite results: the less doctor/patient relationship, the less intrusive, the less empathy, the better the effects. We're dealing with very complicated, culturally constructed issues, and I just want to put it out there: the sand is soft. I'm really glad that somebody contradicted a major study that I did.

LUANA COLLOCA: Exactly. The cultural context is so critical. What we observe in one context, in one country, or even within the same in-group or out-group, can be completely different in Japan, China or somewhere else, or the Americas, or South Africa. So we need larger studies and more cross-country collaborations.

MATTHEW RUDORFER: Dr. Schmidt.

LIANE SCHMIDT: I just wanted to raise a point; it's more of a comment. There's also very interesting research going on into interactions between humans and robots, and usually humans treat robots very badly. Here we focus on very human traits, like empathy and competence. But when it comes to artificial intelligence, for example, when we have to interact with algorithms, all these social interactions might turn out completely differently and have different effects on placebo effects. Just a thought.

MATTHEW RUDORFER: Dr. Rief.

WINFRIED RIEF: Yesterday, I expressed a belief in showing more warmth and competence, but I'll modify it a little bit today, because I think the real truth became quite visible today, and that is that there is an interaction between these nonspecific placebo effects and the drug effect. In many cases, at least. We don't know whether there are exceptions to this rule, but in many cases we have an interaction. And to learn about the interaction, we need study designs that modulate drug intake versus placebo intake, but that also modulate the placebo mechanisms, the expectation mechanisms, the context of the treatment. Only if we have these 2 by 2 designs, modulating drug intake and modulating context and psychological factors, do we learn about the interaction. You cannot learn about the interaction if you modulate only one factor.

And, therefore, I think, as Luana and others have said, an interaction can be quite powerful and effective in one context but maybe even misleading in another context. I think this is proven; we have to learn more about that. And all the studies that have been shown, from basic science to application, suggesting that there could be an interaction, they're all pointing in this direction and to the necessity that we use more complex designs to learn about the interaction.

MATTHEW RUDORFER: Yes. And the rodent studies we've seen, I think, have a powerful message for us just in terms of being able to control a lot of variables that are just totally beyond our control in our usual human studies. It always seemed to me, for example, if you're doing just an antidepressant versus placebo trial in patients, well, for some people going into the clinic once a week to get ratings, that might be the only day of the week that they get up and take a shower, get dressed, have somebody ask them how they're doing, have some human interaction. And so showing up for your Hamilton rating could be a therapeutic intervention that, of course, we usually don't account for in the pharmacotherapy trial. And the number of variables really can escalate in a hurry when we look at our trials closely.

TED KAPTCHUK: Tor wants to say something.

TOR WAGER: Thanks, Ted.

I wanted to add on to the interaction issue, which came up yesterday and which Winfried and others just commented on, because it seems like it's really a crux issue. If the psychosocial or expectation effects and other things like that are entangled with specific effects, so that one can influence the other and they might interact, then, yeah, we need more studies that independently manipulate specific drug or device effects and other kinds of psychological effects. And I wanted to bring this back up again because this is an idea that's been out here for a long time. I think the first review on this was in the '70s, like '76 or something, and it hasn't really been picked up, for a couple of reasons. One, it's hard to do the studies. But second, when I talk to people who are in industry and pharma, they are very concerned about changing the study designs at all for FDA approval.

And since we had some, you know, FDA and regulatory perspectives here yesterday, I wanted to bring that up and see what people think, because I think that's been a big obstacle. And if it is, then that may be something that would be great for NIH to fund instead of pharma companies, because then there's a whole space of drug-psychological or neurostimulation-psychological interactions that can be explored.

MATTHEW RUDORFER: We also had a question. Yesterday there was discussion of sex differences in placebo response in a naloxone trial. And I wonder if there are any further thoughts on studies of sex differences, or diversity in general, in placebo trials. Yes.

LUANA COLLOCA: We definitely see sex differences in placebo effects, and I showed also, for example, that women responded to arginine vasopressin in a way that we don't observe in men.

But you also asked about diversity. Actually, in our paper just accepted today, we look at where people are living in the state of Maryland, and even the location where they are based makes a difference in placebo effects. People who live in the most distressed areas, in the greater Baltimore area, tended to have lower placebo effects as compared to a less distressed location. We defined that by a radius and specific criteria, and it's not simply race, because we take into account education, income and so on. So it is interesting, because across studies we consistently see an impact of diversity. And in that sense, I echo the comment that we need to find a way to reach out to these people and truly improve access and the opportunity for diversity. Thank you for asking.

MATTHEW RUDORFER: Thank you. Another issue that came up yesterday had to do with pharmacogenomics. And there was a question, or a question/comment, about using candidate gene approaches and whether they are problematic.

KATHRYN HALL: What approaches?

MATTHEW RUDORFER: Candidate genes.

KATHRYN HALL: I think we have to start where we are. I think that the psychiatric field has had a really tough time with genetics. They've invested a lot and, sadly, don't have as much to show for it as they would like to. And I think that has really tainted this quest for genetic markers of placebo and related studies, these interaction factors. But it's really important not to use that to stop us from looking forward and identifying what's there. Because when you start to scratch the surface, there are interactions. You can see them. They're replete in the literature. And what's really fascinating is that everybody who finds them doesn't see them when they report their study. Even some of these vasopressin studies -- not yours obviously, Tor -- but I was reading one the other day where they had seen tremendous differences by genetics in response to arginine vasopressin, and they totally ignored what they were seeing in placebo and talked about who responds to drug. And so I think that not only do we need to start looking for what's happening, we need to start being more open minded and paying attention to what we're seeing in the placebo arm and accounting for that, taking it into account to understand what we're seeing across a trial in total.

CRISTINA CUSIN: I'll take a second to comment on patient selection, and on trying to figure out, depending on the site, who are the patients who enter treatment in a depression clinical trial. If we eliminate professional patients from the discussion, we can think about the patients who are more desperate, patients who don't have access to care, patients who are more likely to have psychosocial stressors, or, at the other extreme, patients who are highly educated, who look up the trials and seek them out. They're certainly not representative of the general populations we see in the clinical setting.

They are somewhat different. And then if you think about the psychedelics trial, they go from 5,000 patients applying for a study and the study ends up recruiting 20, 30. So absolutely not representative of the general population we see in terms of diversity, in terms of comorbidities, in terms of psychosocial situations. So that's another factor that adds to the complexity of differentiating what happens in the clinical setting versus artificial setting like a research study. Tor.

MATTHEW RUDORFER: The question of who enters trials, and I think the larger issue of diagnosis in general, has really been a challenge to the field for many years. Ted and I go back a ways, and just looking at depression, which of course has dominated a lot of our discussion these last couple of days, with good reason. My understanding is that the good database of placebo-controlled trials goes back to the late '90s, which is what we heard yesterday. And if you go back further, the tricyclic era not only dealt with different medications, which we don't want to go back to, but think about practice patterns then: the tricyclics, most nonspecialists steered clear of. They required a lot of hands-on care, they required slow upward titration, and they had some concerning toxicities, so it was typical that psychiatrists would prescribe them but family docs would not. And that also had the effect of a naturalistic screening; that is, people would have to reach a certain level of severity before they were referred to a psychiatrist to get a prescription for medication.

More mildly ill people either wound up, probably inappropriately, on tranquilizers or on no treatment at all, and moderately to severely ill people wound up on tricyclics, and of course inpatient stays were common in those days, which again was another kind of screening. So it was the sort of thing, I mean, in the old days I heard people talk about: if you went to the inpatient ward, you could easily collect people to be in a clinical trial, and you kind of knew that they were vetted already, that they had severe depression. The general sense was that the placebo response would be low, though there's no real evidence for that. But the thing is, once we had the SSRIs, on the one hand the market vastly expanded because they're considered more broad spectrum; people with milder illness and anxiety disorders are now appropriate candidates, and they're easier to dispense. The concern about overdose is much less, and so they're mostly prescribed by nonspecialists. So we've seen a lot of large clinical trials where it doesn't take much to reach the threshold for entry. If I go way back, and this is just one of my personal concerns over many years, the Feighner criteria, which I think were the first good set of diagnostic criteria based on data, based on literature, were published in 1972, and to have a diagnosis of major depression they called for four weeks of symptoms. Actually, literally, I think it said one month.

DSM-III came out in 1980, and it called for two weeks of symptoms. I've not been able to find any documentation of how the one month became two weeks, except that the DSM, of course, is the manual used in clinical practice, and you can understand it: you might not want too high a bar for treating people who are seeking help. But one of the challenges of the DSM is that it was not meant as a research manual, though that's often how it's used. Ever since that time, those two weeks have become reified, and so my point is that it doesn't take much to reach diagnostic criteria for major depression in what is now DSM-5-TR. So if someone is doing a clinical trial of an antidepressant, it is tempting to enroll people who honestly meet those criteria, but the criteria are not very strict. I wonder whether that contributes to the larger placebo effect that we see today.

End of soapbox. I'd like to revisit an excellent point that Dr. Lisanby raised yesterday, which has to do with the Research Domain Criteria, the RDoC framework. I don't know if anyone on the panel has had experience using it in any trials and whether you see any merit there. Could RDoC criteria essentially enrich the usual DSM-type clinical criteria, in terms of trying to more finely differentiate subtypes of depression that might respond differently to different treatments?

MODERATOR: I think Tor has been patient on the handoff. Maybe the next question, Tor; I'm not sure if you had comments on the previous discussion.

TOR WAGER: Sure, thanks. I wanted to make a comment on the candidate gene issue, and I think it links to what you were just saying as well, in a sense. It relates to the issue of predicting individual differences in placebo effects and using that to enhance clinical trials, which has been a really difficult issue. In genetics, as many of us know, there were many findings on particular candidate genes, especially COMT and a particular set of other genes, published in Science and Nature, and none of those really replicated when larger GWAS started being done. The field of genetics then focused in on reproducibility and replicability and on sample sizes. My genetics colleagues tell me something like 5,000 is a minimum for even making it into their database of genetic associations, and that makes it really difficult to study placebo effects in sample sizes like that. At the same time, there's been a trend in psychology, and in science in general, toward reproducibility and replicability, probably evoked in part by John Ioannidis's provocative claim that most findings are false; there's something really there.

There have been many teams of people who have tried to pull together many lab studies to replicate effects in psychology with much higher power, like Brian Nosek's work with the Center for Open Science. So there's this increasing effort to pull together consortia to really test these things rigorously. And I wonder -- we might not have a GWAS of placebo effects in 100,000 people or something, which is what would convince a geneticist that there's some kind of association. I'm wondering what the ways forward are, and I think one way is to increasingly come together to pool studies, or do larger studies that are preregistered, or even registered reports, which are reviewed before they're published, so that we can test some of the associations that have emerged in these early studies of placebo effects.

And I think if we preregistered and found something in sufficiently large and diverse samples, that might make a dent in convincing the wider world that there is something we can use going forward in clinical trials, and that pharma might be interested in as well, for example. That's my take on that. I'm wondering what people think.
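To make the sample-size point concrete, here is a minimal sketch, not drawn from the workshop itself, of why early candidate-gene findings on placebo response rarely replicate: the number of participants needed to detect a correlation depends steeply on the assumed effect size and significance threshold. The effect sizes and thresholds below are illustrative assumptions only.

```python
# A minimal sketch (illustrative assumptions, not workshop data): approximate
# sample size needed to detect a correlation r between a genetic variant and
# placebo response, using the Fisher z approximation for a two-sided test.
import numpy as np
from scipy.stats import norm

def required_n(r, alpha, power=0.80):
    """Approximate N to detect correlation r with the given alpha and power."""
    z_r = np.arctanh(r)              # Fisher z transform of the target effect
    z_a = norm.ppf(1 - alpha / 2)    # two-sided critical value
    z_b = norm.ppf(power)            # quantile for the desired power
    return int(np.ceil(((z_a + z_b) / z_r) ** 2 + 3))

print(required_n(0.30, alpha=0.05))    # ~85: effect size typical of early candidate-gene reports
print(required_n(0.05, alpha=0.05))    # ~3,100: a more realistic small effect
print(required_n(0.05, alpha=5e-8))    # ~16,000: small effect at a genome-wide threshold
```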

KATHRYN HALL: My two cents: I completely agree with you. I think the way forward is to pool our resources to look at this and not simply stop. When things don't replicate, we need to understand why they don't replicate. There's a taboo against looking further: if you prespecified it and you don't see it, then it should be over. But at least at this early stage, when we're trying to understand what's happening, I think we need to allow ourselves deeper dives, not for action but for understanding.

So I agree with you; let's pool our resources and start looking at this. The other interesting thing I would like to point out is that when we've looked at the placebo arms of some of these clinical trials, we actually learn a lot about natural history. We just did one in Alzheimer's disease, and in the placebo arm the genome-wide significant hit was CETP, which is now a clinical target in Alzheimer's disease. You can learn a lot by looking at the placebo arms of these studies, not just about whether or how the drug is working, but about what's happening in the natural history of these patients that might change the effect of the drug.

TED KAPTCHUK: Marta, did you have something to say? You had your hand up.

MARTA PECINA: Just a follow-up to what everybody is saying. I do think the issue of individual variability is important. One thing that maybe explains the lack of consistency, or the difficulty of putting all of these findings together, which was also mentioned at the beginning, is that we talk about one single placebo effect, and we know there is not one single placebo effect, even across clinical conditions. Is the neural placebo effect the same in depression as it is in pain?

Or are there aspects that are the same, for example expectancy processing, but other things that are very specific to the clinical condition, whether that's pain processing, mood, or something else? I think we face the reality that, from a neurobiology perspective, a lot of the research has been done in pain, and there's still very little being done, at least in psychiatry, across many other clinical conditions, so we just don't know. And we don't really even know how the placebo effect looks when someone has both pain and depression, for example.

So those are still very open questions that reflect our current state: we're making progress, but there's a lot to do.

TED KAPTCHUK: Winfried, did you want to say something? You have your hand up.

WINFRIED RIEF: I wanted to come back to the question of whether we really understand this increase of placebo effects. I don't know whether you have (indiscernible) for that. But as a scientist, I can't believe that people nowadays react more to placebos than they did 20 years ago. So there might be other explanations for this effect. We changed the trial designs; we may have more control visits nowadays compared to 30 years ago. But there could also be other factors, like publication bias, which was maybe more frequent 30 years ago than it is nowadays with the requirement for trial registration. So there are a lot of methodological issues that could explain this increase of placebo effects, or of responses in the placebo groups. I would be interested in whether you think this increase is well explained, or what your explanations for it are.

TED KAPTCHUK: Winfried, I want to give my opinion. I did think about this issue. I remember the first time it was reported, by scientists in Cleveland, with 40, 50 patients, and I said, oh, my God, okay, and the newspapers had it all over: the placebo effect is increasing. There's this bogeyman around, and everyone started believing it. I've been collecting the papers, and I've consistently found as many saying there's no change over time as saying there are changes over time. When I read the original article, I said, of course there are differences. The patients who got recruited in 1980 were different from the patients in 1990 or 2010. They were either more chronic or less chronic.

They were recruited in different ways, and that's really an easy explanation of why things change. Natural history changes. People's health problems are different. And I actually think that the Stone meta-analysis, with 70,033 patients, says it very clearly: it's a flat line from 1979. The more data you have, the more you have to believe it. That's all. That's my personal opinion. And I think we are actually very deeply influenced by the media. I mean, I can't believe this:

"The mystery of the placebo." We know more about placebo effects than about many drugs on the market, at least. That's my opinion. Thanks, Winfried, for letting me say it.
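As a concrete illustration of the kind of analysis behind a claim like "it's a flat line from 1979," here is a hypothetical sketch of a meta-regression of placebo-arm response on publication year, weighted by trial size. The data are simulated; the variable names and numbers are assumptions, not the Stone dataset.

```python
# Hypothetical sketch: test whether placebo-arm response drifts with publication
# year using a trial-size-weighted regression. All numbers are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
year = rng.integers(1979, 2017, size=200)        # simulated publication years
n_arm = rng.integers(30, 400, size=200)          # simulated placebo-arm sizes
response = 0.35 + rng.normal(0, 0.08, size=200)  # simulated response, no true trend

X = sm.add_constant(year - year.min())
fit = sm.WLS(response, X, weights=n_arm).fit()
print(fit.params)    # slope near zero -> a "flat line" over time
print(fit.pvalues)   # non-significant slope when there is no real drift
```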

MATTHEW RUDORFER: Thanks, Ted.

We have a question for Greg. The question is, I wonder what the magic of 90 seconds is? Is there a physiologic basis to the turning point when the mouse changes behavior?

GREGORY CORDER: I think I addressed it in a written post somewhere. We don't know. We see a lot of variability in those animals. In this putative placebo phase, some mice will remain on that conditioned side for 40 seconds, 45 seconds, 60 seconds, or they'll stay there the entire three minutes of the test. We're not exactly sure what's driving the difference between those animals. These are both males and females; we see the effect in both male and female C57BL/6 mice, a genetically inbred strain. We always try to restrict the time of day of testing. We do reverse light-cycle testing, so this is during the animals' wake cycle.

And there are things like dominance hierarchies within the cages, alphas versus betas; they may have different pain thresholds. But as for the breaking of whatever the antinociceptive effect is: they're standing on a hot plate for quite a long time. At some point those nociceptors in the periphery are going to become sensitized and signal, and at some point it's to the animal's advantage to pay attention to pain. You don't want to go around ignoring something that's potentially very dangerous or harmful to you. We would have to scale up the number of animals substantially, I think, to really start to parse out what differences account for that. But that's an excellent point.
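To give a rough sense of what "scale up the number of animals substantially" can mean, here is a simple power-analysis sketch for comparing two groups of mice on time spent on the conditioned side. The effect sizes are assumptions for illustration, not the lab's measured values.

```python
# Rough power sketch (assumed effect sizes): mice per group needed to detect a
# difference in time on the conditioned side with a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for d in (1.0, 0.5, 0.2):   # Cohen's d: large, medium, small group differences
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {round(n)} mice per group")
# Prints roughly 17, 64, and 394 mice per group, respectively.
```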

MATTHEW RUDORFER: Carolyn.

CAROLYN RODRIGUEZ: I want to thank all of today's speakers for their wonderful presentations. I just wanted to go back for a second to Dr. Pecina's point about the placebo effect not being a monolith, and about thinking about individual disorders.

I'm a clinical trialist and do research in obsessive-compulsive disorder, and a lot of what's written in the literature and meta-analyses is that OCD has one of the lowest placebo response rates. So, from what we gathered today, to turn the question on its head: why is that the case, and does that say something about OCD pathology? How can we get more refined in terms of different domains when thinking about the placebo effect?

So I just want to say thank you again for giving us a lot of food for thought.

MATTHEW RUDORFER: Thanks. As we're winding down, one of the looming questions on the table remains: what are the research gaps, and where should the next set of studies go? If anyone wants to put some ideas on the table, they'd be welcome.

MICHAEL DETKE: One of the areas I mentioned in my talk that is hard for industry to study, or where there's not much incentive for industry, is having third-party reviewers review source documents and videos or audios of the HAM-D, MADRS, whatever; there's not much controlled evidence on that.

And it's a fairly simple design: within a large controlled trial, do this with half the sites and don't do it with the other half.

Blinding isn't perfect. I haven't thought this through completely, and it can probably be improved upon a lot. But imagine you're the sponsor paying the $20 million over three years to run this clinical trial: you want to test your drug as fast as you possibly can, and you don't really want to be paying for this methodology.

So that might be -- earlier on, Tor or someone mentioned that there might be some specific areas where this is something for NIH to consider picking up. Because that methodology, the third-party remote reviewer, is being used in hundreds of trials today, I think. So there's an area to think about.
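Here is a hypothetical sketch of the site-split design being described: within a single trial, randomize half the sites to third-party remote review of ratings and half to usual monitoring, then compare drug-placebo separation between the two halves. The site counts, effect sizes, and the assumption that remote review shrinks inflated placebo-arm improvement are illustrative, not results from any actual trial.

```python
# Hypothetical site-level design: randomize sites to remote review vs. usual
# monitoring and compare drug-placebo separation. All numbers are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_sites = 40
review = rng.permutation([True] * (n_sites // 2) + [False] * (n_sites // 2))

rows = []
for site, has_review in enumerate(review):
    for arm in ("drug", "placebo"):
        # Assumed means: remote review reduces inflated placebo-arm improvement.
        mean_change = 10.0 if arm == "drug" else (6.0 if has_review else 8.0)
        for change in rng.normal(mean_change, 4.0, size=15):
            rows.append({"site": site, "review": has_review,
                         "arm": arm, "change": change})

df = pd.DataFrame(rows)
print(df.groupby(["review", "arm"])["change"].mean())  # separation by review condition
```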

MATTHEW RUDORFER: Thanks. Holly.

SARAH “HOLLY” LISANBY: Yeah. Carolyn just mentioned one of the gap areas: really trying to understand why some disorders are more amenable to the placebo response than others, and what that can teach us. That sounds like a research gap area to me.

Also, throughout these two days we've heard a number of research gap areas having to do with methodology: how to do placebos or shams, how to assess outcomes, how to protect the blind, how to select what your outcome measures should be.

And then also today my mind was going very much toward what preclinical models can teach us, and the genetics, the biology of the placebo response, the biology underlying individual differences in placebo response.

There may be clues there. Carolyn, to your point about the placebo response being lower in OCD: there are still some OCD patients who respond, so what's different about them that makes them responders?

So, studies that look within the placebo arm at response versus nonresponse, or at gradations of response or durability of response, and the mechanisms behind that.

These are questions that may ultimately facilitate getting drugs and devices to market, but they are certainly questions that might be helpful to answer at the research stage, particularly the translational research stage, in order to inform the design of the pivotal trials you would ultimately do to get things to market.

So it seems like there are many stages before getting to the ideal pivotal trial. So I really appreciate everyone's input. Let me stop talking because I really want to hear what Dr. Hall has to say.

KATHRYN HALL: I wanted to come back, as one of my favorite gaps, to this question of the increasing placebo effect. I think it's an important one because so many trials are failing these days. And not all trials are the same.

What's really fascinating to me is that you see really great results in Phase II clinical trials, and what's the first thing you do as a pharma company when you get a good result? You put out a press release.

And what's the first thing you're going to do when you enroll in a clinical trial? You're going to read that press release. You're going to read as much as you can about the drug or the trial you're enrolling in. And how placebo-boosting is it going to be to see that this trial had amazing effects on the condition you're struggling with?

Then, lo and behold, we go to Phase III -- and we're actually writing a paper on this -- and how many times do we see the words "unexpected results"? I think we saw them here today, today or yesterday. This should not be unexpected. When your Phase III trial fails, you should not be surprised, because this is what's happening time and time again.

And yeah, I agree, Ted, these are modern times, but there's so much information out there, so much information to sway us toward placebo responses, that I think that's a piece of the problem. And finding out exactly what the problem is, I think, is a really critical gap.

MATTHEW RUDORFER: Winfried.

WINFRIED RIEF: Yeah, may I follow up? I think it fits quite nicely with what has been said before, and I want to answer directly to Michael Detke.

At first glance, it seems less expensive to do the trials the way we do them, with one placebo group and one drug arm, and we try to keep the context constant. But this is the problem: we have a constant context without any variation, so we don't learn under which context conditions the drug is really effective and under which context conditions the drug might not be effective at all.

And therefore I think the current strategy is more like a lottery. It's really by chance: it can happen that you are in the little window where the drug can show its most positive efficacy, but it can also be that you are in the little window, or the big window, where the drug is not able to show its efficacy.

And therefore I think, on second glance, it's a very expensive strategy to use only one single context to evaluate a drug.

MATTHEW RUDORFER: If I have time for--

TED KAPTCHUK: Let Marta speak, and then Liane should speak.

MARTA PECINA: I just wanted to add a minor comment here, which is that we're going to have to move on from the idea that giving someone a placebo is enough to induce positive expectancies, and to recognize that expectancies evolve over time.

At least in some of the data we've shown, and it's a small sample, we see that 50% of the subjects who are given a placebo don't have drug-assignment beliefs. So there is a very large amount of variability there that gets confounded with everything else.

So I do think it is really important, whether in clinical trials or in research, to develop new ways of measuring expectancies and to allow expectancies to be measured over time, because they do change. We have some prior expectancies, and then we have expectancies that are learned based on experience. I think this is an area the field could improve relatively easily: assess expectancies better and measure them better, over time.
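One way to treat expectancy as something measured repeatedly rather than once at baseline, sketched here with hypothetical column names and a made-up file, is a simple mixed-effects model of expectancy ratings over visits; nothing below comes from an actual dataset presented at the workshop.

```python
# Hypothetical sketch: model repeated expectancy ratings over visits, with a
# random intercept per participant. Column names and the CSV are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: subject, visit (0, 1, 2, ...), expectancy (e.g., a 0-10
# rating), belief ("drug" vs. "placebo" assignment belief).
df = pd.read_csv("expectancy_ratings.csv")

model = smf.mixedlm("expectancy ~ visit * belief", data=df,
                    groups=df["subject"]).fit()
print(model.summary())  # do expectancies change across visits, and differ by belief?
```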

TED KAPTCHUK: Liane, why don't you say something, and Luana, and then Cristina.

LIANE SCHMIDT: Maybe another open gap is about cognition: how studying the placebo can help us better understand human reasoning, and vice versa. All the biases we have, cognitive processes like motivation or memory, and the good-news and optimism biases: how do they contribute to placebo effects on the patient side, but also on the clinician side, when clinicians have to make a diagnosis or judge treatment efficacy based on some clinical scale?

So, basically, using tools from cognitive psychology or cognitive neuroscience to better understand the cognitive processes that intervene between having an expectation and a behavioral readout, a symptom, or a neural activation: what comes in between, and how it is translated, basically, from cognition into prediction.

LUANA COLLOCA: I think we have tended to consider expectation a static measurement, when in reality we know that what we expected at the beginning of this workshop is slightly different from what we expect by the end, based on what we are hearing and learning.

So expectation is a dynamic phenomenon, and the assumption that we can predict placebo effects from a single measurement of expectation can be very limiting in terms of applications. Rather, it is important to measure expectation over time and also to realize that there are so many nuances of expectation, as Liane just mentioned.

There are people who say, "I don't expect anything, I'll try anything," or people who say, "Oh, I truly want to feel better." And these are also problematic patients, because having an unrealistic expectation can often destroy placebo effects, as I showed, through a violation of expectancies.

TED KAPTCHUK: Are we getting close? Do you want to summarize? Or who's supposed to do that? I don't know.

CRISTINA CUSIN: I think I have a couple of minutes for closing remarks. There's so much going on, and more questions than answers, of course.

This has been a fantastic symposium, and I want to pitch an idea about possibly organizing a summit with all the panelists, all the presenters, and everyone else who wants to join us, because I think that with a coffee or a tea in our hands, and talking not through a Zoom video, we could actually come up with some great ideas and some collaborative projects.

Anyone who wants to email us, we'll be happy to answer. We're always open to collaborating, starting a new study, and bouncing new ideas off each other. This is what we do for a living, so we're very enthusiastic about people asking difficult questions.

Some of the ongoing questions, which I think point to future areas, are what we were talking about a few minutes ago. We don't know if a placebo responder in a migraine study, for example, would be a placebo responder in a depression study or an IBS study. We don't know if such a person is a universal placebo responder, or whether the context includes the type of disease they're suffering from, so that the response would be fairly different, and why some disorders have lower placebo response rates overall compared to others. Is it chronicity: does a relapsing-remitting disorder have a higher chance of placebo response because the system can be modulated, versus a disorder that is considered more chronic and stable? A lot of this information about natural history is not known.

It also comes to mind how inexact trial entry is, because we almost never have a threshold for the number of prior episodes of depression to enter a trial, or for how chronic it has been, years of depression, or other factors that can clearly change the probability of responding to a treatment.

We heard about methodology for clinical trial design and how patients could be responsive to placebo or sham, or responsive to drug. How about patients who could respond to both? We have no idea how many of those universal responders are undergoing a trial, unless we do a crossover, and we know that crossover is not a popular design for drug trials.

So we also need to figure out aspects of methodology: how to assess outcomes, what the best way is to assess the outcome we want and whether it is clinically relevant, how to protect the blind, and how to assess expectations and how they change over time.

We didn't hear much during the discussion about the role of mindfulness in pain management, and I would like to hear much more about how we're doing in identifying the brain areas involved, and whether we can actually intervene on those areas with devices to help with pain management. That's one of the biggest problems we have in terms of clinical care.

In the eating disorder space, there's creating computational models to influence food choices. And, again, with devices or treatments specifically shifting the balance toward healthier food choices, I can see an entire field developing, because most of the medications we prescribe for psychiatric disorders affect food choices, and there's weight gain, potentially leading to obesity and cardiovascular complications. So there's an entire field of research we have not touched on.

And then there's the role of animal models in translational research. I don't know if animal researchers like Greg talk much with clinical trialists; I think that would be a cross-fertilization that is much needed, and we can definitely learn from each other.

This has just been fantastic. I thank all the panelists for their willingness to work with us, for their time and dedication, and for the many meetings to discuss and agree on the program and to divide and conquer the different topics. It has been a phenomenal experience, and I'm very, very grateful.

The NIMH staff have also been amazing to collaborate with, and they were so organized. Just a fantastic panel. Thank you, everybody.

MATTHEW RUDORFER: Thank you.

TOR WAGER: Thank you.

NIMH TEAM: Thanks from the NIMH team to all of our participants here.

(Meeting adjourned)


    The National Institute of Mental Health (NIMH) hosted a virtual workshop on the placebo effect. The purpose of this workshop was to bring together experts in neurobiology, clinical trials, and regulatory science to examine placebo effects in drug, device, and psychosocial interventions for mental health conditions. Topics included interpretability of placebo signals within the context of ...