Research-Methodology

Observation

Observation, as the name implies, is a way of collecting data through observing. This data collection method is often classified as a participatory study, because the researcher immerses herself in the setting where her respondents are while taking notes and/or recording. The observation data collection method may involve watching, listening, reading, touching, and recording the behaviour and characteristics of phenomena.

Observation as a data collection method can be structured or unstructured. In structured (or systematic) observation, data collection is conducted using specific variables and according to a pre-defined schedule. Unstructured observation, on the other hand, is conducted in an open and free manner, in the sense that there are no pre-determined variables or objectives.

Moreover, this data collection method can be divided into overt and covert categories. In overt observation, research subjects are aware that they are being observed. In covert observation, on the other hand, the observer is concealed and sample group members are not aware that they are being observed. Covert observation is considered to be more effective because sample group members are likely to behave naturally, with positive implications for the authenticity of research findings.

Advantages of the observation data collection method include direct access to research phenomena, high levels of flexibility in terms of application, and the generation of a permanent record of phenomena that can be referred to later. At the same time, this method is disadvantaged by longer time requirements, high levels of observer bias, and the impact of the observer on primary data, in that the presence of the observer may influence the behaviour of sample group members.

It is important to note that the observation data collection method may be associated with certain ethical issues. Fully informed consent of research participants is one of the basic ethical considerations to be adhered to by researchers. At the same time, the behaviour of sample group members may change, with negative implications for research validity, if they are notified about the presence of the observer.

This delicate matter needs to be addressed by consulting with the dissertation supervisor, and by commencing the primary data collection process only after the ethical aspects of the issue have been approved by the supervisor.

John Dudovskiy

Research Method

Observational Research – Methods and Guide

Observational Research

Definition:

Observational research is a type of research method where the researcher observes and records the behavior of individuals or groups in their natural environment. In other words, the researcher does not intervene or manipulate any variables but simply observes and describes what is happening.

Observation

Observation is the process of collecting and recording data by observing and noting events, behaviors, or phenomena in a systematic and objective manner. It is a fundamental method used in research, scientific inquiry, and everyday life to gain an understanding of the world around us.

Types of Observational Research

Observational research can be categorized into different types based on the level of control and the degree of involvement of the researcher in the study. Some of the common types of observational research are:

Naturalistic Observation

In naturalistic observation, the researcher observes and records the behavior of individuals or groups in their natural environment without any interference or manipulation of variables.

Controlled Observation

In controlled observation, the researcher controls the environment in which the observation is taking place. This type of observation is often used in laboratory settings.

Participant Observation

In participant observation, the researcher becomes an active participant in the group or situation being observed. The researcher may interact with the individuals being observed and gather data on their behavior, attitudes, and experiences.

Structured Observation

In structured observation, the researcher defines a set of behaviors or events to be observed and records their occurrence.

Unstructured Observation

In unstructured observation, the researcher observes and records any behaviors or events that occur without predetermined categories.

Cross-Sectional Observation

In cross-sectional observation, the researcher observes and records the behavior of different individuals or groups at a single point in time.

Longitudinal Observation

In longitudinal observation, the researcher observes and records the behavior of the same individuals or groups over an extended period of time.

Data Collection Methods

Observational research uses various data collection methods to gather information about the behaviors and experiences of individuals or groups being observed. Some common data collection methods used in observational research include:

Field Notes

This method involves recording detailed notes of the observed behavior, events, and interactions. These notes are usually written in real-time during the observation process.

Audio and Video Recordings

Audio and video recordings can be used to capture the observed behavior and interactions. These recordings can be later analyzed to extract relevant information.

Surveys and Questionnaires

Surveys and questionnaires can be used to gather additional information from the individuals or groups being observed. This method can be used to validate or supplement the observational data.

Time Sampling

This method involves taking a snapshot of the observed behavior at pre-determined time intervals. This method helps to identify the frequency and duration of the observed behavior.
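
As a rough illustration, momentary time sampling can be sketched in code: at each pre-determined check point, the observer simply notes whether the target behavior is occurring at that instant. The behavior intervals, the 30-second check spacing, and the 180-second session below are invented for the example.

```python
# Momentary time sampling: note at fixed check points whether the
# target behavior is in progress (hypothetical data).

def momentary_time_sample(behavior_intervals, check_every, session_length):
    """At times 0, check_every, 2*check_every, ... record whether the
    behavior is occurring at that instant."""
    samples = []
    t = 0
    while t <= session_length:
        occurring = any(start <= t < end for start, end in behavior_intervals)
        samples.append((t, occurring))
        t += check_every
    return samples

# Invented data: the behavior was in progress during 20-70 s and 130-150 s.
intervals = [(20, 70), (130, 150)]
samples = momentary_time_sample(intervals, check_every=30, session_length=180)
rate = sum(o for _, o in samples) / len(samples)  # proportion of positive checks
```

The sampling rate trades accuracy for effort: wider check spacing means less recording work but a coarser estimate of how often the behavior occurs.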

Event Sampling

This method involves recording specific events or behaviors that are of interest to the researcher. This method helps to provide detailed information about specific behaviors or events.

Checklists and Rating Scales

Checklists and rating scales can be used to record the occurrence and frequency of specific behaviors or events. This method helps to simplify and standardize the data collection process.

Observational Data Analysis Methods

Common methods for analyzing observational data include:

Descriptive Statistics

This method involves using statistical techniques such as frequency distributions, means, and standard deviations to summarize the observed behaviors, events, or interactions.
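
A minimal sketch of such a summary, using Python's standard library; the per-session counts of a coded behavior below are invented for illustration.

```python
# Summarizing coded observation counts with frequency distribution,
# mean, and sample standard deviation (invented data).
import statistics
from collections import Counter

# Number of target behaviors recorded per observation session.
acts_per_session = [2, 0, 3, 1, 2, 2, 5, 1]

mean = statistics.mean(acts_per_session)
sd = statistics.stdev(acts_per_session)   # sample standard deviation
freq = Counter(acts_per_session)          # frequency distribution of counts
```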

Qualitative Analysis

Qualitative analysis involves identifying patterns and themes in the observed behaviors or interactions. This analysis can be done manually or with the help of software tools.

Content Analysis

Content analysis involves categorizing and counting the occurrences of specific behaviors or events. This analysis can be done manually or with the help of software tools.
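
A tiny content-analysis pass might look like the following sketch: observation notes that have already been coded into categories are counted and converted to proportions. The category labels and notes are invented for the example.

```python
# Counting category occurrences in coded observation notes (invented data).
from collections import Counter

coded_notes = [
    "question", "praise", "praise", "correction",
    "question", "praise", "off-task", "question",
]

counts = Counter(coded_notes)
total = sum(counts.values())
proportions = {code: n / total for code, n in counts.items()}
```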

Time-series Analysis

Time-series analysis involves analyzing the changes in behavior or interactions over time. This analysis can help identify trends and patterns in the observed data.
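
As a simple sketch of a trend check over repeated observation sessions, weekly counts of a target behavior can be smoothed with a moving average; the weekly counts and 3-session window below are invented.

```python
# Smoothing weekly observation counts with a 3-session moving average
# to make a trend easier to see (invented data).

weekly_counts = [4, 6, 5, 8, 9, 11, 10, 13]

window = 3
moving_avg = [
    sum(weekly_counts[i:i + window]) / window
    for i in range(len(weekly_counts) - window + 1)
]
```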

Inter-observer Reliability Analysis

Inter-observer reliability analysis involves comparing the observations made by multiple observers to ensure the consistency and reliability of the data.
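
Two common reliability indices, percent agreement and Cohen's kappa, can be computed from first principles as in this sketch; the two coders' labels are invented for the example.

```python
# Inter-observer reliability: percent agreement and Cohen's kappa,
# which corrects agreement for chance (invented coder labels).
from collections import Counter

coder_a = ["play", "play", "talk", "idle", "play", "talk", "idle", "play", "talk", "play"]
coder_b = ["play", "talk", "talk", "idle", "play", "talk", "play", "play", "talk", "play"]

n = len(coder_a)
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement from each coder's marginal label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
kappa = (agreement - p_e) / (1 - p_e)
```

Kappa is preferred over raw agreement when some categories dominate, because two coders who both mark the most common code by default can show high agreement purely by chance.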

Multivariate Analysis

Multivariate analysis involves analyzing multiple variables simultaneously to identify the relationships between the observed behaviors, events, or interactions.

Event Coding

This method involves coding observed behaviors or events into specific categories and then analyzing the frequency and duration of each category.

Cluster Analysis

Cluster analysis involves grouping similar behaviors or events into clusters based on their characteristics or patterns.

Latent Class Analysis

Latent class analysis involves identifying subgroups of individuals or groups based on their observed behaviors or interactions.

Social Network Analysis

Social network analysis involves mapping the social relationships and interactions between individuals or groups based on their observed behaviors.

The choice of data analysis method depends on the research question, the type of data collected, and the available resources. Researchers should choose the appropriate method that best fits their research question and objectives. It is also important to ensure the validity and reliability of the data analysis by using appropriate statistical tests and measures.

Applications of Observational Research

Observational research is a versatile research method that can be used in a variety of fields to explore and understand human behavior, attitudes, and preferences. Here are some common applications of observational research:

  • Psychology: Observational research is commonly used in psychology to study human behavior in natural settings. This can include observing children at play to understand their social development or observing people’s reactions to stress to better understand how stress affects behavior.
  • Marketing: Observational research is used in marketing to understand consumer behavior and preferences. This can include observing shoppers in stores to understand how they make purchase decisions or observing how people interact with advertisements to determine their effectiveness.
  • Education: Observational research is used in education to study teaching and learning in natural settings. This can include observing classrooms to understand how teachers interact with students or observing students to understand how they learn.
  • Anthropology: Observational research is commonly used in anthropology to understand cultural practices and beliefs. This can include observing people’s daily routines to understand their culture or observing rituals and ceremonies to better understand their significance.
  • Healthcare: Observational research is used in healthcare to understand patient behavior and preferences. This can include observing patients in hospitals to understand how they interact with healthcare professionals or observing patients with chronic illnesses to better understand their daily routines and needs.
  • Sociology: Observational research is used in sociology to understand social interactions and relationships. This can include observing people in public spaces to understand how they interact with others or observing groups to understand how they function.
  • Ecology: Observational research is used in ecology to understand the behavior and interactions of animals and plants in their natural habitats. This can include observing animal behavior to understand their social structures or observing plant growth to understand their response to environmental factors.
  • Criminology: Observational research is used in criminology to understand criminal behavior and the factors that contribute to it. This can include observing criminal activity in a particular area to identify patterns or observing the behavior of inmates to understand their experience in the criminal justice system.

Observational Research Examples

Here are some examples of observational research:

  • A researcher observes and records the behaviors of a group of children on a playground to study their social interactions and play patterns.
  • A researcher observes the buying behaviors of customers in a retail store to study the impact of store layout and product placement on purchase decisions.
  • A researcher observes the behavior of drivers at a busy intersection to study the effectiveness of traffic signs and signals.
  • A researcher observes the behavior of patients in a hospital to study the impact of staff communication and interaction on patient satisfaction and recovery.
  • A researcher observes the behavior of employees in a workplace to study the impact of the work environment on productivity and job satisfaction.
  • A researcher observes the behavior of shoppers in a mall to study the impact of music and lighting on consumer behavior.
  • A researcher observes the behavior of animals in their natural habitat to study their social and feeding behaviors.
  • A researcher observes the behavior of students in a classroom to study the effectiveness of teaching methods and student engagement.
  • A researcher observes the behavior of pedestrians and cyclists on a city street to study the impact of infrastructure and traffic regulations on safety.

How to Conduct Observational Research

Here are some general steps for conducting Observational Research:

  • Define the Research Question: Determine the research question and objectives to guide the observational research study. The research question should be specific, clear, and relevant to the area of study.
  • Choose the appropriate observational method: Choose the appropriate observational method based on the research question, the type of data required, and the available resources.
  • Plan the observation: Plan the observation by selecting the observation location, duration, and sampling technique. Identify the population or sample to be observed and the characteristics to be recorded.
  • Train observers: Train the observers on the observational method, data collection tools, and techniques. Ensure that the observers understand the research question and objectives and can accurately record the observed behaviors or events.
  • Conduct the observation: Record the observed behaviors or events using the data collection tools and techniques. Ensure that the observation is conducted in a consistent and unbiased manner.
  • Analyze the data: Analyze the observed data using appropriate data analysis methods such as descriptive statistics, qualitative analysis, or content analysis. Validate the data by checking the inter-observer reliability and conducting statistical tests.
  • Interpret the results: Interpret the results by answering the research question and objectives. Identify the patterns, trends, or relationships in the observed data and draw conclusions based on the analysis.
  • Report the findings: Report the findings in a clear and concise manner, using appropriate visual aids and tables. Discuss the implications of the results and the limitations of the study.

When to use Observational Research

Here are some situations where observational research can be useful:

  • Exploratory Research: Observational research can be used in exploratory studies to gain insights into new phenomena or areas of interest.
  • Hypothesis Generation: Observational research can be used to generate hypotheses about the relationships between variables, which can be tested using experimental research.
  • Naturalistic Settings: Observational research is useful in naturalistic settings where it is difficult or unethical to manipulate the environment or variables.
  • Human Behavior: Observational research is useful in studying human behavior, such as social interactions, decision-making, and communication patterns.
  • Animal Behavior: Observational research is useful in studying animal behavior in their natural habitats, such as social and feeding behaviors.
  • Longitudinal Studies: Observational research can be used in longitudinal studies to observe changes in behavior over time.
  • Ethical Considerations: Observational research can be used in situations where manipulating the environment or variables would be unethical or impractical.

Purpose of Observational Research

Observational research is a method of collecting and analyzing data by observing individuals or phenomena in their natural settings, without manipulating them in any way. The purpose of observational research is to gain insights into human behavior, attitudes, and preferences, as well as to identify patterns, trends, and relationships that may exist between variables.

The primary purpose of observational research is to generate hypotheses that can be tested through more rigorous experimental methods. By observing behavior and identifying patterns, researchers can develop a better understanding of the factors that influence human behavior, and use this knowledge to design experiments that test specific hypotheses.

Observational research is also used to generate descriptive data about a population or phenomenon. For example, an observational study of shoppers in a grocery store might reveal that women are more likely than men to buy organic produce. This type of information can be useful for marketers or policy-makers who want to understand consumer preferences and behavior.

In addition, observational research can be used to monitor changes over time. By observing behavior at different points in time, researchers can identify trends and changes that may be indicative of broader social or cultural shifts.

Overall, the purpose of observational research is to provide insights into human behavior and to generate hypotheses that can be tested through further research.

Advantages of Observational Research

There are several advantages to using observational research in different fields, including:

  • Naturalistic observation: Observational research allows researchers to observe behavior in a naturalistic setting, which means that people are observed in their natural environment without the constraints of a laboratory. This helps to ensure that the behavior observed is more representative of the real-world situation.
  • Unobtrusive: Observational research is often unobtrusive, which means that the researcher does not interfere with the behavior being observed. This can reduce the likelihood of the research being affected by the observer’s presence or the Hawthorne effect, where people modify their behavior when they know they are being observed.
  • Cost-effective: Observational research can be less expensive than other research methods, such as experiments or surveys. Researchers do not need to recruit participants or pay for expensive equipment, making it a more cost-effective research method.
  • Flexibility: Observational research is a flexible research method that can be used in a variety of settings and for a range of research questions. Observational research can be used to generate hypotheses, to collect data on behavior, or to monitor changes over time.
  • Rich data: Observational research provides rich data that can be analyzed to identify patterns and relationships between variables. It can also provide context for behaviors, helping to explain why people behave in a certain way.
  • Validity: Observational research can provide high levels of validity, meaning that the results accurately reflect the behavior being studied. This is because the behavior is being observed in a natural setting without interference from the researcher.

Disadvantages of Observational Research

While observational research has many advantages, it also has some limitations and disadvantages. Here are some of the disadvantages of observational research:

  • Observer bias: Observational research is prone to observer bias, which is when the observer’s own beliefs and assumptions affect the way they interpret and record behavior. This can lead to inaccurate or unreliable data.
  • Limited generalizability: The behavior observed in a specific setting may not be representative of the behavior in other settings. This can limit the generalizability of the findings from observational research.
  • Difficulty in establishing causality: Observational research is often correlational, which means that it identifies relationships between variables but does not establish causality. This can make it difficult to determine if a particular behavior is causing an outcome or if the relationship is due to other factors.
  • Ethical concerns: Observational research can raise ethical concerns if the participants being observed are unaware that they are being observed or if the observations invade their privacy.
  • Time-consuming: Observational research can be time-consuming, especially if the behavior being observed is infrequent or occurs over a long period of time. This can make it difficult to collect enough data to draw valid conclusions.
  • Difficulty in measuring internal processes: Observational research may not be effective in measuring internal processes, such as thoughts, feelings, and attitudes. This can limit the ability to understand the reasons behind behavior.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Observation Method in Psychology: Naturalistic, Participant and Controlled

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

The observation method in psychology involves directly and systematically witnessing and recording measurable behaviors, actions, and responses in natural or contrived settings without attempting to intervene or manipulate what is being observed.

Used to describe phenomena, generate hypotheses, or validate self-reports, psychological observation can be either controlled or naturalistic with varying degrees of structure imposed by the researcher.

There are different types of observational methods, and distinctions need to be made between:

1. Controlled Observations
2. Naturalistic Observations
3. Participant Observations

In addition to the above categories, observations can also be either overt/disclosed (the participants know they are being studied) or covert/undisclosed (the researcher keeps their real identity a secret from the research subjects, acting as a genuine member of the group).

In general, conducting observational research is relatively inexpensive, but it remains highly time-consuming and resource-intensive in data processing and analysis.

The considerable investments needed in terms of coder time commitments for training, maintaining reliability, preventing drift, and coding complex dynamic interactions place practical barriers on observers with limited resources.

Controlled Observation

Controlled observation is a research method for studying behavior in a carefully controlled and structured environment.

The researcher sets specific conditions, variables, and procedures to systematically observe and measure behavior, allowing for greater control and comparison of different conditions or groups.

The researcher decides where the observation will occur, at what time, with which participants, and in what circumstances, and uses a standardized procedure. Participants are randomly allocated to each independent variable group.

Rather than writing a detailed description of all behavior observed, it is often easier to code behavior according to a previously agreed scale using a behavior schedule (i.e., conducting a structured observation).

The researcher systematically classifies the behavior they observe into distinct categories. Coding might involve numbers or letters to describe a characteristic or the use of a scale to measure behavior intensity.

The categories on the schedule are coded so that the data collected can be easily counted and turned into statistics.

For example, Mary Ainsworth used a behavior schedule to study how infants responded to brief periods of separation from their mothers. During the Strange Situation procedure, the infant’s interaction behaviors directed toward the mother were measured, e.g.,

  • Proximity and contact-seeking
  • Contact maintaining
  • Avoidance of proximity and contact
  • Resistance to contact and comforting

The observer noted down the behavior displayed during 15-second intervals and scored the behavior for intensity on a scale of 1 to 7.
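
A behavior schedule of this kind translates naturally into a simple data structure: one list of per-interval intensity ratings (1-7) for each pre-defined category, summarized by its mean. The ratings below are invented and do not reproduce Ainsworth's data.

```python
# Structured-observation schedule: intensity ratings (1-7) per
# 15-second interval for each coded category (invented ratings).
import statistics

schedule = {
    "proximity_seeking": [5, 6, 6, 7],
    "contact_maintaining": [4, 4, 5, 5],
    "avoidance": [1, 1, 2, 1],
    "resistance": [2, 1, 1, 1],
}

# Mean intensity per behavior category across all intervals.
summary = {behavior: statistics.mean(ratings)
           for behavior, ratings in schedule.items()}
```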


Sometimes participants’ behavior is observed through a two-way mirror, or they are secretly filmed. Albert Bandura used this method to study aggression in children (the Bobo doll studies).

A lot of research has been carried out in sleep laboratories as well. Here, electrodes are attached to the scalp of participants. What is observed are the changes in electrical activity in the brain during sleep (the machine is called an EEG).

Controlled observations are usually overt as the researcher explains the research aim to the group so the participants know they are being observed.

Controlled observations are also usually non-participant as the researcher avoids direct contact with the group and keeps a distance (e.g., observing behind a two-way mirror).

Strengths

  • Controlled observations can be easily replicated by other researchers by using the same observation schedule. This means it is easy to test for reliability.
  • The data obtained from structured observations is easier and quicker to analyze as it is quantitative (i.e., numerical), making this a less time-consuming method compared to naturalistic observations.
  • Controlled observations are fairly quick to conduct, which means that many observations can take place within a short amount of time. This means a large sample can be obtained, resulting in findings that are representative and can be generalized to a large population.

Limitations

  • Controlled observations can lack validity due to the Hawthorne effect/demand characteristics. When participants know they are being watched, they may act differently.

Naturalistic Observation

Naturalistic observation is a research method in which the researcher studies behavior in its natural setting without intervention or manipulation.

It involves observing and recording behavior as it naturally occurs, providing insights into real-life behaviors and interactions in their natural context.

Naturalistic observation is a research method commonly used by psychologists and other social scientists.

This technique involves observing and studying the spontaneous behavior of participants in natural surroundings. The researcher simply records what they see in whatever way they can.

In unstructured observations, the researcher records all relevant behavior without a predetermined coding system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what type of behaviors would be recorded.

Compared with controlled observations, it is like the difference between studying wild animals in a zoo and studying them in their natural habitat.

With regard to human subjects, Margaret Mead used this method to research the way of life of different tribes living on islands in the South Pacific. Kathy Sylva used it to study children at play by observing their behavior in a playgroup in Oxfordshire.

Collecting Naturalistic Behavioral Data

Technological advances are enabling new, unobtrusive ways of collecting naturalistic behavioral data.

The Electronically Activated Recorder (EAR) is a digital recording device participants can wear to periodically sample ambient sounds, allowing representative sampling of daily experiences (Mehl et al., 2012).

Studies program EARs to record 30-50 second sound snippets multiple times per hour. Although coding the recordings requires extensive resources, EARs can capture spontaneous behaviors like arguments or laughter.

EARs minimize participant reactivity since sampling occurs outside of awareness. This reduces the Hawthorne effect, where people change behavior when observed.

The SenseCam is another wearable device that passively captures images documenting daily activities. Though primarily used in memory research currently (Smith et al., 2014), systematic sampling of environments and behaviors via the SenseCam could enable innovative psychological studies in the future.

Strengths

  • By being able to observe the flow of behavior in its own setting, studies have greater ecological validity.
  • Like case studies, naturalistic observation is often used to generate new ideas. Because it gives the researcher the opportunity to study the total situation, it often suggests avenues of inquiry not thought of before.
  • It can capture actual behaviors as they unfold in real time, allow analysis of sequential patterns of interactions, measure base rates of behaviors, and examine socially undesirable or complex behaviors that people may not self-report accurately.

Limitations

  • These observations are often conducted on a micro (small) scale and may lack a representative sample (biased in relation to age, gender, social class, or ethnicity). This may result in findings that cannot be generalized to wider society.
  • Natural observations are less reliable as other variables cannot be controlled. This makes it difficult for another researcher to repeat the study in exactly the same way.
  • They are highly time-consuming and resource-intensive during the data coding phase (e.g., training coders, maintaining inter-rater reliability, preventing judgment drift).
  • With observations, we do not have manipulation of variables (or control over extraneous variables), meaning cause-and-effect relationships cannot be established.

Participant Observation

Participant observation is a variant of the above (natural observations) but here, the researcher joins in and becomes part of the group they are studying to get a deeper insight into their lives.

If it were research on animals, we would now not only be studying them in their natural habitat but be living alongside them as well!

Leon Festinger used this approach in a famous study into a religious cult that believed that the end of the world was about to occur. He joined the cult and studied how they reacted when the prophecy did not come true.

Participant observations can be either covert or overt. Covert is where the study is carried out “undercover.” The researcher’s real identity and purpose are kept concealed from the group being studied.

The researcher takes a false identity and role, usually posing as a genuine member of the group.

On the other hand, overt is where the researcher reveals his or her true identity and purpose to the group and asks permission to observe.

Limitations

  • It can be difficult to get time/privacy for recording. For example, researchers can’t take notes openly with covert observations as this would blow their cover. This means they must wait until they are alone and rely on their memory. This is a problem as they may forget details and are unlikely to remember direct quotations.
  • If the researcher becomes too involved, they may lose objectivity and become biased. There is always the danger that we will “see” what we expect (or want) to see. This is a problem because they could selectively report information instead of noting everything they observe, thus reducing the validity of their data.

Recording of Data

With controlled/structured observation studies, an important decision the researcher has to make is how to classify and record the data. Usually, this will involve a method of sampling.

In most coding systems, codes or ratings are made either per behavioral event or per specified time interval (Bakeman & Quera, 2011).

The three main sampling methods are:

  • Event-based coding involves identifying and segmenting interactions into meaningful events rather than timed units. For example, parent-child interactions may be segmented into control or teaching events to code. Event recording allows counting event frequency and sequencing, and timed-event recording can also capture event duration, providing information on time spent on behaviors.
  • Interval recording involves dividing interactions into fixed time intervals (e.g., 6–15 seconds) and coding behaviors within each interval (Bakeman & Quera, 2011). It is common in microanalytic coding to sample discrete behaviors in brief time samples across an interaction; the time unit can range from seconds to minutes to whole interactions. Interval recording requires segmenting interactions based on timing rather than events.
  • Instantaneous sampling provides snapshot coding at certain moments rather than summarizing behavior within full intervals. This allows quicker coding but may miss behaviors in between target times.
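As a minimal sketch (with hypothetical behaviors and helper names), the three strategies can be contrasted on the same stream of timed events:

```python
# Hypothetical timed-event record: (behavior, onset_sec, offset_sec).
events = [("gaze", 0.0, 2.5), ("vocalize", 2.5, 3.0), ("gaze", 3.0, 7.0)]

def event_recording(events):
    """Event recording: count each occurrence of a behavior."""
    counts = {}
    for behavior, _, _ in events:
        counts[behavior] = counts.get(behavior, 0) + 1
    return counts

def interval_recording(events, interval=2.0, total=8.0):
    """Interval recording: note which behaviors occur in each fixed interval."""
    n = int(total / interval)
    grid = []
    for i in range(n):
        start, end = i * interval, (i + 1) * interval
        present = {b for b, on, off in events if on < end and off > start}
        grid.append(present)
    return grid

def instantaneous_sampling(events, times):
    """Instantaneous sampling: snapshot of the ongoing behavior at set moments."""
    return [next((b for b, on, off in events if on <= t < off), None)
            for t in times]

print(event_recording(events))            # {'gaze': 2, 'vocalize': 1}
print(interval_recording(events))
print(instantaneous_sampling(events, [1.0, 2.7, 5.0]))
```

Note how the same underlying record yields frequencies under event recording, presence/absence per window under interval recording, and only a few snapshots under instantaneous sampling.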

Coding Systems

The coding system should focus on behaviors, patterns, individual characteristics, or relationship qualities that are relevant to the theory guiding the study (Wampler & Harper, 2014).

Codes vary in how much inference is required, from concrete observable behaviors like frequency of eye contact to more abstract concepts like degree of rapport between a therapist and client (Hill & Lambert, 2004). More inference may reduce reliability.

Coding schemes can vary in their level of detail or granularity. Micro-level schemes capture fine-grained behaviors, such as specific facial movements, while macro-level schemes might code broader behavioral states or interactions. The appropriate level of granularity depends on the research questions and the practical constraints of the study.

Another important consideration is the concreteness of the codes. Some schemes use physically based codes that are directly observable (e.g., “eyes closed”), while others use more socially based codes that require some level of inference (e.g., “showing empathy”). While physically based codes may be easier to apply consistently, socially based codes often capture more meaningful behavioral constructs.

Most coding schemes strive to create sets of codes that are mutually exclusive and exhaustive (ME&E). This means that for any given set of codes, only one code can apply at a time (mutual exclusivity), and there is always an applicable code (exhaustiveness). This property simplifies both the coding process and subsequent data analysis.

For example, a simple ME&E set for coding infant state might include: 1) Quiet alert, 2) Crying, 3) Fussy, 4) REM sleep, and 5) Deep sleep. At any given moment, an infant would be in one and only one of these states.
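A quick way to see the ME&E property is to check a coded record programmatically. The sketch below (with hypothetical state names and data) verifies that each observed moment carries exactly one code from the scheme:

```python
# The five infant states from the example above, as a coding scheme.
INFANT_STATES = {"quiet_alert", "crying", "fussy", "rem_sleep", "deep_sleep"}

def is_me_and_e(coded_moments, scheme=INFANT_STATES):
    """coded_moments: a list of sets of codes, one set per observed moment.
    ME&E holds when every moment has exactly one code (mutual exclusivity:
    never two at once; exhaustiveness: never zero) drawn from the scheme."""
    return all(len(codes) == 1 and codes <= scheme for codes in coded_moments)

ok = [{"crying"}, {"fussy"}, {"quiet_alert"}]
bad = [{"crying", "fussy"}, set()]        # an overlap, then a gap
print(is_me_and_e(ok), is_me_and_e(bad))  # True False
```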

Macroanalytic coding systems

Macroanalytic coding systems involve rating or summarizing behaviors using larger coding units and broader categories that reflect patterns across longer periods of interaction rather than coding small or discrete behavioral acts. 

Macroanalytic coding systems focus on capturing overarching themes, global qualities, or general patterns of behavior rather than specific, discrete actions.

For example, a macroanalytic coding system may rate the overall degree of therapist warmth or level of client engagement globally for an entire therapy session, requiring the coders to summarize and infer these constructs across the interaction rather than coding smaller behavioral units.

These systems require observers to make more inferences, and are thus more time-consuming, but can better capture contextual factors, stability over time, and the interdependent nature of behaviors (Carlson & Grotevant, 1987).

Examples of Macroanalytic Coding Systems:

  • Emotional Availability Scales (EAS) : This system assesses the quality of emotional connection between caregivers and children across dimensions like sensitivity, structuring, non-intrusiveness, and non-hostility.
  • Classroom Assessment Scoring System (CLASS) : Evaluates the quality of teacher-student interactions in classrooms across domains like emotional support, classroom organization, and instructional support.

Microanalytic coding systems

Microanalytic coding systems involve rating behaviors using smaller, more discrete coding units and categories.

These systems focus on capturing specific, discrete behaviors or events as they occur moment-to-moment. Behaviors are often coded second-by-second or in very short time intervals.

For example, a microanalytic system may code each instance of eye contact or head nodding during a therapy session. These systems code specific, molecular behaviors as they occur moment-to-moment rather than summarizing actions over longer periods.

Microanalytic systems require less inference from coders and allow for analysis of behavioral contingencies and sequential interactions between therapist and client. However, they are more time-consuming and expensive to implement than macroanalytic approaches.

Examples of Microanalytic Coding Systems:

  • Facial Action Coding System (FACS) : Codes minute facial muscle movements to analyze emotional expressions.
  • Specific Affect Coding System (SPAFF) : Used in marital interaction research to code specific emotional behaviors.
  • Noldus Observer XT : A software system that allows for detailed coding of behaviors in real-time or from video recordings.

Mesoanalytic coding systems

Mesoanalytic coding systems attempt to balance macro- and micro-analytic approaches.

In contrast to macroanalytic systems that summarize behaviors in larger chunks, mesoanalytic systems use medium-sized coding units that target more specific behaviors or interaction sequences (Bakeman & Quera, 2017).

For example, a mesoanalytic system may code each instance of a particular type of therapist statement or client emotional expression. However, mesoanalytic systems still use larger units than microanalytic approaches that code every speech onset and offset.

The goal of balancing specificity and feasibility makes mesoanalytic systems well-suited for many research questions (Morris et al., 2014). Mesoanalytic codes can preserve some sequential information while remaining efficient enough for studies with adequate but limited resources.

For instance, a mesoanalytic couple interaction coding system could target key behavior patterns like validation sequences without coding turn-by-turn speech.

In this way, mesoanalytic coding allows reasonable reliability and specificity without requiring extensive training or observation. The mid-level focus offers a pragmatic compromise between depth and breadth in analyzing interactions.

Examples of Mesoanalytic Coding Systems:

  • Feeding Scale for Mother-Infant Interaction : Assesses feeding interactions in 5-minute episodes, coding specific behaviors and overall qualities.
  • Couples Interaction Rating System (CIRS): Codes specific behaviors and rates overall qualities in segments of couple interactions.
  • Teaching Styles Rating Scale : Combines frequency counts of specific teacher behaviors with global ratings of teaching style in classroom segments.

Preventing Coder Drift

Coder drift is measurement error caused by gradual shifts in how observations are rated relative to their operational definitions, especially when behavioral codes are not clearly specified.

This type of error creeps in when coders fail to regularly review which precise observations do, and do not, constitute the behaviors being measured.

Preventing drift refers to taking active steps to maintain consistency and minimize changes or deviations in how coders rate or evaluate behaviors over time. Specifically, some key ways to prevent coder drift include:
  • Operationalize codes : It is essential that code definitions unambiguously distinguish which interactions represent instances of each coded behavior.
  • Ongoing training : Returning to those operational definitions through ongoing training serves to recalibrate coder interpretations and reinforce accurate recognition. Having regular “check-in” sessions where coders practice coding the same interactions allows monitoring that they continue applying codes reliably without gradual shifts in interpretation.
  • Using reference videos : Having coders periodically code the same “gold standard” reference videos anchors their judgments and calibrates them against the original training. Without periodic anchoring to original specifications, coder decisions tend to drift from initial measurement reliability.
  • Assessing inter-rater reliability : Statistically tracking whether coders maintain high levels of agreement over the course of a study, not just at the start, flags any declines that indicate drift. Sustaining inter-rater agreement requires mitigating the common tendency for observer judgment to change during intensive, long-term coding tasks.
  • Recalibrating through discussion : Meetings where coders openly discuss disagreements help uncover why judgments may be shifting over time and restore consensus on how codes are applied.
  • Adjusting unclear codes : If reliability issues persist, revisiting and refining ambiguous code definitions or anchors can eliminate inconsistencies arising from coder confusion.

Essentially, the goal of preventing coder drift is to maintain standardization and minimize the unintentional biases that can slowly alter how observational data are rated over periods of extensive coding.

Through the upkeep of skills, continuing calibration to benchmarks, and monitoring consistency, researchers can notice and correct for any creeping changes in coder decision-making over time.

Reducing Observer Bias

Observational research is prone to observer biases resulting from coders’ subjective perspectives shaping the interpretation of complex interactions (Burghardt et al., 2012). When coding, personal expectations may unconsciously influence judgments. However, rigorous methods exist to reduce such bias.

Coding Manual

A detailed coding manual minimizes subjectivity by clearly defining what behaviors and interaction dynamics observers should code (Bakeman & Quera, 2011).

High-quality manuals have strong theoretical and empirical grounding, laying out explicit coding procedures and providing rich behavioral examples to anchor code definitions (Lindahl, 2001).

Clear delineation of the frequency, intensity, duration, and type of behaviors constituting each code facilitates reliable judgments and reduces ambiguity for coders. Without clarity on how codes translate to observable interaction, their application risks inconsistency across raters.

Coder Training

Competent coders require both interpersonal perceptiveness and scientific rigor (Wampler & Harper, 2014). Training thoroughly reviews the theoretical basis for coded constructs and teaches the coding system itself.

Multiple “gold standard” criterion videos demonstrate code ranges that trainees independently apply. Coders then meet weekly to establish reliability of 80% or higher agreement both among themselves and with master criterion coding (Hill & Lambert, 2004).

Ongoing training manages coder drift over time. Revisions to unclear codes may also improve reliability. Both careful selection and investment in rigorous training increase quality control.

Blind Methods

To prevent bias, coders should remain unaware of specific study predictions or participant details that could influence their coding (Burghardt et al., 2012). Using separate teams for data gathering and coding helps maintain blinding.


In addition, scheduling procedures can prevent coders from rating data collected directly from participants with whom they have had personal contact. Maintaining coder independence and blinding enhances objectivity.

Data Analysis Approaches

Data analysis in behavioral observation aims to transform raw observational data into quantifiable measures that can be statistically analyzed.

It’s important to note that the choice of analysis approach is not arbitrary but should be guided by the research questions, study design, and nature of the data collected.

Interval data (where behavior is recorded at fixed time points), event data (where the occurrence of behaviors is noted as they happen), and timed-event data (where both the occurrence and duration of behaviors are recorded) may require different analytical approaches.

Similarly, the level of measurement (categorical, ordinal, or continuous) will influence the choice of statistical tests.

Researchers typically start with simple descriptive statistics to get a feel for their data before moving on to more complex analyses. This stepwise approach allows for a thorough understanding of the data and can often reveal unexpected patterns or relationships that merit further investigation.

Simple Descriptive Statistics

Descriptive statistics give an overall picture of behavior patterns and are often the first step in analysis.
  • Frequency counts tell us how often a particular behavior occurs, while rates express this frequency in relation to time (e.g., occurrences per minute).
  • Duration measures how long behaviors last, offering insight into their persistence or intensity.
  • Probability calculations indicate the likelihood of a behavior occurring under certain conditions, and relative frequency or duration statistics show the proportional occurrence of different behaviors within a session or across the study.

These simple statistics form the foundation of behavioral analysis, providing researchers with a broad picture of behavioral patterns. 

They can reveal which behaviors are most common, how long they typically last, and how they might vary across different conditions or subjects.

For instance, in a study of classroom behavior, these statistics might show how often students raise their hands, how long they typically stay focused on a task, or what proportion of time is spent on different activities.
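The classroom example above can be sketched in a few lines. The session data, behavior names, and helper function below are hypothetical, purely for illustration:

```python
# Hypothetical timed-event record of one classroom session:
# (behavior, onset_sec, offset_sec).
session = [("on_task", 0, 120), ("off_task", 120, 150),
           ("on_task", 150, 290), ("hand_raise", 60, 62)]
session_length = 300  # seconds observed

def describe(behavior):
    """Frequency, rate, duration, and relative duration for one behavior."""
    bouts = [(on, off) for b, on, off in session if b == behavior]
    frequency = len(bouts)
    duration = sum(off - on for on, off in bouts)
    return {
        "frequency": frequency,                              # how often
        "rate_per_min": frequency / (session_length / 60),   # per minute
        "total_duration_sec": duration,                      # how long
        "relative_duration": duration / session_length,      # proportion
    }

print(describe("on_task"))
```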

Contingency Analyses

Contingency analyses help identify if certain behaviors tend to occur together or in sequence.
  • Contingency tables , also known as cross-tabulations, display the co-occurrence of two or more behaviors, allowing researchers to see if certain behaviors tend to happen together.
  • Odds ratios provide a measure of the strength of association between behaviors, indicating how much more likely one behavior is to occur in the presence of another.
  • Adjusted residuals in these tables can reveal whether the observed co-occurrences are significantly different from what would be expected by chance.

For example, in a study of parent-child interactions, contingency analyses might reveal whether a parent’s praise is more likely to follow a child’s successful completion of a task, or whether a child’s tantrum is more likely to occur after a parent’s refusal of a request.
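The praise-after-success example can be worked through with a 2x2 table. The counts below are hypothetical, not data from any study:

```python
# Hypothetical 2x2 contingency table of parent praise by child task outcome:
#                 praise   no praise
# success           a=30       b=10
# no success         c=5       d=25
a, b, c, d = 30, 10, 5, 25
n = a + b + c + d

# Odds ratio: how much higher the odds of praise are after success
# than after failure.
odds_ratio = (a * d) / (b * c)          # (30*25)/(10*5) = 15.0

# Count expected in the success/praise cell if the two behaviors were
# independent, for comparison against the observed count.
expected_a = (a + b) * (a + c) / n      # 40*35/70 = 20.0

print(odds_ratio, expected_a)
```

Here praise co-occurs with success far more often than chance would predict (30 observed vs. 20 expected), and the odds ratio of 15 indicates a strong association.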

These analyses can uncover important patterns in social interactions, learning processes, or behavioral chains.

Sequential Analyses

Sequential analyses are crucial for understanding processes and temporal relationships between behaviors.
  • Lag sequential analysis looks at the likelihood of one behavior following another within a specified number of events or time units.
  • Time-window sequential analysis examines whether a target behavior occurs within a defined time frame after a given behavior.

These methods are particularly valuable for understanding processes that unfold over time, such as conversation patterns, problem-solving strategies, or the development of social skills.
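As an illustration, lag-1 transition probabilities can be computed from an event stream in a few lines. The stream and behavior names below are hypothetical:

```python
from collections import Counter

# Hypothetical classroom event stream, in order of occurrence.
stream = ["question", "answer", "praise", "question", "answer",
          "question", "off_task", "question", "answer", "praise"]

def lag1_probabilities(stream, given):
    """Probability of each behavior immediately following `given` (lag 1)."""
    followers = Counter(nxt for prev, nxt in zip(stream, stream[1:])
                        if prev == given)
    total = sum(followers.values())
    return {behavior: k / total for behavior, k in followers.items()}

print(lag1_probabilities(stream, "question"))  # answer follows 3 of 4 times
```

Larger lags or time-window variants follow the same idea: count target behaviors within a specified number of events (or seconds) after the given behavior.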

Observer Agreement

Since human observers often code behaviors, it’s important to check reliability. This is typically done through measures of observer agreement.
  • Cohen’s kappa is commonly used for categorical data, providing a measure of agreement between observers that accounts for chance agreement.
  • Intraclass correlation coefficient (ICC) : Used for continuous data or ratings.

Good observer agreement is crucial for the validity of the study, as it demonstrates that the observed behaviors are consistently identified and coded across different observers or time points.
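For illustration, Cohen’s kappa can be computed by hand rather than via a statistics package. The two coders’ categorical codes below are hypothetical:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder1)
    # Observed proportion of moments where the coders agree.
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    # Chance agreement: product of each coder's marginal code proportions.
    p1, p2 = Counter(coder1), Counter(coder2)
    expected = sum(p1[c] * p2[c] for c in p1) / (n * n)
    return (observed - expected) / (1 - expected)

c1 = ["play", "play", "cry", "play", "cry", "play"]
c2 = ["play", "cry",  "cry", "play", "cry", "play"]
print(round(cohens_kappa(c1, c2), 3))
```

Here the coders agree on 5 of 6 moments (83%), but since half that agreement would be expected by chance given the code frequencies, kappa comes out lower, at about 0.67.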

Advanced Statistical Approaches

As researchers delve deeper into their data, they often employ more advanced statistical techniques.
  • Analysis of variance (ANOVA) can compare behavior across groups or settings. For instance, an ANOVA might reveal differences in the frequency of aggressive behaviors between children from different socioeconomic backgrounds or in different school settings.
  • Multilevel modeling allows researchers to account for dependencies in the data and to examine how behaviors might be influenced by factors at different levels (e.g., individual characteristics, group dynamics, and situational factors).
  • Time series analysis can reveal trends, cycles, or patterns in behavior over time that might not be apparent from simpler analyses. For instance, in a study of animal behavior, it might uncover daily or seasonal patterns in feeding, mating, or territorial behaviors.

Representation Techniques

Representation techniques help organize and visualize data:
  • Many researchers use a code-unit grid, which represents the data as a matrix with behaviors as rows and time units as columns. This format facilitates many types of analyses and allows for easy visualization of behavioral patterns.
  • Standardized formats like the Sequential Data Interchange Standard (SDIS) help ensure consistency in data representation across studies and facilitate the use of specialized analysis software.
  • Indeed, the complexity of behavioral observation data often necessitates the use of specialized software tools. Programs like GSEQ, Observer, and INTERACT are designed specifically for the analysis of observational data and can perform many of the analyses described above efficiently and accurately.

References

Bakeman, R., & Quera, V. (2017). Sequential analysis and observational methods for the behavioral sciences. Cambridge University Press.

Burghardt, G. M., Bartmess-LeVasseur, J. N., Browning, S. A., Morrison, K. E., Stec, C. L., Zachau, C. E., & Freeberg, T. M. (2012). Minimizing observer bias in behavioral studies: A review and recommendations. Ethology, 118(6), 511–517.

Hill, C. E., & Lambert, M. J. (2004). Methodological issues in studying psychotherapy processes and outcomes. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change (5th ed., pp. 84–135). Wiley.

Lindahl, K. M. (2001). Methodological issues in family observational research. In P. K. Kerig & K. M. Lindahl (Eds.), Family observational coding systems: Resources for systemic research (pp. 23–32). Lawrence Erlbaum Associates.

Mehl, M. R., Robbins, M. L., & Deters, F. G. (2012). Naturalistic observation of health-relevant social processes: The electronically activated recorder methodology in psychosomatics. Psychosomatic Medicine, 74(4), 410–417.

Morris, A. S., Robinson, L. R., & Eisenberg, N. (2014). Applying a multimethod perspective to the study of developmental psychology. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 103–123). Cambridge University Press.

Smith, J. A., Maxwell, S. D., & Johnson, G. (2014). The microstructure of everyday life: Analyzing the complex choreography of daily routines through the automatic capture and processing of wearable sensor data. In B. K. Wiederhold & G. Riva (Eds.), Annual Review of Cybertherapy and Telemedicine 2014: Positive change with technology (Vol. 199, pp. 62–64). IOS Press.

Traniello, J. F., & Bakker, T. C. (2015). The integrative study of behavioral interactions across the sciences. In T. K. Shackelford & R. D. Hansen (Eds.), The evolution of sexuality (pp. 119–147). Springer.

Wampler, K. S., & Harper, A. (2014). Observational methods in couple and family assessment. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (2nd ed., pp. 490–502). Cambridge University Press.



Duke University Libraries

Qualitative Research: Observation


Participant Observation



What is an observation?

A way to gather data by watching people, events, or noting physical characteristics in their natural setting. Observations can be overt (subjects know they are being observed) or covert (do not know they are being watched).

  • Researcher becomes a participant in the culture or context being observed.
  • Requires the researcher to be accepted as part of the culture being observed in order to succeed.

Direct Observation

  • Researcher strives to be as unobtrusive as possible so as not to bias the observations; more detached.
  • Technology can be useful (e.g., video or audio recording).

Indirect Observation

  • Results of an interaction, process or behavior are observed (for example, measuring the amount of plate waste left by students in a school cafeteria to determine whether a new food is acceptable to them).

Suggested Readings and Film

  • Born into Brothels (2004). Oscar-winning documentary and an example of participant observation; portrays the lives of children born to prostitutes in Calcutta. New York-based photographer Zana Briski gave cameras to the children and taught them photography.
  • Davies, J. P., & Spencer, D. (2010). Emotions in the field: The psychology and anthropology of fieldwork experience. Stanford, CA: Stanford University Press.
  • DeWalt, K. M., & DeWalt, B. R. (2011). Participant observation: A guide for fieldworkers. Lanham, MD: Rowman & Littlefield.
  • Reinharz, S. (2011). Observing the observer: Understanding our selves in field research. New York: Oxford University Press.
  • Schensul, J. J., & LeCompte, M. D. (2013). Essential ethnographic methods: A mixed methods approach. Lanham, MD: AltaMira Press.
  • Skinner, J. (2012). The interview: An ethnographic approach. New York: Berg.



Non-Experimental Research

32 Observational Research

Learning Objectives

  • List the various types of observational research methods and distinguish between each.
  • Describe the strengths and weaknesses of each observational research method.

What Is Observational Research?

The term observational research is used to refer to several different types of non-experimental studies in which behavior is systematically observed and recorded. The goal of observational research is to describe a variable or set of variables. More generally, the goal is to obtain a snapshot of specific characteristics of an individual, group, or setting. As described previously, observational research is non-experimental because nothing is manipulated or controlled, and as such we cannot arrive at causal conclusions using this approach. The data that are collected in observational research studies are often qualitative in nature but they may also be quantitative or both (mixed-methods). There are several different types of observational methods that will be described below.

Naturalistic Observation

Naturalistic observation is an observational method that involves observing people’s behavior in the environment in which it typically occurs. Thus, naturalistic observation is a type of field research (as opposed to a type of laboratory research). Jane Goodall’s famous research on chimpanzees is a classic example of naturalistic observation. Dr. Goodall spent three decades observing chimpanzees in their natural environment in East Africa. She examined such things as chimpanzees’ social structure, mating patterns, gender roles, family structure, and care of offspring by observing them in the wild. However, naturalistic observation could more simply involve observing shoppers in a grocery store, children on a school playground, or psychiatric inpatients in their wards. Researchers engaged in naturalistic observation usually make their observations as unobtrusively as possible so that participants are not aware that they are being studied. Such an approach is called disguised naturalistic observation. Ethically, this method is considered acceptable if the participants remain anonymous and the behavior occurs in a public setting where people would not normally have an expectation of privacy. Grocery shoppers putting items into their shopping carts, for example, are engaged in public behavior that is easily observable by store employees and other shoppers. For this reason, most researchers would consider it ethically acceptable to observe them for a study. On the other hand, one of the arguments against the ethicality of the naturalistic observation of “bathroom behavior” discussed earlier in the book is that people have a reasonable expectation of privacy even in a public restroom and that this expectation was violated.

In cases where it is not ethical or practical to conduct disguised naturalistic observation, researchers can conduct undisguised naturalistic observation, where the participants are made aware of the researcher’s presence and monitoring of their behavior. However, one concern with undisguised naturalistic observation is reactivity. Reactivity refers to when a measure changes participants’ behavior. In the case of undisguised naturalistic observation, the concern is that when people know they are being observed and studied, they may act differently than they normally would. This type of reactivity is known as the Hawthorne effect. For instance, you may act much differently in a bar if you know that someone is observing you and recording your behaviors, and this would invalidate the study. Disguised observation is therefore less reactive and can have higher validity, because people are not aware that their behaviors are being observed and recorded. However, we now know that people often become used to being observed and with time begin to behave naturally in the researcher’s presence. In other words, over time people habituate to being observed. Think about reality shows like Big Brother or Survivor, where people are constantly being observed and recorded. While they may be on their best behavior at first, in a fairly short amount of time they are flirting, having sex, wearing next to nothing, screaming at each other, and occasionally behaving in ways that are embarrassing.

Participant Observation

Another approach to data collection in observational research is participant observation. In participant observation, researchers become active participants in the group or situation they are studying. Participant observation is very similar to naturalistic observation in that it involves observing people’s behavior in the environment in which it typically occurs. As with naturalistic observation, the data that are collected can include interviews (usually unstructured), notes based on their observations and interactions, documents, photographs, and other artifacts. The only difference between naturalistic observation and participant observation is that researchers engaged in participant observation become active members of the group or situations they are studying. The basic rationale for participant observation is that there may be important information that is only accessible to, or can be interpreted only by, someone who is an active participant in the group or situation. Like naturalistic observation, participant observation can be either disguised or undisguised. In disguised participant observation, the researchers pretend to be members of the social group they are observing and conceal their true identity as researchers.

In a famous example of disguised participant observation, Leon Festinger and his colleagues infiltrated a doomsday cult known as the Seekers, whose members believed that the apocalypse would occur on December 21, 1954. Interested in studying how members of the group would cope psychologically when the prophecy inevitably failed, they carefully recorded the events and reactions of the cult members in the days before and after the supposed end of the world. Unsurprisingly, the cult members did not give up their belief but instead convinced themselves that it was their faith and efforts that saved the world from destruction. Festinger and his colleagues later published a book about this experience, which they used to illustrate the theory of cognitive dissonance (Festinger, Riecken, & Schachter, 1956) [1] .

In undisguised participant observation, by contrast, the researchers become a part of the group they are studying and disclose their true identity as researchers to the group under investigation. Once again, there are important ethical issues to consider with disguised participant observation: first, no informed consent can be obtained, and second, deception is being used, since the researcher intentionally withholds information about their motivations for being a part of the social group they are studying. But sometimes disguised participation is the only way to access a protective group (like a cult). Further, disguised participant observation is less prone to reactivity than undisguised participant observation.

Rosenhan’s study (1973) [2] of the experience of people in a psychiatric ward would be considered disguised participant observation because Rosenhan and his pseudopatients were admitted into psychiatric hospitals on the pretense of being patients so that they could observe the way that psychiatric patients are treated by staff. The staff and other patients were unaware of their true identities as researchers.

Another example of participant observation comes from a study by sociologist Amy Wilkins on a university-based religious organization that emphasized how happy its members were (Wilkins, 2008) [3] . Wilkins spent 12 months attending and participating in the group’s meetings and social events, and she interviewed several group members. In her study, Wilkins identified several ways in which the group “enforced” happiness—for example, by continually talking about happiness, discouraging the expression of negative emotions, and using happiness as a way to distinguish themselves from other groups.

One of the primary benefits of participant observation is that researchers are in a much better position to understand the viewpoint and experiences of the people they are studying when they are a part of the social group. The primary limitation of this approach is that the mere presence of the observer could affect the behavior of the people being observed. While this is also a concern with naturalistic observation, additional concerns arise when researchers become active members of the social group they are studying, because they may change the social dynamics and/or influence the behavior of the people they are studying. Similarly, if the researcher acts as a participant observer, there can be concerns with biases resulting from developing relationships with the participants. Concretely, the researcher may become less objective, resulting in more experimenter bias.

Structured Observation

Another observational method is structured observation. Here the investigator makes careful observations of one or more specific behaviors in a particular setting that is more structured than the settings used in naturalistic or participant observation. Often the setting in which the observations are made is not the natural setting. Instead, the researcher may observe people in the laboratory environment. Alternatively, the researcher may observe people in a natural setting (like a classroom setting) that they have structured in some way, for instance by introducing some specific task participants are to engage in or by introducing a specific social situation or manipulation.

Structured observation is very similar to naturalistic observation and participant observation in that in all three cases researchers are observing naturally occurring behavior; however, the emphasis in structured observation is on gathering quantitative rather than qualitative data. Researchers using this approach are interested in a limited set of behaviors. This allows them to quantify the behaviors they are observing. In other words, structured observation is less global than naturalistic or participant observation because the researcher engaged in structured observations is interested in a small number of specific behaviors. Therefore, rather than recording everything that happens, the researcher only focuses on very specific behaviors of interest.

Researchers Robert Levine and Ara Norenzayan used structured observation to study differences in the “pace of life” across countries (Levine & Norenzayan, 1999) [4]. One of their measures involved observing pedestrians in a large city to see how long it took them to walk 60 feet. They found that people in some countries walked reliably faster than people in other countries. For example, people in Canada and Sweden covered 60 feet in just under 13 seconds on average, while people in Brazil and Romania took close to 17 seconds. When structured observation takes place in the complex and even chaotic “real world,” the questions of when, where, and under what conditions the observations will be made, and who exactly will be observed, are important to consider. Levine and Norenzayan described their sampling process as follows:

“Male and female walking speed over a distance of 60 feet was measured in at least two locations in main downtown areas in each city. Measurements were taken during main business hours on clear summer days. All locations were flat, unobstructed, had broad sidewalks, and were sufficiently uncrowded to allow pedestrians to move at potentially maximum speeds. To control for the effects of socializing, only pedestrians walking alone were used. Children, individuals with obvious physical handicaps, and window-shoppers were not timed. Thirty-five men and 35 women were timed in most cities.” (p. 186).

Precise specification of the sampling process in this way makes data collection manageable for the observers, and it also provides some control over important extraneous variables. For example, by making their observations on clear summer days in all countries, Levine and Norenzayan controlled for effects of the weather on people’s walking speeds.  In Levine and Norenzayan’s study, measurement was relatively straightforward. They simply measured out a 60-foot distance along a city sidewalk and then used a stopwatch to time participants as they walked over that distance.
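The measurement itself reduces to simple arithmetic: average the pedestrians' times over the fixed 60-foot course, and divide distance by time for a walking speed. A minimal sketch, with invented raw times chosen to echo the reported averages (just under 13 seconds versus close to 17):

```python
# Hypothetical timing data: seconds for lone pedestrians to cover the
# fixed 60-foot course. The values are invented for illustration.
times = {
    "Canada": [12.4, 13.1, 12.8, 13.3],
    "Brazil": [16.9, 17.2, 16.5, 17.4],
}

for country, secs in times.items():
    mean_time = sum(secs) / len(secs)  # average seconds per country
    speed_fps = 60 / mean_time         # feet per second over the course
    print(f"{country}: {mean_time:.1f} s on average ({speed_fps:.2f} ft/s)")
```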

As another example, researchers Robert Kraut and Robert Johnston wanted to study bowlers’ reactions to their shots, both when they were facing the pins and then when they turned toward their companions (Kraut & Johnston, 1979) [5] . But what “reactions” should they observe? Based on previous research and their own pilot testing, Kraut and Johnston created a list of reactions that included “closed smile,” “open smile,” “laugh,” “neutral face,” “look down,” “look away,” and “face cover” (covering one’s face with one’s hands). The observers committed this list to memory and then practiced by coding the reactions of bowlers who had been videotaped. During the actual study, the observers spoke into an audio recorder, describing the reactions they observed. Among the most interesting results of this study was that bowlers rarely smiled while they still faced the pins. They were much more likely to smile after they turned toward their companions, suggesting that smiling is not purely an expression of happiness but also a form of social communication.

In yet another example (this one in a laboratory environment), Dov Cohen and his colleagues had observers rate the emotional reactions of participants who had just been deliberately bumped and insulted by a confederate after they dropped off a completed questionnaire at the end of a hallway. The confederate was posing as someone who worked in the same building and who was frustrated by having to close a file drawer twice in order to permit the participants to walk past them (first to drop off the questionnaire at the end of the hallway and once again on their way back to the room where they believed the study they signed up for was taking place). The two observers were positioned at different ends of the hallway so that they could read the participants’ body language and hear anything they might say. Interestingly, the researchers hypothesized that participants from the southern United States, which is one of several places in the world that has a “culture of honor,” would react with more aggression than participants from the northern United States, a prediction that was in fact supported by the observational data (Cohen, Nisbett, Bowdle, & Schwarz, 1996) [6] .

When the observations require a judgment on the part of the observers—as in the studies by Kraut and Johnston and Cohen and his colleagues—a process referred to as coding is typically required. Coding generally requires clearly defining a set of target behaviors. The observers then categorize participants individually in terms of which behavior they have engaged in and the number of times they engaged in each behavior. The observers might even record the duration of each behavior. The target behaviors must be defined in such a way that different observers are guided to code them in the same way. This difficulty with coding illustrates the issue of interrater reliability, as mentioned in Chapter 4. Researchers are expected to demonstrate the interrater reliability of their coding procedure by having multiple raters code the same behaviors independently and then showing that the different observers are in close agreement. Kraut and Johnston, for example, video recorded a subset of their participants’ reactions and had two observers independently code them. The two observers agreed on the reactions that were exhibited 97% of the time, indicating good interrater reliability.
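Agreement between two coders, like the 97% figure Kraut and Johnston reported, can be computed directly from their lists of coded categories. The sketch below uses invented codings; Cohen's kappa, which corrects percent agreement for chance, is included as an extra here and is not a statistic the studies above report.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of observations on which the two coders chose the same category."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement: for each category, the probability that
    # both coders pick it independently, summed over categories.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Invented codings of five bowler reactions by two independent observers.
coder_a = ["open smile", "neutral face", "laugh", "look down", "open smile"]
coder_b = ["open smile", "neutral face", "laugh", "look away", "open smile"]

print(percent_agreement(coder_a, coder_b))       # 0.8
print(round(cohens_kappa(coder_a, coder_b), 3))  # 0.737
```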

One of the primary benefits of structured observation is that it is far more efficient than naturalistic and participant observation. Because the researchers focus on specific behaviors, less time and expense are required. Also, the environment is often structured to encourage the behaviors of interest, which again means that researchers do not have to invest as much time waiting for the behaviors of interest to occur naturally. Finally, researchers using this approach can clearly exert greater control over the environment. However, when researchers exert more control over the environment, it may become less natural, which decreases external validity. It is less clear, for instance, whether structured observations made in a laboratory environment will generalize to a real-world environment. Furthermore, since researchers engaged in structured observation are often not disguised, there may be more concerns with reactivity.

Case Studies

A case study is an in-depth examination of an individual. Sometimes case studies are also completed on social units (e.g., a cult) and events (e.g., a natural disaster). Most commonly in psychology, however, case studies provide a detailed description and analysis of an individual. Often the individual has a rare or unusual condition or disorder or has damage to a specific region of the brain.

Like many observational research methods, case studies tend to be more qualitative in nature. Case study methods involve an in-depth, and often a longitudinal, examination of an individual. Depending on the focus of the case study, individuals may or may not be observed in their natural setting. If the natural setting is not what is of interest, then the individual may be brought into a therapist’s office or a researcher’s lab for study. Also, the bulk of the case study report will focus on in-depth descriptions of the person rather than on statistical analyses. With that said, some quantitative data may also be included in the write-up of a case study. For instance, an individual’s depression score may be compared to normative scores, or their score before and after treatment may be compared. As with other qualitative methods, a variety of different methods and tools can be used to collect information on the case. For instance, interviews, naturalistic observation, structured observation, psychological testing (e.g., IQ test), and/or physiological measurements (e.g., brain scans) may be used to collect information on the individual.

HM is one of the most famous case studies in psychology. HM suffered from intractable and very severe epilepsy. A surgeon localized HM’s epilepsy to his medial temporal lobes, and in 1953 he removed large sections of his hippocampus in an attempt to stop the seizures. The treatment was a success in that it resolved his epilepsy, and his IQ and personality were unaffected. However, the doctors soon realized that HM exhibited a strange form of amnesia, called anterograde amnesia. HM was able to carry out a conversation and could remember short strings of letters, digits, and words. Basically, his short-term memory was preserved. However, HM could not commit new events to memory. He lost the ability to transfer information from his short-term memory to his long-term memory, something memory researchers call consolidation. So while he could carry on a conversation with someone, he would completely forget the conversation after it ended. This was an extremely important case study for memory researchers because it suggested a dissociation between short-term memory and long-term memory: two different abilities subserved by different areas of the brain. It also suggested that the temporal lobes are particularly important for consolidating new information (i.e., for transferring information from short-term memory to long-term memory).

QR code for Hippocampus & Memory video

The history of psychology is filled with influential case studies, such as Sigmund Freud’s description of “Anna O.” (see Note 6.1 “The Case of “Anna O.””) and John Watson and Rosalie Rayner’s description of Little Albert (Watson & Rayner, 1920) [7], who allegedly learned to fear a white rat—along with other furry objects—when the researchers repeatedly made a loud noise every time the rat approached him.

The Case of “Anna O.”

Sigmund Freud used the case of a young woman he called “Anna O.” to illustrate many principles of his theory of psychoanalysis (Freud, 1961) [8] . (Her real name was Bertha Pappenheim, and she was an early feminist who went on to make important contributions to the field of social work.) Anna had come to Freud’s colleague Josef Breuer around 1880 with a variety of odd physical and psychological symptoms. One of them was that for several weeks she was unable to drink any fluids. According to Freud,

She would take up the glass of water that she longed for, but as soon as it touched her lips she would push it away like someone suffering from hydrophobia.…She lived only on fruit, such as melons, etc., so as to lessen her tormenting thirst. (p. 9)

But according to Freud, a breakthrough came one day while Anna was under hypnosis.

[S]he grumbled about her English “lady-companion,” whom she did not care for, and went on to describe, with every sign of disgust, how she had once gone into this lady’s room and how her little dog—horrid creature!—had drunk out of a glass there. The patient had said nothing, as she had wanted to be polite. After giving further energetic expression to the anger she had held back, she asked for something to drink, drank a large quantity of water without any difficulty, and awoke from her hypnosis with the glass at her lips; and thereupon the disturbance vanished, never to return. (p.9)

Freud’s interpretation was that Anna had repressed the memory of this incident along with the emotion that it triggered and that this was what had caused her inability to drink. Furthermore, he believed that her recollection of the incident, along with her expression of the emotion she had repressed, caused the symptom to go away.

As an illustration of Freud’s theory, the case study of Anna O. is quite effective. As evidence for the theory, however, it is essentially worthless. The description provides no way of knowing whether Anna had really repressed the memory of the dog drinking from the glass, whether this repression had caused her inability to drink, or whether recalling this “trauma” relieved the symptom. It is also unclear from this case study how typical or atypical Anna’s experience was.

Figure 6.8 Anna O. “Anna O.” was the subject of a famous case study used by Freud to illustrate the principles of psychoanalysis. Source: http://en.wikipedia.org/wiki/File:Pappenheim_1882.jpg

Case studies are useful because they provide a level of detailed analysis not found in many other research methods and greater insights may be gained from this more detailed analysis. As a result of the case study, the researcher may gain a sharpened understanding of what might become important to look at more extensively in future more controlled research. Case studies are also often the only way to study rare conditions because it may be impossible to find a large enough sample of individuals with the condition to use quantitative methods. Although at first glance a case study of a rare individual might seem to tell us little about ourselves, they often do provide insights into normal behavior. The case of HM provided important insights into the role of the hippocampus in memory consolidation.

However, it is important to note that while case studies can provide insights into certain areas and variables to study, and can be useful in helping develop theories, they should never be used as evidence for theories. In other words, case studies can be used as inspiration to formulate theories and hypotheses, but those hypotheses and theories then need to be formally tested using more rigorous quantitative methods. The reason case studies shouldn’t be used to provide support for theories is that they suffer from problems with both internal and external validity. Case studies lack the proper controls that true experiments contain. As such, they suffer from problems with internal validity, so they cannot be used to determine causation. For instance, during HM’s surgery, the surgeon may have accidentally lesioned another area of HM’s brain (a possibility suggested by the dissection of HM’s brain following his death) and that lesion may have contributed to his inability to consolidate new information. The fact is, with case studies we cannot rule out these sorts of alternative explanations. So, as with all observational methods, case studies do not permit determination of causation. In addition, because case studies are often of a single individual, and typically an abnormal individual, researchers cannot generalize their conclusions to other individuals. Recall that with most research designs there is a trade-off between internal and external validity. With case studies, however, there are problems with both internal validity and external validity. So there are limits both to the ability to determine causation and to generalize the results. A final limitation of case studies is that ample opportunity exists for the theoretical biases of the researcher to color or bias the case description. 
Indeed, the researcher who studied HM has been accused of destroying unpublished data, including data that contradicted her theory about how memories are consolidated. A fascinating New York Times article describing some of the controversies that ensued after HM’s death and the analysis of his brain can be found at: https://www.nytimes.com/2016/08/07/magazine/the-brain-that-couldnt-remember.html?_r=0

Archival Research

Another approach that is often considered observational research involves analyzing archival data that have already been collected for some other purpose. An example is a study by Brett Pelham and his colleagues on “implicit egotism”—the tendency for people to prefer people, places, and things that are similar to themselves (Pelham, Carvallo, & Jones, 2005) [9] . In one study, they examined Social Security records to show that women with the names Virginia, Georgia, Louise, and Florence were especially likely to have moved to the states of Virginia, Georgia, Louisiana, and Florida, respectively.

As with naturalistic observation, measurement can be more or less straightforward when working with archival data. For example, counting the number of people named Virginia who live in various states based on Social Security records is relatively straightforward. But consider a study by Christopher Peterson and his colleagues on the relationship between optimism and health using data that had been collected many years before for a study on adult development (Peterson, Seligman, & Vaillant, 1988) [10] . In the 1940s, healthy male college students had completed an open-ended questionnaire about difficult wartime experiences. In the late 1980s, Peterson and his colleagues reviewed the men’s questionnaire responses to obtain a measure of explanatory style—their habitual ways of explaining bad events that happen to them. More pessimistic people tend to blame themselves and expect long-term negative consequences that affect many aspects of their lives, while more optimistic people tend to blame outside forces and expect limited negative consequences. To obtain a measure of explanatory style for each participant, the researchers used a procedure in which all negative events mentioned in the questionnaire responses, and any causal explanations for them were identified and written on index cards. These were given to a separate group of raters who rated each explanation in terms of three separate dimensions of optimism-pessimism. These ratings were then averaged to produce an explanatory style score for each participant. The researchers then assessed the statistical relationship between the men’s explanatory style as undergraduate students and archival measures of their health at approximately 60 years of age. The primary result was that the more optimistic the men were as undergraduate students, the healthier they were as older men. Pearson’s  r  was +.25.
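The final step of the analysis, relating one set of scores to another with Pearson's r, can be sketched as follows. The paired scores are invented and do not reproduce the reported r of +.25; only the computation is standard.

```python
import math

def pearson_r(x, y):
    """Pearson correlation: covariance divided by the product of the deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented data: averaged explanatory-style ratings (higher = more optimistic)
# paired with a hypothetical health index at around age 60.
optimism = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
health = [55, 70, 52, 68, 60, 66]

print(round(pearson_r(optimism, health), 2))  # 0.94
```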

This method is an example of  content analysis —a family of systematic approaches to measurement using complex archival data. Just as structured observation requires specifying the behaviors of interest and then noting them as they occur, content analysis requires specifying keywords, phrases, or ideas and then finding all occurrences of them in the data. These occurrences can then be counted, timed (e.g., the amount of time devoted to entertainment topics on the nightly news show), or analyzed in a variety of other ways.
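The counting step of content analysis can be sketched in a few lines: specify the keywords, then tally every whole-word occurrence in the archival text. The documents and keywords below are invented for illustration.

```python
import re
from collections import Counter

keywords = ["happy", "happiness", "joy"]

# Invented snippets standing in for archival text (e.g., meeting transcripts).
documents = [
    "We are so happy here. Happiness is what sets us apart.",
    "Everyone kept talking about happiness and joy at the meeting.",
]

counts = Counter()
for doc in documents:
    for kw in keywords:
        # \b word boundaries keep "happy" from also matching inside "happiness"
        counts[kw] += len(re.findall(r"\b" + re.escape(kw) + r"\b", doc.lower()))

print(dict(counts))  # {'happy': 1, 'happiness': 2, 'joy': 1}
```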

Media Attributions

  • What happens when you remove the hippocampus? – Sam Kean by TED-Ed licensed under a standard YouTube License
  • Pappenheim 1882 by unknown is in the Public Domain.
  • Festinger, L., Riecken, H., & Schachter, S. (1956). When prophecy fails: A social and psychological study of a modern group that predicted the destruction of the world. University of Minnesota Press.
  • Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250–258.
  • Wilkins, A. (2008). “Happier than Non-Christians”: Collective emotions and symbolic boundaries among evangelical Christians. Social Psychology Quarterly, 71, 281–301.
  • Levine, R. V., & Norenzayan, A. (1999). The pace of life in 31 countries. Journal of Cross-Cultural Psychology, 30, 178–205.
  • Kraut, R. E., & Johnston, R. E. (1979). Social and emotional messages of smiling: An ethological approach. Journal of Personality and Social Psychology, 37, 1539–1553.
  • Cohen, D., Nisbett, R. E., Bowdle, B. F., & Schwarz, N. (1996). Insult, aggression, and the southern culture of honor: An "experimental ethnography." Journal of Personality and Social Psychology, 70(5), 945–960.
  • Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1–14.
  • Freud, S. (1961). Five lectures on psycho-analysis. New York, NY: Norton.
  • Pelham, B. W., Carvallo, M., & Jones, J. T. (2005). Implicit egotism. Current Directions in Psychological Science, 14, 106–110.
  • Peterson, C., Seligman, M. E. P., & Vaillant, G. E. (1988). Pessimistic explanatory style is a risk factor for physical illness: A thirty-five year longitudinal study. Journal of Personality and Social Psychology, 55, 23–27.

Research that is non-experimental because it focuses on recording systematic observations of behavior in a natural or laboratory setting without manipulating anything.

An observational method that involves observing people’s behavior in the environment in which it typically occurs.

When researchers engage in naturalistic observation by making their observations as unobtrusively as possible so that participants are not aware that they are being studied.

When the participants are made aware of the researcher's presence and the monitoring of their behavior.

Refers to when a measure changes participants’ behavior.

A type of reactivity that arises in undisguised naturalistic observation: when people know they are being observed and studied, they may act differently than they normally would.

Researchers become active participants in the group or situation they are studying.

Researchers pretend to be members of the social group they are observing and conceal their true identity as researchers.

Researchers become a part of the group they are studying and they disclose their true identity as researchers to the group under investigation.

When a researcher makes careful observations of one or more specific behaviors in a particular setting that is more structured than the settings used in naturalistic or participant observation.

A part of structured observation whereby the observers use a clearly defined set of guidelines to "code" behaviors—assigning specific behaviors they are observing to a category—and count the number of times or the duration that the behavior occurs.

An in-depth examination of an individual.

A family of systematic approaches to measurement using qualitative methods to analyze complex archival data.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Neurol Res Pract

How to use and assess qualitative research methods

Loraine Busetto

1 Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany

Wolfgang Wick

2 Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Christoph Gumbinger

Associated data.

Not applicable.

This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 – 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 – 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

An external file that holds a picture, illustration, etc.
Object name is 42466_2020_59_Fig1_HTML.jpg

Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as it impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

[Fig. 2: Possible combination of data collection methods]

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

[Fig. 3: From data collection to data analysis]

Attributions for icons: see Fig. 2, also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 – 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

[Fig. 4: Three common mixed methods designs]

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting is relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
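The stopping rule described here can be pictured as a simple loop: collect a batch, analyse it, and stop once a batch adds nothing new. A hypothetical sketch, with codes standing in for “relevant new information”:

```python
# Hypothetical sketch of iterative sampling until saturation: each round of
# data collection yields a set of codes; sampling stops after the first
# round that contributes no new codes.

def reached_saturation(batches):
    """Return the 1-based round at which saturation occurred, else None."""
    seen = set()
    for i, batch_codes in enumerate(batches, start=1):
        new = set(batch_codes) - seen
        if not new and seen:        # no relevant new information found
            return i                # saturation reached in this round
        seen |= new
    return None                     # saturation not yet reached

# e.g. codes emerging from successive rounds of five interviews each
rounds = [{"delay", "staffing"}, {"delay", "equipment"}, {"equipment"}]
print(reached_saturation(rounds))   # -> 3
```

In practice the judgment is of course substantive rather than mechanical, but the sketch captures why the end point cannot be fixed in advance.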

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
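The logic of pre-defining the variants to be covered, and checking the sample against them, can be illustrated with a small sketch (all roles and names are hypothetical):

```python
# Hypothetical sketch of purposive sampling: pre-define the variants that
# must be represented, pick candidates accordingly, and report any variant
# that the current pool cannot cover.

def purposive_sample(candidates, required_roles):
    """Pick one candidate per required role; list any role left uncovered."""
    sample, missing = [], []
    for role in required_roles:
        match = next((c for c in candidates if c["role"] == role), None)
        if match:
            sample.append(match)
        else:
            missing.append(role)
    return sample, missing

pool = [
    {"name": "A", "role": "nurse"},
    {"name": "B", "role": "senior physician"},
    {"name": "C", "role": "patient"},
]
sample, missing = purposive_sample(pool, ["nurse", "patient", "relative"])
print([p["role"] for p in sample], missing)
# -> ['nurse', 'patient'] ['relative']
```

An uncovered variant (here, “relative”) would prompt further recruitment in the next iteration, in line with the cyclical approach described above.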

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this are pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or which is the best length of an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.
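The compare-and-consolidate step can be pictured as merging two coders' independent code assignments: codes both applied are kept, and differences are flagged for discussion in the research team. A hypothetical sketch:

```python
# Hypothetical sketch of consolidating two coders' independent codings of
# the same transcript segments: agreed codes are retained, disagreements
# are flagged for discussion before the coding list is finalised.

def consolidate(coder_a, coder_b):
    """Per segment, return the agreed codes and those needing discussion."""
    result = {}
    for segment in coder_a.keys() | coder_b.keys():
        a = set(coder_a.get(segment, []))
        b = set(coder_b.get(segment, []))
        result[segment] = {"agreed": sorted(a & b), "discuss": sorted(a ^ b)}
    return result

a = {"seg1": ["delay", "staffing"], "seg2": ["equipment"]}
b = {"seg1": ["delay"], "seg2": ["equipment", "procedure"]}
print(consolidate(a, b)["seg1"])
# -> {'agreed': ['delay'], 'discuss': ['staffing']}
```

The "discuss" entries are exactly the points where a common meaning of individual codes still has to be established.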

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 – 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 – 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding approach overlaps between the two co-coders. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant to the quality or “objectivity” of qualitative research that recruitment of study participants, data collection and data analysis be performed by different people. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher conducting the interviews will usually remember the interviewee and the specific interview situation during data analysis. This might be helpful in providing additional context information for interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
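As one concrete example of what such an overlap score might look like (an illustrative choice, not a standard or required metric), the mean per-segment overlap of two coders' code sets can be computed:

```python
# Illustrative (not prescribed) overlap score between two coders: the mean
# Jaccard overlap of the code sets they assigned to each segment.

def mean_overlap(coder_a, coder_b):
    """Average Jaccard similarity of the two coders' code sets per segment."""
    scores = []
    for segment in coder_a.keys() | coder_b.keys():
        a = set(coder_a.get(segment, []))
        b = set(coder_b.get(segment, []))
        scores.append(len(a & b) / len(a | b) if a | b else 1.0)
    return sum(scores) / len(scores)

a = {"seg1": ["delay"], "seg2": ["equipment"]}
b = {"seg1": ["delay"], "seg2": ["equipment", "procedure"]}
print(round(mean_overlap(a, b), 2))  # -> 0.75
```

As the text argues, such a number says nothing by itself about the quality of the analysis; if reported, it should be accompanied by an explanation of what it means for the coding process.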

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Take-away points

• Assessing complex multi-component interventions or systems (of change)

• What works for whom when, how and why?

• Focussing on intervention improvement

• Document study

• Observations (participant or non-participant)

• Interviews (especially semi-structured)

• Focus groups

• Transcription of audio-recordings and field notes into transcripts and protocols

• Coding of protocols

• Using qualitative data management software

• Combinations of quantitative and/or qualitative methods, e.g.:

• Convergent parallel design: quali and quanti in parallel

• Explanatory sequential design: quanti followed by quali

• Exploratory sequential design: quali followed by quanti

• Checklists

• Reflexivity

• Sampling strategies

• Piloting

• Co-coding

• Member checking

• Stakeholder involvement

• Protocol adherence

• Sample size

• Randomization

• Interrater reliability, variability and other “objectivity checks”

• Not being quantitative research

Acknowledgements

Abbreviations

EVT: Endovascular treatment
RCT: Randomised Controlled Trial
SOP: Standard Operating Procedure
SRQR: Standards for Reporting Qualitative Research

Authors’ contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Funding

No external funding.

Availability of data and materials

Ethics approval and consent to participate, consent for publication, competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Scientific Tools in Research: A Comprehensive Guide


Scientific research requires specialized tools and instrumentation to investigate natural phenomena systematically.

This guide provides a comprehensive overview of the essential scientific tools used in research, from basic equipment to advanced technologies, and how they enable the discovery and communication of new knowledge.

You will explore the core scientific tools that researchers utilize across disciplines, the key applications and capabilities of advanced instrumentation, as well as emerging tools reshaping the future of science. Additionally, we cover software, writing aids, and best practices for effectively communicating research findings using these powerful tools.

Introduction to Scientific Tools in Research Methodology

Scientific tools refer to the instruments, equipment, methodologies, and technologies used by researchers across all scientific disciplines to systematically gather data, run experiments, analyze information, and test hypotheses. As this guide will explore, they play an indispensable role across the entire scientific method.

Defining Scientific Tools and Their Uses

Scientific tools encompass a wide range of devices and methodologies, including:

Laboratory equipment used to carry out experiments, such as microscopes, scales, thermometers, and more

Field gear like telescopes, cameras, GPS, and sensors to collect observational data

Advanced technologies like particle colliders, satellites, supercomputers, and AI systems to process huge volumes of information

Statistical, computational, and visualization tools to analyze data and discern patterns

Standardized protocols and techniques to ensure consistency across tests and experiments

These tools serve diverse functions across the scientific process, like precision measurement, controlled experimentation, advanced analysis, replicable methodologies, and more. Their overarching goal is to expand the capabilities of human senses and cognition to deepen our understanding of natural phenomena through systematic investigation.

The Vital Role of Scientific Tools in Research

Modern science would be impossible without the specialized tools and technologies that empower discovery and analysis. Key roles and benefits include:

Enabling investigation at vastly different scales - from nanoscale cellular functions to the farthest cosmic reaches

Expanding sensory capabilities - allowing us to see atoms, hear gravitational waves, and detect invisible phenomena

Increasing measurement precision for accurate, granular data collection

Allowing advanced computation and analysis of massive, complex data

Facilitating controlled, replicable experiments through standardized laboratory techniques

Accelerating knowledge gathering and testing through high-throughput technologies

Without this constant evolution of sophisticated tools, our understanding of the natural world would remain extremely limited. They thus play an indispensable role across scientific domains and research initiatives.

Outline and Objectives of the Guide

This guide aims to provide researchers with a comprehensive overview of essential scientific tools and their applications across the research process. Key objectives include:

Categorizing major types of scientific tools and their use cases

Providing examples of vital tools in active research contexts

Discussing key selection criteria and best practices for applying them

Demonstrating how these tools contribute to robust, replicable science

Highlighting cutting-edge tools and methodologies that are reshaping research possibilities

With the accelerating pace of scientific progress, understanding the evolving landscape of enabling tools and technologies will only grow more valuable for the next generation of researchers. This guide seeks to illuminate their foundations and trajectories.

What are the uses of scientific tools in research?

Scientific tools are essential in research as they allow scientists to make precise measurements, carry out experiments, and make detailed observations. Some key uses and examples of scientific tools in research include:

Taking Measurements

Stopwatches measure the passage of time in experiments with great accuracy. They can time chemical reactions, the growth rate of bacterial cultures, and more.

Scales precisely measure the mass of chemicals, tissues, organisms, and other materials used in experiments. Highly sensitive scales can detect tiny changes in mass.

Rulers, measuring tapes, and calipers allow researchers to quantify the physical dimensions of specimens and materials accurately. Precise measurements enable calculating volumes, growth rates, concentrations, and other metrics.
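As a small illustration of deriving one of these metrics from raw measurements, the growth rate of a culture can be turned into a doubling time under an assumed exponential-growth model. This is a minimal sketch with invented counts, not a lab protocol:

```python
import math

def doubling_time(n0: float, n1: float, elapsed_hours: float) -> float:
    """Doubling time from two counts, assuming exponential growth."""
    growth_rate = math.log(n1 / n0) / elapsed_hours  # per-hour log growth rate
    return math.log(2) / growth_rate

# Hypothetical bacterial counts at t = 0 h and t = 4 h
print(round(doubling_time(1_000, 8_000, 4.0), 2))  # -> 1.33
```

The same two-measurement pattern applies to masses from a scale or dimensions from calipers; only the model linking the measurements changes.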

Experiments

Microscopes, from light to electron microscopy, enable viewing tiny structures like cells and molecules to study their form and function. Time-lapse microscopy tracks dynamic processes.

Spectrophotometers measure the interaction of light and matter. Researchers use them to quantify chemicals in solutions and study reaction kinetics.

Chromatography instruments separate chemical mixtures to identify their components. These tools are indispensable in biochemistry.

Gene sequencing machines rapidly decode DNA and RNA molecules. Understanding genetic blueprints provides insights into inheritance, mutations, and disease.

Observation

Telescopes gather light from astronomical objects like stars and galaxies. They expand our understanding of the cosmos and physics.

Seismometers detect vibrations in the Earth, whether from earthquakes or volcanoes. Analysis of seismic waves reveals Earth's inner structure.

In summary, scientific tools empower researchers to probe natural phenomena on scales ranging from the astronomical to molecular. By enhancing human senses, they uncover new realms of knowledge that advance science.

What are the tools for scientific observation?

Scientific observation is key to advancing research and gaining new insights. There are various tools scientists use to aid observation, measure data accurately, and draw evidence-based conclusions.

Binoculars utilize lenses to magnify images of distant objects. They allow clearer observation than the naked eye. Models with higher magnification power and wider objective lens diameters provide brighter, sharper images. Binoculars are portable, making them useful for outdoor observation.

Cameras capture images onto film or digital sensors. They provide permanent visual records and allow measurements. Specialty cameras like infrared cameras detect non-visible wavelengths. High-speed cameras capture rapid motion undetectable to the eye. Underwater cameras enable marine observation.

Microscopes

Microscopes use lenses to magnify tiny objects invisible to our eyes. Advanced microscopes like scanning electron microscopes and atomic force microscopes allow nanoscale observation. Microscopes reveal microscopic processes, enabling insights from cell biology to materials science.

Telescopes gather and focus light using curved mirrors or lenses, allowing observation of astronomical objects like stars, galaxies, and exoplanets. Radio telescopes detect radio waves emitted from cosmic sources. Adaptive optics corrects for atmospheric distortion. Space telescopes like Hubble avoid atmospheric interference altogether.

Overall, technology expands scientific observation capabilities - from nanometers to lightyears. But fundamentals like careful technique and analytical thinking remain integral to maximizing what we can learn. The right tools help scientists push boundaries.

What are some examples of scientific instruments?

Scientific instruments are essential tools that allow researchers to make quantitative measurements, gather data, and test hypotheses. Here are some common examples across scientific disciplines:

Physics and Engineering

Accelerometer: Measures acceleration forces and motion. Used in fields like physics, engineering, transportation, and biomechanics.

Ammeter: Measures electric current in units of amperes. Used in physics labs and electronics.

Anemometer: Measures wind speed. Important for meteorology and weather stations.

Chemistry and Biology

Calorimeter: Measures heat flow and thermal energy. Critical lab equipment for chemistry experiments.

DNA Sequencer: Analyzes DNA molecules and determines their sequence. Core tool for genetics, molecular biology, forensics.

Spectrometer: Measures spectra of electromagnetic radiation. Used to examine chemicals, materials, and astronomical objects. Many sub-types like mass spectrometers.

Cross-Disciplinary Tools

Microscope: Magnifies tiny objects for analysis. Fundamental instrument across disciplines like biology, materials science, and nanotechnology.

Thermometer: Measures temperature using various techniques. Used ubiquitously in science and industry.

This list highlights just a small sample of the diverse array of scientific instruments that empower researchers to quantify and analyze natural phenomena through measurement. As science and technology continue advancing, so too do the instrumentation capabilities that drive innovation across scientific fields.

Who used scientific tools to study matter?

Chemists and physicists rely on scientific tools and controlled experiments to study the properties of matter. These tools allow them to make precise measurements and observations to further scientific understanding.

Key Scientific Tools Used

Microscopes - Allow chemists and physicists to visualize matter at the microscopic level, studying shape, structure, and interactions. Different types like optical, electron, and scanning probe microscopes have different capabilities.

Spectrometers - Used to analyze the interaction between matter and electromagnetic radiation. Help determine chemical composition, structure, and properties. Common types include mass spectrometers, NMR spectrometers, and optical spectrometers.

Chromatography instruments - Separate chemical mixtures and analyze composition. Help identify elements, isotopes, molecules, and more in a sample. Common forms include gas chromatography and liquid chromatography.

Calorimeters - Measure heat flows during chemical reactions and phase changes. Used to determine thermodynamic values like enthalpy, heat capacity, reaction kinetics, and more.

Labware - Various glassware, instruments, and tools used to handle chemicals, make measurements, run reactions, etc. Includes beakers, flasks, pipettes, burettes, stirring rods, and more standardized equipment.

Applications in Research

These tools are routinely used by chemists and physicists in lab research to:

Quantify and characterize chemical samples

Understand reaction mechanisms and kinetics

Determine physical properties like conductivity, viscosity, density

Analyze molecular structure and bonding

Study thermodynamic processes

Investigate material properties

Advance discovery and knowledge in chemistry and physics

Ongoing innovations in scientific instrumentation continue to push the boundaries of what can be measured, visualized, and understood about the fundamental nature of matter.

Comprehensive List of Scientific Tools in Research

Scientific tools are essential for conducting research across disciplines. From basic measurement devices to advanced instrumentation, these tools enable scientists to observe, analyze, quantify, and elucidate natural phenomena. This guide provides an exhaustive inventory of key scientific tools and their diverse applications.

Basic Tools of Science and Their Significance

Fundamental scientific tools like the metric system, thermometers, and microscopes establish standardized systems of measurement and magnification that enable quantitative analysis and comparisons between studies. These tools form the bedrock upon which more advanced technologies are built.

Other elementary tools include:

Clocks and chronometers for tracking time. Understanding durations and sequences is crucial for studying cause-and-effect relationships.

Gravity measurements quantify the gravitational force, shedding light on astronomical objects and cosmological questions.

Computers and software facilitate calculations, statistical analysis, data visualization, and complex modeling.

Overall, these basic tools supply the raw data for deriving scientific insights. Their standardization across disciplines also facilitates collaboration, reproducibility, and incremental improvements upon previous findings.

Advanced Instrumentation and Equipment

Cutting-edge tools provide finer observations, access previously hidden aspects of nature, and exponentially increase the possibilities for discovery.

Notable examples include:

Particle colliders accelerate atomic particles to nearly the speed of light, enabling the study of subatomic particles.

Radio telescopes collect long wavelength cosmic radio signals emitted by stars and galaxies.

Microscopes and tools for microscale and nanoscale research reveal microscopic biological processes and enable astonishing technological innovation.

These instruments continue expanding the frontiers of human knowledge about the universe and our place within it.

Laboratory and Field Equipment Essentials

In applied scientific research, an array of equipment collects, measures, and analyzes specimens and phenomena. Laboratory and field equipment includes:

Measurement tools like rulers, scales, and calipers for quantitative analysis.

Assay equipment such as spectrometers, sequencers, and chromatographs identifies chemical components.

Electronic sensors such as accelerometers, thermocouples, and voltmeters capture environmental data.

Microfluidic devices manipulate tiny fluid volumes, enabling biochemical tests.

Laboratory information systems track samples, instrumentation, and data.

This equipment facilitates hands-on research central to chemistry, biology, materials science, and more.

Electronic Test Equipment and Measurement Devices

Sophisticated electronic tools verify circuit designs, troubleshoot electronics, test electromagnetic signals, and make precision measurements.

Oscilloscopes visualize electrical signal changes over time.

Signal generators produce test signals.

Logic analyzers capture digital signals between integrated circuits.

Network analyzers characterize electronic networks.

These and similar tools drive innovation in telecommunications, aviation, medicine, and beyond.

Emerging Tools in Biotechnology and Nanotechnology

Cutting-edge research leverages custom-engineered nanoscale tools and molecular biology techniques.

Bioelectronics integrate electronics with biological components, enabling electric sensing of living tissues.

Nanobots are tiny robots built from biological materials and electronic parts. Their small size allows interaction with human cells.

DNA sequencing and genetic engineering characterize and modify genetic code.

The capacity to directly manipulate matter on the molecular scale heralds a new scientific revolution. These tools show immense promise for applications from disease treatment to computing.

In summary, scientific tools span a vast spectrum - from elementary measurement standards to sophisticated large-scale instruments and nanoscale biotechnology. Together, this toolkit facilitates quantitative, reproducible research and drives discovery across every scientific field.

Scientific Writing and Communication Tools

Effective communication is key in research. This section will discuss tools and practices for scientific writing and publishing.

Scientific Writing Citation Style and Format

When writing a scientific paper, following proper citation style and formatting guidelines is essential for upholding academic integrity and enabling readers to verify claims. Key citation styles used in scientific writing include:

APA Style - Commonly used in psychology, education, and social sciences. Includes in-text citations and full references.

MLA Style - Used in humanities and liberal arts. Features brief in-text citations pointing to full references.

Chicago/Turabian Style - Flexible style allowing notes + bibliography or author-date citations.

ACS Style - Developed by American Chemical Society for chemistry fields. Uses numbered endnotes for citations.

AMA Style - Created for medical and health sciences literature by the American Medical Association.

Adhering to the guidelines of the selected citation style ensures proper attribution and facilitates literature review. Formatting elements like font, margins, headings, and file formatting also impact clarity. Using reference managers like Zotero, Mendeley, and EndNote streamlines citing sources.
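To make the idea of machine-formatted citations concrete, here is a deliberately simplified sketch of an author-date journal reference in roughly APA form. The helper function and the sample source are invented for illustration; real reference managers handle many more edge cases (et al. rules, DOIs, italics) than this:

```python
def apa_reference(authors, year, title, journal, volume, pages):
    """Very simplified APA-style journal reference (illustrative only)."""
    if len(authors) > 1:
        author_str = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_str = authors[0]
    return f"{author_str} ({year}). {title}. {journal}, {volume}, {pages}."

# Hypothetical source
print(apa_reference(["Smith, J.", "Lee, K."], 2021,
                    "Observation methods", "Journal of Research Methods",
                    12, "45-60"))
```

Storing references as structured fields like this, rather than as pre-formatted strings, is what lets reference managers re-emit the same source in APA, MLA, or ACS style on demand.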

Software for Writing Scientific Papers

Specialized software can enhance scientific writing:

Reference managers assist in organizing sources, annotating PDFs, and generating citations and bibliographies for manuscripts. Popular options are Zotero, Mendeley, EndNote, and Papers.

Note-taking tools like Evernote and OneNote help collect research data, thoughts, and citations while investigating a topic.

Outlining programs like Scapple allow flexible brainstorming and outlining to structure scientific papers.

Writing tools like Grammarly, Hemingway Editor and Ginger Software help improve readability.

LaTeX document preparation systems facilitate formatting and equations in technical documents.

Graphing and data visualization software like Matplotlib and DataWrapper provide publication-quality figures.

Language translation services like DeepL and Google Translate assist non-native writers.

Text-to-speech software reads papers aloud to identify awkward phrasings.

Bibliography generators like Cite This For Me easily create references.

Writing a Scientific Review and Manuscript

Scientific reviews analyze, evaluate, and synthesize the existing literature on a topic. Structuring reviews clearly using IMRaD format (Introduction, Methods, Results and Discussion) improves comprehension. State the motivation and scope in the introduction section. Detail the systematic review protocol in the methods section. Objectively present major findings in the results section. Analyze the discoveries and their implications in the discussion section.
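In LaTeX, mentioned above as a common preparation system, the IMRaD structure might be sketched as a bare document skeleton like the following (section comments summarize the guidance above):

```latex
\documentclass{article}
\begin{document}

\section{Introduction}  % state the motivation and scope of the review
\section{Methods}       % detail the systematic review protocol
\section{Results}       % objectively present the major findings
\section{Discussion}    % analyze the findings and their implications

\end{document}
```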

When preparing a scientific manuscript for publication, carefully selecting an appropriate journal and adhering to its formatting guidelines are vital initial steps. The manuscript should succinctly present the central research question, investigative methods, key findings, and conclusions supported by the data while using discipline-specific language and conventions. Tables and figures should effectively illustrate results. The writing should promote reproducibility and uphold ethical standards. Ensuring completeness, consistency and accuracy through self-editing and peer-review heightens quality.

AI and Scientific Writing: Enhancing Productivity

AI scientific writing tools boost researcher productivity:

Automated literature discovery tools like Iris.ai and Semantic Scholar expedite finding relevant papers.

Smart literature review systems like Scinapse summarize related research.

Paraphrasing software like QuillBot streamlines presenting other authors' findings.

Writing assistance tools like INK provide contextual grammar and style corrections.

Manuscript screening systems like StatReviewer identify deficiencies.

Intelligent writing assistants like GPT-3 generate initial drafts and outlines.

Automated data visualization platforms like Datacopia produce high-quality graphics.

Translation services seamlessly translate manuscripts into other languages.

By automating tedious tasks, AI writing assistants allow researchers to focus their time on higher-value experimental design, analysis, and communication of novel findings.

The Science Writer's Handbook: A Resource Guide

For science writers, key resources include:

Style manuals like the ACS Style Guide detail discipline-specific publishing conventions.

Academic phrases guides, such as "Writing in the Biological Sciences", help instill proper scientific style.

Online courses on science writing from institutions like Stanford and MIT communicate best practices.

Science blogs demonstrate practical ways to make complex topics engaging for broader audiences.

Science writing organizations offer training programs, networking, and career development opportunities.

Communities like the National Association of Science Writers connect writers for idea exchange and support.

Academic journals showcase exemplars of impactful science communication across fields.

Following science writing guides builds methodical habits. Immersing oneself in well-written scientific publications illuminates techniques for writing with precision and purpose. Mentorship accelerates capability gains, and committing to continual upskilling empowers impactful science communication.

Integrating Technology in Research Writing

Technology plays an integral role in modern scientific research and writing. From streamlining documentation to enhancing language precision, various tools empower researchers to organize ideas, access literature, ensure accuracy, and effectively communicate findings. This section explores key technologies advancing research writing.

Academic Writing Software: Enhancing Efficiency

Specialized writing software boosts productivity for publishing scholars. Reference managers like Zotero, Mendeley, and EndNote help organize sources and properly format citations. LaTeX facilitates formatting and typesetting scientific papers with math equations. Programs like Scrivener provide outlines, notecards, and editing assistance when drafting complex documents. By automating tedious tasks, software lets researchers focus efforts on the quality of ideas.

AI-Powered Writing Assistants

AI writing assistants utilize natural language processing to support authoring scientific manuscripts. Applications like wisio suggest contextually relevant papers to cite, check for grammar issues, and help non-native speakers translate technical terminology. Some emerging systems even attempt generating passages or entire papers from keywords and outlines. Though unable to fully replace human creativity, AI promises to augment scientific writing.

Apps and Services for Scientific Writing

Specialized apps and services aid scientists through publication. Manuscript editor Overleaf facilitates real-time collaboration for writing and peer review. Reference scanner Sciwheel extracts scholarly metadata to expedite bibliography creation. Startups like SciNote offer integrated lab management platforms with modules for authoring, sharing, and discussing scientific documents. Such tools provide needed support for research groups to organize projects and draft high-quality manuscripts.

Scientific Writing Style and English Language Mastery

Technical scientific writing demands proper style and grammar to ensure precision. Resources like AMA Manual of Style provide editorial guidelines tailored for medical and scientific papers. For non-native speakers, services like Editage assist in copyediting and translation to convey complex ideas clearly in English. Mastering conventions and language usage is vital for scientific discourse.

Scientific Writing Classes and Textbooks

Aspiring research writers can develop skills through formal instruction. University courses teach fundamentals like formatting, peer review, and research ethics. Massive open online courses on platforms like Coursera offer introductory scientific writing training. Field-specific textbooks like Writing Papers in the Biological Sciences detail best practices for manuscript preparation. Education equips scholars to contribute high-quality studies.

With technological aids and dedicated training, scientists can effectively document and communicate discoveries to advance collective knowledge. The writing process itself furthers precision of thought, benefiting both authors and audiences.

Utilizing Scientific Tools in the Research Process

Scientific tools are essential for conducting rigorous research across disciplines. Selecting the right tools and properly utilizing them throughout the research process enables scientists to effectively test hypotheses, collect and analyze data, and draw meaningful conclusions.

Selecting the Right Tools for Your Research

When embarking on a research project, it's important to identify the scientific tools that align with your methodology and available resources. Consider aspects such as:

Research goals and hypotheses: Tools should enable testing theories and assumptions. For quantitative research, tools like surveys, sensors, and statistical software may be appropriate. For qualitative research, tools like interviews, focus groups, and coding software help collect and examine non-numerical data.

Data collection needs: Determine if you need tools for lab experiments, field observations, surveys, interviews, etc. Select instruments that can capture the required data types and volumes.

Analytical capabilities: The right tools should enable analyzing data to test hypotheses and derive insights. Statistical software, coding programs, and data visualization platforms help with analysis.

Budget and access: Evaluate costs, availability, and ease of access. Open source and free tools can provide value at lower resource overhead.

Skill requirements: Tools should match researchers' expertise levels. Assess the learning curve and training required before adoption.

Choosing the right scientific tools upfront ensures an efficient, streamlined research process.
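One way to make these selection criteria concrete is a simple weighted-scoring comparison. The criteria weights and the 1-5 ratings below are purely hypothetical, a sketch of the approach rather than a recommended rubric:

```python
# Hypothetical criteria weights (sum to 1) and 1-5 ratings per candidate tool.
CRITERIA = {"research_fit": 0.3, "data_capture": 0.25,
            "analysis": 0.2, "cost": 0.15, "learning_curve": 0.1}

def tool_score(ratings: dict) -> float:
    """Weighted sum of ratings over the criteria above."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

candidate = {"research_fit": 5, "data_capture": 4, "analysis": 4,
             "cost": 3, "learning_curve": 2}
print(round(tool_score(candidate), 2))  # -> 3.95
```

Scoring several candidate tools this way makes the trade-offs between fit, cost, and learning curve explicit and easy to discuss with collaborators.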

The Scientific Method: A Toolkit for Discovery

The scientific method provides researchers a proven framework for discovery through iterative hypothesis testing. Aligning tools to each step enables rigor:

Asking questions: Tools like literature reviews and focus groups help define meaningful research questions.

Formulating hypotheses: Based on observations and available data, state an expected outcome to be tested.

Designing experiments: Controlled, repeatable tests are designed to support or refute the hypothesis. Tools facilitate data collection.

Analyzing data: Software, statistical tests, and other analytical tools process and interpret findings.

Drawing conclusions: Determine whether the data support the hypothesis and refine theories accordingly. Additional experiments may be needed.

Adopting this toolkit mindset ensures scientific principles anchor the research.
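As a minimal sketch of the "analyzing data" step, using only the standard library and invented measurements, a two-sample comparison might compute Welch's t statistic by hand (dedicated statistical software would also report degrees of freedom and a p-value):

```python
import statistics as st

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (sketch only)."""
    ma, mb = st.mean(sample_a), st.mean(sample_b)
    va, vb = st.variance(sample_a), st.variance(sample_b)  # sample variances
    return (ma - mb) / (va / len(sample_a) + vb / len(sample_b)) ** 0.5

control   = [4.1, 3.9, 4.3, 4.0, 4.2]  # hypothetical measurements
treatment = [4.8, 5.1, 4.9, 5.0, 4.7]
print(round(welch_t(treatment, control), 2))  # -> 8.0
```

A large t value like this suggests the difference in means is unlikely to be noise, which is exactly the judgment the "drawing conclusions" step formalizes.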

Data Collection Techniques and Tools

Effective data collection tools align with the research methodology. Common techniques include:

Lab experiments: Controlled tests using tools like microscopes, sensors, and measurement instruments.

Observational studies: Field work observations aided by cameras, voice recorders, and tracking systems.

Surveys: Questionnaires using online survey software or paper-based data capture.

Interviews: Individual conversations using voice recorders and transcription software.

Focus groups: Group discussions via video conferencing tools and smart boards.

Literature reviews: Library databases, reference managers (Mendeley, EndNote), and keyword harvesting tools.

Choose data collection tools that generate quality datasets to support robust analysis.
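For instance, survey responses exported as CSV can be loaded and summarized with the standard library alone. The rows below are invented; in practice the data would come from a file produced by the survey software:

```python
import csv
import io

# Hypothetical survey export; in practice this would be a file on disk.
raw = io.StringIO("respondent,age,satisfaction\nr1,34,4\nr2,29,5\nr3,41,3\n")
rows = list(csv.DictReader(raw))
scores = [int(r["satisfaction"]) for r in rows]
print(len(rows), sum(scores) / len(scores))  # -> 3 4.0
```

Validating a dataset this early, checking row counts and value ranges right after collection, catches capture problems before they contaminate the analysis stage.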

Analysis and Interpretation of Findings

Key analysis techniques and tools include:

Statistical analysis: Software like SAS, SPSS, and R for testing hypotheses and deriving insights from numbers.

Data visualization: Platforms like Tableau, Power BI, and MATLAB to represent data graphically.

Qualitative analysis: Coding using software like NVivo and Atlas.ti to identify themes.

Benchmarking: Compare findings against existing research or standards data.

Modeling: Data-driven simulations to test different scenarios.

Using the right analytical tools helps accurately interpret findings to draw meaningful conclusions aligned to research questions.
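A toy version of the qualitative coding idea is simple keyword tallying. Packages like NVivo and Atlas.ti do far more than this; the codebook and interview notes below are invented purely to show the mechanics:

```python
from collections import Counter

# Hypothetical codebook: theme -> keyword stems to search for.
CODES = {"access": ["afford", "cost", "access"],
         "trust": ["trust", "confiden"]}

notes = ["Patients mention cost and access barriers.",
         "Several respondents lack trust in the clinic.",
         "Affordability limits access for many."]

tally = Counter()
for note in notes:
    text = note.lower()
    for code, stems in CODES.items():
        if any(stem in text for stem in stems):
            tally[code] += 1  # count each note at most once per code

print(dict(tally))  # -> {'access': 2, 'trust': 1}
```

Even this crude pass shows how coding converts free-text observations into counts that can be compared across sites or interviews.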

Scientific Tools in Action: Case Studies and Examples

Real-world examples that showcase scientific tools in research:

Public health: Researchers used sensor devices and surveys to map air pollution levels across communities. Statistical software helped correlate exposure to health risks.

Climate science: Scientists rely on satellites, atmospheric sensors, and computer models to study complex climate change patterns over time. Data feeds improved predictive analytics.

Pharmaceutical R&D: High-throughput screening using automated lab equipment enables rapid drug testing. Bioinformatics tools help identify promising compounds for further testing.

Gene editing: The CRISPR-Cas9 system offers precise, efficient genome editing using guide RNA and enzymes.

In practice, scientific tools amplify researchers' capabilities, enabling complex, large-scale projects.
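A back-of-envelope version of the public-health correlation step, with all readings invented, is a hand-rolled Pearson coefficient relating sensor values to an outcome:

```python
import math

# Hypothetical PM2.5 readings and symptom rates across five sites.
pm25 = [8.0, 12.0, 15.0, 20.0, 25.0]
rate = [2.1, 2.9, 3.4, 4.2, 5.0]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pm25, rate)
print(round(r, 3))  # close to 1: strong positive association
```

A coefficient near 1 indicates exposure and symptoms rise together, though, as with all observational data, correlation alone does not establish causation.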

Conclusion: Harnessing the Power of Scientific Tools in Research

Key takeaways on scientific tools in research.

Scientific tools encompass a wide range of instruments and technologies that empower researchers to make groundbreaking discoveries. As covered in this guide, key categories of tools include measurement devices, data collection equipment, analysis instruments, and information systems.

Proper selection and application of tools is critical. Researchers must choose instruments suited to their discipline and research goals. Precision, accuracy, reliability, and practicality should guide tool selection. Training on usage best practices is essential.

Best Practices for Future Research Endeavors

Looking ahead, researchers should:

Continually evaluate emerging tools and adopt those applicable to their work

Master tools currently employed and seek training on new acquisitions

Participate in collaborative networks to share techniques, findings and tool insights

Publish detailed documentation on tools utilized to enable reproducibility

Develop standardized protocols for tool usage within their field

Continual Learning and Adaptation

The scientific toolkit continues expanding. Researchers must actively educate themselves on technological advances through conferences, publications, and vendor demonstrations.

Willingness to master new tools and replace outdated techniques is critical for pushing boundaries. An adaptive mindset separates innovative teams from stagnant ones.

The Evolving Landscape of Scientific Research Tools

Horizons keep broadening across scientific instrumentation and methodology. Future tool landscapes will likely see:

Broader adoption of AI and automation

Miniaturization and portability

Increased computational power and data storage

Greater interdisciplinary tool usage

Wider accessibility and cost reductions

Specialized tools for precision interventions

Researchers who flexibly adapt to an evolving scientific toolkit will make the discoveries that define the decades ahead.


Observation in Qualitative Research

Observation is one of several forms of data collection in qualitative research. It involves watching and recording, through the use of notes, the behavior of people at the research site. In this post, we will cover the following:

  • Different observational roles
  • The guidelines for observation
  • Problems with observation

Observational Roles

At one extreme is a nonparticipant observer, who watches and records the group from the outside without taking part in its activities. The other extreme is a participant observer. In this role, a researcher takes part in the activities of the group. For example, if you are serving as a teacher in a lower-income community and observing the students while you teach and interact with them, you are a participant observer.

Between these two extremes of non-participation and participation are several other forms of observation. For example, a non-participant observer can be an observer-as-participant or a complete observer. Furthermore, a participant observer can be a participant-as-observer or a complete participant. The difference between these is whether or not the group being studied knows the identity of the researcher.

Guidelines for Observation

  • Decide your role-What type of observer will you be?
  • Determine what you are observing-The observation must support what you are trying to learn about the central phenomenon
  • Observe the subject multiple times-This provides a deeper understanding of the subjects
  • Take notes-An observer should have some way of taking notes. These notes are called fieldnotes and provide a summary of what was seen during the observation.

Problems with Observation

Common and somewhat related problems when doing observations are the observer effect, observer bias, and observer expectation. The observer effect is how the people being observed change their behavior because of the presence of an outsider. For example, it is common for students to behave differently when the principal comes to observe the teacher. They modify their behavior because of the presence of the principal. In addition, if the students are aware of the principal’s purpose, they may act extra obedient for the sake of their teacher.

Observer bias is the potential that a researcher's viewpoint may influence what they see. For example, an authoritarian principal may view a democratic classroom with a laid-back teacher as chaotic when the students may actually be learning a great deal.

Observer expectation is the observer assuming beforehand what they are going to see. For example, a researcher going to observe students in a lower-income school may expect to see low-performing, unruly students. This becomes a self-fulfilling prophecy as the researcher sees what they expected to see.

Observation is one of the forms of data collection in qualitative research. Keeping in mind the types of observation, guidelines, and problems can help a researcher to succeed.


Observational Research: What is, Types, Pros & Cons + Example

Observational research is a qualitative, non-experimental examination of behavior. This helps researchers understand their customers' behavior.

Researchers can gather customer data in a variety of ways, including surveys and interviews. But not all data can be collected by asking questions, because customers might not be conscious of their own behaviors.

This is where observational research comes in. It is a way to learn about people by observing them in their natural environment. This kind of research helps researchers figure out how people act in different situations and what environmental factors affect their actions.

This blog will teach you about observational research, including types and observation methods. Let’s get started.

What is observational research?

Observational research is a broad term for various non-experimental studies in which behavior is carefully watched and recorded.

The goal of this research is to describe a variable or a set of variables. More broadly, the goal is to capture specific individual, group, or setting characteristics.

Since it is non-experimental and uncontrolled, we cannot draw causal conclusions from it. The observational data collected in research studies is frequently qualitative, but it can also be quantitative or both (mixed methods).

Types of observational research

Observational research can take many different forms. The types below are classified according to how much the researcher interferes with or controls the environment.

Naturalistic observation

The simplest form of observational research is taking notes on what is seen. In naturalistic observation, the researcher makes no interference; it is simply watching how people act in their natural environments.

Importantly, there is no attempt to modify factors in naturalistic observation, as there would be when comparing data between a control group and an experimental group.

Case studies

A case study is a sort of observational research that focuses on a single phenomenon. It is a naturalistic observation because it captures data in the field. But case studies focus on a specific point of reference, like a person or event, while other studies may have a wider scope and try to record everything that happens in the researcher’s eyes. 

For example, a case study of a single businessman might try to find out how that person deals with a certain disease's ups and downs, or with loss.

Participant observation

Participant observation is similar to naturalistic observation, except that the researcher is a part of the natural environment they are studying. In such research, the researcher is also interested in rituals or cultural practices that can only be evaluated by sharing experiences. 

For example, anyone can learn the basic rules of table tennis by going to a game or following a team. Participant observation, on the other hand, lets the researcher take part directly to learn more about how the team works and how the players relate to each other.

It usually includes the researcher joining a group to watch behavior they couldn’t see from afar. Participant observation can gather much information, from the interactions with the people being observed to the researchers’ thoughts.

Controlled observation

Controlled observation is a more systematic, structured form in which the behaviors of research participants are recorded in a setting the researcher arranges. Case-control studies are more like experiments than other types of research, but they still use observational methods: when researchers want to find out what caused a certain outcome, they might use a case-control study.

Longitudinal observation

This observational research is one of the most difficult and time-consuming because it requires watching people or events for a long time. Researchers should consider longitudinal observations when their research involves variables that can only be seen over time. 

After all, you can’t get a complete picture of things like learning to read or losing weight in a single observation. Longitudinal studies keep an eye on the same people or events over a long period of time and look for changes or patterns in behavior.
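A longitudinal design boils down to repeated measurements of the same people, followed by a search for change. The sketch below is a minimal illustration only; the participants, metric, and scores are all invented:

```python
# Hypothetical longitudinal record: the same participants measured at
# four points in time (names and scores are illustrative only).
readings = {
    "participant_1": [42, 48, 55, 61],  # e.g. reading-fluency scores per term
    "participant_2": [50, 51, 49, 58],
}

# Summarize change from the first to the last observation.
for who, scores in readings.items():
    change = scores[-1] - scores[0]
    print(f"{who}: start={scores[0]}, end={scores[-1]}, change={change:+d}")
```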

Observational research methods

When doing this research, there are a few methods to keep in mind to ensure that it is done correctly. Let's look at the key ones:


Have a clear objective

For an observational study to be helpful, it needs to have a clear goal. It will help guide the observations and ensure they focus on the right things.

Get permission

Getting explicit permission from the people you will be observing is essential. This means letting them know that they will be observed, what the goal of the observation is, and how their data will be used.

Unbiased observation

It is important to make sure the observations are fair and unbiased. This can be done by keeping detailed notes of what is seen and by not imposing personal interpretation on the data.

Hide your observers

In the observation method, keep your observers hidden. The participants should be unaware of the observers to avoid potential bias in their actions.

Documentation

It is important to document the observations clearly and straightforwardly. It will allow others to examine the information and confirm the observational research findings.

Data analysis

Data analysis is the final step: the researcher analyzes the collected data to draw conclusions or confirm a hypothesis.
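For qualitative observation notes, analysis often starts with coding and counting. A minimal sketch, assuming hypothetical behavior codes assigned by the observer (the notes and codes below are invented for illustration):

```python
from collections import Counter

# Hypothetical field notes: each entry is a timestamped behavior code
# assigned by the observer during the session.
field_notes = [
    ("09:00", "on_task"), ("09:05", "off_task"), ("09:10", "on_task"),
    ("09:15", "on_task"), ("09:20", "helping_peer"), ("09:25", "on_task"),
]

# Tally how often each coded behavior was observed.
counts = Counter(code for _, code in field_notes)
total = sum(counts.values())
for code, n in counts.most_common():
    print(f"{code}: {n} ({n / total:.0%})")
```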

Pros and cons of observational research

Observational studies are a great way to learn more about how your customers use different parts of your business. The method has both advantages and drawbacks. Let's have a look at them.

Pros:

  • It provides a practical application for a hypothesis. In other words, it can help make research more complete.
  • You can see people acting alone or in groups, such as customers, so you can answer a number of questions about how people behave as customers.

Cons:

  • There is a chance of researcher bias in observational research, which experts say can be a significant problem.
  • Some human activities and behaviors can be difficult to understand. We are unable to see memories or attitudes. In other words, there are numerous situations in which observation alone is inadequate.

Example of observational research

The researcher observes customers buying products in a mall. Assuming the product is soap, the researcher will observe how long a customer takes to decide whether they like the packaging, or whether they come to the mall with their decision already made based on advertisements.

If the customer takes time making a decision, the researcher will conclude that packaging and information on the package affect purchase behavior. If a customer makes a quick decision, the decision is likely predetermined.

As a result, the researcher will recommend more and better advertisements in this case. All of these findings were obtained through simple observational research.
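To make the soap example concrete, one could time each shopper and split them at a threshold. The sketch below uses invented decision times, and the 10-second cutoff is an assumption, not a standard from the text:

```python
# Illustrative decision times (seconds per customer, made up).
decision_times = [3, 45, 8, 62, 5, 30]

# Assumed 10-second cutoff between "predetermined" and "deliberating".
CUTOFF = 10
quick = [t for t in decision_times if t < CUTOFF]
deliberate = [t for t in decision_times if t >= CUTOFF]

print(f"quick (likely predetermined): {len(quick)}")
print(f"deliberate (packaging may matter): {len(deliberate)}")
```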

How to conduct observational research with QuestionPro?

QuestionPro can help with observational research by providing tools to collect and analyze data. It can help in the following ways:

Define the research goals and question types you want to answer with your observational study. Use QuestionPro's customizable survey templates and questions to build a survey that fits your research goals and gathers the necessary information.

You can distribute the survey to your target audience using QuestionPro’s online platform or by sending a link to the survey. 

With QuestionPro’s real-time data analysis and reporting features, you can collect and look at the data as people fill out the survey. Use the advanced analytics tools in QuestionPro to see and understand the data and find insights and trends. 

If you need to, you can export the data from QuestionPro into the analysis tools you like to use. Draw conclusions from the collected and analyzed data and answer the research questions that were asked at the beginning of the research.

To summarize, observational research is an effective strategy for collecting data and getting insights into real-world phenomena. When done right, this research can give helpful information and help people make decisions. 

QuestionPro is a valuable tool that can help with observational research by letting you create online surveys, analyze data in real time, make surveys your own, keep your data safe, and use advanced analytics tools.

To do this research with QuestionPro, researchers need to define their research goals, do a survey that matches their goals, send the survey to participants, collect and analyze the data, visualize and explain the results, export data if needed, and draw conclusions from the data collected.

By keeping in mind what has been said above, researchers can use QuestionPro to help with their observational research and gain valuable data. Try out QuestionPro today!


Frequently Asked Questions (FAQ)

What is observational research?
Observational research is a method in which researchers observe and systematically record behaviors, events, or phenomena without directly manipulating them.

What are the main types of observational research?
There are three main types of observational research: naturalistic observation, participant observation, and structured observation.

What is naturalistic observation?
Naturalistic observation involves observing subjects in their natural environment without any interference.


Field Research: 4 Powerful Observation Methods and How to Use Them


by Nate Norman, Partner

Apr. 13, 2023 / Frameworks & Methodologies, Strategy


Here’s a familiar feeling for L&D pros: the desire to jump in and start developing a learning solution right away. It makes sense—after you’ve received a directive and heard from your SMEs, what’s stopping you from getting started right away?

But when we move too quickly on a learning project, we’re often running on assumptions and missing out on finding the right solution for our learner audience. Field research tools allow you to slow down and spend more time thinking about the problem so you can find the right solution.

Field research tools help you get to know your learners and the problems they face much more intimately. That way you know you’re delivering learning that they’re willing to spend more time with and will drive business results.

How do you do that? We use a matrix from Nielsen Norman Group to plot out our empathy-building field research tools. The X-axis indicates the kind of data you’re collecting (qualitative or quantitative) and the Y-axis indicates the focus of your research (behavioral or attitudinal).


The left side of the matrix focuses on direct, qualitative field research methods in which you’ll be visible to your audience and even directly engaged with them. The right side of the matrix focuses on passive methods in which your audience may be unaware you’re collecting data.

The top half of the matrix helps us collect data on the behaviors of our audience: what they do, how they operate, where they get stuck, and why. The bottom half of the matrix is about their attitudes and beliefs and helps build a better understanding of the learner audience.

Although there are many field research tools out there, these are the core tools we use for our learning-design process. Let’s take a closer look at the four field research tools in this matrix and how to use them.

1. Observational studies


The top left quadrant brings us to observational studies. This qualitative tool is critical for bringing your field research together and gaining a clear, accurate understanding of your learner audience.

The reality is that your audience isn’t always honest. Observational tools take you beyond the interview for a firsthand look at the actual behaviors of your learner audience. Here, you’re strictly observing your learners on the job or in the learning environment. Take note of what they’re trying to do and anywhere they struggle.

Another option is to go through the learning experience yourself—participant observation is a powerful tool. Observe the experience as you go: what’s difficult for you? Where did you get stuck? Take notes and synthesize your findings right away.

2. A/B testing


The top right quadrant focuses on quantitative data that assesses learners’ behaviors. These field research tools are indirect and often work best for digital experiences like eLearning.

A/B testing is hugely beneficial for custom HTML learning experiences. A/B testing is a familiar experiment for anyone working in digital spaces: you split your audience into two groups to test a small number of variations to determine which performs better. You show version A to one half of your audience and version B to another. From there, you can track the data and observe how behaviors differ between the two versions. Pretty quickly, you’ll have the information you need to optimize the experience for your learner audience.

Every tool has its downside, and it’s worth noting that A/B testing is difficult to pull off inside an LMS. Another tip: if you’re looking to conduct A/B testing within ILT, be sure the program is robust enough to run the experiment several times and with control and experimental groups.
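A/B results like these are often compared with a two-proportion z-test. The sketch below is a generic illustration with invented completion counts, not a report of real data:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z statistic for comparing conversion/completion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Invented example: version A completed by 180 of 400 learners (45%),
# version B by 150 of 400 (37.5%).
z = two_proportion_z(180, 400, 150, 400)
print(f"z = {z:.2f}")
```

Here |z| is above 1.96, so a difference this large would be unlikely under pure chance at the conventional 5% level. As noted above, collecting such counts is difficult inside an LMS that cannot randomize learners.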

3. Interviews & focus groups


In the bottom left quadrant, the focus is on understanding the attitudes and beliefs of learners through interviews and focus groups. Think of this as the natural next step after you’ve gathered quantitative data from field research tools like surveys or A/B testing. These types of field research tools create space for you to spend time with learners and ask questions to understand their feelings and beliefs and learn what’s on their hearts and minds.

Field research techniques such as interviews and focus groups are qualitative in nature and help inform and flesh out your quantitative data. A focus group typically has one interviewer and a small group of interviewees; this broader-scale approach helps you reach more learners more quickly than doing a series of one-on-one interviews.

4. Surveys

The bottom right quadrant focuses on indirect, quantitative data that assesses learners’ attitudes and beliefs. Surveys are our go-to tool for gaining insight into how learners are feeling. They also help challenge assumptions about learners. Maybe a learner shared an anecdote with you. Through a survey, you can find out if the feelings of one learner are representative of the larger learner audience.

Crafting an effective survey is all about asking the right questions. Place a consistent measurement system around your questions and ask about learners' feelings and beliefs on a variety of topics. Once you've gathered responses, you can evaluate whether you've accurately assessed the problem you're trying to solve. Are other problems rising to the surface?

Pro tip: put extra care into getting the right sample size. Often, those who raise their hands to take surveys are already eager and enthusiastic in their roles. That could skew your results—you’ll want a wide range of perspectives to draw from.
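One common way to size a survey sample is the standard proportion formula, n = z² · p(1 − p) / e². The sketch below assumes simple random sampling and a worst-case 50/50 response split:

```python
import math

def sample_size(margin=0.05, confidence_z=1.96, p=0.5):
    """Minimum respondents for a proportion estimate: n = z^2 * p(1-p) / e^2.

    margin: desired margin of error (e.g. 0.05 for +/-5%)
    confidence_z: z-score for the confidence level (1.96 ~ 95%)
    p: expected proportion; 0.5 is the conservative worst case
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

print(sample_size())      # +/-5% margin -> 385 respondents
print(sample_size(0.03))  # +/-3% margin -> 1068 respondents
```

Note this gives the number of completed responses needed; with self-selected volunteers the bias described above remains even at a large n.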


Dos and don’ts for field research tools

By now you’ve probably noticed that direct research methods (observational studies, focus groups, and interviews) are more involved than indirect methods. There’s also more room for inconsistency, bias, and human error. Let’s take a closer look at how to get direct research methods right.

Field research tools dos:

Start with a hypothesis or objective.

It’s essential to go into field research with a specific intent for what you want to learn. List out what you think you know and then immediately challenge it. Are you making an assumption or leap of logic? Form your hypothesis and be prepared to validate it.

Generate a question set that is designed to fulfill your objectives

Once your hypothesis is set, use it to derive the questions you want to ask your audience. Which questions will help you uncover the truth? Create a list of topics and questions to guide your research.

Take detailed notes

Be sure to take thorough notes during field research. It'll likely be hard to keep up with your notes, so if possible, get consent from learners to record your sessions. That way, you can refer back to the recordings later and avoid problems of faulty memory. If you can, capture photos and video during observations; they'll be helpful to bring back to your team during brainstorming.

Translate your notes into findings right away

Don’t wait to synthesize your findings. Schedule time to regroup immediately after each stage of research. Try to avoid waiting until the very end of your research to reflect. Instead, take an interval approach and plan time to summarize your findings after every third or fourth interview. Do it as you go and try to reconcile your findings at each stage.

Field research tools don’ts:

Fall victim to the Hawthorne effect

The Hawthorne effect occurs when people behave differently because they know they are being watched. It’s human nature, right? Our presence during field research can impact our audience and influence them to change their behavior, obscuring the very truths we’re there to observe. When interacting with learners, be vague about why you’re there and avoid asking leading questions.

Get caught up in the yellow Walkman trap

Don’t just ask learners what they think; force them to make a choice. It’s the best way to find out what learners really think and to avoid the yellow Walkman trap. In the 1980s, Sony ran a focus group about a new, yellow version of the Walkman. During interviews, everyone seemed to love the sporty yellow color and thought the old, black version was boring. But on their way out, participants were offered a free Walkman in the color of their choice, and everyone took a black Walkman.

Avoid your own yellow Walkman trap by using tools for field research that involve ranking. That way, it forces learners to prioritize and put things in first and last places.

Focus on solutions over problems

Remember: our intent is to understand root causes and problems. It’s tempting to jump into solutioning when you’re still in the research stage. But that limits you to one solution—focusing on the problem keeps you open to all possible solutions.

Field research tools increase learning effectiveness

These four field research tools help assess your learners’ existing understanding so you can create experiences that keep learners engaged and deliver stronger results.

A commitment to thorough and structured learner field research is an act of empathy. Effective learning meets people where they are and takes a tailored approach to best serve their needs.


Participant Observation as Research Methodology: Assessing the Validity of Qualitative Observational Data as Research Tools

Myasar Qaddo, American University in the Emirates (AUE)


NOAA National Severe Storms Laboratory (NSSL)

Mobile mesonet in the field


Research Tools: Observation

Science relies on observations to develop theories about nature, and ultimately to evaluate and validate these theories. These observations come from our natural senses and from instruments we create. The sustained development of advanced instrumentation continues to open new horizons in our understanding about how nature, including the multitude of processes in our atmosphere, really operates.

A significant part of our specialized instrumentation is built and maintained by an experienced Field Observing Facilities Support team, FOFS , who work hard to come up with innovative ways to support NSSL's storm research efforts.

Field Observing Systems

Mobile Doppler Radar

NSSL researchers teamed up with The University of Oklahoma to build the first mobile Doppler radar in 1993. Current versions of mobile radars (for example, NSSL's NOXP ) can be parked very close to storms, observing details that are typically below the beam of distant WSR-88D radars. NSSL has also used mobile radars to study tornadoes, hurricanes, dust storms, winter storms, mountain rainfall, and even swarms of bats.

More about mobile radars →

Collaborative Lower Atmospheric Mobile Profiling System (CLAMPS)

NSSL has a mobile, trailer-based boundary layer profiling facility using commercially available sensors. CLAMPS contains a Doppler lidar, a multi-channel microwave radiometer (MWRP), and an Atmospheric Emitted Radiance Interferometer (AERI). CLAMPS meets a NOAA/NWS operational and research need for profiles of temperature, humidity, and winds near the surface of the earth. CLAMPS also has the ability to support weather balloon launches and measure surface weather.

Learn more about CLAMPS :: View CLAMPS realtime data

Mobile Mesonet and Mobile Sounding

Observations on the Go: Mobile Mesonet. Watch video about NSSL's Mobile Mesonet vehicles and more on the NOAA Weather Partners YouTube Channel»

The mobile mesonet is a vehicle intended to take surface observations of temperature, pressure, humidity, wind, and even solar radiation in and around storms and storm environments. Originally designed in 1992 by scientists and technicians from NSSL and The University of Oklahoma, these “probes” have undergone significant improvements over the years. Now, highly modified trucks are used to take these observations using a custom-designed roof rack and a complex of computer and communication equipment. Given that operating in severe weather comes with serious hazards, researchers added a hail cage to the vehicles to protect the windshields from damage.

In addition to surface observations, upper air observations are a critical component of meteorological observations. The National Weather Service launches weather balloons twice a day from locations across the country to measure vertical profiles of temperature, pressure, humidity, and winds. These observations form the backbone of numerical weather prediction and give us a picture of what the vertical component of the atmosphere looks like. Given this importance, the NSSL mobile mesonets now have the capability to launch soundings from any location at any time. The vehicles carry helium tanks in the back, and researchers can launch a balloon within a few minutes after arriving at a location.

2-Dimensional Video Distrometer (2DVD)

NSSL's 2DVD takes high speed video pictures, from two different angles, of anything falling from the sky through its viewing area, such as raindrops, hail or snow. It measures rain rate, drop shape and size distribution, which is used in polarimetric radar studies to refine precipitation identification algorithms.

Portable In situ Precipitation Station (PIPS)

The Portable In situ Precipitation Stations are small portable weather platforms built by NSSL in collaboration with The University of Oklahoma and Purdue University. Each PIPS has sensors that measure temperature, pressure, humidity, wind speed and direction. In addition, the PIPS determines the distribution of particle sizes by using an instrument called a Parsivel (PARticle, SIze, VELocity) disdrometer to measure the number and size of any object that falls through it (similar to the 2DVD). These can be deployed quickly in the field in any condition, and have even been used in hurricanes!

Weather balloons

NSSL launches special research weather balloon systems into thunderstorms. Measurements from the sensor packages attached to the balloons provide data about conditions inside the storm where it is often too dangerous for research aircraft to fly.

Particle Size Image and Velocity Probe (PASIV)

NSSL has built a one-of-a-kind, balloon-borne instrument called the Particle Size Image and Velocity probe, designed to capture high-definition images of water and ice particles as it is launched into, and rises up through, a thunderstorm. The instrument is flown as part of a “train” of other instruments connected one after another to a large balloon. These other instruments measure electrical field strength and direction, and other important atmospheric variables such as temperature, dew point, pressure and winds. Data from these systems helps researchers understand the relationships between the many macro and microphysical properties in thunderstorms.

Electric Field Meters (EFM)

NSSL's Field Observing Facilities and Support group (FOFS) is responsible for a device called an Electric Field Meter, EFM, that is attached, along with other instruments, to a special research balloon and launched into thunderstorms. As they are carried up through electrified storms, these EFMs are designed to measure the strength and direction of the electric fields that build up before lightning strikes occur. Data from this instrument helps researchers learn more about the electrical structure of storms. Read more about it

Mobile laboratories

NSSL operates two mobile laboratories (custom built by an ambulance company) called NSSL6 and NSSL7, outfitted with computer and communication systems, balloon launching equipment, and weather instruments. These mobile labs can be driven anywhere to collect data or coordinate field operations.

Fixed Observing Systems

Oklahoma Lightning Mapping Array (OKLMA)

NSSL installed, operates and maintains the OKLMA. Thousands of points can be mapped for an individual lightning flash to reveal its location and the development of its structure. NSSL scientists hope to learn more about how storms produce intra-cloud and cloud-to-ground flashes and how each type is related to tornadoes and other severe weather.

Read more about OKLMA

NSSL researchers are working on products that use GOES satellite data to identify rapidly growing clouds that might indicate a developing thunderstorm. They are also working on products that estimate wind shear and stability in the surrounding environment to forecast the future severity of the storm.

GOES-R Proving Ground website

NSSL researchers are looking at the climatology of cloud cover to look for trends that will help predict flooding and improve seasonal forecasting worldwide.

Boundary layer profilers

NSSL uses special instruments mounted on the top of the National Weather Center to measure the thermodynamic properties of the lowest one to two kilometers of the atmosphere, known as the boundary layer. Researchers study the data to learn more about the structure of the boundary layer, shallow convective cloud processes, the interaction between clouds, aerosols, radiation, precipitation and the thermodynamic environment, mixed phase clouds, and more. Numerical models, such as those used for climate and weather prediction, have large uncertainties in all of these areas. Researchers also use these observations to improve our understanding and representation of these processes.

Public Reports

NSSL uses observations from people too! In the Meteorological Phenomena Identification Near the Ground (mPING) project , volunteers can report on the precipitation that is reaching the ground at their location through mobile apps (iOS and Android). Researchers compare the reports of precipitation with what is detected by the dual-polarized radar data to refine precipitation identification algorithms.

Another way NSSL has used public observations was through the mostly student-run NSSL/CIMMS (now CIWRO) Severe Hazards Analysis and Verification Experiment (SHAVE) project. SHAVE workers collected hail, wind damage and flash flooding reports through phone surveys. SHAVE reports, when combined with the voluntary reports collected by the NWS, created a unique and comprehensive database of severe and non-severe weather events and enhanced climatological information about severe storm threats in the U.S.

Past Research Highlights

TOtable TOrnado Observatory (TOTO)

The TOtable TOrnado Observatory (TOTO), named after Dorothy's little dog from the movie “The Wizard of Oz,” was a 300 lb aluminum barrel outfitted with anemometers, pressure sensors, and humidity sensors, along with equipment to record the data. In theory, a team would roll TOTO out of the back of the pickup in the path of a tornado, switch on the instruments, and get out of the way. Several groups tried to deploy TOTO over the years, but never scored a direct hit. The closest TOTO ever came to success was in 1984 when it was sideswiped by the edge of a weak tornado and was knocked over. TOTO was retired in 1984.


Methodology

Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.

First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  • Qualitative vs. quantitative: Will your data take the form of words or numbers?
  • Primary vs. secondary: Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental: Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data.

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.
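As a toy illustration of the quantitative side, the sketch below computes a Pearson correlation coefficient to test the relationship between two variables; the data and the helper function are hypothetical, not from any particular study.

```python
# Minimal sketch of quantitative analysis: a Pearson correlation
# between two variables. All data below are hypothetical.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical survey data: hours studied vs. exam score.
hours = [1, 2, 3, 4, 5, 6]
score = [52, 55, 61, 64, 70, 73]
print(round(pearson_r(hours, score), 3))  # close to 1: strong positive relationship
```

In practice you would use a statistics package (e.g. `scipy.stats.pearsonr` in Python, or `cor.test` in R), which also reports a p-value for hypothesis testing.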

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Methods for collecting data

Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.


You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
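To make one of these requirements concrete, here is a minimal, hypothetical sketch of random assignment: splitting participants into treatment and control groups at random, so that confounding variables are balanced between groups on average.

```python
# Minimal sketch of random assignment in an experiment.
# Participant IDs and group sizes are hypothetical.
import random

participants = list(range(12))   # 12 hypothetical participant IDs
rng = random.Random(42)          # fixed seed so the assignment is reproducible
rng.shuffle(participants)
treatment = participants[:6]     # group that receives the intervention
control = participants[6:]       # group that does not
print(sorted(treatment), sorted(control))
```

Because assignment is random rather than self-selected, systematic differences between the groups can be attributed to chance rather than to a confounding variable.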


Research methods for collecting data
Research method | Primary or secondary? | Qualitative or quantitative? | When to use
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Methods for analyzing data

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
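For instance, the quantitative route for the survey example above is a simple frequency count; a minimal sketch with hypothetical responses:

```python
# Minimal sketch: quantitative analysis of closed-ended survey
# responses by counting frequencies. The responses are hypothetical.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
freq = Counter(responses)
print(freq.most_common())  # [('agree', 3), ('neutral', 2), ('disagree', 1)]
```

The qualitative route would instead read open-ended answers and code them into themes, by hand or with software such as NVivo.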

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews, literature reviews, case studies, ethnographies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.

Research methods for analyzing data
Research method | Qualitative or quantitative? | When to use
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

Statistics

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
Methodology

  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Frequently asked questions about research methods

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
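A minimal sketch of drawing such a sample, with hypothetical student IDs standing in for the population:

```python
# Minimal sketch: simple random sample of 100 students from a
# hypothetical population of 1,000 student IDs.
import random

population = list(range(1000))        # hypothetical student IDs
rng = random.Random(0)                # fixed seed for reproducibility
sample = rng.sample(population, 100)  # 100 distinct IDs, chosen uniformly
print(len(sample))  # 100
```

Because every student has the same chance of selection, statistics computed on the sample (e.g. a proportion who agree with a statement) are unbiased estimates of the population values.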

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts and meanings, use qualitative methods.
  • If you want to analyze a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.




Essential Research Tools in M&E

Discover essential research tools for effective Monitoring and Evaluation (M&E): reliable instruments, software, and techniques for data collection, analysis, and reporting that support decision-making and program improvement.

Table of Contents

  • What are research tools?
  • Research tools in M&E
  • Essential research tools commonly used across disciplines
  • How do I choose a research tool?
  • What are methods vs tools in research?
  • Future trends and innovations in research tools for M&E

What are research tools?

Research tools refer to a wide range of resources, methods, instruments, software, or techniques that researchers use to collect, analyze, interpret, and communicate data and information during the research process.

These tools are designed to facilitate and enhance various aspects of research, such as data collection, organization, analysis, visualization, collaboration, and documentation. Research tools can be both physical (e.g., laboratory equipment, survey instruments) and digital (e.g., software, online databases).

They are essential for conducting research effectively, efficiently, and rigorously across different disciplines and research domains. Examples of research tools include laboratory equipment, survey questionnaires, statistical software, data visualization tools, literature databases, collaboration platforms, and more.

The choice of research tools depends on the specific research objectives, methods, and requirements of the study.


Research Tools in M&E

Monitoring and Evaluation (M&E) is a crucial component of research and program evaluation. Here are some essential research tools commonly used in the field of M&E:

  • Logic Models and Results Frameworks: Logic models or results frameworks are visual tools that help clarify the theory of change and establish the logical connections between project activities, outputs, outcomes, and impacts. They provide a framework for designing M&E systems and identifying key indicators.
  • Key Performance Indicators (KPIs): KPIs are measurable indicators that track progress and performance toward project or program goals. They help monitor the effectiveness and efficiency of interventions. Examples of KPIs can include the number of beneficiaries reached, percentage of target achieved, or cost per output.
  • Surveys and Questionnaires: Surveys and questionnaires are useful tools for collecting quantitative data in M&E. They allow you to gather information from a large number of respondents and measure variables and indicators systematically. Online survey tools like SurveyMonkey or Google Forms can simplify data collection and analysis.
  • Interviews and Focus Groups: Qualitative data collection methods, such as interviews and focus groups, can provide in-depth insights into participants’ experiences, perceptions, and attitudes. These methods are particularly valuable for understanding the contextual factors and mechanisms underlying program outcomes.
  • Observations and Field Notes: Direct observations and field notes are often used to collect qualitative data in real-time. They help capture detailed information about program implementation, participant behaviors, and contextual factors that might not be evident through other methods.
  • Data Analysis Software: Statistical software packages like SPSS, Stata, or R are commonly used for quantitative data analysis in M&E. These tools enable researchers to clean, analyze, and interpret large datasets efficiently. Qualitative data analysis software such as NVivo or Atlas.ti can assist with organizing and analyzing qualitative data.
  • Data Visualization Tools: Tools like Excel, Tableau, or Power BI allow you to create visual representations of M&E data. Visualizations help communicate complex information and findings in a clear and compelling manner to stakeholders and decision-makers.
  • Geographic Information Systems (GIS): GIS tools like ArcGIS or QGIS enable researchers to analyze and visualize spatial data. They can help identify geographical patterns, hotspot analysis, and map program impact or reach.
  • Evaluation Management Systems: Evaluation management systems like DevResults or DHIS2 provide a centralized platform for managing M&E data, including data entry, analysis, reporting, and visualization. These systems streamline data management processes and facilitate collaboration among evaluation team members.
  • Theory-Based Evaluation Approaches: Theory-based evaluation approaches, such as the Theory of Change or Contribution Analysis, help guide the evaluation process by explicitly linking program activities to intended outcomes and impacts. These approaches provide a framework for designing evaluations and analyzing the causal mechanisms at work.

It’s important to note that the selection of research tools in M&E should align with the specific objectives, scope, and resources of the evaluation. Tailor the choice of tools to the needs of the evaluation design and ensure that they provide reliable and valid data to inform decision-making.
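As a toy illustration of the KPI bullet above, the sketch below computes two of the example indicators (percentage of target achieved and cost per output) from hypothetical program data; the figures are invented for illustration.

```python
# Minimal sketch: computing example M&E KPIs from hypothetical data.
beneficiaries_reached = 1_800   # hypothetical count reached so far
beneficiaries_target = 2_400    # hypothetical program target
total_cost_usd = 90_000.0       # hypothetical spend to date
outputs_delivered = 120         # hypothetical outputs (e.g. trainings held)

percent_of_target = 100 * beneficiaries_reached / beneficiaries_target
cost_per_output = total_cost_usd / outputs_delivered
print(percent_of_target, cost_per_output)  # 75.0 750.0
```

In a real M&E system these figures would come from the monitoring database and be tracked against targets over time, not hard-coded.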

Essential research tools commonly used across disciplines

There are numerous research tools available to support various types of research, and the choice of tools depends on the specific field of study and research goals. However, here are some essential research tools commonly used across disciplines:

  • Library Databases: Online databases such as PubMed (biomedical literature), IEEE Xplore (engineering and computer science), JSTOR (humanities and social sciences), and Scopus (multidisciplinary) provide access to a vast collection of academic journals, articles, conference papers, and other scholarly resources.
  • Google Scholar: This search engine specifically focuses on scholarly literature. It allows you to find academic papers, theses, books, and conference proceedings. It’s a useful tool for accessing both open access and subscription-based scholarly content.
  • ResearchGate: ResearchGate is a social networking platform for researchers. It enables collaboration, networking, and access to research publications, preprints, and datasets. Researchers can also ask and answer questions related to their field of expertise.
  • Reference Management Software: Tools like Zotero, Mendeley, and EndNote help researchers organize and manage bibliographic references. They allow you to collect, store, annotate, and cite references, making the citation process more efficient and streamlined.
  • Data Analysis Tools: Depending on your research field, you may need specific data analysis tools. For statistical analysis, software such as SPSS, R, or Stata is commonly used. For qualitative research, NVivo and Atlas.ti assist with analyzing textual data.
  • Collaboration and Communication Tools: Tools like Slack, Microsoft Teams, or Google Workspace facilitate collaboration and communication among research teams. They provide features like file sharing, real-time editing, video conferencing, and project management.
  • Data Visualization Tools: Tools like Tableau, Plotly, or Excel can help create visual representations of data. These tools make it easier to present and interpret complex data sets, enabling researchers to communicate their findings effectively.
  • Online Survey Tools: Platforms like SurveyMonkey, Google Forms, or Qualtrics allow researchers to design and distribute online surveys. These tools simplify the data collection process and provide features for analyzing and visualizing survey responses.
  • Reference Search and Document Delivery: Tools like interlibrary loan systems, WorldCat, or services like Unpaywall can help you access research articles and resources that may not be available in your institution’s library.
  • Academic Social Networks: Platforms like Academia.edu or LinkedIn can help researchers showcase their work, connect with peers, and discover potential collaborators or mentors.

Remember that the choice of research tools may vary depending on your specific research field and requirements. It’s essential to explore and evaluate the available options to find the tools that best align with your research goals and needs.

How do I choose a research tool?

Choosing the right research tool in Monitoring and Evaluation (M&E) requires careful consideration of various factors.

Here’s a step-by-step process to help you choose a research tool for your M&E study:

  • Define Your Research Objectives: Clearly articulate the purpose and goals of your M&E study. Determine what specific information you need to collect, analyze, and communicate through the evaluation process.
  • Identify Data Needs: Identify the types of data you will be working with (quantitative, qualitative, spatial) and the specific indicators or variables you need to measure. Consider the level of detail, precision, and reliability required for your data.
  • Assess Available Resources: Evaluate the resources available to you, including budget, time constraints, technical expertise, and access to technology or specialized equipment. Consider the level of support you may need in terms of training, technical assistance, or collaboration.
  • Research Tool Options: Conduct research to explore the range of research tools available in M&E. Consult academic literature, practitioner resources, online forums, and professional networks to identify commonly used tools in your specific field or context.
  • Evaluate Tool Suitability: Evaluate each research tool option against your specific needs and constraints. Consider factors such as ease of use, data quality, scalability, compatibility with existing systems, and cost-effectiveness. Assess whether the tool aligns with the type of data you are working with and the analysis and reporting requirements of your M&E study.
  • Seek Recommendations and Feedback: Consult with experts, colleagues, or M&E professionals who have experience with the tools you are considering. Seek recommendations and feedback on their effectiveness, limitations, and user-friendliness. Their insights can provide valuable perspectives in selecting the most appropriate tool.
  • Trial and Testing: If feasible, conduct small-scale trials or pilot tests with a subset of your data or research participants. This allows you to assess the usability and functionality of the tool, identify any potential issues, and gain practical experience in its implementation.
  • Consider Integration and Compatibility: Consider the compatibility of the research tool with other tools or systems you may be using in your M&E process. Evaluate how well the tool integrates with existing data management, analysis, or reporting systems to ensure smooth workflows and data interoperability.
  • Training and Support: Assess the availability of training resources, user guides, tutorials, and technical support for the research tool. Consider the level of training required for you and your team to effectively utilize the tool and ensure proper implementation.
  • Make an Informed Decision: Based on the evaluation and assessment of the above factors, make an informed decision on the research tool that best meets your M&E objectives, data requirements, available resources, and user needs.

Remember, the choice of a research tool should be driven by the specific context, research objectives, and resources available to you. It’s important to consider trade-offs and select a tool that maximizes the quality and efficiency of your M&E study.

What are methods vs tools in research?

In the context of Monitoring and Evaluation (M&E), methods and tools have similar meanings as in general research, but they are applied specifically to the M&E process:

  • M&E Methods: M&E methods refer to the systematic approaches and frameworks used to assess, measure, and evaluate the effectiveness, efficiency, and impact of programs, projects, or interventions. These methods provide a structured and rigorous approach to collecting and analyzing data to inform decision-making. M&E methods may include baseline studies, surveys, interviews, focus groups, case studies, statistical analysis, impact evaluation designs, and more. They guide the overall evaluation design and determine the data collection and analysis techniques used in M&E.
  • M&E Tools: M&E tools are the specific resources, instruments, software, or techniques used within the M&E methods to support the data collection, management, analysis, visualization, and reporting processes. These tools provide practical means to implement M&E methods effectively. Examples of M&E tools include data collection templates, survey questionnaires, data analysis software (e.g., SPSS, Stata, R), visualization tools (e.g., Excel, Tableau), logic models, results frameworks, evaluation management systems (e.g., DevResults, DHIS2), and more. M&E tools assist in streamlining and enhancing the efficiency and accuracy of the M&E process.

In M&E, methods establish the overall approach to evaluating and assessing programs or interventions, while tools are the specific resources or techniques used within those methods to facilitate data collection, analysis, and reporting. M&E methods guide the evaluation design and data analysis, while M&E tools provide the means to execute those methods effectively. Both methods and tools are crucial in conducting rigorous and effective M&E, ensuring that data is collected, analyzed, and interpreted in a systematic and reliable manner to inform decision-making and program improvement.

Future trends and innovations in research tools for M&E

As the field of Monitoring and Evaluation (M&E) continues to evolve, researchers and practitioners are exploring new trends and innovations in research tools to enhance the effectiveness and efficiency of evaluations. Here are some emerging trends and future directions in research tools for M&E:

  • Integrated Data Platforms: With the increasing volume and complexity of data generated in M&E, there is a growing need for integrated data platforms that streamline data collection, management, analysis, and reporting processes. These platforms bring together various tools and functionalities into a unified system, allowing for seamless data flow and collaboration among stakeholders.
  • Artificial Intelligence (AI) and Machine Learning: AI and machine learning technologies hold great potential for automating data analysis, identifying patterns and trends, and generating insights from large datasets in M&E. By leveraging AI algorithms, researchers can gain deeper insights into program performance, identify predictive indicators, and make data-driven decisions more efficiently.
  • Mobile Data Collection Tools: Mobile data collection tools are becoming increasingly popular for conducting surveys, collecting field data, and monitoring program activities in real-time. These tools enable researchers to capture data using smartphones or tablets, allowing for faster data collection, improved data quality, and enhanced accessibility in remote or resource-constrained settings.
  • Blockchain Technology: Blockchain technology offers opportunities for enhancing the transparency, security, and integrity of M&E data. By leveraging blockchain-based platforms, researchers can ensure the immutability and traceability of data, reduce the risk of data manipulation or fraud, and enhance trust and accountability in the evaluation process.
  • Open Data and Data Sharing Platforms: There is a growing movement towards open data and data sharing in M&E, driven by the desire for transparency, collaboration, and knowledge exchange. Open data platforms facilitate the sharing of evaluation data, findings, and resources among stakeholders, enabling greater reproducibility, accountability, and innovation in the field.
  • Citizen Science and Participatory Approaches: Citizen science and participatory approaches involve engaging community members and stakeholders in the research process, from data collection to interpretation and decision-making. By involving local communities in M&E efforts, researchers can gather diverse perspectives, foster ownership, and ensure the relevance and sustainability of evaluation initiatives.
  • Ethical Considerations and Data Privacy: With the increasing use of digital technologies and data-driven approaches in M&E, there is a growing awareness of the need to address ethical considerations and data privacy concerns. Researchers must prioritize ethical principles such as informed consent, data confidentiality, and protection of vulnerable populations to ensure responsible and ethical conduct of evaluations.

By embracing these emerging trends and innovations in research tools, M&E practitioners can enhance the quality, rigor, and impact of evaluations, ultimately contributing to more effective and evidence-based decision-making in development and humanitarian efforts.

Research tools play a crucial role in the field of Monitoring and Evaluation (M&E) by supporting data collection, analysis, visualization, and reporting processes. The choice of research tools should be guided by the specific objectives, context, and data requirements of the evaluation.

Essential research tools in M&E include data collection instruments (surveys, interviews, observation checklists), data analysis software (SPSS, Stata, R), data visualization tools (Excel, Tableau), logic models, KPI frameworks, GIS software, evaluation management systems, and collaboration platforms.
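As a tiny illustration of the kind of descriptive analysis that packages like SPSS, Stata, or R automate, here is a minimal sketch using only the Python standard library. The survey data and the "satisfaction" indicator are hypothetical, chosen purely for illustration:

```python
# Minimal sketch (hypothetical data) of basic survey tabulation:
# a frequency table, a mean score, and a simple share-based indicator.
from collections import Counter
from statistics import mean

# Hypothetical responses: satisfaction scores on a 1-5 Likert scale
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

frequency = Counter(responses)                 # frequency table per score
average = mean(responses)                      # mean satisfaction score
satisfied = sum(1 for r in responses if r >= 4) / len(responses)

print(f"Frequency table: {dict(sorted(frequency.items()))}")
print(f"Mean score: {average:.1f}")
print(f"Share satisfied (>=4): {satisfied:.0%}")
```

Real evaluations would of course involve cleaning, weighting, and disaggregating the data, which is where dedicated statistical software earns its place.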

By selecting and utilizing appropriate research tools, M&E practitioners can enhance the efficiency, accuracy, and effectiveness of their evaluations, leading to evidence-based decision-making and program improvement.

It is important to evaluate and choose tools that align with the evaluation design, data type, available resources, and technical expertise to ensure rigorous and meaningful evaluation outcomes in M&E.


12 Best Tools For Perfect Research Summary Writing

Discover the 12 best tools to streamline your research summary writing, ensuring clarity and precision every time.

Aug 29, 2024


Imagine you finally find the time to tackle that research paper for your class. You pull up your literature search and see dozens of articles and studies staring back at you. As you scroll through the titles and abstracts, you realize that before you can start your paper, you need to figure out how to summarize the research.

Writing an effective research summary can feel daunting, but it doesn’t have to. In this guide, we’ll break down what a research summary is, why it’s essential, and how to write one, so you can confidently write your research summary and finish your paper.

Otio’s AI research and writing partner can help you write efficient research summaries and papers. Our tool can summarize academic articles so you can understand the material and finish your writing.

Table Of Contents

  • What Is a Research Summary?
  • Purpose of a Research Summary
  • How Do You Write a Research Summary in 10 Simple Steps?
  • What Is a PhD Research Summary?
  • Examples of a Research Summary
  • Supercharge Your Researching Ability With Otio — Try Otio for Free Today


A research summary is a piece of writing that condenses your research on a specific topic. Its primary goal is to offer the reader a concise overview of the study and its key findings. A research summary generally follows the structure of the article it summarizes.

You should know the goal of your analysis before you launch a project. A research summary distills the detailed findings and highlights the particular issues at stake. Writing one can be difficult, so start with a structure in mind.



A research summary provides a brief overview of a study to readers. When searching for literature, a reader can quickly grasp the central ideas of a paper by reading its summary. It is also a great way to elaborate on the significance of the findings, reminding the reader of the strengths of your main arguments. 

Having a good summary is almost as important as writing a research paper. The benefit of summarizing is showing the "big picture," which allows the reader to contextualize your words. In addition to the advantages of summarizing for the reader, as a writer, you gain a better sense of where you are going with your writing, which parts need elaboration, and whether you have comprehended the information you have collected. 


1. Read The Entire Research Paper

Before writing a research summary , you must read and understand the entire research paper. This may seem like a time-consuming task, but it is essential to write a good summary. Make sure you know the paper's main points before you begin writing.

2. Take Notes As You Read

As you read, take notes on the main points of the paper. These notes will come in handy when you are writing your summary. Be sure to record any essential information, such as the author's main conclusions. Careful notes will also help you write your summary in less time.

3. Organize Your Thoughts

Once you have finished reading and taking notes on the paper, it is time to start writing your summary. Before you begin, take a few minutes to organize your thoughts. Write down the main points that you want to include in your summary. Then, arrange these points in a logical order.

4. Write The Summary

Now that you have organized your thoughts, it is time to start writing the summary. Begin by stating the author’s thesis statement or main conclusion. Then, briefly describe each of the main points from the paper. Be sure to write clearly and concisely. When you finish, reread your summary to ensure it accurately reflects the paper's content.

5. Write The Introduction

After you have written the summary, it is time to write the introduction. The introduction should include an overview of the paper and a summary description. It should also state the main idea.

6. Introduce The Report's Purpose

The summary of a research paper should include a brief description of the paper's purpose. It should state the paper's thesis statement and briefly describe each of the main points of the paper.

7. Use Keywords To Introduce The Report

When introducing the summary of a research paper, use keywords familiar to the reader. This will help them understand the summary and why it is essential.

8. State The Author's Conclusions

The summary of a research paper should include a brief statement of the author's conclusions. This will help your teacher understand what the paper is trying to achieve.

9. Keep It Concise

A summary should be concise and to the point. It should not include any new information or arguments. It should be one paragraph long at maximum.

10. Edit And Proofread

After you have written the summary, edit and proofread it to ensure it is accurate and precise. This will help ensure that your summary is effective and free of any grammar or spelling errors.


1. Otio: Your AI Research Assistant  

Knowledge workers, researchers, and students today struggle with content overload and are left to deal with it using fragmented, complex, and manual tooling. Too many settle for stitching together complicated bookmarking, read-it-later, and note-taking apps to get through their workflows. Now that anyone can create content at the push of a button, this problem will only worsen. Otio solves this problem by providing researchers with one AI-native workspace. It helps them:

1. Collect a wide range of data sources, from bookmarks, tweets, and extensive books to YouTube videos.

2. Extract key takeaways with detailed AI-generated notes and source-grounded Q&A chat.

3. Create draft outputs using the sources you’ve collected.

Otio helps you go from a reading list to a first draft faster. Otio also enables you to write research papers and essays faster. Here are the top features researchers love: AI-generated notes on all bookmarks (YouTube videos, PDFs, articles, etc.), chat with individual links or entire knowledge bases, just as you would with ChatGPT, and AI-assisted writing.

Let Otio be your AI research and writing partner — try Otio for free today!

2. Hypotenuse AI: The Versatile Summarizer  

Like all the AI text summarizers on this list, Hypotenuse AI can take the input text and generate a short summary. One area where it stands out is its ability to handle various input options: You can simply copy-paste the text, directly upload a PDF, or even drop a YouTube link to create summaries. 

You can summarize nearly 200,000 characters (or 50,000 words) at once. 

Hypotenuse AI summarizes articles, PDFs, paragraphs, documents, and videos. 

With the AI tool, you can create engaging hooks and repurpose content for social media. 

You'll need a paid plan after the 7-day free trial. 

There is no free plan available.

The AI tool majorly focuses on generating eCommerce and marketing content. 

3. Scalenut: The Beginner-Friendly AI Summarizer  

Scalenut is one of the most powerful AI text summarizers for beginners or anyone just starting out. While it's not as polished as some business-focused apps, it's significantly easier to use — and the output is just as good. If you want a basic online text summarizer that lets you summarize notes of up to 800 characters (not words), Scalenut is your app.

With Scalenut, you get a dedicated summary generation tool for more granular control. 

The keyword planner available helps build content directly from the short and sweet summaries. 

The AI tool integrates well with a whole suite of SEO tools, making it a more SEO-focused summarizer. 

You only get to generate one summary per day. 

Scalenut's paid plans are expensive compared to other AI tools. 

Long-form articles or blogs must be summarized within the 800-character limit.

4. SciSummary: The Academic AI Summarizer  

SciSummary is an AI summarizer that helps summarize single or multiple research papers. It combines and compares the content summaries from research papers, article links, etc. 

It can save time and effort for scientists, students, and enthusiasts who want to keep up with the latest scientific developments. 

It can provide accurate and digestible summaries powered by advanced AI models that learn from feedback and expert guidance. 

It can help users read between the lines and understand the main points and implications of complex scientific texts. 

It may miss some nuances and details of the original articles or papers, which may matter for some purposes or audiences. 

Some types of scientific texts, such as highly technical, specialized, or interdisciplinary works, may require more domain knowledge or context than it can provide. 

Sources of scientific information that are not in text format, such as websites, videos, or podcasts, may be difficult for it to summarize. 

5. Quillbot: The AI Summarizer for Academic Papers  

QuillBot uses advanced neural network models to summarize research papers accurately and effectively. The tool leverages cutting-edge technology to condense lengthy papers into concise and informative summaries, making it easier for users to navigate vast amounts of literature. 

You can upload the text for summarization directly from a document. 

It's excellent for summarizing essays, papers, and lengthy documents. 

You can summarize long texts up to 1200 words for free. 

The free plan is too limited for professional use. 

More output types would be welcome. 

QuillBot's Premium plan only gives you 6000 words for summaries per month. 

6. Scribbr: The Research Paper Assistant  

Scribbr is an AI-driven academic writing assistant with a summarization feature tailored for research papers. The tool assists users in the research paper writing process by summarizing and condensing information from various sources, offering support in structuring and organizing content effectively. 

7. TLDR This: The Online Article Summarizer  

TLDR This uses advanced AI to effectively filter out unimportant arguments from online articles and provide readers only with vital takeaways. Its streamlined interface eliminates ads and distractions while summarizing key points, metadata, images, and other crucial article details. 

TLDR This condenses even very lengthy materials into compact summaries users can quickly consume, making it easier to process a vast range of internet content efficiently. 

Ten free "AI" summaries 

Summarization of long text 

Basic summary extraction 

Premium option cost 

No significant improvement in premium options 

8. AI Summarizer: The Text Document Summarizer  

AI Summarizer harnesses artificial intelligence to summarize research papers and other text documents automatically. The tool streamlines the summarization process, making it efficient and accurate, enabling users to extract essential information from extensive research papers efficiently. 

Easy-to-understand interface 

1500-word limit 

Multiple language support 

Contains advertisements 

Requires security captcha completion 

Struggles with lengthy content summarization 

9. Jasper: The Advanced Summarizer  

Jasper AI is a robust summarizing tool that helps users generate AI-powered paper summaries quickly and effectively. The tool supports the prompt creation of premium-quality summaries, assisting researchers in distilling complex information into concise and informative outputs. 

Jasper offers some advanced features, like generating a text from scratch and even summarizing it. 

It integrates well with third-party apps like Surfer, Grammarly, and its own AI art generator. 

It's versatile and can be used to create summaries of blogs, articles, website copy, emails, and even social media posts. 

There's no free plan available — though you get a 7-day free trial. 

You'll need to have a flexible budget to use Jasper AI. 

The Jasper app has a steep learning curve. 

10. Resoomer: The Summary Extractor  

Resoomer rapidly analyzes textual documents to determine the essential sentences and summarizes these key points using its proprietary semantic analysis algorithm. 

By automatically identifying what information matters most, Resoomer can condense elaborate texts across diverse subjects into brief overviews of their core message. With swift copy-and-paste functionality requiring no signup, this specialized tool simplifies the reading experience by extracting only vital details from complex writings. 

Clear and accurate summaries 

Creative sentence combining 

Variety of modes and options 

Lengthy text summarization without word limit in premium mode 

Confusing interface with irrelevant features 

Long-winded summaries spread across multiple pages 
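Under the hood, extractors like Resoomer score sentences and keep the most important ones. Here is a minimal sketch of that general idea using word-frequency scoring; the function name and scoring rule are illustrative only, not Resoomer's actual algorithm:

```python
# Illustrative extractive summarizer: score each sentence by the frequency
# of its words in the whole text, then keep the top-scoring sentences in
# their original order. (Real tools use far richer semantic analysis.)
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)  # word importance approximated by frequency

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    # keep the n highest-scoring sentences, preserving original order
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)
```

For example, in a text that repeats a few key terms, the sentence containing the most of those terms wins, which is why frequency-based extraction tends to surface a document's central claims.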

11. Anyword: The Marketing-Focused Summarizer  

When I saw Anyword's summary, I could easily tell the content was unique and worth sharing, making this AI tool an excellent choice for marketers. Plus, it's very easy to use.  

After you've copied and pasted the text and chosen a summary type (paragraph, keywords, or TL;DR), it generates a summary in minutes. Once you approve it, you can share the text directly without worrying about plagiarized content. 

You can test the AI tool with the 7-day free trial. 

Anyword's text generator and summarizer are perfect for creating long-form pieces like blog posts with snippets. 

You can give detailed prompts to the AI tool to customize the generated text. 

Anyword is expensive for a more limited set of features than other AI summarizers. 

It can sometimes be slower to use. 

There is no free Anyword plan available. 

12. Frase: The SEO Summarizer  

Frase is a powerful AI-powered summarizer that focuses on SEO. This means it can generate summaries that attract audiences and rank higher. Its proprietary model stands out, providing more flexibility, competitive pricing, and custom features. 

Frase uses BLUF and Reverse Pyramid techniques to generate summaries, improving ranking chances. 

It's free to use Frase's summary generator. 

Instead of GPT-3.5 or GPT-4, Frase uses its proprietary model. 

There's no way to add links to the blog or article to generate a summary. 

You can input up to 600-700 words for summarization. 

It might not be an ideal article summarizer for those who don't care about SEO. 


A research summary for a PhD is called a research statement . The research statement (or statement of research interests) is included in academic job applications. It summarizes your research accomplishments, current work, and future direction and potential. The statement can discuss specific issues such as funding history and potential requirements for laboratory equipment and space and other resources, possible research and industrial collaborations, and how your research contributes to your field's future research direction. 

The research statement should be technical but intelligible to all department members, including those outside your subdiscipline, so keep the “big picture” in mind. The strongest research statements present a readable, compelling, and realistic research agenda that fits well with the department's needs, facilities, and goals. Research statements can be weakened by overly ambitious proposals, a lack of apparent direction, a lack of big-picture focus, and inadequate attention to the needs and facilities of the department or position. 



Research Summary Example 1: A Look at the Probability of an Unexpected Volcanic Eruption in Yellowstone 

Introduction

If the Yellowstone supervolcano erupted massively, the consequences would be catastrophic for the United States. The importance of analyzing the likelihood of such an eruption cannot be overstated.  

Hypothesis  

An eruption of the Yellowstone supervolcano would be preceded by intense precursory activity manifesting a few weeks up to a few years in advance.  

Results     

Statistical data from multiple volcanic eruptions worldwide show the activity that preceded these events (in particular, how early each type of activity was detected).   

Discussion and Conclusion  

Given that scientists continuously monitor Yellowstone and that signs of an eruption are normally detected well in advance, at least a few days before the event, the hypothesis is confirmed. This could be applied to creating emergency plans detailing an organized evacuation campaign and other response measures.     

Research Summary Example 2: The Frequency of Extreme Weather Events in the US from 2000-2008 as Compared to the ‘50s

Introduction

Weather events bring immense material damage and cause human victims.    

Hypothesis

Extreme weather events are significantly more frequent nowadays than in the ‘50s.   

Results

Several categories of extreme events now occur regularly: droughts and associated fires, massive rainfall/snowfall and associated floods, hurricanes, tornadoes, Arctic cold waves, etc.   

Discussion and Conclusion 

Several extreme events have become significantly more frequent recently, confirming this hypothesis. This increasing frequency correlates reliably with rising CO2 levels in the atmosphere and growing temperatures worldwide. 

In the absence of another recent significant global change that could explain a higher frequency of disasters, and knowing how growing temperature disturbs weather patterns, it is natural to assume that global warming (CO2) causes this increase in frequency. This, in turn, suggests that this increased frequency of disasters is not a short-term phenomenon but is here to stay until we address CO2 levels.  

Researchers, students, and knowledge workers have long struggled with the initial stages of research projects. The early steps of gathering and organizing information , taking notes, and synthesizing the material into a coherent summary are vital for establishing a solid foundation for any research endeavor. 

These steps can be tedious, overwhelming, and time-consuming. Otio streamlines this process so you can go from the reading list to the first draft faster. Along with this, Otio also helps you write research papers/essays faster. Here are our top features that researchers love: 

AI-generated notes on all bookmarks (YouTube videos, PDFs, articles, etc.), chat with individual links or entire knowledge bases, just as you would with ChatGPT, and AI-assisted writing. 


