Behav Anal Pract. 2019 Jun;12(2)

Systematic Protocols for the Visual Analysis of Single-Case Research Data

Katie Wolfe

1 Department of Educational Studies, University of South Carolina, 820 Main St, Columbia, SC 29208 USA

Erin E. Barton

2 Department of Special Education, Vanderbilt University, Box 228 GPC, Nashville, TN 37203 USA

Hedda Meadan

3 Department of Special Education, University of Illinois at Urbana–Champaign, 1310 South Sixth Street, Champaign, IL 61820 USA

Researchers in applied behavior analysis and related fields such as special education and school psychology use single-case designs to evaluate causal relations between variables and to evaluate the effectiveness of interventions. Visual analysis is the primary method by which single-case research data are analyzed; however, research suggests that visual analysis may be unreliable. In the absence of specific guidelines to operationalize the process of visual analysis, it is likely to be influenced by idiosyncratic factors and individual variability. To address this gap, we developed systematic, responsive protocols for the visual analysis of A-B-A-B and multiple-baseline designs. The protocols guide the analyst through the process of visual analysis and synthesize responses into a numeric score. In this paper, we describe the content of the protocols, illustrate their application to 2 graphs, and describe a small-scale evaluation study. We also describe considerations and future directions for the development and evaluation of the protocols.

Single-case research (SCR) is the predominant methodology used to evaluate causal relations between interventions and target behaviors in applied behavior analysis and related fields such as special education and psychology (Horner et al., 2005 ; Kazdin, 2011 ). This methodology focuses on the individual case as the unit of analysis and is well suited to examining the effectiveness of interventions. SCR facilitates a fine-grained analysis of data patterns across experimental phases, allowing researchers to identify the conditions under which a given intervention is effective for particular participants (Horner et al., 2005 ; Ledford & Gast, 2018 ). In addition, the dynamic nature of SCR allows the researcher to make adaptations to phases and to conduct component analyses of intervention packages with nonresponders to empirically identify optimal treatment components (Barton et al., 2016 ; Horner et al., 2005 ).

Visual analysis is the primary method by which researchers analyze SCR data to determine whether a causal relation (i.e., functional relation, experimental control) is documented (Horner et al., 2005 ; Kratochwill et al., 2013 ). Visual analysis involves examining graphed data within and across experimental phases. Specifically, researchers look for changes in the level, trend, or variability of the data across phases that would not be predicted to occur without the active manipulation of the independent variable. Level is the amount of behavior that occurs in a phase relative to the y -axis (Barton, Lloyd, Spriggs, & Gast, 2018 ). Trend is the direction of the data over time, which may be increasing, decreasing, or flat (Barton et al., 2018 ). Variability is the spread or fluctuation of the data around the trend line (Barton et al., 2018 ). A change in the level, trend, or variability of the data between adjacent phases is a basic effect; to determine whether there is a causal relation, the researcher looks for multiple replications of the effect at different and temporally related time points (Kratochwill et al., 2013 ).
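To make these three characteristics concrete, the sketch below computes simple numerical summaries of a single phase: mean level, a least-squares trend slope, and the standard deviation as a rough index of variability. These particular statistics are common conventions chosen for illustration, not definitions taken from the sources cited above (the text defines variability around the trend line, for which the SD is only a proxy), and the data are hypothetical.

```python
from statistics import mean, stdev, linear_regression  # Python 3.10+

# Hypothetical phase data: session numbers and the measured behavior.
sessions = [1, 2, 3, 4, 5]
values = [12, 10, 11, 9, 8]

level = mean(values)                                     # amount of behavior in the phase
slope, intercept = linear_regression(sessions, values)   # direction of the data over time
variability = stdev(values)                              # spread of the data (rough proxy for
                                                         # variability around the trend line)

print(f"level = {level:.1f}, trend slope = {slope:.2f}, variability (SD) = {variability:.2f}")
```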

Despite this reliance on visual analysis, there have been long-standing concerns about interrater agreement , or the extent to which two visual analysts evaluating the same graph make the same determination about functional relations and the magnitude of change. In general, these concerns have been borne out by empirical research (e.g., Brossart, Parker, Olson, & Mahadevan, 2006 ; DeProspero & Cohen, 1979 ; Wolfe, Seaman, & Drasgow, 2016 ). In one study, Wolfe et al. ( 2016 ) asked 52 experts to report whether each of 31 published multiple-baseline design graphs depicted (a) a change in the dependent variable from baseline to intervention for each tier of the graph and (b) an overall functional relation for the entire multiple-baseline design graph. Interrater agreement was just at or just below minimally acceptable standards for both types of decisions (intraclass correlation coefficient [ICC] = .601 and .58, respectively). The results of this study are generally representative of the body of literature on interrater agreement among visual analysts (cf. Kahng et al., 2010 ). Given that visual analysis is integral to the evaluation of SCR data (Horner & Spaulding, 2010 ; Kazdin, 2011 ), research indicating that it is unreliable under many circumstances presents a significant challenge for the field—particularly the acceptance of SCR as a credible and rigorous research methodology.

Many researchers have argued that poor agreement among visual analysts may be due to the absence of formal guidelines to operationalize the process (Furlong & Wampold, 1982 ), which leaves the analysis vulnerable to idiosyncratic factors and individual variability related to “history, training, experience, and vigilance” (Fisch, 1998 , p. 112). Perhaps due to the lack of formal guidelines, single-case researchers rarely identify, let alone describe, the methods by which they analyze their data. Smith ( 2012 ) reported that authors in fewer than half of the SCR studies published between 2000 and 2010 ( n = 409) identified the analytic method they used; only 28.1% explicitly stated that they used visual analysis. Even less frequently do authors describe the specific procedure by which visual analysis was conducted. In a review of SCR articles published in 2008 ( n = 113), Shadish and Sullivan ( 2011 ) found that only one study reported using a systematic procedure for visual analysis (Shadish, 2014 ). Barton, Meadan, and Fettig ( 2019 ) found similar results in a review of parent-implemented functional assessment interventions; study authors rarely and inconsistently used visual analysis terms and procedures across SCR studies and were most likely to discuss results using only mean, median, and average rather than level, trend, or variability. Overall, it is difficult to identify specifically how single-case researchers are conducting visual analysis of their data, which might lead to high rates of disagreement and adversely impact interpretations of results and syntheses across SCR. In other words, unreliable data analysis may impede the use of SCR to identify evidence-based practices, which has important and potentially adverse practical and policy implications.

There have been a few recent efforts to produce and disseminate standards that may promote greater consistency in visual analysis. The What Works Clearinghouse (WWC) Single-Case Design Standards (Kratochwill et al., 2013 ; WWC, 2017 ) describe four steps for conducting visual analysis that consider six data characteristics (i.e., level, trend, variability, immediacy, overlap, and consistency). However, the WWC standards were not designed to provide a systematic, step-by-step protocol to guide the visual analysis process (Hitchcock et al., 2014 ) and do not assist researchers in synthesizing information about the data characteristics and across experimental phases. For example, the four steps do not explain the relative importance of the data characteristics in making determinations about basic effects and experimental control. This ambiguity could introduce subjectivity into the analysis and result in two visual analysts reaching different conclusions about the same graph despite using the same procedures.

To increase agreement among visual analysts working on reviews of SCR literature, Maggin, Briesch, and Chafouleas ( 2013 ) developed a visual analysis protocol based on the WWC SCR standards (Kratochwill et al., 2013 ). Using this protocol, the analyst answers a series of questions about the graph and then uses these responses to determine the number of basic effects and the level of experimental control demonstrated by the graph (Maggin et al., 2013 ). Maggin et al. ( 2013 ) reported high agreement between the three authors following training on the protocol (e.g., 86% agreement), which suggests that structured, step-by-step protocols could be an effective way to increase consistency among visual analysts. Their protocol guides researchers through visual analysis procedures; however, it does not assist the researcher in synthesizing the six data characteristics within and across phases to make determinations about basic effects, experimental control, or weighing conflicting data patterns for making a judgment about functional relations. This introduces potential variability that could produce inconsistencies across different individuals and studies. The study by Wolfe et al. ( 2016 ) provides empirical evidence of this variability. They found that experts vary in the minimum number of effects they require to identify a functional relation. Some experts identified functional relations when there were three basic effects, but other experts identified a functional relation with only two basic effects. In other words, two experts may come to the same conclusions about the presence of basic effects in a particular graph, but they may translate that information into different decisions about the presence of a functional relation. Structured criteria that systematize the process of translating the within- and across-phase analysis into a decision about the overall functional relation may reduce this variability and improve agreement.

Researchers have developed structured criteria for the analysis of a specific type of SCR design used for a specific purpose. Hagopian et al. ( 1997 ) developed criteria for evaluating multielement graphs depicting the results of a functional analysis. The criteria consist of a step-by-step process that leads to a conclusion about the function of the behavior depicted in the graph. Hagopian et al. ( 1997 ) evaluated the effects of the criteria with three predoctoral interns in a multiple-baseline design and showed that participants’ agreement with the first author increased from around 50% in baseline to an average of 90% following training in the use of the structured criteria. The work of Hagopian et al. ( 1997 ) demonstrates that structured criteria can be developed for SCR that synthesize the user’s responses and lead directly to a conclusion about the data. Further, the use of the criteria improved agreement between raters and experts. However, the Hagopian et al. ( 1997 ) criteria apply only to multielement graphs used for a specific purpose and cannot be applied to other SCR designs.

To address the shortcomings of current practice and standards in visual analysis, we developed systematic, web-based protocols for the visual analysis of A-B-A-B and multiple-baseline design SCR data. Each protocol consists of a series of questions for the analyst to answer and synthesizes the analyst’s responses to produce a numerical rating of experimental control for the graph. We designed our protocols to emphasize the six data characteristics outlined in the WWC (2017) SCR standards (i.e., level, trend, variability, immediacy, overlap, and consistency) and to support single-case researchers in making decisions about data patterns based on these characteristics. Further, our protocols guide researchers in systematically making decisions about data patterns within and across phases and tiers to make judgments about functional relations. In this paper we describe the protocols, illustrate their application to two SCR graphs, and discuss findings from an initial evaluation study.

Content and Structure of the Protocols

We developed two step-by-step protocols, one for A-B-A-B designs and one for multiple-baseline designs, to guide the analyst through the process of evaluating SCR data. The protocols are accessible as web-based surveys and as Google Sheets; both formats can be accessed from https://sites.google.com/site/scrvaprotocols/. Each protocol consists of a series of questions with dichotomous response options (i.e., yes or no) about each phase and phase contrast in the design. The questions in each protocol are based on current published standards for SCR (Kratochwill et al., 2013), as well as guidelines for visual analysis published in textbooks on SCR (e.g., Cooper, Heron, & Heward, 2007; Kazdin, 2011; Ledford & Gast, 2018). Table 1 lists the relevant sources that support the inclusion of the questions in each protocol and also provides evidence of the protocols’ content validity. Each question in the protocols includes instructions and graphic examples illustrating potential “yes” and “no” responses. In the web-based survey, these instructions appear when the user hovers over a question. In Google Sheets, the instructions are accessed by clicking on a link in the spreadsheet.

Table 1. Alignment of protocol content with published recommendations for visual analysis

A-B-A-B and Multiple-Baseline Design Protocols
  • Documentation of a predictable within-phase data pattern (referenced in all four sources)
  • Comparison of projected pattern to actual pattern in adjacent phases (referenced in three of the four sources)
  • Level, trend, or variability change between adjacent phases (referenced in all four sources)
  • Immediacy of change between adjacent phases (referenced in all four sources)
  • Overlap between adjacent phases (referenced in all four sources)
  • Consistency between similar phases (referenced in three of the four sources)

Multiple-Baseline Design Protocol Only
  • Staggering of introduction of treatment across tiers (referenced in all four sources)
  • Vertical analysis (referenced in all four sources)

Sources considered: Cooper et al. (2007); Ledford and Gast (2018); Kazdin (2011); Kratochwill et al. (2013)

The basic process for assessing each phase using the protocols includes examining both within- and between-phase data patterns (Kratochwill et al., 2013). First, the protocol prompts the visual analyst to evaluate the stability of the data within a given phase. Second, if there is a predictable pattern, the visual analyst projects the trend of the data into the subsequent phase and determines whether the level, trend, or variability of the data in this subsequent phase differs from the pattern predicted from the previous phase. If there is a change in the data between the two phases, the analyst identifies whether that change was immediate and measures the data overlap between the two phases. If there is not a change between the two phases, the analyst is directed to proceed to the next phase contrast. If multiple data paths are depicted on an A-B-A-B or multiple-baseline design graph, the data paths typically represent different dependent variables. In these cases, each data path should be analyzed with a separate application of the protocol to determine the presence of a functional relation between the independent variable and each dependent variable.

The protocols are response guided (i.e., responsive to the analyst’s input) and route the analyst through the process based on responses to previous questions. For example, if there are not sufficient data in the baseline phase to predict the future pattern of behavior, then the analyst cannot project the trend of the baseline data into the intervention phase to evaluate whether the data changed from the predicted pattern. In this case, the protocol skips ahead to questions about the next phase. Likewise, if the analyst responds that there is not a change in the dependent variable from one phase to the next, the protocol skips questions about immediacy and overlap, which are not relevant if the data did not change. The protocols are dynamic—some questions act as gatekeepers, making other questions available or unavailable based on the user’s response.
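To illustrate the response-guided routing, here is a minimal sketch of the skip logic for a single phase contrast. The function name, question keys, and data structure are hypothetical and only approximate the branching described above; they are not taken from the protocols’ actual implementation.

```python
def analyze_phase_contrast(answers):
    """Sketch of the gatekeeper logic for one phase contrast.

    `answers` maps hypothetical question keys to yes/no (True/False) responses.
    Returns the list of questions that remain relevant for this contrast.
    """
    asked = ["baseline_stable"]

    # Gatekeeper 1: without a predictable baseline pattern, the trend cannot
    # be projected, so the remaining contrast questions are skipped.
    if not answers.get("baseline_stable", False):
        return asked  # the protocol skips ahead to the next phase

    # Gatekeeper 2: if no level/trend/variability change is observed,
    # immediacy and overlap are not relevant and are skipped.
    asked.append("basic_effect")
    if not answers.get("basic_effect", False):
        return asked

    # Follow-up questions become available only after a basic effect.
    asked.extend(["immediate_change", "overlap_30_percent_or_less"])
    return asked


# Example: an unstable baseline short-circuits the remaining questions.
print(analyze_phase_contrast({"baseline_stable": False}))   # ['baseline_stable']
print(analyze_phase_contrast({"baseline_stable": True, "basic_effect": True}))
```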

Unlike other systematic guidelines for visual analysis (e.g., Maggin et al., 2013 ), the protocols generate an experimental control score for the graph based on the analyst’s responses to the questions. Specific questions in the protocols have weighted values based on their importance to demonstrating a functional relation, and the sum of these values produces the experimental control score for the graph. Scores generated by the protocols range from 0 (no functional relation) to 5 (functional relation with large behavioral change), with 3 being the minimum score for evidence of a functional relation. Published guidelines for the analysis of SCR suggest that three basic effects, or changes in the level, trend, or variability of the dependent variable from one phase to the next, are required to demonstrate a functional relation (Barton et al., 2018 ; Kratochwill et al., 2013 ). Therefore, the questions pertaining to changes between adjacent phases (i.e., phase contrast questions) have a value of 1 in the protocols. As a result, a study depicting three basic effects would earn a minimum score of 3, which is the minimum criterion for demonstrating a functional relation based on our proposed interpretation guidelines.

Other questions may not be critical to the demonstration of a functional relation but strengthen the evidence of a functional relation if one is present. For example, depending on the nature of the dependent variable, it may not be essential that the data change immediately after the introduction of the intervention (i.e., within 3–5 data points) to demonstrate a functional relation (Kazdin, 2011 ). However, an immediate change increases the analyst’s confidence that the intervention caused the change in the dependent variable. Therefore, questions about the immediacy of the effect have a smaller weight (e.g., 0.25; A-B-A-B protocol) compared to questions about identifying basic effects.

Similarly, minimal overlap between the data paths in adjacent phases is generally considered desirable but neither necessary nor always meaningful (e.g., data might have substantial overlap but contrasting trends) for demonstrating functional relations (Barton et al., 2018). Therefore, the overlap item also has a smaller weight (e.g., 0.25; A-B-A-B protocol). Phase contrasts must have 30% or fewer overlapping data points to receive points for this item in the protocol. This criterion is based on the interpretive guidelines proposed for the percentage of nonoverlapping data (Scruggs & Mastropieri, 1998), which suggest that 70% of nonoverlapping data between phases indicates an effective intervention (note that the protocol asks the analyst to calculate the inverse, or the amount of overlapping data, and thus the criterion is set at 30%).

In the multiple-baseline design protocol, we assigned the questions pertaining to vertical analysis a negative value. Vertical analysis refers to the examination of the data in tiers that remain in baseline when the intervention is introduced to a previous tier (Horner, Swaminathan, Sugai, & Smolkowski, 2012 ). Other sources refer to this same feature as verification of the change in the previous tier (Cooper et al., 2007 ). If the baseline data for any tiers still in baseline change markedly when the intervention is introduced to another tier, this indicates a potential alternative explanation for any observed change (e.g., behavioral covariation, history, maturation) and decreases confidence that the intervention was causally related to the change in the dependent variable. This question has a negative value because if the analyst answers “yes,” it detracts from the overall experimental control score for the graph.
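As a rough illustration of how the weighted responses could be combined into an experimental control score, the sketch below sums item weights and applies the proposed cutoff of 3. Only the 1.0 weight for basic effects, the 0.25 weights for immediacy and overlap, the 0–5 range, and the cutoff come from the description above; the consistency weight, the size of the vertical-analysis penalty, the clamping, and all names are assumptions made for illustration, not the protocols’ actual scoring rules.

```python
# Hypothetical item weights; only the 1.0 and 0.25 values are stated in the text,
# and the consistency weight and vertical-analysis penalty are placeholders.
WEIGHTS = {
    "basic_effect": 1.0,               # each phase contrast showing a change
    "immediate_change": 0.25,          # change within the first 3-5 data points
    "low_overlap": 0.25,               # 30% or fewer overlapping data points
    "consistency": 0.25,               # similar patterns across like phases (assumed weight)
    "vertical_analysis_change": -1.0,  # MB designs only; assumed penalty size
}


def experimental_control_score(yes_items):
    """Sum the weights of all items answered 'yes' and keep the result in 0-5."""
    raw = sum(WEIGHTS[item] for item in yes_items)
    return max(0.0, min(5.0, raw))


def interpret(score):
    """Proposed interpretation guideline: a score of 3 or more indicates a functional relation."""
    return "functional relation" if score >= 3 else "no functional relation"


# Example: three basic effects alone reach the minimum criterion.
score = experimental_control_score(["basic_effect"] * 3)
print(score, "->", interpret(score))  # 3.0 -> functional relation
```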

Although we have proposed interpretation guidelines for the scores generated by the protocols, the score should be interpreted within the context of the study’s overall methodological quality and rigor; if the study has strong internal validity, minimizing plausible alternative explanations, then the score produced by the protocol can indicate the presence and strength of a functional relation. However, if the study is poorly designed or executed or is missing key features (e.g., interobserver agreement [IOA], procedural fidelity), or if key features are insufficient to rule out threats to internal validity (e.g., IOA is less than 80%, missing data), then the score produced by the protocol may be misleading because the lack of methodological rigor limits interpretations of the data.

Application of the Protocols

Although we cannot demonstrate the dynamic and responsive nature of the protocols in this article, we will walk through two examples to illustrate how they are applied to SCR data. Both of the graphs used to illustrate the application of the protocols were used in our reliability and validity evaluations of the protocols. We encourage the reader to access the protocols in one or both formats to explore the content, structure, routing, and scoring that will be illustrated in the next sections.

A-B-A-B Design Protocol

Figure 1 depicts a hypothetical A-B-A-B graph showing the number of talk-outs within a session, and Fig. 2 shows the completed protocol for this graph. Use of the protocol involves comparing the first baseline phase to the first treatment phase (A1 to B1), the first treatment phase to the second baseline phase (B1 to A2), and the second baseline phase to the second treatment phase (A2 to B2). We also compare the data patterns in similar phases (i.e., A1 to A2 and B1 to B2).

Figure 1. Sample A-B-A-B graph

Figure 2. Completed protocol for sample A-B-A-B graph

The protocol starts by prompting the visual analyst to examine the first baseline phase. There are three data points, and those data are stable—we predicted that if baseline continued, the data would continue to decrease—so we answered “yes” to the first question. The second question asks us to evaluate the first treatment phase in the same manner, and given the number of data points and the overall decreasing trend, we answered “yes” to this question as well. Next, we are directed to project the trend of the first baseline phase into the first treatment phase and evaluate whether the level, trend, or variability of the treatment data is different from our prediction. The level is different from our prediction, so we answered “yes,” identifying a basic effect between these phases. The identification of a basic effect for this phase contrast makes the next two questions available.

Regarding immediacy, the level of the data did change from the last three data points in baseline to the first three data points in treatment, so we selected “yes.” To identify the amount of overlap between the two phases, we drew a horizontal line extending from the highest baseline datum point into the first treatment phase because the goal of the intervention was to increase the behavior. Next, we counted the number of data points in the first treatment phase that are the same as or “worse” than this line. Whether “worse” data are higher or lower than the line will depend on the targeted direction of behavior change. In this case, the goal was to increase the behavior, so treatment data points that are the same as or below the line would be considered worse. There are no treatment data points below the line, so there are no overlapping data between these two phases. If there were data points below the line, we would divide the number of data points below the line by the total number of data points in the treatment phase to get the percentage of overlapping data. We answered “yes” because less than 30% of the data overlaps between the two phases.
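The overlap calculation just described can be expressed as a short function. This is a sketch of the procedure as we understand it from the text (a line drawn at the most extreme baseline point, with treatment points at or “worse” than that line counted as overlap and compared to the 30% criterion); the function and the data are hypothetical, not code or values taken from the protocols.

```python
def percent_overlap(baseline, treatment, goal="increase"):
    """Percentage of treatment data points that overlap with baseline.

    For a behavior targeted for increase, the reference line sits at the highest
    baseline point and treatment points at or below it count as overlap; for a
    decrease target, the line sits at the lowest baseline point and points at or
    above it count.
    """
    if goal == "increase":
        line = max(baseline)
        overlapping = sum(1 for y in treatment if y <= line)
    else:
        line = min(baseline)
        overlapping = sum(1 for y in treatment if y >= line)
    return 100 * overlapping / len(treatment)


# Hypothetical data: one of eight treatment points falls at or below the highest
# baseline value, giving 12.5% overlap, which meets the 30% criterion.
baseline = [10, 15, 20, 15, 10]
treatment = [18, 35, 40, 55, 60, 70, 75, 80]
pct = percent_overlap(baseline, treatment, goal="increase")
print(f"{pct:.1f}% overlap; criterion met: {pct <= 30}")
```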

The majority of the remaining A-B-A-B protocol involves answering this same series of questions about the remaining phases and phase contrasts; however, it is important to note that in the second phase contrast (i.e., the comparison from the first treatment phase to the second baseline phase), a basic effect would be demonstrated by a decrease in the number of talk-outs relative to our prediction from the treatment phase. Because the expected direction of behavior change is different for this particular phase contrast, the procedure for calculating overlapping data differs slightly as well (see instructions for this question in the protocol). The A-B-A-B protocol also includes two questions about the consistency of the data patterns across like phases. These questions involve examining the similarity of the level, trend, or variability of the data across (a) both baseline phases and (b) both treatment phases to evaluate if any of these characteristics are similar. For this graph, the data in the first baseline phase have a low level, little variability, and a decreasing trend. The data in the second baseline phase have a medium level, medium variability, and no clear trend. Therefore, we answered “no” to the question about consistency between the baseline phases. Based on our dichotomous responses to the questions in the protocol, the overall score for experimental control for this graph is 2.75, which does not provide evidence of a functional relation. To see answers and scoring for the complete protocol for this graph, as well as details about how the protocol routes the user through relevant questions based on responses, we encourage the reader to examine Fig. 2 in detail.

Multiple-Baseline Design Protocol

Similar to the A-B-A-B protocol, the multiple-baseline design protocol requires that the analyst examine each phase and phase contrast in the design. However, consistent with the logic of a multiple-baseline design, use of this protocol involves both comparing baseline to treatment for each tier (i.e., A to B) and determining if the introduction of the intervention was staggered in time across tiers and whether the dependent variable changed when and only when the intervention was applied (i.e., vertical analysis).

Figure 3 shows a hypothetical multiple-baseline design depicting the percentage of steps of a hygiene routine completed independently, and Fig. 4 is the completed protocol for this graph. The first question in the protocol involves the stability of the baseline data in the first tier. The phase does have three data points, but the variability of the data makes it difficult to project the overall pattern of the behavior, and as a result, we answered “no” to this question. This made the next four questions unavailable; if we cannot predict the future pattern of the baseline data, then we cannot project the trend into the treatment phase and make a confident determination about the presence of a basic effect. The next available question is about the stability of the baseline data in the second tier. This phase has more than three data points, and they are fairly stable around 10–20%, so we answered “yes.” Next, we looked at whether the baseline data in Tier 2 changed when the intervention began with Tier 1, which was after Session 3. The data in Tier 2 remain stable during and immediately after that session, so we answered “no” for this question. The next question asks if the treatment was introduced to Tier 2 after it was introduced to Tier 1; it was, so we answered “yes.” Had this question been answered “no,” the remaining questions for Tier 2 would become unavailable.

Figure 3. Sample multiple-baseline design graph

Figure 4. Completed protocol for sample multiple-baseline design graph

We continue by examining the stability of the Tier 2 treatment phase; this phase has more than three data points and a clear upward trend, so we answered “yes.” Projecting the trend of the baseline phase into the treatment phase for Tier 2, we see there is a change in both the level and trend of the treatment data compared to our prediction from baseline, so we answered “yes.” That change was immediate (i.e., within the first 3–5 data points of treatment), so we answered “yes” to the next question about immediacy. Calculating overlap as previously described, we found 13% overlap between the two phases (1 overlapping datum point out of 8 total treatment data points), which is less than 30%, so we answered “yes.” The last question about this tier asks us to examine the similarity of data patterns between the treatment phases for Tier 1 and Tier 2. The tiers have similar levels, trends, and variability, so our response was “yes.”

The remainder of the multiple-baseline design protocol includes these same questions about the third tier in the design. Notably, the Tier 3 baseline data did change after Session 3, when the treatment was introduced to Tier 1, so we answered “yes” to the question about vertical analysis for Tier 3. Based on our dichotomous responses to the questions in the protocol, our overall score for experimental control for this graph was 2.32. To see answers and scoring for the complete protocol for this graph, as well as details about how the protocol routes the user through relevant questions based on responses, examine Fig. 4 in detail.

Evaluation of the Protocols

We conducted an initial evaluation of the reliability and validity of the protocols. We evaluated the reliability of the protocols by comparing the interrater agreement produced by the protocols to interrater agreement produced by a visual analysis rating scale. We evaluated the validity of the protocols by comparing scores produced by the protocols to scores assigned to the graphs by expert visual analysts using a rating scale.

Reliability Evaluation

To evaluate the reliability of the protocols, we recruited 16 attendees at an international early childhood special education conference held in a large city in the Southeastern United States. Attendees had to have taken a graduate-level course in SCR to participate in the evaluation. Nine participants reported that their terminal degree was a doctorate and designated their primary roles as university faculty or researchers, and seven reported that their terminal degree was a master’s and indicated that they were students. Participants were randomly assigned to the rating scale group ( n = 8) or the protocol group ( n = 8) and were split fairly evenly between the two groups based on highest degree earned (e.g., the protocol group consisted of three participants with doctorates and five with master’s degrees).

Each of the three authors independently used the protocols with 48 randomly selected published SCR graphs (24 A-B-A-B; 24 multiple-baseline design) during the iterative development process. From this set, we identified four A-B-A-B graphs and four multiple-baseline graphs with (a) ratings across the range of the protocol (i.e., 0–5) and (b) differences of 0.5 to 1.5 in our expert ratings based on our independent applications of the protocol. These criteria were used to ensure that we included diverse graphs in terms of both (a) the presence and absence of basic effects and functional relations and (b) graph difficulty (e.g., graphs with data with more variability or smaller changes might be difficult to visually analyze). We quantified difficulty using the range of scores produced by our independent applications of the protocol, such that graphs with more disparate scores between the authors were considered more difficult.

All study materials (i.e., graphs, rating scale, protocol) were uploaded into an online survey platform, and participants accessed the survey from the web browser on their personal laptop or tablet. All participants took a pretest on which they scored the eight graphs using a rating scale from 0 to 5. All points on the rating scale were defined as illustrated in Table 2, and the terms basic effect and functional relation were defined on each page of the pretest. Then, based on their random group assignments, participants rated the same eight graphs using either the rating scale or the systematic protocols.

Table 2. Visual analysis rating scale

Score | Anchor
0 | No basic effects; does NOT demonstrate a functional relation
1 | One basic effect; does NOT demonstrate a functional relation
2 | Two basic effects; does NOT demonstrate a functional relation
3 | Three basic effects; DOES demonstrate a functional relation with small behavioral change
4 | Three basic effects; DOES demonstrate a functional relation with medium behavioral change
5 | Three basic effects; DOES demonstrate a functional relation with large behavioral change

To evaluate interrater agreement, we calculated the ICC (Shrout & Fleiss, 1979 ) on the scores produced by the rating scale and the protocols (i.e., 0–5). The ICC is an index of agreement across multiple judges making multiple decisions that takes into account the magnitude of difference between judges’ decisions, unlike other agreement indices that are calculated based on exact agreement (Hallgren, 2012 ). Suggested interpretation guidelines for ICCs are as follows: Values below .40 are considered poor, values between .41 and .59 are considered fair, values between .60 and .74 are considered good, and values at .75 and above are considered excellent (Cicchetti, 1994 ). We calculated the ICC for each group at each time point, which enabled us to evaluate (a) if the use of the protocols improved agreement compared to the use of the rating scale and (b) if we could attribute improvements in agreement to the protocols rather than to the evaluation of the same graphs a second time. We collected social validity data from the participants regarding the utility of each method for understanding the data and the extent to which each reflected how the analyst would typically analyze SCR data. We also asked the protocol group which method (i.e., rating scale or protocol) they would be more likely to use to conduct visual analysis and to teach others to conduct visual analysis.
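For readers who want to reproduce this kind of agreement analysis, the sketch below computes an ICC from a long-format table of ratings and maps it onto Cicchetti’s (1994) bands. The data are fabricated for illustration, the pingouin library is one of several tools that could be used, and the choice of the ICC(2) form here is an assumption, since the specific Shrout and Fleiss model used in the study is not identified above.

```python
import pandas as pd
import pingouin as pg

# Fabricated example: 4 raters scoring 3 graphs on the 0-5 scale.
ratings = pd.DataFrame({
    "graph": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "rater": ["a", "b", "c", "d"] * 3,
    "score": [3.0, 3.5, 3.0, 2.75, 1.0, 1.5, 0.5, 1.0, 4.5, 5.0, 4.0, 4.5],
})

# pingouin returns several ICC forms; ICC2 treats raters as random effects.
icc_table = pg.intraclass_corr(data=ratings, targets="graph",
                               raters="rater", ratings="score")
icc = icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].item()


def cicchetti_label(value):
    """Interpretation bands proposed by Cicchetti (1994)."""
    if value < 0.40:
        return "poor"
    if value < 0.60:
        return "fair"
    if value < 0.75:
        return "good"
    return "excellent"


print(f"ICC = {icc:.2f} ({cicchetti_label(icc)})")
```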

Figure 5 shows the pretest and posttest ICCs for each group. Both groups had similar interrater agreement at pretest when using the rating scale (rating scale group ICC = .60; protocol group ICC = .58). However, the agreement of the protocol group improved at posttest (ICC = .78), whereas the agreement of the rating scale group remained relatively stable (ICC = .63). Based on the proposed guidelines for interpreting ICCs (Cicchetti, 1994), the agreement of the protocol group improved from fair at pretest when using the rating scale to excellent at posttest when using the protocol.

Figure 5. Intraclass correlation coefficients for the rating scale group (n = 8) and the protocol group (n = 8) at pretest and posttest

We also examined percentage agreement across protocol questions, displayed in Table 3, to identify the types of questions that produced the most disagreement among participants. Participants disagreed most often about questions pertaining to phase stability, followed by questions about the presence of basic effects. Questions about immediacy, overlap, consistency, and staggered treatment introduction (multiple-baseline designs) produced the highest agreement. Most participants in the protocol group rated the protocol as easy or very easy to understand ( n = 6), whereas half as many participants in the rating scale group reported the same about the rating scales ( n = 3). Similarly, most participants who used the protocol rated it as either mostly or very reflective of how they would typically conduct visual analysis, whereas one participant in the rating scale group reported the same about the rating scale. Finally, almost all participants in the protocol group reported that they would choose the protocol over the rating scale to conduct visual analysis ( n = 6) and to teach others to conduct visual analysis ( n = 7).

Table 3. Percentage agreement on protocols by question type across graphs

Question type | A-B-A-B | Multiple baseline
Number of data points (stability) | n = 4, 62% (0.21) | n = 6, 67% (0.20)
Basic effect | n = 3, 71% (0) | n = 3, 71% (0.28)
Immediacy of effect | n = 3, 73% (0.14) | n = 3, 73% (0.28)
Overlap | n = 3, 74% (0.23) | n = 3, 73% (0.28)
Consistency | n = 3, 80% (0.22) | n = 3, 91% (0.22)
Vertical analysis | – | n = 2, 77% (0.25)
Staggered introduction | – | n = 2, 100% (0)

n refers to the number of questions per graph

Validity Evaluation

We also evaluated the validity of the protocols by comparing decisions produced by the protocols to decisions made by expert visual analysts. We recruited eight researchers with expertise in SCR, which we defined as having a doctorate and being an author on at least five SCR publications (Wolfe et al., 2016), to participate. All experts identified their current position as faculty member or researcher and reported that they were an author on an average of 21 SCR publications (range = 5–65; median = 10).

Using the graphs from the reliability evaluation, we asked the experts (a) to make a dichotomous judgment about whether there was a functional relation and (b) to use the rating scale in Table 2 for each graph. Experts accessed the materials from a link sent via e-mail, and we allowed 10 days for experts to participate in the validity evaluation. We told the experts that we were evaluating the validity of systematic protocols for visual analysis, but they did not have knowledge of or access to the protocols.

To evaluate the validity of the protocols, we calculated the percentage of experts who said there was a functional relation and the percentage of participants whose protocol score converted to a functional relation (i.e., ≥3) for each graph. Although we asked the experts to answer “yes” or “no” about the presence of a functional relation and then use the rating scale for each graph, the experts’ dichotomous decisions always aligned with their score on the rating scale. There was some disagreement among the experts on their ratings and dichotomous decisions, so we calculated the mean score of the experts using the rating scale and compared it to the mean score of the participants using the protocols.

The ICC for the experts using the rating scale was .73, which is considered good according to interpretive guidelines for the statistic. Table 4 displays the percentage of experts who said there was a functional relation for each graph and the percentage of participants whose protocol score indicated a functional relation for each graph, as well as the mean scores for each graph for each group. These results indicate similar levels of agreement among experts using the rating scale and among participants using the protocol.

Table 4. Percentage agreement and mean ratings for experts and protocol group

Graph | % indicating functional relation (Experts) | % indicating functional relation (Protocol group) | Mean rating (Experts) | Mean rating (Protocol group)
1 | 75 | 67 | 3.4 | 3.3
2 | 63 | 22 | 2.8 | 1.6
3 | 0 | 0 | 0.8 | 0.4
4 | 0 | 0 | 1.3 | 1.3
5 | 50 | 44 | 3.7 | 3.2
6 | 13 | 22 | 1.8 | 2.5
7 | 38 | 22 | 1.8 | 2.2
8 | 0 | 0 | 1.2 | 1.5

Figure 6 shows the mean scores for each graph for both groups of raters. Graphs 1–4 were multiple-baseline designs, and Graphs 5–8 were A-B-A-B designs. Across all graphs, the correlation between the mean scores produced by the experts using the rating scale and by the participants using the protocol was strong (r = 0.83). The mean difference between the expert rating scale score and the participant protocol score was 0.5, with a range of 0–1.2. For most of the graphs (63%), the difference between the scores was less than 0.5. Although the average difference score was 0.5 for both multiple-baseline designs and A-B-A-B designs, there was a larger range of difference scores for the multiple-baseline designs (0–1.2) than for the A-B-A-B designs (0.3–0.7). We dichotomized the mean scores for each group for each graph to obtain one “decision” for each group with respect to the presence or absence of a functional relation for the graph. The mean decision produced by the experts using the rating scale agreed with the mean decision produced by the participants using the protocol for all eight graphs. As shown in Fig. 6, the mean participant protocol score tended to be below the mean expert rating scale score for multiple-baseline designs, but the reverse was true for A-B-A-B designs. The lower score for the use of the protocol for multiple-baseline designs may be due to the question on vertical analysis, which subtracts a point if the participant indicated that the data in a tier that was still in baseline changed when the intervention was introduced to a previous tier.
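The summary statistics in this paragraph can be recovered from the mean ratings in Table 4 with a few lines of code. The sketch below simply recomputes the correlation, the mean difference, and the dichotomized decisions; it is an illustration built from the reported values, not the analysis script used in the study.

```python
from statistics import correlation, mean  # Python 3.10+

# Mean ratings from Table 4 (Graphs 1-4 are multiple baseline, 5-8 are A-B-A-B).
experts  = [3.4, 2.8, 0.8, 1.3, 3.7, 1.8, 1.8, 1.2]
protocol = [3.3, 1.6, 0.4, 1.3, 3.2, 2.5, 2.2, 1.5]

# Pearson correlation between the two sets of mean scores (about .83).
r = correlation(experts, protocol)

# Mean absolute difference between the expert and protocol means (about 0.5).
diff = mean(abs(e - p) for e, p in zip(experts, protocol))

# Dichotomize at the functional-relation criterion of 3 and check agreement.
decisions_agree = all((e >= 3) == (p >= 3) for e, p in zip(experts, protocol))

print(f"r = {r:.2f}, mean difference = {diff:.2f}, all decisions agree: {decisions_agree}")
```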

Figure 6. Mean scores for each graph on the rating scale (expert visual analysis) and on the protocol (participant visual analysis). The dotted line indicates the criterion for demonstrating a functional relation

Further Development and Evaluation of the Protocols

Visual analysis of SCR data is the primary evaluative method to identify functional relations between experimental variables (Horner et al., 2005; Kazdin, 2011). However, visual analysis procedures are not standardized, subjective judgments about behavior change and magnitude of effects can be idiosyncratic, and interpretations often result in low agreement across analysts, all of which have led to criticism of the method (Kazdin, 2011; Lieberman, Yoder, Reichow, & Wolery, 2010). We developed our protocols to address these issues and provide standardized and systematic procedures to guide visual analysts through the comprehensive processes involved in making judgments about two common SCR designs: A-B-A-B and multiple baseline. Our initial evaluation of the protocols indicates that they improved reliability among visual analysts from fair to excellent, and the correspondence with expert visual analysis provides evidence of criterion validity. In addition, participants reported that they found the protocols easy to understand and navigate, supporting the social validity of the tools. These preliminary results are promising and highlight several areas for future research.

First, we plan to continue to examine the protocols’ reliability in a number of ways. Our results support the use of transparent and consistent visual analysis procedures for improving reliability. However, we included only a small sample of participants, which limits the interpretation of our results. Specifically, the limited number of participants in each group may influence the accuracy of the ICCs, and we were unable to statistically compare the ICCs between the two groups to identify whether the differences were likely due to chance. Evaluating the protocols across a larger pool of raters will increase the precision of our reliability estimates and provide important information about the utility of the protocols.

In addition, we only included eight graphs in this investigation, and only two of these received mean scores at or above 3, which is the cutoff for demonstrating a functional relation using either method. Although we did not purposefully select graphs that did not depict a functional relation, we did attempt to include graphs with a range of difficulty and may have eliminated graphs with large, obvious effects as a result. Thus, this evaluation provides more compelling evidence of the reliability and validity of the tool for graphs that do not demonstrate a functional relation than for those that do. Additional investigations of the protocols with graphs that demonstrate functional relations are warranted. The application of the protocols to a larger sample of graphs will allow us to (a) examine the validity of the scoring procedures for additional and varied data patterns and (b) evaluate the appropriateness of individual item weights and the proposed interpretation guidelines for the overall experimental control score. The scores produced by the protocols could also be compared to other analytical approaches, such as statistics, to expand on the evaluation of the protocols’ validity.

In future investigations, we plan to compare the protocols to other methods of visual analysis with similar sensitivity. In the current study, we compared the protocols, which can produce scores with decimals (e.g., 2.5), to a rating scale, which could only produce integer-level scores (e.g., 2). It is possible that this differential sensitivity may have impacted our reliability estimates. There is some evidence that correlation coefficients increase but percentage agreement decreases when comparing reliability of a more sensitive rubric to a less sensitive version of the same rubric (Penny, Johnson, & Gordon, 2000a, 2000b). However, because these studies compared different versions of the same measure, it is not clear that their findings apply to the current results given the distinct structures of the protocols and the rating scale. Nonetheless, we could mitigate this factor in future studies by allowing raters using the rating scale to select a score on a continuum from 0 to 5 (i.e., including decimals).

Second, we developed the protocols to be comprehensive, transparent, and ubiquitous. We intend for visual analysts at any level of training to be able to use the protocols to make reliable and sound decisions about data patterns and functional relations. Thus, we plan to continue to test agreement across different groups, including single-case researchers with expertise in visual analysis, practitioners, and students in SCR coursework who are learning to conduct visual analysis.

Third, the usability of the protocols is critical. The results of the social validity survey suggest that participants found the protocols to be user-friendly; however, all participants in the evaluation had already completed a course on SCR. Although even expert visual analysts are continually improving their visual analysis skills, we designed the protocols to support novice visual analysts who are acquiring their visual analysis knowledge and skills. Future research should involve testing the use of the protocols as an instructional tool for individuals who are learning how to visually analyze SCR data.

Fourth, we plan to continue the iterative development of the protocols. This pilot investigation identified questions that were likely to produce discrepant responses among users; future versions of the protocols could address this by providing more explicit instructions for how to examine the data to answer those questions. Additional examples embedded in the instructions for these questions could also improve agreement. We plan to update the protocols as additional information is published on the process of visual analysis and on the variables that influence agreement among visual analysts. For example, Barton et al. ( 2018 ) recommend that visual analysts examine the scaling of the y -axis to determine whether it is appropriate for the dependent variable and, in multiple-baseline designs, whether it is consistent across tiers. This initial step of the visual analysis process could be included in the next version of the protocol to ensure that it remains up-to-date with current recommended practices.

In conclusion, there is a clear need for standardized visual analysis procedures that improve consistency and agreement across visual analysts with a range of professional roles (e.g., researchers, practitioners). We developed and evaluated protocols for two common SCR designs and plan to use an iterative process to continue to test and refine our protocols to improve their reliability, validity, and usability. Improved consistency of visual analysis also might improve SCR syntheses, which is important for ensuring aggregate findings from SCR can be used to identify evidence-based practices.

Compliance with Ethical Standards

Katie Wolfe declares that she has no conflict of interest. Erin E. Barton declares that she has no conflict of interest. Hedda Meadan declares that she has no conflict of interest.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The University of South Carolina Institutional Review Board approved the procedures in this study.

Informed consent was obtained from all individual participants included in the study.

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Barton EE, Ledford JR, Lane JD, Decker J, Germansky SE, Hemmeter ML, Kaiser A. The iterative use of single case research designs to advance the science of EI/ECSE. Topics in Early Childhood Special Education. 2016;36(1):4–14. doi:10.1177/0271121416630011.
  • Barton EE, Lloyd BP, Spriggs AD, Gast DL. Visual analysis of graphic data. In: Ledford JR, Gast DL, editors. Single-case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge; 2018. pp. 179–213.
  • Barton EE, Meadan H, Fettig A. Comparison of visual analysis, non-overlap methods, and effect sizes in the evaluation of parent implemented functional assessment based interventions. Research in Developmental Disabilities. 2019;85:31–41. doi:10.1016/j.ridd.2018.11.001.
  • Brossart DF, Parker RI, Olson EA, Mahadevan L. The relationship between visual analysis and five statistical analyses in a simple AB single-case research design. Behavior Modification. 2006;30:531–563. doi:10.1177/0145445503261167.
  • Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychological Assessment. 1994;6(4):284–290. doi:10.1037/1040-3590.6.4.284.
  • Cooper JO, Heron TE, Heward WL. Applied behavior analysis. St. Louis: Pearson Education; 2007.
  • DeProspero A, Cohen S. Inconsistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis. 1979;12(4):573–579. doi:10.1901/jaba.1979.12-573.
  • Fisch GS. Visual inspection of data revisited: Do the eyes still have it? The Behavior Analyst. 1998;21(1):111–123. doi:10.1007/BF03392786.
  • Furlong MJ, Wampold BE. Intervention effects and relative variation as dimensions in experts’ use of visual inference. Journal of Applied Behavior Analysis. 1982;15(3):415–421. doi:10.1901/jaba.1982.15-415.
  • Hagopian LP, Fisher WW, Thompson RH, Owen-DeSchryver J, Iwata BA, Wacker DP. Toward the development of structured criteria for interpretation of functional analysis data. Journal of Applied Behavior Analysis. 1997;30(2):313–326. doi:10.1901/jaba.1997.30-313.
  • Hallgren KA. Computing inter-rater reliability for observational data: An overview and tutorial. Tutorials in Quantitative Methods for Psychology. 2012;8(1):23–34. doi:10.20982/tqmp.08.1.p023.
  • Hitchcock JH, Horner RH, Kratochwill TR, Levin JR, Odom SL, Rindskopf DM, Shadish WM. The What Works Clearinghouse single-case design pilot standards: Who will guard the guards? Remedial and Special Education. 2014;35(3):145–152. doi:10.1177/0741932513518979.
  • Horner RH, Carr EG, Halle J, McGee G, Odom SL, Wolery M. The use of single-subject research to identify evidence-based practices in special education. Exceptional Children. 2005;71:165–179.
  • Horner RH, Spaulding SA. Single-subject designs. In: Salkind NE, editor. The encyclopedia of research design. Thousand Oaks, CA: Sage Publications; 2010. pp. 1386–1394.
  • Horner RH, Swaminathan H, Sugai G, Smolkowski K. Considerations for the systematic analysis and use of single-case research. Education and Treatment of Children. 2012;35(2):269–290. doi:10.1353/etc.2012.0011.
  • Kahng SW, Chung KM, Gutshall K, Pitts SC, Kao J, Girolami K. Consistent visual analyses of intrasubject data. Journal of Applied Behavior Analysis. 2010;43(1):35–45. doi:10.1901/jaba.2010.43-35.
  • Kazdin AE. Single-case research designs: Methods for clinical and applied settings. 2nd ed. New York, NY: Oxford University Press; 2011.
  • Kratochwill TR, Hitchcock JH, Horner RH, Levin JR, Odom SL, Rindskopf DM, Shadish WR. Single-case intervention research design standards. Remedial and Special Education. 2013;34:26–38. doi:10.1177/0741932512452794.
  • Ledford JR, Gast DL. Single case research methodology: Applications in special education and behavioral sciences. New York, NY: Routledge; 2018.
  • Lieberman RG, Yoder PJ, Reichow B, Wolery M. Visual analysis of multiple baseline across participants graphs when change is delayed. School Psychology Quarterly. 2010;25(1):28–44. doi:10.1037/a0018600.
  • Maggin DM, Briesch AM, Chafouleas SM. An application of the What Works Clearinghouse standards for evaluating single-subject research: Synthesis of the self-management literature base. Remedial and Special Education. 2013;34(1):44–58. doi:10.1177/0741932511435176.
  • Penny J, Johnson RL, Gordon B. The effect of rating augmentation on inter-rater reliability: An empirical study of a holistic rubric. Assessing Writing. 2000a;7(2):143–164. doi:10.1016/S1075-2935(00)00012-X.
  • Penny J, Johnson RL, Gordon B. Using rating augmentation to expand the scale of an analytic rubric. Journal of Experimental Education. 2000b;68(3):269–287. doi:10.1080/00220970009600096.
  • Scruggs TE, Mastropieri MA. Summarizing single-subject research: Issues and applications. Behavior Modification. 1998;22(3):221–242. doi:10.1177/01454455980223001.
  • Shadish WR. Statistical analyses of single-case designs: The shape of things to come. Current Directions in Psychological Science. 2014;23(2):139–146. doi:10.1177/0963721414524773.
  • Shadish WR, Sullivan KJ. Characteristics of single-case designs used to assess intervention effects in 2008. Behavior Research Methods. 2011;43(4):971–980. doi:10.3758/s13428-011-0111-y.
  • Shrout PE, Fleiss JL. Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin. 1979;86(2):420–428. doi:10.1037/0033-2909.86.2.420.
  • Smith JD. Single-case experimental designs: A systematic review of published research and current standards. Psychological Methods. 2012;17:510–550. doi:10.1037/a0029312.
  • What Works Clearinghouse. Procedures and standards handbook (Version 4.0). 2017. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_standards_handbook_v4.pdf. Accessed 9 Jan 2018.
  • Wolfe K, Seaman MA, Drasgow E. Interrater agreement on the visual analysis of individual tiers and functional relations in multiple baseline designs. Behavior Modification. 2016;40(6):852–873. doi:10.1177/0145445516644699.



2 Visual and Contextual Analysis

J. Keri Cronin and Hannah Dobbie

[Image: A hazy scene showing a bridge over a body of water. Buildings in the background indicate that this is a cityscape. Blues and pinks convey the fog that covers the golden light from the sun.]

The study of visual culture relies on two key skill sets: visual analysis and contextual analysis.

Visual Analysis

Visual Analysis is just a fancy way of saying “give a detailed description of the image.” It is tempting to assume that visual analysis is easy, or that it isn’t necessary because anyone can just look at the image and see the same thing you see. But is it really that simple?

As individual viewers we all bring our own background, perspective, education, and ideas to the viewing of an image. What you notice right away in an image may not be the same thing your classmate (or your grandmother or your neighbour) notices. And this is perfectly fine!

What do you see when you look at the images below?

In all three cases we have pictures of cows, but there are some important similarities and differences. What do you think is important to note about these images?

[Image: A black and white graphic image of a very large cow. The cow is impossibly big; in real life her legs probably couldn’t support her body. The animal has horns, and behind her is a grove of trees.]

Reflection Exercise

Take 5-10 minutes to jot down a detailed description (visual analysis) for each of the images above.

  • What do you notice?
  • What do you see?
  • What part of the image is your eye drawn to first?
  • How are these images similar? How are they different?

Contextual Analysis

Contextual analysis is another very important skill for studying images. This is a fancy way of saying “we need more information about this picture.” You will often have to do external research to build and support your contextual analysis. There is an old saying that “a picture is worth a thousand words,” but we need to think carefully and critically about this. A picture cannot tell us everything we might want to know about it! Sometimes it is very important to dig deeper through research to learn more about an image in order to understand how it participates in the meaning-making process.

Here is a list of some questions that are useful for guiding contextual analysis. This is not an exhaustive list and not all questions will apply in all cases:

  •   Who made this image? Why?
  •   Where was the image made? (In a different part of the world? In a laboratory? On the beach?)
  •   Who was the intended audience for this image?
  •   Where was the image meant to be viewed? (A textbook? A gallery? As part of a movie set? In a family photo album?)
  •   When was this image made? How do you know?
  •   What kinds of technologies were used to make this image? What kinds of limitations were there on this technology at this time?
  •   Is there text in the image? If so, how does it shape our understanding of what we are looking at? What about the image caption, and how does that shape our understanding?

Sometimes you can get clues from the image that can help you answer these kinds of questions, but often you will have to branch out and turn to books, articles, websites, documentary films, and other resources to help build and develop your contextual analysis.

In our examples above the captions give us quite a bit of information. We learn, for instance, who made the pictures (and, in one case, we learn that this information isn’t known). We learn when the images were made and the type of pictures they are, although we may need to look up what an etching, stereograph, or albumen print is. The titles are fairly descriptive in that they provide us with some basic information about what we are looking at.

Reflection Exercise – Part II

The visual analysis we just did, combined with the information provided in the image captions, gives us a place to start with our investigation into these images. But there are many things that we still don’t know about these pictures.

What other things might we want to know if we were going to write about these pictures? Take a few moments and jot down a list of questions you have about these images.

As we generate questions based on these images and then start to do the research to find the answers to those questions, we are starting to build our contextual analysis. Through research we would learn, for instance, that the firm of Underwood & Underwood was a leading manufacturer of stereograph cards in the 19th century and that stereograph cards had massive public and commercial appeal. The two images, when viewed through a special device known as a stereoscope, merge together to form an image that looks 3-D. Imagine how exciting this would be for viewers in an age before television, movies, and video games. Some have even described this as an early form of virtual reality!

Further research will show us that Edward H. Hacker was a printmaker in Britain in the 19th century and that he was best known for creating engravings of animal pictures. In an era when it wasn’t easy to reproduce paintings, this allowed multiple copies of an image to be shared and circulated. In our example above, he is reproducing a painting by William Henry Davis, an artist who specialised in portraits of livestock.

Today it might seem odd to us that people would want pictures painted of their cows, and we might even wonder why someone would hire a printmaker to make reproductions of these images. Why would people want images of their cows? And further, why does the cow in the first picture above look so strange? She is so enormous that her tiny, skinny legs couldn’t possibly support her body. What is going on here? Did Davis not know how to paint cows?

In fact, Davis was a well-respected artist. The answer to this question can be discovered through a bit of research (more contextual analysis). As we dig into this investigation, we would soon learn that this type of picture was part of a larger 19th-century trend for creating images of livestock that exaggerated their features as a way to advertise certain breeds and breeders. In other words, the farmers who were commissioning these images were using them to try to prove that their animals were better than the animals owned by competing farmers. These pictures cannot be separated from the economics of 18th- and 19th-century British farming practices.

In 2018 the Museum of English Rural Life posted a photograph of a very large ram with the words “look at this absolute unit.” This Twitter post went viral and brought a lot of attention to the history behind these kinds of images. Having a picture like this circulate on social media brought a new layer of meaning to the photograph. It didn’t replace the original context, but it added to the discussions about it.

When an image is taken out of its original context, new meanings can be generated. Take, for example, a controversial advertising campaign launched in the spring of 2023 by the Italian government. It features the very recognizable central figure from Sandro Botticelli’s 15th-century painting known as “Birth of Venus.” But in this campaign she is out and about enjoying the tourist sites in Italy, playing the role of Instagram influencer. This campaign provoked a strong reaction, and many people criticised what they saw as trivialising and making a mockery of a beloved work of art. The associations people have with this painting, that it is a “masterpiece” to be admired and venerated, have fueled this criticism. If the central figure in these advertisements were not so recognizable, it is unlikely that there would have been any controversy at all. By taking this figure out of context and putting her in AI-generated scenes of Italian tourism, some feel it changes the meaning of the original picture. Love it or hate it, the one thing everyone agrees on is that this campaign has generated much discussion!

Visual and Contextual Analysis Exercise

Find a picture that you think expresses something about who you are. It can be from your childhood, a photograph of your dorm room, or a picture of the aunt who taught you how to read. Perhaps it is a picture of you cheering on your favourite sports team or of a special dinner shared with close friends. It doesn’t matter what the subject is as long as it is an example of a picture that you think says something about you.

Step 1 (Visual Analysis): Write a description of this picture. Try to stick to description only in this step; really look at the picture carefully and consider things like:

  • What medium is it (e.g., a photograph, a painting, etc.)?
  • What colours are used?
  • How is it composed? How big is it?
  • Are there people in the image?
  • Is the image dark or light?
  • What is in the background?
  • Is there anything blurry or unclear?

*Note: This is not an exhaustive list of questions. Rather, they are given as examples to help you think about what kinds of things to focus on.

Step 2 (Contextual Analysis): Imagine you are going to show this picture to a complete stranger, someone who doesn’t know you at all. Make a list of everything you think that person needs to know about the picture in order to learn a bit about you. What information might help that person understand why this picture is meaningful for you? For example, was this photograph taken on your birthday? Is it a picture of your first pet? Is the person who is blurry in the background your best friend who moved away when you were 11? Then think about why these things are important to you. In other words, what do you know about this picture that wouldn’t be obvious to someone else?

[Image: A faded, vintage photograph of a little kid in a red snowsuit and a pink and white winter hat. She wears white shoes. She is standing face-to-face with a fluffy white dog who has his tongue out. A man stands between the child and the dog, one hand on each, to make sure that the interaction remains friendly and safe. The man wears brown shoes, blue jeans, a dark jacket, and sunglasses. His sandy blonde hair is shaggy. These figures stand on concrete and the sun casts shadows on the ground. In the background are trees and a sign that is blurry and out of focus.]

If I were doing this exercise with this photograph, in step #1 I would focus on things like the colour of the child’s clothing, the size of the dog, and the way the adult, child, and dog are posed, including that the man has one hand on the child, one hand on the dog. I would talk about it being a photograph and how the faded tones suggest that this is an old photograph. I would note that the photograph was taken outside and that these three are standing on what appears to be pavement but that there are trees in the background. There is also what appears to be a wooden sign in the background but it is too blurry to read. I would also point out that the shadows on the ground indicate that it was a sunny day, but the type of clothing the two human figures are wearing suggests that it was also a cold day.

If I were to continue on and complete step #2, I would note that this is a photograph taken in the mid-1970s by my mother and that it is a picture of me (Keri) and my uncle with a dog we happened to meet in the parking lot of Mount Robson Park while our family was moving from British Columbia to Alberta. This was not our dog. We had never met him before, nor did we ever see him again. But he was friendly, and I was absolutely enthralled by how fluffy he was. My uncle took me over to introduce me to the dog, staying close to make sure the dog didn’t hurt me.

This picture holds meaning for me for a number of reasons. First of all, it is an early example of my love of animals. Secondly, Mount Robson Park is part of the Canadian Rocky Mountains and was often a destination for family vacations. These trips shaped my interest in nature and outdoor activities in spaces like Provincial and National Parks. This led to me deciding to write my MA thesis on the visual culture of these kinds of places, a document that was eventually turned into a book. And lastly, this picture has taken on a new layer of importance for me lately, as my uncle pictured here recently died of cancer. Even though it isn’t a great picture in terms of technical quality, it is a picture that I have framed in my house because it holds a lot of meaning for me.

By doing this exercise you are slowing down the process of meaning-making and thinking about how the visual elements of the image relate to the larger context that helps to shape why this picture holds meaning for you. You can see how the two types of analysis, visual and contextual, work together. You need both halves of this equation. By slowing down and doing some deep noticing in our visual analysis, we can notice things that become significant when we switch over to contextual analysis. And our contextual analysis can provide us a starting place for further research if needed.

With this exercise you were working with an image that you are already very familiar with. But this same process can be repeated with any image. When you are working with an image that isn’t from your own personal life, there will likely be more steps needed to arrive at a contextual analysis (research, further reading, and so on), but the process itself remains the foundation for critical thinking about images.

Look Closely: A Critical Introduction to Visual Culture Copyright © 2023 by J. Keri Cronin and Hannah Dobbie is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Visual Methodologies: An Introduction to Researching with Visual Materials


About the book

Now in its Fourth Edition,  Visual Methodologies: An Introduction to Researching with Visual Materials  is a bestselling critical guide to the study and analysis of visual culture. Existing chapters have been fully updated to offer a rigorous examination and demonstration of an individual methodology in a clear and structured style.

Reflecting changes in the way society consumes and creates its visual content, new features include:

  • Brand new chapters dealing with social media platforms, the development of digital methods and the modern circulation and audiencing of research images
  • More 'Focus' features covering interactive documentaries, digital story-telling and participant mapping
  • A Companion Website featuring links to useful further resources relating to each chapter.

A now classic text,  Visual Methodologies  appeals to undergraduates, graduates, researchers and academics across the social sciences and humanities who are looking to get to grips with the complex debates and ideas in visual analysis and interpretation.

About the author

Gillian Rose is Professor of Cultural Geography at The Open University, and her current research interests focus on contemporary visual culture in cities and visual research methodologies. Her website, with links to many of her publications, is http://www.open.ac.uk/people/gr334. She also blogs at www.visualmethodculture.wordpress.com, and you can follow her on Twitter @ProfGillian.


1 - A Brief Introduction to Visual Research Methods in Library and Information Studies

Published online by Cambridge University Press:  07 November 2020

Introduction

Described simply, visual research methods are research techniques that use visual elements such as photographs, maps, video and other artistic media (drawings, paintings and sculptures) in the process of answering research questions. Although this definition seems simple, scholars using visual methods may easily become lost or disoriented in the diversity of visual options that abound or in the inconsistencies across the literature in the terms we use to describe visual research methods (Hartel and Thomson, 2011; Pollak, 2017). Perhaps it is because of the wide range of visual research methods, or the inconsistencies in how they are described and discussed, that LIS researchers and those across other disciplines can face challenges in ‘discovering visual research options and deciding which ones best suit their goals’ (Pollak, 2017, 99).

When we talk about visual research methods in this book, we are talking about methods in which the visual element (photo, film, drawing or otherwise) is part of the research process of gathering or generating research data. We are not talking about data visualization or the use of visuals (such as infographics) solely to present research results. Certainly, visual research methods might cross over with visualizations, even in the same study. For example, Elizabeth Tait's chapter on 3D visualization (Chapter 4) covers data visualization, but as part of the participatory community research process itself. Though we recognize that visualizations are an important research dissemination tool in LIS and other fields, we do not address them here.

Anchored in the LIS literature, this chapter addresses definitions and terminology and presents several existing frameworks to help clarify and facilitate the discussion about visual research methods. We begin by addressing the question of method versus methodology, then briefly discuss terminology and guiding structures. We will outline the emergence of visual research methods in LIS, highlight some recent examples, and discuss the benefits and limitations of visual research methods that have been documented by researchers in the field. It is worth repeating here that we situate ourselves and this book within the qualitative paradigm. The works discussed here focus on social sciences approaches to visual methods.


  • A Brief Introduction to Visual Research Methods in Library and Information Studies
  • By Jenaya Webb, Shailoo Bedi
  • Edited by Shailoo Bedi, University of Victoria, British Columbia, and Jenaya Webb, University of Toronto
  • Book: Visual Research Methods
  • Online publication: 07 November 2020
  • Chapter DOI: https://doi.org/10.29085/9781783304585.002


For immediate release | March 10, 2021

An introduction to visual research methods


CHICAGO — “Visual Research Methods: An Introduction for Library and Information Studies,” published by Facet Publishing and available through the ALA Store, is the first book to focus on visual methods in LIS, providing a comprehensive primer for students, educators, researchers, and practitioners in the field. Visual research methods (VRM) comprise a collection of methods that incorporate visual elements such as maps, drawings, photographs, and videos, as well as three-dimensional objects, into the research process. In addition, VRM including photo-elicitation, photovoice, draw-and-write techniques, and cognitive mapping are being leveraged to great effect to explore information experiences, investigate some of the central questions in the field, expand theoretical discussions in LIS, and improve library services and spaces. Contributed chapters in the book showcase examples of VRM in action and offer the insights, inspirations, and experiences of researchers and practitioners working with visual methods. Edited by Shailoo Bedi and Jenaya Webb, this book’s coverage includes:

  • an introduction to visual research methods including discussion of terminology;
  • an overview of the literature on VRM in libraries;
  • methodological framing including a discussion of theory and epistemology;
  • practical and ethical considerations for researchers embarking on VRM projects;
  • chapters showcasing VRM in action including drawing techniques, photographic techniques, and mixed methods; and
  • six contributed chapters each showcasing the results of visual research methods, discussions of the techniques, and reflections on VRM for research in information studies.

Dr. Bedi works at the University of Victoria (UVic) as both Director of Academic Commons and Strategic Assessment with the Libraries and Director of the Office of Student Academic Success with Learning, Teaching Support & Innovation. Her research interests include the construction and issues of identity for racialized minority leaders, as well as student experience with learning spaces, student research creation, and visual research methods. Webb is the Public Services and Research Librarian at the Ontario Institute for Studies in Education (OISE) Library at the University of Toronto. Since completing her MLIS at the University of Toronto, she has worked to bring visual methods and approaches to the library context to explore user experience, wayfinding, and meaning-making in library spaces.

Facet Publishing, the commercial publishing and bookselling arm of CILIP: the Chartered Institute of Library and Information Professionals, is the leading publisher of books for library and information professionals worldwide.


