Content Analysis | Guide, Methods & Examples

Published on July 18, 2019 by Amy Luo. Revised on June 22, 2023.

Content analysis is a research method used to identify patterns in recorded communication. To conduct content analysis, you systematically collect data from a set of texts, which can be written, oral, or visual:

  • Books, newspapers and magazines
  • Speeches and interviews
  • Web content and social media posts
  • Photographs and films

Content analysis can be both quantitative (focused on counting and measuring) and qualitative (focused on interpreting and understanding).  In both types, you categorize or “code” words, themes, and concepts within the texts and then analyze the results.

Table of contents

  • What is content analysis used for?
  • Advantages of content analysis
  • Disadvantages of content analysis
  • How to conduct content analysis
  • Other interesting articles

What is content analysis used for?

Researchers use content analysis to find out about the purposes, messages, and effects of communication content. They can also make inferences about the producers and audience of the texts they analyze.

Content analysis can be used to quantify the occurrence of certain words, phrases, subjects or concepts in a set of historical or contemporary texts.

Quantitative content analysis example

To research the importance of employment issues in political campaigns, you could analyze campaign speeches for the frequency of terms such as unemployment, jobs, and work, and use statistical analysis to find differences over time or between candidates.
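To make the counting concrete, here is a minimal Python sketch of the kind of frequency count described above. The speech snippets, candidate names, and years are invented placeholders; a real project would read full transcripts from files.

```python
# Count employment-related terms in campaign speeches (hypothetical data).
import re
from collections import Counter

TERMS = ["unemployment", "jobs", "work"]

def term_counts(text: str) -> Counter:
    """Count occurrences of each employment-related term in one speech."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in TERMS)

# Hypothetical speech excerpts keyed by (candidate, year).
speeches = {
    ("Candidate A", 2016): "Our plan creates jobs and puts people back to work.",
    ("Candidate B", 2020): "Unemployment is the defining issue; jobs, jobs, jobs.",
}

for (candidate, year), text in speeches.items():
    counts = term_counts(text)
    print(candidate, year, dict(counts), "total:", sum(counts.values()))
```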

In addition, content analysis can be used to make qualitative inferences by analyzing the meaning and semantic relationship of words and concepts.

Qualitative content analysis example

To gain a more qualitative understanding of employment issues in political campaigns, you could locate the word unemployment in speeches, identify what other words or phrases appear next to it (such as economy, inequality, or laziness), and analyze the meanings of these relationships to better understand the intentions and targets of different campaigns.
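A rough sketch of how such co-occurring words might be pulled out automatically. The speech text is invented and the three-word window is an arbitrary choice; the interpretive work of reading these relationships still has to be done by the researcher.

```python
# Collect words appearing near a keyword so their relationships can be read and coded.
import re
from collections import Counter

def collocates(text: str, keyword: str = "unemployment", window: int = 3) -> Counter:
    """Return counts of words occurring within `window` tokens of the keyword."""
    tokens = re.findall(r"[a-z']+", text.lower())
    nearby = Counter()
    for i, tok in enumerate(tokens):
        if tok == keyword:
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            nearby.update(context)
    return nearby

speech = ("Unemployment is rising because the economy rewards laziness, "
          "and unemployment deepens inequality in every town.")
print(collocates(speech).most_common(5))
```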

Because content analysis can be applied to a broad range of texts, it is used in a variety of fields, including marketing, media studies, anthropology, cognitive science, psychology, and many social science disciplines. It has various possible goals:

  • Finding correlations and patterns in how concepts are communicated
  • Understanding the intentions of an individual, group or institution
  • Identifying propaganda and bias in communication
  • Revealing differences in communication in different contexts
  • Analyzing the consequences of communication content, such as the flow of information or audience responses


Advantages of content analysis

  • Unobtrusive data collection

You can analyze communication and social interaction without the direct involvement of participants, so your presence as a researcher doesn’t influence the results.

  • Transparent and replicable

When done well, content analysis follows a systematic procedure that can easily be replicated by other researchers, yielding results with high reliability .

  • Highly flexible

You can conduct content analysis at any time, in any location, and at low cost – all you need is access to the appropriate sources.

Disadvantages of content analysis

  • Reductive

Focusing on words or phrases in isolation can sometimes be overly reductive, disregarding context, nuance, and ambiguous meanings.

  • Subjective

Content analysis almost always involves some level of subjective interpretation, which can affect the reliability and validity of the results and conclusions, leading to various types of research bias and cognitive bias.

  • Time intensive

Manually coding large volumes of text is extremely time-consuming, and it can be difficult to automate effectively.

How to conduct content analysis

If you want to use content analysis in your research, you need to start with a clear, direct research question.

Example research question for content analysis

Is there a difference in how the US media represents younger politicians compared to older ones in terms of trustworthiness?

Next, you follow these five steps.

1. Select the content you will analyze

Based on your research question, choose the texts that you will analyze. You need to decide:

  • The medium (e.g. newspapers, speeches or websites) and genre (e.g. opinion pieces, political campaign speeches, or marketing copy)
  • The inclusion and exclusion criteria (e.g. newspaper articles that mention a particular event, speeches by a certain politician, or websites selling a specific type of product)
  • The parameters in terms of date range, location, etc.

If only a small number of texts meet your criteria, you might analyze all of them. If there is a large volume of texts, you can select a sample.

2. Define the units and categories of analysis

Next, you need to determine the level at which you will analyze your chosen texts. This means defining:

  • The unit(s) of meaning that will be coded. For example, are you going to record the frequency of individual words and phrases, the characteristics of people who produced or appear in the texts, the presence and positioning of images, or the treatment of themes and concepts?
  • The set of categories that you will use for coding. Categories can be objective characteristics (e.g. aged 30–40, lawyer, parent) or more conceptual (e.g. trustworthy, corrupt, conservative, family oriented).

Your units of analysis are the politicians who appear in each article and the words and phrases used to describe them. Based on your research question, you categorize by age and by the concept of trustworthiness. To get more detailed data, you also code for other categories, such as the political party and marital status of each politician mentioned.

3. Develop a set of rules for coding

Coding involves organizing the units of meaning into the previously defined categories. Especially with more conceptual categories, it’s important to clearly define the rules for what will and won’t be included to ensure that all texts are coded consistently.

Coding rules are especially important if multiple researchers are involved, but even if you’re coding all of the text by yourself, recording the rules makes your method more transparent and reliable.

In considering the category “younger politician,” you decide which titles will be coded with this category (senator, governor, counselor, mayor). With “trustworthy,” you decide which specific words or phrases related to trustworthiness (e.g. honest and reliable) will be coded in this category.

4. Code the text according to the rules

You go through each text and record all relevant data in the appropriate categories. This can be done manually or aided with computer programs, such as QSR NVivo, Atlas.ti, and Diction, which can help speed up the process of counting and categorizing words and phrases.

Following your coding rules, you examine each newspaper article in your sample. You record the characteristics of each politician mentioned, along with all words and phrases related to trustworthiness that are used to describe them.
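As a simplified illustration of rule-based coding for this running example, the sketch below records which politicians are mentioned in each sentence and which trustworthiness terms co-occur with them. The article text, politician lookup table, and term list are hypothetical; in practice the codebook would be richer and the coding is often done in software such as NVivo or Atlas.ti.

```python
# A toy rule-based coder for the politician/trustworthiness example.
import re

TRUST_TERMS = {"honest", "reliable", "trustworthy", "dependable"}

# Hypothetical lookup built during step 2 (units and categories).
POLITICIANS = {
    "Jane Doe": {"age_group": "younger", "party": "Party X"},
    "John Roe": {"age_group": "older", "party": "Party Y"},
}

def code_article(article: str) -> list[dict]:
    """Record, per sentence, which politician is mentioned and which
    trustworthiness terms co-occur with that mention."""
    records = []
    for sentence in re.split(r"(?<=[.!?])\s+", article):
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        for name, attrs in POLITICIANS.items():
            if name.lower() in sentence.lower():
                records.append({
                    "politician": name,
                    "age_group": attrs["age_group"],
                    "trust_terms": sorted(words & TRUST_TERMS),
                })
    return records

article = ("Jane Doe was praised as honest. "
           "John Roe, a reliable and trustworthy veteran, spoke next.")
for row in code_article(article):
    print(row)
```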

5. Analyze the results and draw conclusions

Once coding is complete, the collected data is examined to find patterns and draw conclusions in response to your research question. You might use statistical analysis to find correlations or trends, discuss your interpretations of what the results mean, and make inferences about the creators, context and audience of the texts.

Let’s say the results reveal that words and phrases related to trustworthiness appeared in the same sentence as an older politician more frequently than they did in the same sentence as a younger politician. From these results, you conclude that national newspapers present older politicians as more trustworthy than younger politicians, and infer that this might have an effect on readers’ perceptions of younger people in politics.
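If you wanted to check such a pattern statistically, a chi-square test of independence is one common option. The counts below are invented purely for illustration and assume SciPy is available.

```python
# Do trustworthiness terms co-occur with older politicians more often than chance?
from scipy.stats import chi2_contingency

#                with trust term   without trust term
table = [
    [78, 222],   # sentences mentioning older politicians (hypothetical counts)
    [35, 265],   # sentences mentioning younger politicians (hypothetical counts)
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
```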


Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Luo, A. (2023, June 22). Content Analysis | Guide, Methods & Examples. Scribbr. Retrieved August 29, 2024, from https://www.scribbr.com/methodology/content-analysis/



Qualitative Content Analysis in Practice

Margrit Schreier, Jacobs University Bremen, Germany

Description

In one of the first books to focus on qualitative content analysis, Margrit Schreier takes students step by step through:

- creating a coding frame

- segmenting the material

- trying out the coding frame

- evaluating the trial coding

- carrying out the main coding

- what comes after qualitative content analysis

- making use of software when conducting qualitative content analysis.

Each part of the process is described in detail and research examples are provided to illustrate each step. Frequently asked questions are answered, the most important points are summarized, and end-of-chapter questions provide an opportunity to revise these points. After reading the book, students are fully equipped to conduct their own qualitative content analysis.

This book provides a well-written, clear and detailed account of QCA, highlighting the value of this research method for the analysis of social, political and psychological phenomena.

Tereza Capelos, University of Surrey

Schreier writes clearly and with authority, positioning QCA in relation to other qualitative research methods and emphasising the hands-on aspects of the analysis process. She offers numerous illuminating examples and helpful pedagogical tools for the reader. This book will thus be most welcomed by students at different levels as well as by researchers.

Ulla Hällgren Graneheim, Umeå University, Sweden

This book has been written for students but would be of value to anyone considering using the analysis method to help them reduce and make sense of a large volume of textual data. [...]The content is detailed and presented in text-book style with key points, definitions and beginners’ mistakes scattered throughout and frequently asked questions and end of chapter questions. These break up the text but also help when skimming. In addition, and what I found particularly valuable, was the liberal use of examples drawn from published papers. These really help to clarify and bring to life the issues raised.

Schreier provides several helpful educational tools, such as mid-chapter definitions, summaries and key points. This is an excellent introductory or reference book for all students of content analysis.

This book makes a valuable supplementary reading text about applied content analysis for my course in ethnography.

I will use this as a recommended text, and not adopt as a main or compulsory text. This is so given the book's particular focus on QCA, and the more complex treatment of research issues in general, which will be of use to a limited number of students but will be a good resource potentially for some select few taking my spring 2014 senior thesis class and NOT the fall basic / intro research methods class. Thank you.




Chapter 17. Content Analysis

Introduction

Content analysis is a term that is used to mean both a method of data collection and a method of data analysis. Archival and historical works can be the source of content analysis, but so too can the contemporary media coverage of a story, blogs, comment posts, films, cartoons, advertisements, brand packaging, and photographs posted on Instagram or Facebook. Really, almost anything can be the “content” to be analyzed. This is a qualitative research method because the focus is on the meanings and interpretations of that content rather than strictly numerical counts or variables-based causal modeling. [1] Qualitative content analysis (sometimes referred to as QCA) is particularly useful when attempting to define and understand prevalent stories or communication about a topic of interest—in other words, when we are less interested in what particular people (our defined sample) are doing or believing and more interested in what general narratives exist about a particular topic or issue. This chapter will explore different approaches to content analysis and provide helpful tips on how to collect data, how to turn that data into codes for analysis, and how to go about presenting what is found through analysis. It is also a nice segue between our data collection methods (e.g., interviewing, observation) chapters and chapters 18 and 19, whose focus is on coding, the primary means of data analysis for most qualitative data. In many ways, the methods of content analysis are quite similar to the method of coding.


Although the body of material (“content”) to be collected and analyzed can be nearly anything, most qualitative content analysis is applied to forms of human communication (e.g., media posts, news stories, campaign speeches, advertising jingles). The point of the analysis is to understand this communication, to systematically and rigorously explore its meanings, assumptions, themes, and patterns. Historical and archival sources may be the subject of content analysis, but there are other ways to analyze (“code”) this data when not overly concerned with the communicative aspect (see chapters 18 and 19). This is why we tend to consider content analysis its own method of data collection as well as a method of data analysis. Still, many of the techniques you learn in this chapter will be helpful to any “coding” scheme you develop for other kinds of qualitative data. Just remember that content analysis is a particular form with distinct aims and goals and traditions.

An Overview of the Content Analysis Process

The first step: selecting content.

Figure 17.1 is a display of possible content for content analysis. The first step in content analysis is making smart decisions about what content you want to analyze and clearly connecting this content to your research question or general focus of research. Why are you interested in the messages conveyed in this particular content? What will the identification of patterns here help you understand? Content analysis can be fun to do, but in order to make it research, you need to fit it into a research plan.

Figure 17.1. A Non-exhaustive List of "Content" for Content Analysis: news stories, blogs, comment posts, lyrics, letters to the editor, films, cartoons, advertisements, brand packaging, logos, Instagram photos, tweets, photographs, graffiti, street signs, personalized license plates, avatars (names, shapes, presentations), nicknames, band posters, and building names.

To take one example, let us imagine you are interested in gender presentations in society and how presentations of gender have changed over time. There are various forms of content out there that might help you document changes. You could, for example, begin by creating a list of magazines that are coded as being for “women” (e.g., Women’s Daily Journal ) and magazines that are coded as being for “men” (e.g., Men’s Health ). You could then select a date range that is relevant to your research question (e.g., 1950s–1970s) and collect magazines from that era. You might create a “sample” by deciding to look at three issues for each year in the date range and a systematic plan for what to look at in those issues (e.g., advertisements? Cartoons? Titles of articles? Whole articles?). You are not just going to look at some magazines willy-nilly. That would not be systematic enough to allow anyone to replicate or check your findings later on. Once you have a clear plan of what content is of interest to you and what you will be looking at, you can begin, creating a record of everything you are including as your content. This might mean a list of each advertisement you look at or each title of stories in those magazines along with its publication date. You may decide to have multiple “content” in your research plan. For each content, you want a clear plan for collecting, sampling, and documenting.
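One way to make such a sampling plan explicit and replicable is to write it down as a sampling frame. The sketch below uses the hypothetical magazine titles from the example and an arbitrary choice of three issue months per year; the point is simply that the record of what was included exists outside your head.

```python
# Build and save a documented sampling frame (all names and choices are placeholders).
import csv

MAGAZINES = ["Women's Daily Journal", "Men's Health"]   # hypothetical titles from the example
YEARS = range(1950, 1980)                               # the 1950s through the 1970s
ISSUES_PER_YEAR = [1, 5, 9]                             # e.g., January, May, September issues

sampling_frame = [
    {"magazine": mag, "year": year, "issue_month": month}
    for mag in MAGAZINES
    for year in YEARS
    for month in ISSUES_PER_YEAR
]

# Keep a permanent record of exactly what was included, so the sample
# can be replicated or checked later.
with open("sampling_frame.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["magazine", "year", "issue_month"])
    writer.writeheader()
    writer.writerows(sampling_frame)

print(len(sampling_frame), "magazine issues in the sample")
```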

The Second Step: Collecting and Storing

Once you have a plan, you are ready to collect your data. This may entail downloading from the internet, creating a Word document or PDF of each article or picture, and storing these in a folder designated by the source and date (e.g., “ Men’s Health advertisements, 1950s”). Sølvberg ( 2021 ), for example, collected posted job advertisements for three kinds of elite jobs (economic, cultural, professional) in Sweden. But collecting might also mean going out and taking photographs yourself, as in the case of graffiti, street signs, or even what people are wearing. Chaise LaDousa, an anthropologist and linguist, took photos of “house signs,” which are signs, often creative and sometimes offensive, hung by college students living in communal off-campus houses. These signs were a focal point of college culture, sending messages about the values of the students living in them. Some of the names will give you an idea: “Boot ’n Rally,” “The Plantation,” “Crib of the Rib.” The students might find these signs funny and benign, but LaDousa ( 2011 ) argued convincingly that they also reproduced racial and gender inequalities. The data here already existed—they were big signs on houses—but the researcher had to collect the data by taking photographs.

In some cases, your content will be in physical form but not amenable to photographing, as in the case of films or unwieldy physical artifacts you find in the archives (e.g., undigitized meeting minutes or scrapbooks). In this case, you need to create some kind of detailed log (fieldnotes even) of the content that you can reference. In the case of films, this might mean watching the film and writing down details for key scenes that become your data. [2] For scrapbooks, it might mean taking notes on what you are seeing, quoting key passages, describing colors or presentation style. As you might imagine, this can take a lot of time. Be sure you budget this time into your research plan.

Researcher Note

A note on data scraping: Data scraping, sometimes known as screen scraping or frame grabbing, is a way of extracting data generated by another program, as when a scraping tool grabs information from a website. This may help you collect data that is on the internet, but you need to be ethical in how you employ the scraper. A student once helped me scrape thousands of stories from the Time magazine archives at once (although it took several hours for the scraping process to complete). These stories were freely available, so the scraping process simply sped up the laborious process of copying each article of interest and saving it to my research folder. Scraping tools can sometimes be used to circumvent paywalls. Be careful here!
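For readers who have never scraped a page, here is a bare-bones, hypothetical sketch using the widely available requests and BeautifulSoup libraries. The URL and CSS selector are placeholders; always check the site's terms of use and robots.txt first, and never use scraping to get around a paywall.

```python
# Fetch an archive page and list the article links it contains (placeholder site).
import requests
from bs4 import BeautifulSoup

ARCHIVE_URL = "https://example.com/archive/1998/05"   # hypothetical archive page

response = requests.get(ARCHIVE_URL, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
for link in soup.select("a.article-title"):           # placeholder CSS selector
    title = link.get_text(strip=True)
    href = link.get("href")
    print(title, href)
```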

The Third Step: Analysis

There is often an assumption among novice researchers that once you have collected your data, you are ready to write about what you have found. Actually, you haven’t yet found anything, and if you try to write up your results, you will probably be staring sadly at a blank page. Between the collection and the writing comes the difficult task of systematically and repeatedly reviewing the data in search of patterns and themes that will help you interpret the data, particularly its communicative aspect (e.g., What is it that is being communicated here, with these “house signs” or in the pages of Men’s Health ?).

The first time you go through the data, keep an open mind on what you are seeing (or hearing), and take notes about your observations that link up to your research question. In the beginning, it can be difficult to know what is relevant and what is extraneous. Sometimes, your research question changes based on what emerges from the data. Use the first round of review to consider this possibility, but then commit yourself to following a particular focus or path. If you are looking at how gender gets made or re-created, don’t follow the white rabbit down a hole about environmental injustice unless you decide that this really should be the focus of your study or that issues of environmental injustice are linked to gender presentation. In the second round of review, be very clear about emerging themes and patterns. Create codes (more on these in chapters 18 and 19) that will help you simplify what you are noticing. For example, “men as outdoorsy” might be a common trope you see in advertisements. Whenever you see this, mark the passage or picture. In your third (or fourth or fifth) round of review, begin to link up the tropes you’ve identified, looking for particular patterns and assumptions. You’ve drilled down to the details, and now you are building back up to figure out what they all mean. Start thinking about theory—either theories you have read about and are using as a frame of your study (e.g., gender as performance theory) or theories you are building yourself, as in the Grounded Theory tradition. Once you have a good idea of what is being communicated and how, go back to the data at least one more time to look for disconfirming evidence. Maybe you thought “men as outdoorsy” was of importance, but when you look hard, you note that women are presented as outdoorsy just as often. You just hadn’t paid attention. It is very important, as any kind of researcher but particularly as a qualitative researcher, to test yourself and your emerging interpretations in this way.

The Fourth and Final Step: The Write-Up

Only after you have fully completed analysis, with its many rounds of review and analysis, will you be able to write about what you found. The interpretation exists not in the data but in your analysis of the data. Before writing your results, you will want to very clearly describe how you chose the data here and all the possible limitations of this data (e.g., historical-trace problem or power problem; see chapter 16). Acknowledge any limitations of your sample. Describe the audience for the content, and discuss the implications of this. Once you have done all of this, you can put forth your interpretation of the communication of the content, linking to theory where doing so would help your readers understand your findings and what they mean more generally for our understanding of how the social world works. [3]

Analyzing Content: Helpful Hints and Pointers

Although every data set is unique and each researcher will have a different and unique research question to address with that data set, there are some common practices and conventions. When reviewing your data, what do you look at exactly? How will you know if you have seen a pattern? How do you note or mark your data?

Let’s start with the last question first. If your data is stored digitally, there are various ways you can highlight or mark up passages. You can, of course, do this with literal highlighters, pens, and pencils if you have print copies. But there are also qualitative software programs to help you store the data, retrieve the data, and mark the data. This can simplify the process, although it cannot do the work of analysis for you.

Qualitative software can be very expensive, so the first thing to do is to find out if your institution (or program) has a universal license its students can use. If they do not, most programs have special student licenses that are less expensive. The two most used programs at this moment are probably ATLAS.ti and NVivo. Both can cost more than $500 [4] but provide everything you could possibly need for storing data, content analysis, and coding. They also have a lot of customer support, and you can find many official and unofficial tutorials on how to use the programs’ features on the web. Dedoose, created by academic researchers at UCLA, is a decent program that lacks many of the bells and whistles of the two big programs. Instead of paying all at once, you pay monthly, as you use the program. The monthly fee is relatively affordable (less than $15), so this might be a good option for a small project. HyperRESEARCH is another basic program created by academic researchers, and it is free for small projects (those that have limited cases and material to import). You can pay a monthly fee if your project expands past the free limits. I have personally used all four of these programs, and they each have their pluses and minuses.

Regardless of which program you choose, you should know that none of them will actually do the hard work of analysis for you. They are incredibly useful for helping you store and organize your data, and they provide abundant tools for marking, comparing, and coding your data so you can make sense of it. But making sense of it will always be your job alone.

So let’s say you have some software, and you have uploaded all of your content into the program: video clips, photographs, transcripts of news stories, articles from magazines, even digital copies of college scrapbooks. Now what do you do? What are you looking for? How do you see a pattern? The answers to these questions will depend partially on the particular research question you have, or at least the motivation behind your research. Let’s go back to the idea of looking at gender presentations in magazines from the 1950s to the 1970s. Here are some things you can look at and code in the content: (1) actions and behaviors, (2) events or conditions, (3) activities, (4) strategies and tactics, (5) states or general conditions, (6) meanings or symbols, (7) relationships/interactions, (8) consequences, and (9) settings. Table 17.1 lists these with examples from our gender presentation study.

Table 17.1. Examples of What to Note During Content Analysis

What can be noted/coded | Example from Gender Presentation Study
Actions and behaviors
Events or conditions
Activities
Strategies and tactics
States/conditions
Meanings/symbols
Relationships/interactions
Consequences
Settings

One thing to note about the examples in table 17.1: sometimes we note (mark, record, code) a single example, while other times, as in “settings,” we are recording a recurrent pattern. To help you spot patterns, it is useful to mark every setting, including a notation on gender. Using software can help you do this efficiently. You can then call up “setting by gender” and note this emerging pattern. There’s an element of counting here, which we normally think of as quantitative data analysis, but we are using the count to identify a pattern that will be used to help us interpret the communication. Content analyses often include counting as part of the interpretive (qualitative) process.
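A quick sketch of what the "setting by gender" tally might look like in code, using invented codes; the count is only there to make a qualitative pattern easier to see, not to replace interpretation.

```python
# Cross-tabulate coded settings by the gender presented (hypothetical codes).
from collections import Counter

# Each tuple is one coded advertisement: (gender presented, setting coded).
coded_ads = [
    ("man", "outdoors"), ("man", "office"), ("man", "outdoors"),
    ("woman", "kitchen"), ("woman", "kitchen"), ("woman", "outdoors"),
]

setting_by_gender = Counter(coded_ads)
for (gender, setting), n in sorted(setting_by_gender.items()):
    print(f"{gender:6s} {setting:10s} {n}")
```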

In your own study, you may not need or want to look at all of the elements listed in table 17.1. Even in our imagined example, some are more useful than others. For example, “strategies and tactics” is a bit of a stretch here. In studies that are looking specifically at, say, policy implementation or social movements, this category will prove much more salient.

Another way to think about “what to look at” is to consider aspects of your content in terms of units of analysis. You can drill down to the specific words used (e.g., the adjectives commonly used to describe “men” and “women” in your magazine sample) or move up to the more abstract level of concepts used (e.g., the idea that men are more rational than women). Counting for the purpose of identifying patterns is particularly useful here. How many times is that idea of women’s irrationality communicated? How is it communicated (in comic strips, fictional stories, editorials, etc.)? Does the incidence of the concept change over time? Perhaps the “irrational woman” was everywhere in the 1950s, but by the 1970s, it is no longer showing up in stories and comics. By tracing its usage and prevalence over time, you might come up with a theory or story about gender presentation during the period. Table 17.2 provides more examples of using different units of analysis for this work along with suggestions for effective use.

Table 17.2. Examples of Unit of Analysis in Content Analysis

Unit of Analysis | How Used
Words
Themes
Characters
Paragraphs
Items
Concepts
Semantics

Every qualitative content analysis is unique in its particular focus and particular data used, so there is no single correct way to approach analysis. You should now have a better idea, however, of what kinds of things to look for and how to go about looking for them. The next two chapters will take you further into the coding process, the primary analytical tool for qualitative research in general.

Further Readings

Cidell, Julie. 2010. “Content Clouds as Exploratory Qualitative Data Analysis.” Area 42(4):514–523. A demonstration of using visual “content clouds” as a form of exploratory qualitative data analysis using transcripts of public meetings and content of newspaper articles.

Hsieh, Hsiu-Fang, and Sarah E. Shannon. 2005. “Three Approaches to Qualitative Content Analysis.” Qualitative Health Research 15(9):1277–1288. Distinguishes three distinct approaches to QCA: conventional, directed, and summative. Uses hypothetical examples from end-of-life care research.

Jackson, Romeo, Alex C. Lange, and Antonio Duran. 2021. “A Whitened Rainbow: The In/Visibility of Race and Racism in LGBTQ Higher Education Scholarship.” Journal Committed to Social Change on Race and Ethnicity (JCSCORE) 7(2):174–206.* Using a “critical summative content analysis” approach, examines research published on LGBTQ people between 2009 and 2019.

Krippendorff, Klaus. 2018. Content Analysis: An Introduction to Its Methodology . 4th ed. Thousand Oaks, CA: SAGE. A very comprehensive textbook on both quantitative and qualitative forms of content analysis.

Mayring, Philipp. 2022. Qualitative Content Analysis: A Step-by-Step Guide . Thousand Oaks, CA: SAGE. Formulates an eight-step approach to QCA.

Messinger, Adam M. 2012. “Teaching Content Analysis through ‘Harry Potter.’” Teaching Sociology 40(4):360–367. This is a fun example of a relatively brief foray into content analysis using the music found in Harry Potter films.

Neuendorf, Kimberly A. 2002. The Content Analysis Guidebook. Thousand Oaks, CA: SAGE. Although a helpful guide to content analysis in general, be warned that this textbook definitely favors quantitative over qualitative approaches to content analysis.

Schreier, Margrit. 2012. Qualitative Content Analysis in Practice. Thousand Oaks, CA: SAGE. Arguably the most accessible guidebook for QCA, written by a professor based in Germany.

Weber, Matthew A., Shannon Caplan, Paul Ringold, and Karen Blocksom. 2017. “Rivers and Streams in the Media: A Content Analysis of Ecosystem Services.” Ecology and Society 22(3).* Examines the content of a blog hosted by National Geographic and articles published in The New York Times and the Wall Street Journal for stories on rivers and streams (e.g., water quality, flooding).

  • There are ways of handling content analysis quantitatively, however. Some practitioners therefore specify qualitative content analysis (QCA). In this chapter, all content analysis is QCA unless otherwise noted.
  • Note that some qualitative software allows you to upload whole films or film clips for coding. You will still have to get access to the film, of course.
  • See chapter 20 for more on the final presentation of research.
  • Actually, ATLAS.ti is an annual license, while NVivo is a perpetual license, but both are going to cost you at least $500 to use. Student rates may be lower. And don’t forget to ask your institution or program if they already have a software license you can use.

A method of both data collection and data analysis in which a given content (textual, visual, graphic) is examined systematically and rigorously to identify meanings, themes, patterns and assumptions.  Qualitative content analysis (QCA) is concerned with gathering and interpreting an existing body of material.    

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process

Abstract

Qualitative content analysis consists of conventional, directed and summative approaches for data analysis. They are used for the provision of descriptive knowledge and understanding of the phenomenon under study. However, the method underpinning directed qualitative content analysis is insufficiently delineated in the international literature. This paper aims to describe and integrate the process of data analysis in directed qualitative content analysis. Various international databases were used to retrieve articles related to directed qualitative content analysis. A review of the literature led to the integration and elaboration of a stepwise method of data analysis for directed qualitative content analysis. The proposed 16-step method of data analysis in this paper is a detailed description of the analytical steps to be taken in directed qualitative content analysis, covering the current gap of knowledge in the international literature regarding the practical process of qualitative data analysis. An example of “the resuscitation team members' motivation for cardiopulmonary resuscitation” based on Victor Vroom's expectancy theory is also presented. The directed qualitative content analysis method proposed in this paper is a reliable, transparent and comprehensive method for qualitative researchers. It can increase the rigour of qualitative data analysis, make the comparison of the findings of different studies possible and yield practical results.

Introduction

Qualitative content analysis (QCA) is a research approach for the description and interpretation of textual data using the systematic process of coding. The final product of data analysis is the identification of categories, themes and patterns ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ; Zhang and Wildemuth, 2009 ). Researchers in the field of healthcare commonly use QCA for data analysis ( Berelson, 1952 ). QCA has been described and used in the first half of the 20th century ( Schreier, 2014 ). The focus of QCA is the development of knowledge and understanding of the study phenomenon. QCA, as the application of language and contextual clues for making meanings in the communication process, requires a close review of the content gleaned from conducting interviews or observations ( Downe-Wamboldt, 1992 ; Hsieh and Shannon, 2005 ).

QCA is classified into conventional (inductive), directed (deductive) and summative methods (Hsieh and Shannon, 2005; Mayring, 2000, 2014). Inductive QCA, as the most popular approach in data analysis, helps with the development of theories, schematic models or conceptual frameworks (Elo and Kyngäs, 2008; Graneheim and Lundman, 2004; Vaismoradi et al., 2013, 2016), which should be refined, tested or further developed by using directed QCA (Elo and Kyngäs, 2008). Directed QCA is a common method of data analysis in healthcare research (Elo and Kyngäs, 2008), but insufficient knowledge is available about how this method is applied (Elo and Kyngäs, 2008; Hsieh and Shannon, 2005). This may hamper the use of directed QCA by novice qualitative researchers and account for a low application of this method compared with the inductive method (Elo and Kyngäs, 2008; Mayring, 2000). Therefore, this paper aims to describe and integrate methods applied in directed QCA.

International databases such as PubMed (including Medline), Scopus, Web of Science and ScienceDirect were searched for the retrieval of papers related to QCA and directed QCA. Use of keywords such as ‘directed content analysis’, ‘deductive content analysis’ and ‘qualitative content analysis’ led to 13,738 potentially eligible papers. Applying inclusion criteria such as ‘focused on directed qualitative content analysis’ and ‘published in peer-reviewed journals’, and removing duplicates, resulted in 30 papers. However, only two of these papers dealt with the description of directed QCA in terms of the methodological process. Ancestry and manual searches within these 30 papers revealed the pioneers of the description of this method in the international literature. A further search for papers published by the method's pioneers led to four more papers and one monograph dealing with directed QCA (Figure 1).

Figure 1. The search strategy for the identification of papers.

Finally, the authors of this paper integrated and elaborated a comprehensive and stepwise method of directed QCA based on the commonalities of methods discussed in the included papers. Also, the experiences of the current authors in the field of qualitative research were incorporated into the suggested stepwise method of data analysis for directed QCA ( Table 1 ).

Table 1. The suggested steps for directed content analysis.

Preparation phase
 1. Acquiring the necessary general skills
 2. Selecting the appropriate sampling strategy (inferred by the authors of the present paper)
 3. Deciding on the analysis of manifest and/or latent content
 4. Developing an interview guide (inferred by the authors of the present paper)
 5. Conducting and transcribing interviews
 6. Specifying the unit of analysis
 7. Being immersed in data

Organisation phase
 8. Developing a formative categorisation matrix (inferred by the authors of the present paper)
 9. Theoretically defining the main categories and subcategories
 10. Determining coding rules for main categories
 11. Pre-testing the categorisation matrix (inferred by the authors of the present paper)
 12. Choosing and specifying the anchor samples for each main category
 13. Performing the main data analysis
 14. Inductive abstraction of main categories from preliminary codes
 15. Establishment of links between generic categories and main categories (suggested by the authors of the present paper)

Reporting phase
 16. Reporting all steps of directed content analysis and findings

While the included papers about directed QCA were the most cited ones in the international literature, none of them provided sufficient detail with regard to how to conduct the data analysis process. This might hamper the use of this method by novice qualitative researchers and hinder its application by nurse researchers compared with inductive QCA. As can be seen in Figure 1, the search resulted in five articles that explain the directed QCA method. The following is a description of these articles, along with their strengths and weaknesses. The authors used these strengths in their suggested method, as shown in Table 1.

The methods suggested for directed QCA in the international literature

The method suggested by Hsieh and Shannon (2005)

Hsieh and Shannon (2005) developed two strategies for conducting directed QCA. The first strategy consists of reading textual data and highlighting those parts of the text that, on first impression, appeared to be related to the predetermined codes dictated by a theory or prior research findings. Next, the highlighted texts would be coded using the predetermined codes.

As for the second strategy, the only difference lay in starting the coding process without first highlighting the text. In both analysis strategies, the qualitative researcher should return to the text and perform reanalysis after the initial coding process (Hsieh and Shannon, 2005). However, the current authors believe that this second strategy provides an opportunity for recognising missing texts related to the predetermined codes and also newly emerged ones. It also enhances the trustworthiness of findings.

As an important part of the method suggested by Hsieh and Shannon (2005) , the term ‘code’ was used for the different levels of abstraction, but a more precise definition of this term seems to be crucial. For instance, they stated that ‘data that cannot be coded are identified and analyzed later to determine if they represent a new category or a subcategory of an existing code’ (2005: 1282).

It seems that the first ‘code’ in the above sentence indicates the lowest level of abstraction that could be achieved instantly from raw data. However, the ‘code’ at the end of the sentence refers to a higher level of abstraction, because it denotes a category or subcategory.

Furthermore, the interchangeable and inconsistent use of the words ‘predetermined code’ and ‘category’ could be confusing to novice qualitative researchers. Moreover, Hsieh and Shannon (2005) did not specify exactly which parts of the text, whether highlighted, coded or the whole text, should be considered during the reanalysis of the text after initial coding process. Such a lack of specification runs the risk of missing the content during the initial coding process, especially if the second review of the text is restricted to highlighted sections. One final important omission in this method is the lack of an explicit description of the process through which new codes emerge during the reanalysis of the text. Such a clarification is crucial, because the detection of subtle links between newly emerging codes and the predetermined ones is not straightforward.

The method suggested by Elo and Kyngäs (2008)

Elo and Kyngäs (2008) suggested ‘structured’ and ‘unconstrained’ methods or paths for directed QCA. Accordingly, after determining the ‘categorisation matrix’ as the framework for data collection and analysis during the study process, the whole content would be reviewed and coded. The use of the unconstrained matrix allows the development of some categories inductively by using the steps of ‘grouping’, ‘categorisation’ and ‘abstraction’. The use of a structured method requires a structured matrix upon which data are strictly coded. Hypotheses suggested by previous studies often are tested using this method ( Elo and Kyngäs, 2008 ).

The current authors believe that the label of ‘data gathering by the content’ (p. 110) in the unconstrained matrix path can be misleading. It refers to the data coding step rather than data collection. Also, in the description of the structured path there is an obvious discrepancy with regard to the selection of the portions of the content that fit or do not fit the matrix: ‘… if the matrix is structured, only aspects that fit the matrix of analysis are chosen from the data …’; ‘… when using a structured matrix of analysis, it is possible to choose either only the aspects from the data that fit the categorization frame or, alternatively, to choose those that do not’ ( Elo and Kyngäs, 2008 : 111–112).

Figure 1 in Elo and Kyngäs's paper ( 2008 : 110) clearly distinguished between the structured and unconstrained paths. On the other hand, the first sentence in the above quotation clearly explained the use of the structured matrix, but it was not clear whether the second sentence referred to the use of the structured or unconstrained matrix.

The method suggested by Zhang and Wildemuth (2009)

Considering the method suggested by Hsieh and Shannon (2005), Zhang and Wildemuth (2009) suggested an eight-step method as follows: (1) preparation of data, (2) definition of the unit of analysis, (3) development of categories and the coding scheme, (4) testing the coding scheme in a text sample, (5) coding the whole text, (6) assessment of the coding's consistency, (7) drawing conclusions from the coded data, and (8) reporting the methods and findings (Zhang and Wildemuth, 2009). Only in the third step of this method, the description of the process of category development, did Zhang and Wildemuth (2009) briefly make a distinction between the inductive and deductive content analysis methods. On first impression, the only difference between the two approaches seems to be the origin from which categories are developed. In addition, the process of connecting the preliminary codes extracted from raw data with the predetermined categories is not clearly described. Furthermore, it is not clear whether this linking should be established from categories to primary codes, or vice versa.

The method suggested by Mayring (2000, 2014)

Mayring (2000, 2014) suggested a seven-step method for directed QCA that distinctively differentiated between inductive and deductive methods as follows: (1) determination of the research question and theoretical background, (2) definition of the category system, such as main categories and subcategories, based on previous theory and research, (3) establishing a guideline for coding, considering definitions, anchor examples and coding rules, (4) reading the whole text, determining preliminary codes, and adding anchor examples and coding rules, (5) revision of the category and coding guideline after working through 10–50% of the data, (6) reworking data if needed, or listing the final category, and (7) analysing and interpreting based on the category frequencies and contingencies.

Mayring suggested that coding rules should be defined to distinctly assign parts of the text to a particular category. Furthermore, for describing each category, it was recommended to indicate which concrete parts of the text serve as typical examples of that category, also known as ‘anchor samples’ (Mayring, 2000, 2014). The current authors believe that these suggestions help clarify directed QCA and enhance its trustworthiness.

However, when the term ‘preliminary coding’ was used, Mayring (2000, 2014) did not make clear whether these codes are inductively or deductively created. In addition, Mayring was inclined to apply the quantitative approach implicitly in steps 5 and 7, which is incongruent with the qualitative paradigm. Furthermore, nothing was stated about the possibility of the development of new categories from the textual material: ‘… theoretical considerations can lead to a further categories or rephrasing of categories from previous studies, but the categories are not developed out of the text material like in inductive category formation …’ (Mayring, 2014: 97).

Integration and clarification of methods for directed QCA

Directed QCA took different paths when the categorisation matrix contained concepts with higher-level versus lower-level abstractions. In matrices with low abstraction levels, linking raw data to predetermined categories was not difficult, and suggested methods in international nursing literature seem appropriate and helpful. For instance, Elo and Kyngäs (2008) introduced ‘mental well-being threats’ based on the categories of ‘dependence’, ‘worries’, ‘sadness’ and ‘guilt’. Hsieh and Shannon (2005) developed the categories of ‘denial’, ‘anger’, ‘bargaining’, ‘depression’ and ‘acceptance’ when elucidating the stages of grief. Therefore, the low-level abstractions easily could link raw data to categories. The predicament of directed QCA began when the categorisation matrix contained the concepts with high levels of abstraction. The gap regarding how to connect the highly abstracted categories to the raw data should be bridged by using a transparent and comprehensive analysis strategy. Therefore, the authors of this paper integrated the methods of directed QCA outlined in the international literature and elaborated them using the phases of ‘preparation’, ‘organization’ and ‘reporting’ proposed by Elo and Kyngäs (2008) . Also, the experiences of the current authors in the field of qualitative research were incorporated into their suggested stepwise method of data analysis. The method was presented using the example of the “team members’ motivation for cardiopulmonary resuscitation (CPR)” based on Victor Vroom's expectancy theory ( Assarroudi et al., 2017 ). In this example, interview transcriptions were considered as the unit of analysis, because interviews are the most common method of data collection in qualitative studies ( Gill et al., 2008 ).

Suggested method of directed QCA by the authors of this paper

This method consists of 16 steps and three phases, described below: preparation phase (steps 1–7), organisation phase (steps 8–15), and reporting phase (step 16).

The preparation phase:

  • The acquisition of general skills . In the first step, qualitative researchers should develop skills including self-critical thinking, analytical abilities, continuous self-reflection, sensitive interpretive skills, creative thinking, scientific writing, data gathering and self-scrutiny ( Elo et al., 2014 ). Furthermore, they should attain sufficient scientific and content-based mastery of the method chosen for directed QCA. In the proposed example, qualitative researchers can achieve this mastery through conducting investigations in original sources related to Victor Vroom's expectancy theory. Main categories pertaining to Victor Vroom's expectancy theory were ‘expectancy’, ‘instrumentality’ and ‘valence’. This theory defined ‘expectancy’ as the perceived probability that efforts could lead to good performance. ‘Instrumentality’ was the perceived probability that good performance led to desired outcomes. ‘Valence’ was the value that the individual personally placed on outcomes ( Vroom, 1964 , 2005 ).
  • Selection of the appropriate sampling strategy . Qualitative researchers need to select proper sampling strategies that facilitate access to key informants on the study phenomenon (Elo et al., 2014). Sampling methods such as purposive, snowball and convenience sampling (Coyne, 1997) can be used with the consideration of maximum variation in terms of socio-demographic and phenomenal characteristics (Sandelowski, 1995). The sampling process ends when information ‘redundancy’ or ‘saturation’ is reached; in other words, it ends when all aspects of the phenomenon under study have been explored in detail and no additional data are revealed in subsequent interviews (Cleary et al., 2014). In line with this example, nurses and physicians who are members of the CPR team should be selected, given diversity in variables including age, gender, duration of work, number of CPR procedures, CPR in different patient groups and motivation levels for CPR.
  • Deciding on the analysis of manifest and/or latent content . Qualitative researchers decide whether the manifest and/or latent contents should be considered for analysis based on the study's aim. The manifest content is limited to the transcribed interview text, but latent content includes both the researchers' interpretations of available text, and participants' silences, pauses, sighs, laughter, posture, etc. ( Elo and Kyngäs, 2008 ). Both types of content are recommended to be considered for data analysis, because a deep understanding of data is preferred for directed QCA ( Thomas and Magilvy, 2011 ).
  • Developing an interview guide . The interview guide contains open-ended questions based on the study's aims, followed by directed questions about main categories extracted from the existing theory or previous research ( Hsieh and Shannon, 2005 ). Directed questions guide how to conduct interviews when using directed or conventional methods. The following open-ended and directed questions were used in this example: An open-ended question was ‘What is in your mind when you are called for performing CPR?’ The directed question for the main category of ‘expectancy’ could be ‘How does the expectancy of the successful CPR procedure motivate you to resuscitate patients?’
  • Conducting and transcribing interviews . An interview guide is used to conduct interviews for directed QCA. After each interview session, the entire interview is transcribed verbatim immediately ( Poland, 1995 ) and with utmost care ( Seidman, 2013 ). Two recorders should be used to ensure data backup ( DiCicco-Bloom and Crabtree, 2006 ). (For more details concerning skills required for conducting successful qualitative interviews, see Edenborough, 2002 ; Kramer, 2011 ; Schostak, 2005 ; Seidman, 2013 ).
  • Specifying the unit of analysis . The unit of analysis may include the person, a program, an organisation, a class, community, a state, a country, an interview, or a diary written by the researchers (Graneheim and Lundman, 2004). The transcriptions of interviews are usually considered the units of analysis when data are collected using interviews. In this example, interview transcriptions and field notes are considered the units of analysis.
  • Immersion in data . The transcribed interviews are read and reviewed several times with the consideration of the following questions: ‘Who is telling?’, ‘Where is this happening?’, ‘When did it happen?’, ‘What is happening?’, and ‘Why?’ ( Elo and Kyngäs, 2008 ). These questions help researchers get immersed in data and become able to extract related meanings ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ).

The organisation phase:

Table 2. The categorisation matrix of the team members' motivation for CPR.

Motivation for CPR: Expectancy | Instrumentality | Valence | Other inductively emerged categories

CPR: cardiopulmonary resuscitation.

  • Theoretical definition of the main categories and subcategories . Derived from the existing theory or previous research, the theoretical definitions of categories should be accurate and objective ( Mayring, 2000 , 2014 ). In this example, ‘expectancy’ as a main category could be defined as the “subjective probability that the efforts by an individual led to an acceptable level of performance (effort–performance association) or to the desired outcome (effort–outcome association)” ( Van Eerde and Thierry, 1996 ; Vroom, 1964 ).
  • Determining the coding rules for the main categories . Based on the theoretical definition, explicit coding rules are formulated for each main category ( Mayring, 2014 ). In this example, the coding rules for ‘expectancy’ were as follows:
  • – Expectancy in CPR is a subjective probability formed in the rescuer's mind.
  • – This subjective probability relates to the effort–performance or effort–outcome association as perceived by the rescuer.
  • The pre-testing of the categorisation matrix . The categorisation matrix should be tested in a pilot study. This is an essential step, particularly if more than one researcher is involved in the coding process. In this step, the researchers independently and tentatively code the text, then discuss the difficulties in using the categorisation matrix and any differences in their interpretations of the unit of analysis. The categorisation matrix may be further modified as a result of such discussions ( Elo et al., 2014 ). This can also increase inter-coder reliability ( Vaismoradi et al., 2013 ) and the trustworthiness of the study (a short sketch of quantifying such inter-coder agreement follows the next step).
  • Choosing and specifying the anchor samples for each main category . An anchor sample is an explicit and concise exemplification, or the identifier of a main category, selected from meaning units ( Mayring, 2014 ). An anchor sample for ‘expectancy’ as the main category of this example could be as follows: ‘… the patient with advanced metastatic cancer who requires CPR … I do not envision a successful resuscitation for him.’
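To make the pre-test concrete, the agreement between two coders on the same set of meaning units can be quantified with a chance-corrected statistic such as Cohen's kappa. The following Python sketch is a minimal illustration; the category labels are taken from the example categorisation matrix, while the coded values themselves are hypothetical:

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders on the same units."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical pilot coding of ten meaning units against the matrix categories
coder_1 = ["expectancy", "valence", "expectancy", "instrumentality", "valence",
           "expectancy", "other", "instrumentality", "expectancy", "valence"]
coder_2 = ["expectancy", "valence", "instrumentality", "instrumentality", "valence",
           "expectancy", "other", "instrumentality", "expectancy", "expectancy"]

print(f"Cohen's kappa: {cohen_kappa(coder_1, coder_2):.2f}")  # 0.72 for these data
```

A low value at this stage is a prompt to revisit the theoretical definitions and coding rules rather than a verdict on the coders.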

An example of steps taken for the abstraction of the phenomenon of expectancy (main category). The table columns run: Meaning unit → Summarised meaning unit → Preliminary code → Group of codes → Subcategory → Generic category → Main category.

  • Meaning unit: ‘The patient with advanced heart failure: I do not envisage a successful resuscitation for him’ → Summarised meaning unit: No expectation for the resuscitation of those with advanced heart failure → Preliminary code: Cardiovascular conditions that decrease the chance of successful resuscitation.
  • Meaning unit: ‘Patients are rarely resuscitated, especially those who experience a cardiogenic shock following a heart attack’ → Summarised meaning unit: Low possibility of resuscitation of patients with a cardiogenic shock → Preliminary code: Cardiovascular conditions that decrease the chance of successful resuscitation.
  • Meaning unit: ‘When ventricular fibrillation is likely, a chance of resuscitation still exists even after performing CPR for 30 minutes’ → Summarised meaning unit: The higher chance of resuscitation among patients with ventricular fibrillation → Preliminary code: Cardiovascular conditions that increase the chance of successful resuscitation.
  • Meaning unit: ‘Patients with sudden cardiac arrest are more likely to be resuscitated through CPR’ → Summarised meaning unit: The higher chance of resuscitation among patients with sudden cardiac arrest → Preliminary code: Cardiovascular conditions that increase the chance of successful resuscitation.

These preliminary codes were grouped under ‘Estimation of the functional capacity of vital organs’ (group of codes), subsumed under the subcategory ‘Scientific estimation of life capacity’ and the generic category ‘Estimation of the chances of successful CPR’, which was linked to the main category ‘Expectancy’. The table also lists further groups of codes and subcategories contributing to this abstraction: Estimation of the severity of the patient's complications; Estimation of remaining life span; Intuitive estimation of the chances of successful resuscitation; Uncertainty in the estimation; Time considerations in resuscitation; and Estimation of self-efficacy.

CPR: cardiopulmonary resuscitation

  • The inductive abstraction of main categories from preliminary codes . Preliminary codes are grouped and categorised according to their meanings, similarities and differences. The products of this categorisation process are known as ‘generic categories’ ( Elo and Kyngäs, 2008 ) ( Table 3 ).
  • The establishment of links between generic categories and main categories . Constant comparison of the generic categories with the main categories develops a conceptual and logical link between them: generic categories are nested within the pre-existing main categories, or new main categories are created where no fit exists, as illustrated in the sketch below. The constant comparison technique is applied to data analysis throughout the study ( Zhang and Wildemuth, 2009 ) ( Table 3 ).
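As a rough illustration of what such nesting can look like when recorded in a structured form, the following Python sketch uses the main categories from the example matrix and one generic category from Table 3; the unmatched entry is a purely hypothetical placeholder:

```python
# Pre-existing main categories from the directed categorisation matrix
matrix = {"expectancy": [], "instrumentality": [], "valence": []}

# Generic categories from inductive abstraction, paired with the main category
# each was linked to through constant comparison (None = no acceptable fit)
generic_categories = [
    ("Estimation of the chances of successful CPR", "expectancy"),
    ("Perceived team support during resuscitation", None),  # hypothetical example
]

new_main_categories = []
for generic, linked_main in generic_categories:
    if linked_main in matrix:
        matrix[linked_main].append(generic)      # nest under an existing main category
    else:
        new_main_categories.append(generic)      # promote to a new main category

print(matrix)
print(new_main_categories)
```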

The reporting phase:

  • Reporting all steps of directed QCA and findings . This includes a detailed description of the data analysis process and the enumeration of findings ( Elo and Kyngäs, 2008 ). Findings should be systematically presented in such a way that the association between the raw data and the categorisation matrix is clearly shown and easily followed. Detailed descriptions of the sampling process, data collection, analysis methods and participants' characteristics should be presented. The trustworthiness criteria adopted along with the steps taken to fulfil them should also be outlined. Elo et al. (2014) developed a comprehensive and specific checklist for reporting QCA studies.

Trustworthiness

Multiple terms are used in the international literature regarding the validation of qualitative studies ( Creswell, 2013 ). The terms ‘validity’, ‘reliability’, and ‘generalizability’ in quantitative studies are equivalent to ‘credibility’, ‘dependability’, and ‘transferability’ in qualitative studies, respectively ( Polit and Beck, 2013 ). These terms, along with the additional concept of confirmability, were introduced by Lincoln and Guba (1985) . Polit and Beck added the term ‘authenticity’ to the list. Collectively, they are the different aspects of trustworthiness in all types of qualitative studies ( Polit and Beck, 2013 ).

To enhance the trustworthiness of a directed QCA study, researchers should thoroughly delineate the three phases of ‘preparation’, ‘organisation’ and ‘reporting’ ( Elo et al., 2014 ). Such delineation is needed to show in detail how categories are developed from the data ( Elo and Kyngäs, 2008 ; Graneheim and Lundman, 2004 ; Vaismoradi et al., 2016 ). To accomplish this, appendices, tables and figures may be used to depict the reduction process ( Elo and Kyngäs, 2008 ; Elo et al., 2014 ). Furthermore, an honest account of the different realities encountered during data analysis should be provided ( Polit and Beck, 2013 ). The authors of this paper believe that adopting this 16-step method can enhance the trustworthiness of directed QCA.

Directed QCA is used to validate, refine and/or extend a theory or theoretical framework in a new context ( Elo and Kyngäs, 2008 ; Hsieh and Shannon, 2005 ). The purpose of this paper is to provide a comprehensive, systematic, yet simple and applicable method for directed QCA to facilitate its use by novice qualitative researchers.

Despite current misconceptions about the simplicity of QCA and directed QCA, conducting them requires considerable methodological knowledge ( Elo and Kyngäs, 2008 ). Directed QCA is often performed on a considerable amount of textual data ( Pope et al., 2000 ). Nevertheless, few studies have discussed the multiple steps that need to be taken to conduct it. In this paper, we have integrated and elaborated the essential steps identified by international qualitative researchers on directed QCA, such as ‘preliminary coding’ and ‘theoretical definition’ ( Mayring, 2000 , 2014 ), ‘coding rule’ and ‘anchor sample’ ( Mayring, 2014 ), ‘inductive analysis in directed qualitative content analysis’ ( Elo and Kyngäs, 2008 ), and ‘pretesting the categorization matrix’ ( Elo et al., 2014 ). Moreover, the authors have added a detailed discussion of ‘the use of inductive abstraction’ and ‘linking between generic categories and main categories’.

The importance of directed QCA has increased with the growth of knowledge and theories derived from inductive QCA, and with the growing need to test these theories. The directed QCA method proposed in this paper is a reliable, transparent and comprehensive method that may increase the rigour of data analysis, allow the comparison of findings across studies, and yield practical results.

Abdolghader Assarroudi (PhD, MScN, BScN) is Assistant Professor in Nursing, Department of Medical‐Surgical Nursing, School of Nursing and Midwifery, Sabzevar University of Medical Sciences, Sabzevar, Iran. His main areas of research interest are qualitative research, instrument development study and cardiopulmonary resuscitation.

Fatemeh Heshmati Nabavi (PhD, MScN, BScN) is Assistant Professor in nursing, Department of Nursing Management, School of Nursing and Midwifery, Mashhad University of Medical Sciences, Mashhad, Iran. Her main areas of research interest are medical education, nursing management and qualitative study.

Mohammad Reza Armat (MScN, BScN) graduated from the Mashhad University of Medical Sciences in 1991 with a Bachelor of Science degree in nursing. He completed his Master of Science degree in nursing at Tarbiat Modarres University in 1995. He is an instructor in North Khorasan University of Medical Sciences, Bojnourd, Iran. Currently, he is a PhD candidate in nursing at the Mashhad School of Nursing and Midwifery, Mashhad University of Medical Sciences, Iran.

Abbas Ebadi (PhD, MScN, BScN) is professor in nursing, Behavioral Sciences Research Centre, School of Nursing, Baqiyatallah University of Medical Sciences, Tehran, Iran. His main areas of research interest are instrument development and qualitative study.

Mojtaba Vaismoradi (PhD, MScN, BScN) is a doctoral nurse researcher at the Faculty of Nursing and Health Sciences, Nord University, Bodø, Norway. He works in Nord’s research group ‘Healthcare Leadership’ under the supervision of Prof. Terese Bondas. The team currently focuses on conducting meta‐synthesis studies in collaboration with international qualitative research experts. His main areas of research interest are patient safety, elderly care and methodological issues in qualitative descriptive approaches. Mojtaba is an associate editor of BMC Nursing and of the journal SAGE Open in the UK.

Key points for policy, practice and/or research

  • In this paper, essential steps pointed to by international qualitative researchers in the field of directed qualitative content analysis were described and integrated.
  • A detailed discussion regarding the use of inductive abstraction, and linking between generic categories and main categories, was presented.
  • The 16-step method of directed qualitative content analysis proposed in this paper is reliable, transparent, comprehensive and systematic, yet simple and applicable. It can increase the rigour of data analysis and facilitate its use by novice qualitative researchers.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

The author(s) received no financial support for the research, authorship, and/or publication of this article.

  • Assarroudi A, Heshmati Nabavi F, Ebadi A, et al.(2017) Professional rescuers' experiences of motivation for cardiopulmonary resuscitation: A qualitative study . Nursing & Health Sciences . 19(2): 237–243. [ PubMed ] [ Google Scholar ]
  • Berelson B. (1952) Content Analysis in Communication Research , Glencoe, IL: Free Press. [ Google Scholar ]
  • Cleary M, Horsfall J, Hayter M. (2014) Data collection and sampling in qualitative research: Does size matter? Journal of Advanced Nursing 70 ( 3 ): 473–475. [ PubMed ] [ Google Scholar ]
  • Coyne IT. (1997) Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? Journal of Advanced Nursing 26 ( 3 ): 623–630. [ PubMed ] [ Google Scholar ]
  • Creswell JW. (2013) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches , 4th edn. Thousand Oaks, CA: SAGE Publications. [ Google Scholar ]
  • DiCicco-Bloom B, Crabtree BF. (2006) The qualitative research interview . Medical Education 40 ( 4 ): 314–321. [ PubMed ] [ Google Scholar ]
  • Downe-Wamboldt B. (1992) Content analysis: Method, applications, and issues . Health Care for Women International 13 ( 3 ): 313–321. [ PubMed ] [ Google Scholar ]
  • Edenborough R. (2002) Effective Interviewing: A Handbook of Skills and Techniques , 2nd edn. London: Kogan Page. [ Google Scholar ]
  • Elo S, Kyngäs H. (2008) The qualitative content analysis process . Journal of Advanced Nursing 62 ( 1 ): 107–115. [ PubMed ] [ Google Scholar ]
  • Elo S, Kääriäinen M, Kanste O, et al.(2014) Qualitative content analysis: A focus on trustworthiness . SAGE Open 4 ( 1 ): 1–10. [ Google Scholar ]
  • Gill P, Stewart K, Treasure E, et al.(2008) Methods of data collection in qualitative research: Interviews and focus groups . British Dental Journal 204 ( 6 ): 291–295. [ PubMed ] [ Google Scholar ]
  • Graneheim UH, Lundman B. (2004) Qualitative content analysis in nursing research: Concepts, procedures and measures to achieve trustworthiness . Nurse Education Today 24 ( 2 ): 105–112. [ PubMed ] [ Google Scholar ]
  • Hsieh H-F, Shannon SE. (2005) Three approaches to qualitative content analysis . Qualitative Health Research 15 ( 9 ): 1277–1288. [ PubMed ] [ Google Scholar ]
  • Kramer EP. (2011) 101 Successful Interviewing Strategies , Boston, MA: Course Technology, Cengage Learning. [ Google Scholar ]
  • Lincoln YS, Guba EG. (1985) Naturalistic Inquiry , Beverly Hills, CA: SAGE Publications. [ Google Scholar ]
  • Mayring P. (2000) Qualitative Content Analysis . Forum: Qualitative Social Research 1 ( 2 ): Available at: http://www.qualitative-research.net/fqs-texte/2-00/02-00mayring-e.htm (accessed 10 March 2005). [ Google Scholar ]
  • Mayring P. (2014) Qualitative content analysis: Theoretical foundation, basic procedures and software solution , Klagenfurt: Monograph. Available at: http://nbn-resolving.de/urn:nbn:de:0168-ssoar-395173 (accessed 10 May 2015). [ Google Scholar ]
  • Poland BD. (1995) Transcription quality as an aspect of rigor in qualitative research . Qualitative Inquiry 1 ( 3 ): 290–310. [ Google Scholar ]
  • Polit DF, Beck CT. (2013) Essentials of Nursing Research: Appraising Evidence for Nursing Practice , 7th edn. China: Lippincott Williams & Wilkins. [ Google Scholar ]
  • Pope C, Ziebland S, Mays N. (2000) Analysing qualitative data . BMJ 320 ( 7227 ): 114–116. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Sandelowski M. (1995) Sample size in qualitative research . Research in Nursing & Health 18 ( 2 ): 179–183. [ PubMed ] [ Google Scholar ]
  • Schostak J. (2005) Interviewing and Representation in Qualitative Research , London: McGraw-Hill/Open University Press. [ Google Scholar ]
  • Schreier M. (2014) Qualitative content analysis . In: Flick U. (ed.) The SAGE Handbook of Qualitative Data Analysis , Thousand Oaks, CA: SAGE Publications Ltd, pp. 170–183. [ Google Scholar ]
  • Seidman I. (2013) Interviewing as Qualitative Research: A Guide for Researchers in Education and the Social Sciences , 3rd edn. New York: Teachers College Press. [ Google Scholar ]
  • Thomas E, Magilvy JK. (2011) Qualitative rigor or research validity in qualitative research . Journal for Specialists in Pediatric Nursing 16 ( 2 ): 151–155. [ PubMed ] [ Google Scholar ]
  • Vaismoradi M, Jones J, Turunen H, et al.(2016) Theme development in qualitative content analysis and thematic analysis . Journal of Nursing Education and Practice 6 ( 5 ): 100–110. [ Google Scholar ]
  • Vaismoradi M, Turunen H, Bondas T. (2013) Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study . Nursing & Health Sciences 15 ( 3 ): 398–405. [ PubMed ] [ Google Scholar ]
  • Van Eerde W, Thierry H. (1996) Vroom's expectancy models and work-related criteria: A meta-analysis . Journal of Applied Psychology 81 ( 5 ): 575. [ Google Scholar ]
  • Vroom VH. (1964) Work and Motivation , New York: Wiley. [ Google Scholar ]
  • Vroom VH. (2005) On the origins of expectancy theory . In: Smith KG, Hitt MA. (eds) Great Minds in Management: The Process of Theory Development , Oxford: Oxford University Press, pp. 239–258. [ Google Scholar ]
  • Zhang Y, Wildemuth BM. (2009) Qualitative analysis of content . In: Wildemuth B. (ed.) Applications of Social Research Methods to Questions in Information and Library Science , Westport, CT: Libraries Unlimited, pp. 308–319. [ Google Scholar ]


18.5 Content analysis

Learning Objectives

Learners will be able to…

  • Explain defining features of content analysis as a strategy for analyzing qualitative data
  • Determine when content analysis can be most effectively used
  • Formulate an initial content analysis plan (if appropriate for your research proposal)

What are you trying to accomplish with content analysis?

Much like with thematic analysis, if you elect to use content analysis to analyze your qualitative data, you will be deconstructing the artifacts that you have sampled and looking for similarities across these deconstructed parts. Also consistent with thematic analysis, you will be seeking to bring together these similarities in the discussion of your findings to tell a collective story of what you learned across your data. While the distinction between thematic analysis and content analysis is somewhat murky, if you are looking to distinguish between the two, content analysis:

  • Places greater emphasis on determining the unit of analysis. Just to quickly distinguish: when we discussed sampling in Chapter 10 we also used the term “unit of analysis.” As a reminder, in sampling, the unit of analysis refers to the entity that a researcher wants to say something about at the end of her study (individual, group, or organization). However, for our purposes when we are conducting a content analysis, this term has to do with the ‘chunk’ or segment of data you will be looking at to reflect a particular idea. This may be a line, a paragraph, a section, an image or section of an image, a scene, etc., depending on the type of artifact you are dealing with and the level at which you want to subdivide it (a short sketch of segmenting a text into units follows this list).
  • Content analysis is also more adept at bringing together a variety of forms of artifacts in the same study. While other approaches can certainly accomplish this, content analysis more readily allows the researcher to deconstruct, label and compare different kinds of ‘content’. For example, perhaps you have developed a new advocacy training for community members. To evaluate your training you want to analyze a variety of products they create after the workshop, including written products (e.g. letters to their representatives, community newsletters), audio/visual products (e.g. interviews with leaders, photos hosted in a local art exhibit on the topic) and performance products (e.g. hosting town hall meetings, facilitating rallies). Content analysis can allow you the capacity to examine evidence across these different formats.
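To illustrate what this choice means in practice for a text artifact, here is a minimal Python sketch (the transcript string is hypothetical) that segments the same document at two different levels, paragraph and sentence:

```python
import re

transcript = (
    "I wrote to my representative about housing.\n\n"
    "The town hall meeting went better than expected. "
    "People asked how to follow up."
)

# Unit of analysis = paragraph (blank-line separated blocks)
paragraphs = [p.strip() for p in transcript.split("\n\n") if p.strip()]

# Unit of analysis = sentence (naive split on ., ! or ? followed by whitespace)
flat_text = transcript.replace("\n\n", " ")
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", flat_text) if s.strip()]

print(len(paragraphs), "paragraph-level units")   # 2
print(len(sentences), "sentence-level units")     # 3
```

Whichever level you choose, apply it consistently so that comparisons across artifacts remain meaningful.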

For some more in-depth discussion comparing these two approaches, including more philosophical differences between the two, check out this article by Vaismoradi, Turunen, and Bondas (2013) . [1]

Variations in the approach

There are also significant variations among different content analysis approaches. Some of these approaches are more concerned with quantifying (counting) how many times a code representing a specific concept or idea appears. These are more quantitative and deductive in nature. Other approaches look for codes to emerge from the data to help describe some idea or event. These are more qualitative and inductive. Hsieh and Shannon (2005) [2] describe three approaches to help understand some of these differences:

  • Conventional Content Analysis. Starting with a general idea or phenomenon you want to explore (for which there is limited data), coding categories then emerge from the raw data. These coding categories help us understand the different dimensions, patterns, and trends that may exist within the raw data collected in our research.
  • Directed Content Analysis. Starts with a theory or existing research for which you develop your initial codes (there is some existing research, but incomplete in some aspects) and uses these to guide your initial analysis of the raw data to flesh out a more detailed understanding of the codes and ultimately, the focus of your study.
  • Summative Content Analysis. Starts by examining how many times and where codes are showing up in your data, but then looks to develop an understanding or an “interpretation of the underlying context” (p.1277) for how they are being used. As you might have guessed, this approach is more likely to be used if you’re studying a topic that already has some existing research that forms a basic place to begin the analysis.

This is only one system of categorization for different approaches to content analysis. If you are interested in utilizing a content analysis for your proposal, you will want to design an approach that fits well with the aim of your research and will help you generate findings that will help to answer your research question(s). Make sure to keep this as your north star, guiding all aspects of your design.

Determining your codes

We are back to coding! As in thematic analysis, you will be coding your data (labeling smaller chunks of information within each data artifact of your sample). In content analysis, you may be using pre-determined codes, such as those suggested by an existing theory (deductive) or you may seek out emergent codes that you uncover as you begin reviewing your data (inductive). Regardless of which approach you take, you will want to develop a well-documented codebook.

A codebook is a document that outlines the list of codes you are using as you analyze your data, a descriptive definition of each of these codes, and any decision-rules that apply to your codes. A decision-rule provides information on how the researcher determines what code should be placed on an item, especially when codes may be similar in nature. If you are using a deductive approach, your codebook will largely be formed prior to analysis, whereas if you use an inductive approach, your codebook will be built over time. To help illustrate what this might look like, Figure 18.12 offers a brief excerpt of a codebook from one of the projects I’m currently working on.

Figure 18.12. Codebook excerpt: an Excel sheet labeled “codes after team meeting on 4/12/19, perceptions on ageing project”, with columns labeled “codes”, “descriptions”, and “decision rules”, and rows labeled “housing”, “health”, and “preparedness for ageing”.
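One low-tech way to keep such a codebook usable throughout the analysis is to store it as a small structured file next to your data. The sketch below (Python) reuses the three codes visible in the excerpt above; the definitions and decision rules are hypothetical stand-ins, not the actual codebook entries:

```python
import csv

codebook = [
    {"code": "housing",
     "description": "References to the participant's living situation or housing costs",
     "decision_rule": "Apply only when housing is discussed in relation to ageing"},
    {"code": "health",
     "description": "References to physical or mental health status or services",
     "decision_rule": "If both health and housing apply, code the dominant topic"},
    {"code": "preparedness for ageing",
     "description": "Statements about planning or feeling (un)prepared for growing older",
     "decision_rule": "Includes financial, social and emotional preparation"},
]

# Write the codebook to a CSV file that can be shared with the coding team
with open("codebook.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["code", "description", "decision_rule"])
    writer.writeheader()
    writer.writerows(codebook)
```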

Coding, comparing, counting

Once you have (or are developing) your codes, your next step will be to actually code your data. In most cases, you are looking for your coding structure (your list of codes) to have good coverage . This means that most of the content in your sample should have a code applied to it. If there are large segments of your data that are uncoded, you are potentially missing things. Now, do note that I said most of the content. There are instances when we are using artifacts that contain a lot of information, only some of which applies to what we are studying. In these instances, we obviously wouldn’t expect the same level of coverage with our codes. As you code, you may change, refine and adapt your codebook as you go through your data and compare the information that reflects each code. As you do this, keep your research journal handy and make sure to capture and record these changes so that you have a trail documenting the evolution of your analysis. Also, as suggested earlier, content analysis may involve some degree of counting as well. You may keep a tally of how many times a particular code is represented in your data, thereby offering your reader both a quantification of how many times (and across how many sources) a code was reflected and a narrative description of what that code came to mean.
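If you do include counts, a simple tally of how often each code was applied, and across how many sources, can be generated directly from your coded segments. A minimal Python sketch, assuming (hypothetically) that coded segments have been exported as (source, code) pairs:

```python
from collections import Counter, defaultdict

# Hypothetical coded segments exported as (source document, code applied)
coded_segments = [
    ("interview_01", "housing"), ("interview_01", "health"),
    ("interview_02", "health"), ("interview_02", "health"),
    ("interview_03", "preparedness for ageing"), ("interview_03", "housing"),
]

frequency = Counter(code for _, code in coded_segments)   # total applications per code
sources_per_code = defaultdict(set)
for source, code in coded_segments:
    sources_per_code[code].add(source)                    # breadth across sources

for code, count in frequency.most_common():
    print(f"{code}: {count} application(s) across {len(sources_per_code[code])} source(s)")
```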

Representing the findings from your coding scheme

Finally, you need to consider how you will represent the findings from your coding work. This may involve listing out narrative descriptions of codes, visual representations of what each code came to mean or how they related to each other, or a table that includes examples of how your data reflected different elements of your coding structure. However you choose to represent the findings of your content analysis, make sure the resulting product answers your research question and is readily understandable and easy-to-interpret for your audience.
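One way to assemble such a representation is to pull each code's definition, a representative excerpt and its count into a single summary table. A minimal Python sketch with hypothetical data, printing a plain-text table:

```python
# Hypothetical summary of a coding structure: code, definition, example excerpt, count
findings = [
    ("housing", "Living situation and housing costs", '"Rent takes most of my pension."', 14),
    ("health", "Physical and mental health", '"I manage, but the clinic is far away."', 22),
]

header = f"{'Code':<10} {'Definition':<36} {'Example excerpt':<42} {'n':>3}"
print(header)
print("-" * len(header))
for code, definition, excerpt, count in findings:
    print(f"{code:<10} {definition:<36} {excerpt:<42} {count:>3}")
```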

Key Takeaways

  • Much like thematic analysis, content analysis is concerned with breaking up qualitative data so that you can compare and contrast ideas as you look across all your data, collectively. A couple of distinctions between thematic and content analysis include content analysis’s emphasis on more clearly specifying the unit of analysis used for the purpose of analysis and the flexibility that content analysis offers in comparing across different types of data.
  • Coding involves both grouping data (after it has been deconstructed) and defining these codes (giving them meaning). If we are using a deductive approach to analysis, we will start with the code defined. If we are using an inductive approach, the code will not be defined until the end of the analysis.

Identify a qualitative research article that uses content analysis (do a quick search of “qualitative” and “content analysis” in your research search engine of choice).

  • How do the authors display their findings?
  • What was effective in their presentation?
  • What was ineffective in their presentation?

Resources for learning more about Content Analysis

Bengtsson, M. (2016). How to plan and perform a qualitative study using content analysis .

Colorado State University (n.d.) Writing@CSU Guide: Content analysis .

Columbia University Mailman School of Public Health, Population Health. (n.d.) Methods: Content analysis

Mayring, P. (2000, June). Qualitative content analysis .

A few exemplars of studies employing Content Analysis

Collins et al. (2018). Content analysis of advantages and disadvantages of drinking among individuals with the lived experience of homelessness and alcohol use disorders .

Corley, N. A., & Young, S. M. (2018). Is social work still racist? A content analysis of recent literature .

Deepak et al. (2016). Intersections between technology, engaged learning, and social capital in social work education .

  • Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & Health Sciences, 15 (3), 398-405. ↵
  • Hsieh, H. F., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15 (9), 1277-1288. ↵

An approach to data analysis that seeks to identify patterns, trends, or ideas across qualitative data through processes of coding and categorization.

entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

An approach to data analysis in which the researcher begins their analysis using a theory to see if their data fit within this theoretical framework (tests the theory).

An approach to data analysis in which we gather our data first and then generate a theory about its meaning through our analysis.

Part of the qualitative data analysis process where we begin to interpret and assign meaning to the data.

A document that we use to keep track of and define the codes that we have identified (or are using) in our qualitative data analysis.

A decision-rule provides information on how the researcher determines what code should be placed on an item, especially when codes may be similar in nature.

In qualitative data, coverage refers to the amount of data that can be categorized or sorted using the code structure that we are using (or have developed) in our study. With qualitative research, our aim is to have good coverage with our code structure.

Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.


How To Write The Results/Findings Chapter

For Qualitative Studies (Dissertations & Theses)

By: Jenna Crossley (PhD). Expert Reviewed By: Dr. Eunice Rautenbach | August 2021

So, you’ve collected and analysed your qualitative data, and it’s time to write up your results chapter. But where do you start? In this post, we’ll guide you through the qualitative results chapter (also called the findings chapter), step by step. 

Overview: Qualitative Results Chapter

  • What (exactly) the qualitative results chapter is
  • What to include in your results chapter
  • How to write up your results chapter
  • A few tips and tricks to help you along the way
  • Free results chapter template

What exactly is the results chapter?

The results chapter in a dissertation or thesis (or any formal academic research piece) is where you objectively and neutrally present the findings of your qualitative analysis (or analyses if you used multiple qualitative analysis methods ). This chapter can sometimes be combined with the discussion chapter (where you interpret the data and discuss its meaning), depending on your university’s preference.  We’ll treat the two chapters as separate, as that’s the most common approach.

In contrast to a quantitative results chapter that presents numbers and statistics, a qualitative results chapter presents data primarily in the form of words . But this doesn’t mean that a qualitative study can’t have quantitative elements – you could, for example, present the number of times a theme or topic pops up in your data, depending on the analysis method(s) you adopt.

Adding a quantitative element to your study can add some rigour, which strengthens your results by providing more evidence for your claims. This is particularly common when using qualitative content analysis. Keep in mind, though, that qualitative research aims to achieve depth and richness and to identify nuances , so don’t get tunnel vision by focusing on the numbers. They’re just the cream on top of a qualitative analysis.
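If you do fold in a quantitative element, one of the least intrusive options is to report, for each theme, how many participants (or transcripts) it appeared in rather than raw word counts. A minimal Python sketch, assuming a hypothetical mapping of transcripts to the themes coded in them:

```python
# Hypothetical record of which themes were coded in which transcripts
themes_by_transcript = {
    "participant_A": {"work-life balance", "career progression"},
    "participant_B": {"work-life balance"},
    "participant_C": {"career progression", "remote work"},
}

coverage = {}
for themes in themes_by_transcript.values():
    for theme in themes:
        coverage[theme] = coverage.get(theme, 0) + 1

total = len(themes_by_transcript)
for theme, n in sorted(coverage.items(), key=lambda item: -item[1]):
    print(f"{theme}: present in {n} of {total} transcripts")
```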

So, to recap, the results chapter is where you objectively present the findings of your analysis, without interpreting them (you’ll save that for the discussion chapter). With that out the way, let’s take a look at what you should include in your results chapter.


What should you include in the results chapter?

As we’ve mentioned, your qualitative results chapter should purely present and describe your results , not interpret them in relation to the existing literature or your research questions . Any speculations or discussion about the implications of your findings should be reserved for your discussion chapter.

In your results chapter, you’ll want to talk about your analysis findings and whether or not they support your hypotheses (if you have any). Naturally, the exact contents of your results chapter will depend on which qualitative analysis method (or methods) you use. For example, if you were to use thematic analysis, you’d detail the themes identified in your analysis, using extracts from the transcripts or text to support your claims.

While you do need to present your analysis findings in some detail, you should avoid dumping large amounts of raw data in this chapter. Instead, focus on presenting the key findings and using a handful of select quotes or text extracts to support each finding . The reams of data and analysis can be relegated to your appendices.

While it’s tempting to include every last detail you found in your qualitative analysis, it is important to make sure that you report only that which is relevant to your research aims, objectives and research questions .  Always keep these three components, as well as your hypotheses (if you have any) front of mind when writing the chapter and use them as a filter to decide what’s relevant and what’s not.


How do I write the results chapter?

Now that we’ve covered the basics, it’s time to look at how to structure your chapter. Broadly speaking, the results chapter needs to contain three core components – the introduction, the body and the concluding summary. Let’s take a look at each of these.

Section 1: Introduction

The first step is to craft a brief introduction to the chapter. This intro is vital as it provides some context for your findings. In your introduction, you should begin by reiterating your problem statement and research questions and highlight the purpose of your research . Make sure that you spell this out for the reader so that the rest of your chapter is well contextualised.

The next step is to briefly outline the structure of your results chapter. In other words, explain what’s included in the chapter and what the reader can expect. In the results chapter, you want to tell a story that is coherent, flows logically, and is easy to follow , so make sure that you plan your structure out well and convey that structure (at a high level), so that your reader is well oriented.

The introduction section shouldn’t be lengthy. Two or three short paragraphs should be more than adequate. It is merely an introduction and overview, not a summary of the chapter.

Pro Tip – To help you structure your chapter, it can be useful to set up an initial draft with (sub)section headings so that you’re able to easily (re)arrange parts of your chapter. This will also help your reader to follow your results and give your chapter some coherence.  Be sure to use level-based heading styles (e.g. Heading 1, 2, 3 styles) to help the reader differentiate between levels visually. You can find these options in Word (example below).

Heading styles in the results chapter

Section 2: Body

Before we get started on what to include in the body of your chapter, it’s vital to remember that a results section should be completely objective and descriptive, not interpretive . So, be careful not to use words such as, “suggests” or “implies”, as these usually accompany some form of interpretation – that’s reserved for your discussion chapter.

The structure of your body section is very important , so make sure that you plan it out well. When planning out your qualitative results chapter, create sections and subsections so that you can maintain the flow of the story you’re trying to tell. Be sure to systematically and consistently describe each portion of results. Try to adopt a standardised structure for each portion so that you achieve a high level of consistency throughout the chapter.

For qualitative studies, results chapters tend to be structured according to themes , which makes it easier for readers to follow. However, keep in mind that not all results chapters have to be structured in this manner. For example, if you’re conducting a longitudinal study, you may want to structure your chapter chronologically. Similarly, you might structure this chapter based on your theoretical framework . The exact structure of your chapter will depend on the nature of your study , especially your research questions.

As you work through the body of your chapter, make sure that you use quotes to substantiate every one of your claims . You can present these quotes in italics to differentiate them from your own words. A general rule of thumb is to use at least two pieces of evidence per claim, and these should be linked directly to your data. Also, remember that you need to include all relevant results , not just the ones that support your assumptions or initial leanings.

In addition to including quotes, you can also link your claims to the data by using appendices , which you should reference throughout your text. When you reference, make sure that you include both the name/number of the appendix , as well as the line(s) from which you drew your data.

As referencing styles can vary greatly, be sure to look up the appendix referencing conventions of your university’s prescribed style (e.g. APA , Harvard, etc) and keep this consistent throughout your chapter.

Section 3: Concluding summary

The concluding summary is very important because it summarises your key findings and lays the foundation for the discussion chapter . Keep in mind that some readers may skip directly to this section (from the introduction section), so make sure that it can be read and understood well in isolation.

In this section, you need to remind the reader of the key findings. That is, the results that directly relate to your research questions and that you will build upon in your discussion chapter. Remember, your reader has digested a lot of information in this chapter, so you need to use this section to remind them of the most important takeaways.

Importantly, the concluding summary should not present any new information and should only describe what you’ve already presented in your chapter. Keep it concise – you’re not summarising the whole chapter, just the essentials.

Tips for writing an A-grade results chapter

Now that you’ve got a clear picture of what the qualitative results chapter is all about, here are some quick tips and reminders to help you craft a high-quality chapter:

  • Your results chapter should be written in the past tense . You’ve done the work already, so you want to tell the reader what you found , not what you are currently finding .
  • Make sure that you review your work multiple times and check that every claim is adequately backed up by evidence . Aim for at least two examples per claim, and make use of an appendix to reference these.
  • When writing up your results, make sure that you stick to only what is relevant . Don’t waste time on data that are not relevant to your research objectives and research questions.
  • Use headings and subheadings to create an intuitive, easy to follow piece of writing. Make use of Microsoft Word’s “heading styles” and be sure to use them consistently.
  • When referring to numerical data, tables and figures can provide a useful visual aid. When using these, make sure that they can be read and understood independent of your body text (i.e. that they can stand alone). To this end, use clear, concise labels for each of your tables or figures and make use of colours to indicate differences or hierarchy.
  • Similarly, when you’re writing up your chapter, it can be useful to highlight topics and themes in different colours . This can help you to differentiate between your data if you get a bit overwhelmed and will also help you to ensure that your results flow logically and coherently.

If you have any questions, leave a comment below and we’ll do our best to help. If you’d like 1-on-1 help with your results chapter (or any chapter of your dissertation or thesis), check out our private dissertation coaching service here or book a free initial consultation to discuss how we can help you.



22 Comments

David Person

This was extremely helpful. Thanks a lot guys

Aditi

Hi, thanks for the great research support platform created by the gradcoach team!

I wanted to ask- While “suggests” or “implies” are interpretive terms, what terms could we use for the results chapter? Could you share some examples of descriptive terms?

TcherEva

I think that instead of saying, ‘The data suggested, or The data implied,’ you can say, ‘The Data showed or revealed, or illustrated or outlined’…If interview data, you may say Jane Doe illuminated or elaborated, or Jane Doe described… or Jane Doe expressed or stated.

Llala Phoshoko

I found this article very useful. Thank you very much for the outstanding work you are doing.

Oliwia

What if i have 3 different interviewees answering the same interview questions? Should i then present the results in form of the table with the division on the 3 perspectives or rather give a results in form of the text and highlight who said what?

Rea

I think this tabular representation of results is a great idea. I am doing it too along with the text. Thanks

Nomonde Mteto

That was helpful was struggling to separate the discussion from the findings

Esther Peter.

this was very useful, Thank you.

tendayi

Very helpful, I am confident to write my results chapter now.

Sha

It is so helpful! It is a good job. Thank you very much!

Nabil

Very useful, well explained. Many thanks.

Agnes Ngatuni

Hello, I appreciate the way you provided a supportive comments about qualitative results presenting tips

Carol Ch

I loved this! It explains everything needed, and it has helped me better organize my thoughts. What words should I not use while writing my results section, other than subjective ones.

Hend

Thanks a lot, it is really helpful

Anna milanga

Thank you so much dear, i really appropriate your nice explanations about this.

Wid

Thank you so much for this! I was wondering if anyone could help with how to prproperly integrate quotations (Excerpts) from interviews in the finding chapter in a qualitative research. Please GradCoach, address this issue and provide examples.

nk

what if I’m not doing any interviews myself and all the information is coming from case studies that have already done the research.

FAITH NHARARA

Very helpful thank you.

Philip

This was very helpful as I was wondering how to structure this part of my dissertation, to include the quotes… Thanks for this explanation

Aleks

This is very helpful, thanks! I am required to write up my results chapters with the discussion in each of them – any tips and tricks for this strategy?

Wei Leong YONG

For qualitative studies, can the findings be structured according to the Research questions? Thank you.

Katie Allison

Do I need to include literature/references in my findings chapter?



Content Analysis

Content analysis is a research tool used to determine the presence of certain words, themes, or concepts within some given qualitative data (i.e. text). Using content analysis, researchers can quantify and analyze the presence, meanings, and relationships of such words, themes, or concepts. As an example, researchers can evaluate the language used within a news article to search for bias or partiality. Researchers can then make inferences about the messages within the texts, the writer(s), the audience, and even the culture and time surrounding the text.

Description

Sources of data could be from interviews, open-ended questions, field research notes, conversations, or literally any occurrence of communicative language (such as books, essays, discussions, newspaper headlines, speeches, media, historical documents). A single study may analyze various forms of text in its analysis. To analyze the text using content analysis, the text must be coded, or broken down, into manageable units for analysis (i.e. “codes”). Once the text has been coded, the codes can then be grouped into broader “code categories” to summarize the data even further.

Three different definitions of content analysis are provided below.

Definition 1: “Any technique for making inferences by systematically and objectively identifying special characteristics of messages.” (from Holsti, 1968)

Definition 2: “An interpretive and naturalistic approach. It is both observational and narrative in nature and relies less on the experimental elements normally associated with scientific research (reliability, validity, and generalizability).” (from Ethnography, Observational Research, and Narrative Inquiry, 1994-2012)

Definition 3: “A research technique for the objective, systematic and quantitative description of the manifest content of communication.” (from Berelson, 1952)

Uses of Content Analysis

Identify the intentions, focus or communication trends of an individual, group or institution

Describe attitudinal and behavioral responses to communications

Determine the psychological or emotional state of persons or groups

Reveal international differences in communication content

Reveal patterns in communication content

Pre-test and improve an intervention or survey prior to launch

Analyze focus group interviews and open-ended questions to complement quantitative data

Types of Content Analysis

There are two general types of content analysis: conceptual analysis and relational analysis. Conceptual analysis determines the existence and frequency of concepts in a text. Relational analysis develops the conceptual analysis further by examining the relationships among concepts in a text. Each type of analysis may lead to different results, conclusions, interpretations and meanings.

Conceptual Analysis

Typically people think of conceptual analysis when they think of content analysis. In conceptual analysis, a concept is chosen for examination and the analysis involves quantifying and counting its presence. The main goal is to examine the occurrence of selected terms in the data. Terms may be explicit or implicit. Explicit terms are easy to identify. Coding of implicit terms is more complicated: you need to decide the level of implication and base judgments on subjectivity (an issue for reliability and validity). Therefore, coding of implicit terms involves using a dictionary or contextual translation rules or both.

To begin a conceptual content analysis, first identify the research question and choose a sample or samples for analysis. Next, the text must be coded into manageable content categories. This is basically a process of selective reduction. By reducing the text to categories, the researcher can focus on and code for specific words or patterns that inform the research question.

General steps for conducting a conceptual content analysis:

1. Decide the level of analysis: word, word sense, phrase, sentence, themes

2. Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either: A. to allow flexibility to add categories through the coding process, or B. to stick with the pre-defined set of categories.

Option A allows for the introduction and analysis of new and important material that could have significant implications to one’s research question.

Option B allows the researcher to stay focused and examine the data for specific concepts.

3. Decide whether to code for existence or frequency of a concept. The decision changes the coding process.

When coding for the existence of a concept, the researcher would count a concept only once, no matter how many times it appears, provided it appears at least once in the data.

When coding for the frequency of a concept, the researcher would count the number of times a concept appears in a text.

4. Decide on how you will distinguish among concepts:

Should words be coded exactly as they appear, or coded as the same when they appear in different forms? For example, “dangerous” vs. “dangerousness”. The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or perhaps the rules can be formulated so that the researcher can distinguish these word segments into separate codes.

What level of implication is to be allowed? Words that imply the concept or words that explicitly state the concept? For example, “dangerous” vs. “the person is scary” vs. “that person could cause harm to me”. These word segments may not merit separate categories, due to the implicit meaning of “dangerous”.

5. Develop rules for coding your texts. After the decisions in steps 1-4 are complete, a researcher can begin developing rules for the translation of text into codes. This will keep the coding process organized and consistent. The researcher can code for exactly what he/she wants to code. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules. In content analysis, abiding by the translation rules is equivalent to validity.

6. Decide what to do with irrelevant information: should this be ignored (e.g. common English words like “the” and “and”), or used to reexamine the coding scheme in the case that it would add to the outcome of coding?

7. Code the text: This can be done by hand or by using software. By using software, researchers can input categories and have coding done automatically, quickly and efficiently, by the software program. When coding is done by hand, a researcher can recognize errors far more easily (e.g. typos, misspelling). If using computer coding, text could be cleaned of errors to include all available data. This decision of hand vs. computer coding is most relevant for implicit information where category preparation is essential for accurate coding.

8. Analyze your results: Draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine, ignore, or reassess the coding scheme. Interpret results carefully as conceptual content analysis can only quantify the information. Typically, general trends and patterns can be identified.
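To make steps 3 to 5 concrete, the sketch below (Python) applies a small, hypothetical set of translation rules. It treats “dangerous” and “dangerousness” as the same concept, as discussed in step 4, and reports both existence and frequency for each concept:

```python
import re

text = ("The witness described the suspect as dangerous. "
        "Experts disagreed about his dangerousness, but agreed the area felt unsafe.")

# Hypothetical translation rules: concept -> regular expression covering its word forms
translation_rules = {
    "danger": re.compile(r"\bdangerous(?:ness)?\b", re.IGNORECASE),
    "safety": re.compile(r"\bunsafe\b|\bsafety\b", re.IGNORECASE),
}

for concept, pattern in translation_rules.items():
    frequency = len(pattern.findall(text))   # coding for frequency
    existence = frequency > 0                # coding for existence
    print(f"{concept}: existence={existence}, frequency={frequency}")
```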

Relational Analysis

Relational analysis begins like conceptual analysis, where a concept is chosen for examination. However, the analysis involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning and rather the meaning is a product of the relationships among concepts.

To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis. The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select the text for analysis carefully: balance having enough information for a thorough analysis (so that results are not limited) against having so much information that the coding process becomes too arduous to supply meaningful and worthwhile results.

There are three subcategories of relational analysis to choose from prior to going on to the general steps.

Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.

Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Text is defined as a string of words called a “window” that is scanned for the co-occurrence of concepts. The result is the creation of a “concept matrix”, or a group of interrelated co-occurring concepts that would suggest an overall meaning.

Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text such as a graphic map that represents the relationships between concepts.
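Proximity analysis in particular translates naturally into code. The sketch below (Python) scans a sliding window of words and builds a simple co-occurrence count, a rough stand-in for the “concept matrix”; the text and the concept list are hypothetical:

```python
from collections import Counter
from itertools import combinations

text = ("rising workload caused stress among staff and burnout followed "
        "when support from management was missing while workload kept rising")

concepts = {"workload", "stress", "burnout", "support"}
words = text.lower().split()
window_size = 6

co_occurrence = Counter()
for start in range(len(words) - window_size + 1):
    window_concepts = set(words[start:start + window_size]) & concepts
    for pair in combinations(sorted(window_concepts), 2):
        co_occurrence[pair] += 1   # each overlapping window contributes one count

for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: co-occur in {count} window(s)")
```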

General steps for conducting a relational content analysis:

1. Determine the type of analysis: Once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, themes.

2. Reduce the text to categories and code for words or patterns. A researcher can code for the existence of meanings or words.

3. Explore the relationship between concepts: once the words are coded, the text can be analyzed for the following:

Strength of relationship: degree to which two or more concepts are related.

Sign of relationship: are concepts positively or negatively related to each other?

Direction of relationship: the types of relationship that categories exhibit. For example, “X implies Y” or “X occurs before Y” or “if X then Y” or if X is the primary motivator of Y.

4. Code the relationships: a difference between conceptual and relational analysis is that the statements or relationships between concepts are coded.

5. Perform statistical analyses: explore differences or look for relationships among the identified variables during coding.

6. Map out representations: such as decision mapping and mental models.

Reliability and Validity

Reliability : Because of the human nature of researchers, coding errors can never be eliminated but only minimized. Generally, an agreement level of 80% is considered an acceptable margin for reliability. Three criteria comprise the reliability of a content analysis:

Stability: the tendency for coders to consistently re-code the same data in the same way over a period of time.

Reproducibility: the tendency for a group of coders to classify category membership in the same way.

Accuracy: extent to which the classification of text corresponds to a standard or norm statistically.
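Stability, for instance, can be checked by having the same coder re-code the same units after an interval and computing simple percent agreement against the commonly cited 80% margin. A minimal Python sketch with hypothetical codings:

```python
def percent_agreement(first_pass, second_pass):
    """Share of units assigned the same code in two coding passes."""
    assert len(first_pass) == len(second_pass)
    agreements = sum(a == b for a, b in zip(first_pass, second_pass))
    return agreements / len(first_pass)

# Hypothetical: the same coder re-codes ten units two weeks later
round_1 = ["risk", "risk", "benefit", "risk", "benefit", "risk", "other", "benefit", "risk", "other"]
round_2 = ["risk", "benefit", "benefit", "risk", "benefit", "risk", "other", "benefit", "risk", "risk"]

agreement = percent_agreement(round_1, round_2)
verdict = "acceptable" if agreement >= 0.80 else "below the 80% margin"
print(f"Stability: {agreement:.0%} ({verdict})")
```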

Validity : Three criteria comprise the validity of a content analysis:

Closeness of categories: this can be achieved by utilizing multiple classifiers to arrive at an agreed upon definition of each specific category. Using multiple classifiers, a concept category that may be an explicit variable can be broadened to include synonyms or implicit variables.

Conclusions: What level of implication is allowable? Do conclusions correctly follow the data? Are results explainable by other phenomena? This becomes especially problematic when using computer software for analysis and distinguishing between synonyms. For example, the word “mine” variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. Software can obtain an accurate count of that word’s occurrences and frequency, but it may not be able to produce an accurate accounting of the meaning inherent in each particular usage. This problem could throw off one’s results and make any conclusion invalid.

Generalizability of the results to a theory: dependent on the clear definitions of concept categories, how they are determined and how reliable they are at measuring the idea one is seeking to measure. Generalizability parallels reliability as much of it depends on the three criteria for reliability.

Advantages of Content Analysis

Directly examines communication using text

Allows for both qualitative and quantitative analysis

Provides valuable historical and cultural insights over time

Allows a closeness to data

Coded form of the text can be statistically analyzed

Unobtrusive means of analyzing interactions

Provides insight into complex models of human thought and language use

When done well, is considered a relatively “exact” research method

Content analysis is a readily-understood and an inexpensive research method

A more powerful tool when combined with other research methods such as interviews, observation, and use of archival records. It is very useful for analyzing historical material, especially for documenting trends over time.

Disadvantages of Content Analysis

Can be extremely time consuming

Is subject to increased error, particularly when relational analysis is used to attain a higher level of interpretation

Is often devoid of theoretical base, or attempts too liberally to draw meaningful inferences about the relationships and impacts implied in a study

Is inherently reductive, particularly when dealing with complex texts

Tends too often to simply consist of word counts

Often disregards the context that produced the text, as well as the state of things after the text is produced

Can be difficult to automate or computerize

Textbooks & Chapters  

Berelson, Bernard. Content Analysis in Communication Research. New York: Free Press, 1952.

Busha, Charles H. and Stephen P. Harter. Research Methods in Librarianship: Techniques and Interpretation.New York: Academic Press, 1980.

de Sola Pool, Ithiel. Trends in Content Analysis. Urbana: University of Illinois Press, 1959.

Krippendorff, Klaus. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications, 1980.

Fielding, NG & Lee, RM. Using Computers in Qualitative Research. SAGE Publications, 1991. (Refer to Chapter by Seidel, J. ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’.)

Methodological Articles  

Hsieh HF & Shannon SE. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research. 15(9): 1277-1288.

Elo S, Kaarianinen M, Kanste O, Polkki R, Utriainen K, & Kyngas H. (2014). Qualitative Content Analysis: A focus on trustworthiness. Sage Open. 4:1-10.

Application Articles  

Abroms LC, Padmanabhan N, Thaweethai L, & Phillips T. (2011). iPhone Apps for Smoking Cessation: A content analysis. American Journal of Preventive Medicine. 40(3):279-285.

Ullstrom S, Sachs MA, Hansson J, Ovretveit J, & Brommels M. (2014). Suffering in Silence: a qualitative study of second victims of adverse events. British Medical Journal, Quality & Safety Issue. 23:325-331.

Owen P. (2012). Portrayals of Schizophrenia by Entertainment Media: A Content Analysis of Contemporary Movies. Psychiatric Services. 63:655-659.

Choosing whether to conduct a content analysis by hand or by using computer software can be difficult. Refer to ‘Method and Madness in the Application of Computer Technology to Qualitative Data Analysis’ listed above in “Textbooks and Chapters” for a discussion of the issue.

QSR NVivo:  http://www.qsrinternational.com/products.aspx

Atlas.ti:  http://www.atlasti.com/webinars.html

R- RQDA package:  http://rqda.r-forge.r-project.org/

Rolly Constable, Marla Cowell, Sarita Zornek Crawford, David Golden, Jake Hartvigsen, Kathryn Morgan, Anne Mudgett, Kris Parrish, Laura Thomas, Erika Yolanda Thompson, Rosie Turner, and Mike Palmquist. (1994-2012). Ethnography, Observational Research, and Narrative Inquiry. Writing@CSU. Colorado State University. Available at: https://writing.colostate.edu/guides/guide.cfm?guideid=63 .

Michael Palmquist’s introduction to content analysis is the main resource on content analysis on the web. It is comprehensive yet succinct, and it includes examples and an annotated bibliography. The narrative above draws heavily on and summarizes Palmquist’s resource, streamlined for doctoral students and junior researchers in epidemiology.

At Columbia University Mailman School of Public Health, more detailed training is available through the Department of Sociomedical Sciences- P8785 Qualitative Research Methods.

12 Unexplored Data Analysis Tools for Qualitative Research


Welcome to our guide on 12 lesser-known tools for studying information in a different way – specifically designed for understanding and interpreting data in qualitative research. Data analysis tools for qualitative research are specialized instruments designed to interpret non-numerical data, offering insights into patterns, themes, and relationships.

These tools enable researchers to uncover meaning from qualitative information, enhancing the depth and understanding of complex phenomena in fields such as social sciences, psychology, and humanities.

In the world of research, there are tools tailored for qualitative data analysis that can reveal hidden insights. This blog explores these tools, showcasing their unique features and advantages compared to the more commonly used quantitative analysis tools.

Whether you’re a seasoned researcher or just starting out, we aim to make these tools accessible and highlight how they can add depth and accuracy to your analysis. Join us as we uncover these innovative approaches, offering practical solutions to enhance your experience with qualitative research.

Tool 1: MAXQDA Analytics Pro


MAXQDA Analytics Pro emerges as a game-changing tool for qualitative data analysis, offering a seamless experience that goes beyond the capabilities of traditional quantitative tools.

Here’s how MAXQDA stands out in the world of qualitative research:

Advanced Coding and Text Analysis: MAXQDA empowers researchers with advanced coding features and text analysis tools, enabling the exploration of qualitative data with unprecedented depth. Its intuitive interface allows for efficient categorization and interpretation of textual information.

Intuitive Interface for Effortless Exploration: The user-friendly design of MAXQDA makes it accessible for researchers of all levels. This tool streamlines the process of exploring qualitative data, facilitating a more efficient and insightful analysis compared to traditional quantitative tools.

Uncovering Hidden Narratives: MAXQDA excels in revealing hidden narratives within qualitative data, allowing researchers to identify patterns, themes, and relationships that might be overlooked by conventional quantitative approaches. This capability adds a valuable layer to the analysis of complex phenomena.

In the landscape of qualitative data analysis tools, MAXQDA Analytics Pro is a valuable asset, providing researchers with a unique set of features that enhance the depth and precision of their analysis. Its contribution extends beyond the confines of quantitative analysis tools, making it an indispensable tool for those seeking innovative approaches to qualitative research.

Tool 2: Quirkos


Quirkos , positioned as data analysis software, shines as a transformative tool within the world of qualitative research.

Here’s why Quirkos is considered among the best tools for qualitative data analysis:

Visual Approach for Enhanced Understanding: Quirkos introduces a visual approach, setting it apart from conventional analysis software. This unique feature helps researchers grasp and interpret qualitative data easily, promoting a more comprehensive understanding of complex information.

User-Friendly Interface: One of Quirkos’ standout features is its user-friendly interface. This makes it accessible to researchers of various skill levels, ensuring that the tool’s benefits are not limited to experienced users. Its simplicity adds to the appeal for those seeking the best quality data analysis software.

Effortless Pattern Identification: Quirkos simplifies the process of identifying patterns within qualitative data. This capability is crucial for researchers aiming to conduct in-depth analysis efficiently.

The tool’s intuitive design fosters a seamless exploration of data, making it an indispensable asset in the world of analysis software.

In short, Quirkos offers a visual and user-friendly approach to qualitative research. Its ability to facilitate effortless pattern identification positions it as a valuable asset for researchers seeking optimal outcomes in their data analysis endeavors.

Tool 3: Provalis Research WordStat


Provalis Research WordStat stands out as a powerful tool within the world of qualitative data analysis tools, offering unique advantages for researchers engaged in qualitative analysis:

WordStat excels in text mining, providing researchers with a robust platform to delve into vast amounts of textual data. This capability enhances the depth of qualitative analysis, setting it apart in the landscape of tools for qualitative research.

Specializing in content analysis, WordStat facilitates the systematic examination of textual information. Researchers can uncover themes, trends, and patterns within qualitative data, contributing to a more comprehensive understanding of complex phenomena.

WordStat seamlessly integrates with qualitative research methodologies, providing a bridge between quantitative and qualitative analysis. This integration allows researchers to harness the strengths of both approaches, expanding the possibilities for nuanced insights.

In the domain of tools for qualitative research, Provalis Research WordStat emerges as a valuable asset. Its text mining capabilities, content analysis expertise, and integration with qualitative research methodologies collectively contribute to elevating the qualitative analysis experience for researchers.

Tool 4: ATLAS.ti


ATLAS.ti proves to be a cornerstone in the world of qualitative data analysis tools, offering distinctive advantages that enhance the qualitative analysis process:

Multi-Faceted Data Exploration: ATLAS.ti facilitates in-depth exploration of textual, graphical, and multimedia data. This versatility enables researchers to engage with diverse types of qualitative information, broadening the scope of analysis beyond traditional boundaries.

Collaboration and Project Management: The tool excels in fostering collaboration among researchers and project management. This collaborative aspect sets ATLAS.ti apart, making it a comprehensive solution for teams engaged in qualitative research endeavors.

User-Friendly Interface: ATLAS.ti provides a user-friendly interface, ensuring accessibility for researchers of various skill levels. This simplicity in navigation enhances the overall qualitative analysis experience, making it an effective tool for both seasoned researchers and those new to data analysis tools.

In the landscape of tools for qualitative research, ATLAS.ti emerges as a valuable ally. Its multi-faceted data exploration, collaboration features, and user-friendly interface collectively contribute to enriching the qualitative analysis journey for researchers seeking a comprehensive and efficient solution.

Tool 5: NVivo Transcription


NVivo Transcription emerges as a valuable asset in the world of data analysis tools, seamlessly integrating transcription services with qualitative research methodologies:

Efficient Transcription Services: NVivo Transcription offers efficient and accurate transcription services, streamlining the process of converting spoken words into written text. This feature is essential for researchers engaged in qualitative analysis, ensuring a solid foundation for subsequent exploration.

Integration with NVivo Software: The tool seamlessly integrates with NVivo software, creating a synergistic relationship between transcription and qualitative analysis. Researchers benefit from a unified platform that simplifies the organization and analysis of qualitative data, enhancing the overall research workflow.

Comprehensive Qualitative Analysis: NVivo Transcription contributes to comprehensive qualitative analysis by providing a robust foundation for understanding and interpreting audio and video data. Researchers can uncover valuable insights within the transcribed content, enriching the qualitative analysis process.

In the landscape of tools for qualitative research, NVivo Transcription plays a crucial role in bridging the gap between transcription services and qualitative analysis. Its efficient transcription capabilities, integration with NVivo software, and support for comprehensive qualitative analysis make it a valuable tool for researchers seeking a streamlined and effective approach to handling qualitative data.

Tool 6: Dedoose

Web-Based Accessibility: Dedoose’s online platform allows PhD researchers to conduct qualitative data analysis from anywhere, promoting flexibility and collaboration.

Mixed-Methods Support: Dedoose accommodates mixed-methods research, enabling the integration of both quantitative and qualitative data for a comprehensive analysis.

Multi-Media Compatibility: The tool supports various data formats, including text, audio, and video, facilitating the analysis of diverse qualitative data types.

Collaborative Features: Dedoose fosters collaboration among researchers, providing tools for shared coding, annotation, and exploration of qualitative data.

Organized Data Management: PhD researchers benefit from Dedoose’s organizational features, streamlining the coding and retrieval of data for a more efficient analysis process.

Tool 7: HyperRESEARCH

HyperRESEARCH caters to various qualitative research methods, including content analysis and grounded theory, offering a flexible platform for PhD researchers.

The software simplifies the coding and retrieval of data, aiding researchers in organizing and analyzing qualitative information systematically.

HyperRESEARCH allows for detailed annotation of text, enhancing the depth of qualitative analysis and providing a comprehensive understanding of the data.

The tool provides features for visualizing relationships within data, aiding researchers in uncovering patterns and connections in qualitative content.

HyperRESEARCH facilitates collaborative research efforts, promoting teamwork and shared insights among PhD researchers.

Tool 8: MAXQDA Analytics Plus

Advanced Collaboration:  

MAXQDA Analytics Plus enhances collaboration for PhD researchers with teamwork support, enabling multiple researchers to work seamlessly on qualitative data analysis.

Extended Visualization Tools:  

The software offers advanced data visualization features, allowing researchers to create visual representations of qualitative data patterns for a more comprehensive understanding.

Efficient Workflow:  

MAXQDA Analytics Plus streamlines the qualitative analysis workflow, providing tools that facilitate efficient coding, categorization, and interpretation of complex textual information.

Deeper Insight Integration:  

Building upon MAXQDA Analytics Pro, MAXQDA Analytics Plus integrates additional features for a more nuanced qualitative analysis, empowering PhD researchers to gain deeper insights into their research data.

User-Friendly Interface:  

The tool maintains a user-friendly interface, ensuring accessibility for researchers of various skill levels, contributing to an effective and efficient data analysis experience.

Tool 9: QDA Miner

Versatile Data Analysis: QDA Miner supports a wide range of qualitative research methodologies, accommodating diverse data types, including text, images, and multimedia, catering to the varied needs of PhD researchers.

Coding and Annotation Tools: The software provides robust coding and annotation features, facilitating a systematic organization and analysis of qualitative data for in-depth exploration.

Visual Data Exploration: QDA Miner includes visualization tools for researchers to analyze data patterns visually, aiding in the identification of themes and relationships within qualitative content.

User-Friendly Interface: With a user-friendly interface, QDA Miner ensures accessibility for researchers at different skill levels, contributing to a seamless and efficient qualitative data analysis experience.

Comprehensive Analysis Support: QDA Miner’s features contribute to a comprehensive analysis, offering PhD researchers a tool that integrates seamlessly into their qualitative research endeavors.

Tool 10: NVivo

NVivo supports diverse qualitative research methodologies, allowing PhD researchers to analyze text, images, audio, and video data for a comprehensive understanding.

The software aids researchers in organizing and categorizing qualitative data systematically, streamlining the coding and analysis process.

NVivo seamlessly integrates with various data formats, providing a unified platform for transcription services and qualitative analysis, simplifying the overall research workflow.

NVivo offers tools for visual representation, enabling researchers to create visual models that enhance the interpretation of qualitative data patterns and relationships.

NVivo Transcription integration ensures efficient handling of audio and video data, offering PhD researchers a comprehensive solution for qualitative data analysis.

Tool 11: Weft QDA

Open-Source Affordability: Weft QDA’s open-source nature makes it an affordable option for PhD researchers on a budget, providing cost-effective access to qualitative data analysis tools.

Simplicity for Beginners: With a straightforward interface, Weft QDA is user-friendly and ideal for researchers new to qualitative data analysis, offering basic coding and text analysis features.

Ease of Use: The tool simplifies the process of coding and analyzing qualitative data, making it accessible to researchers of varying skill levels and ensuring a smooth and efficient analysis experience.

Entry-Level Solution: Weft QDA serves as a suitable entry-level option, introducing PhD researchers to the fundamentals of qualitative data analysis without overwhelming complexity.

Basic Coding Features: While being simple, Weft QDA provides essential coding features, enabling researchers to organize and explore qualitative data effectively.

Tool 12: Transana

Transana specializes in the analysis of audio and video data, making it a valuable tool for PhD researchers engaged in qualitative studies with rich multimedia content.

The software streamlines the transcription process, aiding researchers in converting spoken words into written text, providing a foundation for subsequent qualitative analysis.

Transana allows for in-depth exploration of multimedia data, facilitating coding and analysis of visual and auditory aspects crucial to certain qualitative research projects.

With tools for transcribing and coding, Transana assists PhD researchers in organizing and categorizing qualitative data, promoting a structured and systematic approach to analysis.

Researchers benefit from Transana’s capabilities to uncover valuable insights within transcribed content, enriching the qualitative analysis process with a focus on visual and auditory dimensions.

Final Thoughts

In wrapping up our journey through these 12 lesser-known data analysis tools for qualitative research, it’s clear they bring a breath of fresh air to the world of analysis. Tools such as MAXQDA Analytics Pro, Quirkos, Provalis Research WordStat, ATLAS.ti, and NVivo Transcription each offer something unique, steering away from the usual quantitative analysis tools.

They go beyond, with MAXQDA’s advanced coding, Quirkos’ visual approach, WordStat’s text mining, ATLAS.ti’s multi-faceted data exploration, and NVivo Transcription’s seamless integration.

These tools aren’t just alternatives; they are untapped resources for qualitative research. As we bid adieu to the traditional quantitative tools, these unexplored gems beckon researchers to a world where hidden narratives and patterns are waiting to be discovered.

They don’t just add to the toolbox; they redefine how we approach and understand complex phenomena. In a world where research is evolving rapidly, these tools for qualitative research stand out as beacons of innovation and efficiency.

PhDGuidance is a website that provides customized solutions for PhD researchers in the field of qualitative analysis. They offer comprehensive guidance for research topics, thesis writing, and publishing. Their team of expert consultants helps researchers conduct rigorous research in areas such as social sciences, humanities, and more, aiming to provide a comprehensive understanding of the research problem.

PhDGuidance offers qualitative data analysis services to help researchers study and observe participants’ behavior for their research work. They provide both manual thematic analysis and NVivo-assisted analysis. They also offer customized solutions for research design, data collection, literature review, language correction, analytical tools, and techniques for both qualitative and quantitative research projects.

Frequently Asked Questions

1. What is the best free qualitative data analysis software?

When it comes to free qualitative data analysis software, one standout option is RQDA. RQDA, an open-source tool, provides a user-friendly platform for coding and analyzing textual data. Its compatibility with R, a statistical computing language, adds a layer of flexibility for those familiar with programming. Another notable mention is QDA Miner Lite, offering basic qualitative analysis features at no cost. While these free tools may not match the advanced capabilities of premium software, they serve as excellent starting points for individuals or small projects with budget constraints.

2. Which software is used to Analyse qualitative data?

For a more comprehensive qualitative data analysis experience, many researchers turn to premium tools like NVivo, MAXQDA, or ATLAS.ti. NVivo, in particular, stands out due to its user-friendly interface, robust coding capabilities, and integration with various data types, including audio and visual content. MAXQDA and ATLAS.ti also offer advanced features for qualitative data analysis, providing researchers with tools to explore, code, and interpret complex qualitative information effectively.

3. How can I Analyse my qualitative data?

Analyzing qualitative data involves a systematic approach to making sense of textual, visual, or audio information. Here’s a general guide (a minimal hand-coding sketch follows the list):

Data Familiarization: Understand the context and content of your data through thorough reading or viewing.

Open Coding: Begin with open coding, identifying and labeling key concepts without preconceived categories.

Axial Coding: Organize codes into broader categories, establishing connections and relationships between them.

Selective Coding: Focus on the most significant codes, creating a narrative that tells the story of your data.

Constant Comparison: Continuously compare new data with existing codes to refine categories and ensure consistency.

Use of Software: Employ qualitative data analysis software, such as NVivo or MAXQDA, to facilitate coding, organization, and interpretation.
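As promised above, here is a minimal hand-coding sketch showing how open codes can be rolled up into categories and a theme. All excerpts, codes, categories, and the theme are invented for illustration and are not drawn from any particular study.

```python
# Minimal sketch: rolling invented open codes up into categories and a theme.
# Excerpts, codes, categories, and the theme are all illustrative assumptions.

# Open coding: label each excerpt with one or more codes.
open_codes = {
    "I never know which bus will actually arrive": ["unpredictability"],
    "The app shows times that are just wrong":     ["unreliable information"],
    "I leave 30 minutes early just in case":       ["coping behaviour"],
}

# Axial coding: group related codes into broader categories.
categories = {
    "service uncertainty":  ["unpredictability", "unreliable information"],
    "passenger adaptation": ["coping behaviour"],
}

# Selective coding: tie the categories together under a central theme.
theme = {
    "distrust of public transport information": ["service uncertainty", "passenger adaptation"],
}

# Constant comparison: check that every code is accounted for by a category.
all_coded = {code for codes in open_codes.values() for code in codes}
all_categorized = {code for codes in categories.values() for code in codes}
assert all_coded == all_categorized, "some codes are not yet assigned to a category"
print(theme)
```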

4. Is it worth using NVivo for qualitative data analysis?

The use of NVivo for qualitative data analysis depends on the specific needs of the researcher and the scale of the project. NVivo is worth considering for its versatility, user-friendly interface, and ability to handle diverse data types. It streamlines the coding process, facilitates collaboration, and offers in-depth analytical tools. However, its cost may be a consideration for individuals or smaller research projects. Researchers with complex data sets, especially those involving multimedia content, may find NVivo’s advanced features justify the investment.

5. What are the tools used in quantitative data analysis?

Quantitative data analysis relies on tools specifically designed to handle numerical data. Some widely used tools include:

SPSS (Statistical Package for the Social Sciences): A statistical software suite that facilitates data analysis through descriptive statistics, regression analysis, and more.

Excel: Widely used for basic quantitative analysis, offering functions for calculations, charts, and statistical analysis.

R and RStudio: An open-source programming language and integrated development environment used for statistical computing and graphics.

Python with Pandas and NumPy: Python is a versatile programming language, and Pandas and NumPy are libraries that provide powerful tools for data manipulation and analysis.

STATA: A software suite for data management and statistical analysis, widely used in various fields.

Hence, the choice of qualitative data analysis software depends on factors like project scale, budget, and specific requirements. Free tools like RQDA and QDA Miner Lite offer viable options for smaller projects, while premium software such as NVivo, MAXQDA, and ATLAS.ti provide advanced features for more extensive research endeavors. When it comes to quantitative data analysis, SPSS, Excel, R, Python, and STATA are among the widely used tools, each offering unique strengths for numerical data interpretation. Ultimately, the selection should align with the researcher’s goals and the nature of the data being analyzed.


USF Tampa Graduate Theses and Dissertations

A Theoretical Framework for Understanding Breast Cancer Survivor's Post-treatment Lived Experiences in an Educational Program: A Qualitative Data Analysis

Katherine Jinghua Lin, University of South Florida

Graduation Year: 2022

Document Type: Dissertation

Degree Name: Doctor of Philosophy (Ph.D.)

Major Professor: Cecile A. Lengacher, Ph.D., R.N., F.A.A.N., F.A.P.O.S.

Committee Members: Carmen S. Rodriguez, Ph.D., A.R.N.P., A.O.C.N.; Laura A. Szalacha, Ed.D.; Jennifer Wolgemuth, Ph.D.

Keywords: Adaptation, Coping, Survivorship, Symptoms

Breast cancer (BC) is the most prevalent type of cancer among women and the most common cancer diagnosis overall in the United States. Early screening and treatment for BC have improved the prognosis for breast cancer survivors (BCSs) and increased survival rates. Current evidence shows insufficient data on BCSs’ post-treatment symptoms, coping issues, and the availability and impact of educational and support programs associated with breast cancer survivorship.

The overall purpose of this qualitative data analysis research project is to explore and identify BCSs’ perceptions (post-treatment) of physical, cognitive, and psychological symptoms experienced, as well as perceptions of coping strategies learned during participation in a Breast Cancer-Education Support (BCES) program delivered as part of the R01 grant study “Efficacy of MBSR Treatment on Cognitive Impairment among Breast Cancer Survivors” (NIH Project # R01CA199160-01).

This research helped identify and inform gaps in the research evidence related to BCSs' post-treatment lived experiences and their unmet cancer survivorship needs through qualitative content analysis of BCSs’ weekly journals and survey entries. The research findings also contributed to providing new evidence to strengthen health care professionals’, communities’, and families’ understanding of BCSs’ cancer trajectory across their cancer continuum and identify unmet needs related to their survivorship. This study added valuable qualitative data to define these survivors’ real experiences and meet the knowledge gaps in this arena. As a result, it was anticipated that the care plan for cancer survivors can be tailored to individual needs and can provide data for designing cancer education and support programs to improve BCSs’ quality of life.

Directed qualitative content analysis, using deductive and inductive coding, and poetic analysis were used as the research methods for this study. Four key themes were identified: 1) enduring and suffering; 2) decreased quality of life; 3) coping and comforting strategies; and 4) the change of self. Eleven voice poems emerged through poetic analysis using the BCSs’ original words. Poems were attached to each theme to bring those themes to ‘life’ and help connect readers emotionally to the BCSs’ lived experiences.

In conclusion, this study added valuable qualitative data to define BCSs’ post-treatment real life experiences based on their perceptions. This study also contributed to nursing theory by adding the suggested expansion of Morse’s Responding to Threats to the Integrity of Self theory.

Scholar Commons Citation

Lin, Katherine Jinghua, "A Theoretical Framework for Understanding Breast Cancer Survivor's Post-treatment Lived Experiences in an Educational Program: A Qualitative Data Analysis" (2022). USF Tampa Graduate Theses and Dissertations. https://digitalcommons.usf.edu/etd/10320


Speaker 1: In this video, we're going to look at the ever popular qualitative analysis method, thematic analysis. We'll unpack what thematic analysis is, explore its strengths and weaknesses, and explain when and when not to use it. By the end of the video, you'll have a clearer understanding of thematic analysis so that you can approach your research project with confidence. By the way, if you're currently working on a dissertation or thesis or research project, be sure to grab our free dissertation templates to help fast-track your write-up. These tried and tested templates provide a detailed roadmap to guide you through each chapter, section by section. If that sounds helpful, you can find the link in the description down below. So, first things first, what is thematic analysis? Well, as the name suggests, thematic analysis, or TA for short, is a qualitative analysis method focused on identifying patterns, themes, and meanings within a data set. Breaking that down a little, TA involves interpreting language-based data to uncover categories or themes that relate to the research aims and research questions of the study. This data could be taken from interview transcripts, open-ended survey responses, or even social media posts. In other words, thematic analysis can be used on both primary and secondary data. Let's look at an example to make things a little more tangible. Assume you're researching customer sentiment toward a newly launched product line. Using thematic analysis, you could review open-ended survey responses from a sample of consumers looking for similarities, patterns, and categories in the data. These patterns would form a foundation for the development of an initial set of themes. You'd then reduce and synthesize these themes by filtering them through the lens of your specific research aims until you have a small number of key themes that help answer your research questions. By the way, if you're not familiar with the concept of research aims and research questions, be sure to check out our primer video covering that. Link in the description. Now that we've defined what thematic analysis is, let's unpack the different forms that TA can take, specifically inductive and deductive. Your choice of approach will make a big difference to the analysis process, so it's important to understand the difference. Let's take a look at each of them. First up is inductive thematic analysis. This type of TA is completely bottom-up, inductive in terms of approach. In other words, the codes and themes will emerge exclusively from your analysis of the data as you work through it rather than being determined beforehand. This makes it a relatively flexible approach as you can adjust, remove, or add codes and themes as you become more familiar with your data. For example, you could use inductive TA to conduct research on staff experiences of a new office space. In this case, you'd conduct interviews and begin developing codes based on the initial patterns you observe. You could then adjust or change these codes on an iterative basis as you become more familiar with the full data set, following which you develop your themes. By the way, if you're not familiar with the process of qualitative coding, we've got a dedicated video covering that. As always, the link is in the description. Next up, we've got deductive thematic analysis. Contrasted to the inductive option, deductive TA uses predetermined, tightly defined codes. 
These codes, often referred to as a priori codes, are typically drawn from the study's theoretical framework, as well as empirical research and the researcher's knowledge of the situation. Typically, these codes would be compiled into a codebook where each code would be clearly defined and scoped. As an example, your research might aim to assess constituent opinions regarding local government policy. Applying deductive thematic analysis here would involve developing a list of tightly defined codes in advance based on existing theory and knowledge. Those codes would then be compiled into a codebook and applied to interview data collected from constituents. Importantly, throughout the coding and analysis process, those codes and their descriptions would remain fixed. It's worth mentioning that deductive thematic analysis can be undertaken both individually or by multiple researchers. The latter is referred to as coding reliability TA. As the name suggests, this approach aims to achieve a high level of reliability with regard to the application of codes. By having multiple researchers apply the same set of codes to the same data set, inconsistencies in interpretation can be ironed out and a higher level of reliability can be reached. By the way, qualitative coding is something that we regularly help students with here at Grad Coach, so if you'd like a helping hand with your research project, be sure to check out the link that's down in the description. All right, we've covered quite a lot here. To recap, thematic analysis can be conducted using either an inductive approach where your codes naturally emerge from the data or a deductive approach where your codes are independently or collaboratively developed before analyzing the data. So now that we've unpacked the different types of thematic analysis, it's important to understand the broader strengths and weaknesses of this method so that you know when and when not to use it. One of the main strengths of thematic analysis is the relative simplicity with which you can derive codes and themes and, by extension, conclusions. Whether you take an inductive or a deductive approach, identifying codes and themes can be an easier process with thematic analysis than with some other methods. Discourse analysis, for example, requires both an in-depth analysis of the data and a strong understanding of the context in which that data was collected, demanding a significant time investment. Flexibility is another major strength of thematic analysis. The relatively generic focus on identifying patterns and themes allows TA to be used on a broad range of research topics and data types. Whether you're undertaking a small sociological study with a handful of participants or a large market research project with hundreds of participants, thematic analysis can be equally effective. Given these attributes, thematic analysis is best used in projects where the research aims involve identifying similarities and patterns across a wide range of data. This makes it particularly useful for research topics centered on understanding patterns of meaning expressed in thoughts, beliefs, and opinions. For example, research focused on identifying the thoughts and feelings of an audience in response to a new ad campaign might utilize TA to find patterns in participant responses. All that said, just like any analysis method, thematic analysis has its shortcomings and isn't suitable for every project. 
First, the inherent flexibility of TA also means that results can at times be kind of vague and imprecise. In other words, the broad applicability of this method means that the patterns and themes you draw from your data can potentially lack the sensitivity to incorporate text and contradiction. Second is the problem of inconsistency and lack of rigor. Put another way, the simplicity of thematic analysis can sometimes mean it's a little too crude for more delicate research aims. Specifically, the focus on identifying patterns and themes can lead to results that lack nuance. For example, even an inductive thematic analysis applied to a sample of just 10 participants might overlook some of the subtle nuances of participant responses in favor of identifying generalized themes. It could also miss fine details in language and expression that might reveal counterintuitive but more accurate implications. All that said, thematic analysis is still a useful method in many cases, but it's important to assess whether it fits your needs. So think carefully about what you're trying to achieve with your research project. In other words, your research aims and research questions. And be sure to explore all the options before choosing an analysis method. If you need some inspiration, we've got a video that unpacks the most popular qualitative analysis methods. Link is in the description. If you're enjoying this video so far, please help us out by hitting that like button. You can also subscribe for loads of plain language actionable advice. If you're new to research, check out our free dissertation writing course, which covers everything you need to get started on your research project. As always, links in the description. Okay, that was a lot. So let's do a quick recap. Thematic analysis is a qualitative analysis method focused on identifying patterns of meaning as themes within data, whether primary or secondary. As we've discussed, there are two overarching types of thematic analysis. Inductive TA, in which the codes emerge from an initial review of the data itself and are revised as you become increasingly familiar with the data. And deductive TA, in which the codes are determined beforehand based on a combination of the theoretical and or conceptual framework, empirical studies, and prior knowledge. As with all things, thematic analysis has its strengths and weaknesses and based on those is generally most appropriate for research focused on identifying patterns in data and drawing conclusions in relation to those. If you liked the video, please hit that like button to help more students find this content. For more videos like this one, check out the Grad Coach channel and make sure you subscribe for plain language, actionable research tips and advice every week. Also, if you're looking for one-on-one support with your dissertation, thesis, or research project, be sure to check out our private coaching service where we hold your hand throughout the research process step by step. You can learn more about that and book a free initial consultation at gradcoach.com.
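As a rough companion to the deductive approach described in the transcript, a predefined codebook can be applied to interview excerpts with simple keyword matching as a first pass, before manual review. The sketch below is illustrative only: the codes, keywords, and excerpts are invented, and automated matching never replaces researcher judgement.

```python
# Minimal sketch: applying an a priori codebook to interview excerpts via keyword matching.
# Codes, keywords, and excerpts are invented; this is only a first pass and
# does not replace the researcher's own coding and interpretation.
codebook = {
    "noise":         ["noisy", "loud", "hear"],
    "collaboration": ["team", "together", "share"],
    "privacy":       ["private", "quiet room", "interruption"],
}

excerpts = [
    "The open plan is so loud I can't hear myself think.",
    "It is easier to share ideas when the team sits together.",
]

for excerpt in excerpts:
    lowered = excerpt.lower()
    applied = [code for code, keywords in codebook.items()
               if any(keyword in lowered for keyword in keywords)]
    print(f"{excerpt!r} -> {applied}")
```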


Libraries Faculty and Staff Presentations

Leveraging ChatGPT for Qualitative Data Analysis: A Case Study on Data Management Practices among Computer Vision Scholars

Zonghan Lei, Purdue University; Wei Zakharov, Purdue University; Yung-Hsiang Lu, Purdue University

Lei, Z., Zakharov, W., & Lu, Y. (2024). Leveraging ChatGPT for qualitative data analysis: A case study on data management practices among computer vision scholars. Presented at the 2024 Teaching and Learning with AI conference, Orlando, FL.

Qualitative data analysis plays a crucial role in deriving meaningful insights from research data. However, conventional software tools like NVivo present challenges such as high costs and complexity (Dalkin et al., 2021). This study advocates for integrating ChatGPT, an AI technology, into qualitative data analysis workflows to overcome these challenges. Focusing on the data management practices of Computer Vision professors, the study investigates how ChatGPT enhances human analysis by streamlining processes and uncovering hidden patterns within datasets. Structured interviews were conducted with six participants from research institutions (R1). The transcripts underwent manual scrutiny to identify recurring themes and patterns. Subsequently, the results were compared with ChatGPT analysis to evaluate its efficacy in qualitative data analysis. The findings illustrate the effectiveness of ChatGPT in augmenting traditional qualitative data analysis methods. By leveraging AI capabilities, ChatGPT facilitates a more efficient and comprehensive analysis, enabling researchers to uncover nuanced insights that may have been overlooked through manual analysis alone. This case study contributes to the ongoing discourse on AI's role in research, demonstrating how ChatGPT can enhance qualitative data analysis and drive advancements in academic research methodologies. The study also revealed certain limitations of AI as an analysis tool, such as potential inaccuracies, biases, and ethical concerns. Therefore, while AI aids in analysis, manual intervention remains crucial to ensure accuracy and comprehensiveness in research methodologies.
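The abstract above does not publish the authors' prompts or pipeline. Purely as an illustration of the general idea, an LLM can be asked to propose candidate themes for a transcript, and its output can then be checked against manual coding. The sketch below assumes the OpenAI Python SDK (v1) and an assumed model name; it is not the authors' method, and any LLM output still requires human verification.

```python
# Illustrative sketch only: asking an LLM to propose candidate themes for a transcript.
# This is NOT the authors' pipeline; the prompt wording and model name are assumptions,
# and any LLM output must be verified against manual coding by a human researcher.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

transcript = "Interviewee: We store our image datasets on lab servers, but nobody documents them..."

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "You are assisting with qualitative coding."},
        {"role": "user", "content": (
            "Suggest up to five candidate themes, each with one supporting quote, "
            "for this interview excerpt about research data management:\n\n" + transcript
        )},
    ],
)
print(response.choices[0].message.content)  # reviewed by a human coder, never used as-is
```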

Recommended Citation

Lei, Zonghan; Zakharov, Wei; and Lu, Yung-Hsiang, "Leveraging ChatGPT for Qualitative Data Analysis: A Case Study on Data Management Practices among Computer Vision Scholars" (2024). Libraries Faculty and Staff Presentations. Paper 197. https://docs.lib.purdue.edu/lib_fspres/197


Assessing maps for social topic representation: a qualitative content analysis of maps for sustainable mobility

  • Chenyu Zuo, Mengyi Wei, +2 authors, Liqiu Meng
  • Published in International Journal of… , 26 August 2024
  • DOI: 10.1080/23729333.2024.2392212
  • Fields: Geography, Environmental Science, Sociology


  • Open access
  • Published: 20 August 2024

Exploring factors affecting the acceptance of fall detection technology among older adults and their families: a content analysis

  • Hsin-Hsiung Huang 1 ,
  • Ming-Hao Chang 1 ,
  • Peng-Ting Chen 1 , 2 ,
  • Chih-Lung Lin 3 ,
  • Pi-Shan Sung 4 ,
  • Chien-Hsu Chen 5 &
  • Sheng-Yu Fan 6  

BMC Geriatrics, volume 24, Article number: 694 (2024)


Background

This study conducted in-depth interviews to explore the factors that influence the adoption of fall detection technology among older adults and their families, providing a valuable evaluation framework for healthcare providers in the field of fall detection, with the ultimate goal of assisting older adults immediately and effectively when falls occur.

Methods

The study employed a qualitative approach, utilizing semi-structured interviews with 30 older adults and 29 family members, focusing on their perspectives and expectations of fall detection technology. Purposive sampling ensured representation of older adults with conditions such as Parkinson's disease, dementia, and stroke.

Results

The results reveal key considerations influencing the adoption of fall detection devices, including health factors, reliance on human care, personal comfort, awareness of market alternatives, attitude towards technology, financial concerns, and expectations for fall detection technology.

Conclusions

This study identifies seven key factors influencing the adoption of fall detection technology among older adults and their families. The conclusion highlights the need to address these factors to encourage adoption, advocating for user-centered, safe, and affordable technology. This research provides valuable insights for the development of fall detection technology, aiming to enhance the safety of older adults and reduce the caregiving burden.


Introduction

As the population of older adults grows, an emerging concern revolves around the prevalence of falls. Age-related gait and balance issues are prevalent and significant among older adults, increasing the risk of falls and injuries [ 1 ]. Falls can result in a range of injuries, such as fractures or head injuries [ 2 , 3 ]. Undoubtedly, the aging population faces a substantial risk related to falls, leading to both mortality and morbidity [ 4 ]. In the United States, statistics indicate that in 2018, 27.5% of adults aged 65 and older reported experiencing at least one fall in the previous year [ 5 ]. One out of five falls results in severe injuries, such as fractures or head trauma. These falls incurred a staggering $50 billion in total medical expenses in the US in 2015 [ 6 ]. There has been a concerning rise in the number of falls resulting in injuries over the years. One study revealed that only 39% of older individuals reported experiencing a fall [ 7 ]. Furthermore, research suggests that the impact of falls continues to affect both admitted and non-admitted older adults, leading to a reduced quality of life for up to nine months following the injury [ 8 ]. On the one hand, a study revealed significant concern and fear among individuals regarding the possibility of an older adult experiencing another fall [ 9 ]. On the other hand, time on the ground (TOG) has been identified as a crucial factor affecting prognosis after a fall. TOG refers to the duration an individual remains on the ground after falling. This factor has been specifically examined in dementia patients, as falls frequently occur in memory care facilities [ 10 ]. However, falls occurring within the home environment in old age often signal the presence of severe underlying health conditions, especially without the timely assistance available in settings such as memory care facilities [ 11 ]. Clearly, falls among older adults are an imperative issue that needs to be addressed.

Given the fact that falls pose a significant concern in healthcare and for family caregivers, there is a growing interest in the development of methods to detect falls. Previous studies on fall detection technology explore the use of sensors in detecting fall-related events among older individuals [ 12 , 13 , 14 ]. One study states that fall detection technology covers three dimensions, including wearable devices, camera-based devices, and ambiance devices. It's worth mentioning that many fall detection methods are already mature and commercially available. These include video-based systems using cameras to monitor movements, microwave-based methods with radar technology to detect falls, and acoustic monitoring that analyzes sounds to identify fall events. These technologies provide valuable alternatives and enhancements to sensor-based fall detection systems [ 15 ]. Wearable devices gather data on body posture and movement, utilizing algorithms to determine if a fall has occurred. Cameras strategically positioned enable ongoing monitoring of older adults, with captured data stored for subsequent analysis and reference. Ambience devices are placed in the surroundings, like walls, floors, and beds. Data from sensors are collected, and an algorithm analyzes the input to determine if a fall has occurred [ 14 ]. Another study found that many solutions also use mobile device sensors, particularly accelerometers, for fall detection in older adults [ 13 ]. The above literature review provides examples of fall detection technology application areas that already exist in the market. Therefore, fall detection technology among older adults has the potential to alleviate the societal burden. However, technology-based solutions, despite their potential benefits, often face resistance from older adults, creating barriers to the adoption of health-related information and communication technology. To address these barriers, we conducted a comprehensive literature review, examining the challenges that older adults may encounter when using fall detection technology.

In 1987, Ram introduced an innovation resistance model [ 16 ], aiming to address the reluctance of consumers to adopt new innovations, particularly when these innovations have the potential to disrupt their existing satisfaction levels or clash with their established beliefs. Building upon this framework, Ram and Sheth [ 17 ] (1989) identified a range of obstacles that hinder consumers' willingness to embrace innovations, classifying them into two main categories: functional barriers and psychological barriers. Functional barriers encompass aspects such as usage limitations, value considerations, and risk perceptions. We conducted a literature review on the barriers that older adults may face when using the technology. Among usage barriers, age-related factors, including hearing impairments, reduced dexterity, declining vision, and mild cognitive challenges, can significantly impact the ease with which users adopt new technologies [ 18 , 19 , 20 , 21 , 22 ]. Previous research [ 18 , 23 , 24 , 25 , 26 ] has emphasized that technical unfamiliarity, which includes inadequate technical skills, a lack of understanding about how to use technology, and limited computer literacy, poses significant challenges for older individuals in adopting new technologies. Additionally, a lack of clear and comprehensive instructions has been identified as a common obstacle for older adults in the literature [ 24 , 27 , 28 ]. Given that the value barrier concept suggests innovative products must offer greater value than existing ones to motivate consumers to switch, there is a scarcity of references related to this description. On the other hand, risk barriers encompass concerns about product reliability, including issues like false alarms and inaccurate data, which can be functional risks that older individuals may encounter [ 19 , 27 , 29 , 30 , 31 ]. High costs also contribute to risk barriers. Many older adults are concerned about the price of the product itself [ 22 , 30 , 32 ]. Furthermore, privacy concerns have been raised by many older individuals, adding to the array of issues related to risk barriers [ 18 , 21 , 22 , 33 , 34 ].

Psychological barriers encompass traditional belief barriers and image-related barriers. Older adults also encounter psychological barriers when using information and communication technology. Among older adults, attitude toward technology represents a common traditional belief barrier, reflecting issues related to trust in their ability to manage devices and their reluctance to adopt it [ 18 , 21 , 35 ]. Image barriers involve concerns about a product's appearance [ 27 ], with some older individuals perceiving certain products as designed for younger generations, which may deter their adoption [ 24 ].

While numerous articles have explored the barriers older individuals face in adopting information and communication technology (ICT) [ 18 , 22 , 36 ], it's essential to acknowledge that ICT encompasses a wide range of applications, making it a diverse and multifaceted topic. Within healthcare, various applications exist, which can make it challenging for healthcare providers to develop products that cater specifically to their target users. While the previous studies encompass fall prevalence, economic burden of falls, and the challenges older adults may face when using ICT, this study focuses more on barriers of these technological products used by older adults and their families, providing a valuable evaluation framework that can aid healthcare providers, particularly in the field of fall detection. Through this research, we aim to offer a valuable assessment framework for making the best use of ICT to help older adults immediately and effectively when falls happen.

Study design

In order to address our research inquiry on the perceived challenges associated with the adoption of fall detection technology and expectations of fall detection technology among older adults and their families, we employed a qualitative approach. Our primary sources of data analysis were semi-structured interviews from in-depth interviews. In-depth interviews are widely acknowledged and commonly used in qualitative research [ 37 ]. The semi-structured interview outline utilized in our study provided a well-defined yet flexible and open-ended framework for exploring the topic [ 38 ]. To align with the research objectives, we developed a semi-structured interview outline, including the background of participants, expectations of fall detection technology, and innovation resistance (see Tables 1 and 2 ). Face-to-face interviews were then conducted with older adults along with their families.

Study subject and recruitment

The aim of this study was to understand the perspectives of older adults with chronic disease, who are prone to falls [ 1 ], and their family caregivers, who are the older adults’ spouses or children. Purposive sampling was employed, and specific inclusion criteria were set for the study participants. These criteria consisted of: (1) healthy individuals over the age of 20 who agreed to participate; (2) participants aged 45 or above, including those affected by stroke, frailty, dementia, Parkinson's disease, and other diseases; (3) participants whose condition was stable, able to mobilize, and willing to take part in the study. We included participants younger than 60 years old in our study because they have chronic diseases such as stroke, dementia, and Parkinson's disease. Individuals with these conditions are more prone to falls compared to others. Although these diseases are typically associated with older adults, we believe that younger participants with these conditions are potential future users of fall detection technology. Therefore, our sample includes individuals under 60 years old and their respective family caregivers.

To ensure clear comprehension of the study's purpose, procedures, and potential risks, the study was explained to each participant individually, and oral explanations were provided to ensure their understanding of the research instructions and the terms outlined in the consent form. In total, interviews were conducted with 30 older adults and 29 family members (one family member was unable to attend).

Data collection

The study received ethical approval from the Human Research Ethics Review Committee (case number A-ER-110-211). From September 2022 to April 2023, in-depth interviews were conducted in the NCKU hospital outpatient department using a semi-structured interview outline. Each interview began with the researchers introducing themselves, explaining the study's purpose, the interview procedure, and the participants' rights, and emphasizing privacy regulations, assuring interviewees that their personal data would be treated confidentially. Once participants understood the study's objectives and their rights, they were informed that the interview would be recorded; if they preferred not to be recorded, the investigators respected their decision and took handwritten notes instead. Each interview lasted approximately 40–60 min. After each interview, research assistants transcribed the recordings to create a written transcript. Prior to analysis, the researchers reviewed the verbatim transcripts to ensure accuracy and identify potential errors; if any inconsistencies or missing information were found, another researcher reviewed the audio recording against the transcript to correct any deviations from the original intended meaning.

Data analysis

The qualitative interview data in this study were subjected to content analysis. To structure the analysis and identify themes within the qualitative responses, a four-member panel was established, consisting of one doctoral researcher, one research assistant, and two graduate students; the whole process of data analysis was supervised by the professor. Employing an inductive approach, the four researchers followed a systematic process of dividing the data into distinct units of meaning, condensing these units, assigning codes, categorizing the codes, and identifying overarching themes [39, 40]. The analysis began with the researchers thoroughly reading and rereading the interview data, treating each segment as a unit of analysis. Similar statements within the text were identified and extracted to form meaning units, which were then condensed while preserving their core essence. The condensed meaning units were systematically coded based on their content, and once coding was complete, the codes were organized into meaningful categories. Finally, categories that shared related underlying meanings were grouped together to form overarching themes [41]. This rigorous approach to content analysis enabled a comprehensive exploration and interpretation of the qualitative interview data.
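To make this workflow concrete, the short Python sketch below shows one way the relationship between meaning units, codes, categories, and themes can be represented as data structures. It is a minimal illustration only: the coding in this study was performed manually by the panel, and every example unit, code label, and category name in the sketch is hypothetical.

# Minimal sketch of the inductive coding workflow described above
# (meaning unit -> condensed unit -> code -> category -> theme).
# Purely illustrative: the study's coding was done by hand by a
# four-member panel, and all example content here is hypothetical.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class MeaningUnit:
    interviewee: str   # e.g. "20_family"
    text: str          # verbatim segment extracted from the transcript
    condensed: str     # shortened unit that preserves the core meaning
    code: str          # label assigned to the condensed unit


# Steps 1-3: extract, condense, and code meaning units.
units = [
    MeaningUnit("6_family", "Worried the device worn on the skin could leak electricity.",
                "fear of electrical leakage", "safety_risk"),
    MeaningUnit("22_older adult", "The foreign caregiver always helps me get up.",
                "caregiver already assists", "reliance_on_human_care"),
    MeaningUnit("20_older adult", "It should be relatively lightweight.",
                "wants a light device", "comfort_weight"),
]

# Step 4: group codes into categories (hypothetical mapping).
code_to_category = {
    "safety_risk": "Health considerations",
    "reliance_on_human_care": "Reliance on human care",
    "comfort_weight": "Personal comfort issues",
}

categories = defaultdict(list)
for unit in units:
    categories[code_to_category[unit.code]].append(unit)

# Step 5: group related categories into an overarching theme.
themes = {"Barriers to adoption": ["Health considerations",
                                   "Reliance on human care",
                                   "Personal comfort issues"]}

for theme, category_names in themes.items():
    print(theme)
    for name in category_names:
        print(f"  {name}: {[u.condensed for u in categories[name]]}")

In practice each of these steps was carried out and cross-checked by the researchers themselves; the sketch is only meant to clarify how the levels of abstraction build on one another.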

Respondent characteristics

From September 2022 to April 2023, the study included 30 older adults and 29 family members, all recruited from NCKU Medical Center in Taiwan. Participants are referred to as N_interviewee (older adult/family). The older adults, primarily diagnosed with Parkinson's disease, dementia, or stroke, were selected based on their scores on the Morse Fall Scale [42], the Clinical Frailty Scale [43], and the Barthel Index [44]. The study also documented each older adult's history of fall events and their relationship to the participating family member. Among the 30 older adults, 19 had experience using smartphones, while the remaining 11 did not (Table 3).

Based on the interviews conducted with older adults and their families, we identified the primary considerations influencing the decision to use wearable fall-detection devices (as detailed in Fig. 1; Appendix). These considerations span various aspects, including (1) health considerations, (2) reliance on human care, (3) personal comfort issues, (4) market alternatives, (5) attitude towards technology, (6) financial concerns, and (7) expectations for fall detection technology. The main factors are described below.

Fig. 1 Factors influencing adoption of fall detection technology in older adults and families

Health considerations

Concerns about potential health risks associated with wearable fall-detection devices emerged as a significant barrier to their adoption. Older adults and their families expressed apprehension about adverse effects such as dizziness, skin irritation, electrical leakage, and electromagnetic radiation. These concerns were particularly pronounced among older individuals, who tend to be more cautious about new technologies that interact directly with their bodies.

“Yeah, older adults won’t wear it if it's uncomfortable; it's just about avoiding dizziness.” (8_family)

For instance, some family members voiced worries about possible radiation from these devices. Others were concerned about the risk of skin allergies and electrical leakage due to the devices' close contact with the skin. These apprehensions point to a broader fear of unknown health impacts, which can deter older adults from embracing new technological solutions for fall detection.

“Well, just now, it's just that I've heard that there might be some concerns about it. Because it's worn on the skin, so there's a fear of it having some impact on their skin. Also, there's the question of whether it might have electrical leakage.” (6_family)
“Perhaps, he has some kind of fear, like he might think that this thing could cause harm to the body? Or maybe he's worried about things like skin allergies or getting an electric shock, and so on.” (20_family)

Reliance on human care

Despite the potential benefits of fall-detection technology, many participants emphasized a strong preference for human care and assistance. Most believed that hiring caregivers or relying on family members is a more reliable and comforting approach. This trust in human assistance is deeply rooted and may significantly hinder the adoption of technological solutions.

Several older adults indicated that they felt no need for fall-detection devices because they were constantly accompanied by attentive family members or professional caregivers. For instance, some older adults mentioned that their spouses or foreign domestic workers were always available to assist them with daily activities, rendering the technology unnecessary. Others noted that their children, who are medical professionals, provided adequate care, further diminishing the perceived need for such devices.

Additionally, the cultural context plays a significant role in this reliance on human care. The close-knit family structure and the high value placed on personal interaction and caregiving contribute to the resistance against technological interventions. Many participants expressed a preference for investing in human care over spending money on devices, indicating that they view personal care as more effective and compassionate.

“Most people now hire foreign domestic workers to provide care. If he needs to get up to go to the bathroom, he'll definitely inform the foreign caregiver, saying, "I need this, I need that, please help me up.” (22_older adults)
“So instead of this, we might end up hiring someone to take care of him or considering long-term care services. Because rather than spending that money, it's the same as having someone look after you 24 h a day.” (2_family)

In summary, both health considerations and a strong reliance on human care are critical factors influencing the adoption of wearable fall-detection devices among older adults. Addressing these concerns through better education about the safety and benefits of these technologies, as well as integrating them into existing caregiving practices, may help in overcoming these barriers.

Personal comfort issues

The comfort and practicality of wearable devices are critical concerns for potential users, significantly impacting their adoption. Key issues identified include the weight and physical discomfort of these devices. Users are generally inclined to avoid technologies that cause inconvenience or discomfort in their daily lives, highlighting the necessity for user-friendly and ergonomic designs.

Participants indicated that the weight of the devices is a primary concern; many stated a preference for lightweight options. Physical discomfort, such as restrictions in movement, emerged as a significant factor. For example, older adults expressed concerns about devices causing discomfort when attached to the knee or foot, which could interfere with their mobility and overall comfort. There is a clear preference for devices that are unobtrusive and do not hinder daily activities.

“Fastened around the knee, I can't do it now. I'm afraid I'll get stuck when I'm walking.” (1_older adults)
“I care about the weight. It shouldn't be too heavy; it should be relatively lightweight.” (20_older adults)

Market alternatives

The preference for traditional fall prevention tools, such as canes and emergency buttons, was evident among many participants. These established solutions are familiar and trusted, making them more appealing than newer technological alternatives. Additionally, some participants believed that canes provide proactive assistance to prevent falls, whereas fall detection technology only alerts family members after a fall has occurred, which does not prevent the incident itself.

Participants noted that they already possess reliable fall prevention tools at home, such as emergency buttons, which they trust for their effectiveness in emergencies. The familiarity and simplicity of these tools make them a preferred choice over fall detection technology. Additionally, canes with stable bases are viewed as effective in ensuring personal safety and preventing falls, further reducing the perceived need for fall detection technology. To compete with traditional methods, fall-detection technology must not only match but surpass the reliability and convenience of existing tools.

“I currently have an emergency button installed in my home. If I have an accident, I can just press that button, and the security company will come to assist me.” (19_older adults)
“Because he just took the crutch and walked with it. Yes, if he wears this, he will still fall.” (8_family)

Attitude towards technology

A prevailing theme in the interviews is resistance to change, with some older individuals expressing a reluctance to adapt to new technologies. This resistance is often rooted in perceptions of inconvenience, unfamiliarity, and a general aversion to having devices attached to their bodies. Overcoming this resistance will require addressing user concerns and providing user-friendly solutions.

Older individuals frequently described new devices as uncomfortable and cumbersome. For example, one older adult reported feeling "strange" and "not used to it" when considering wearing a fall-detection device. Others expressed outright resistance, emphasizing a strong preference for maintaining their current routines without new technological elements. This sentiment is further compounded by a dislike of the perceived hassle of wearing or carrying additional items, such as glasses or wearable devices.

“It's a strange feeling, doesn't feel like it, not used to it, feels weird.” (16_older adults)
“I'm just too lazy to wear glasses. We usually don't like having things hanging here and there.” (24_older adults)
“And to be honest, older people might have a greater psychological burden. If you ask them to carry something every day, they might not like it or feel that it restricts their mobility, and they might not want it.” (20_family)

Financial concerns

The cost of fall-detection devices is a significant consideration for many older adults and their families. Affordability is a key factor in their decision-making process, with financial capability greatly impacting the willingness to adopt new technology.

Many participants highlighted the financial burden that expensive fall-detection devices could impose. For families already managing substantial living expenses, the additional cost of advanced technology may be prohibitive. This financial strain is particularly acute for those on fixed incomes or with limited financial resources.

“I don’t want this if it’s too much money.” (9_older adults)
“I think financial capability comes first. If there are no issues with economic conditions, you have to make sure they have the financial ability to afford it. That's the main issue.” (5_family)

Expectations for fall detection technology

Participants highlighted several key expectations for fall detection technology which, if met, could facilitate its adoption. These include remote notifications, physical support, real-time status updates on the older adult, and immediate assistance functions. Meeting these expectations can enhance the perceived value of fall detection technology and increase users' willingness to adopt it.

A major expectation is the ability of the technology to provide real-time notifications to caregivers or family members when a fall occurs. Participants expressed a desire for systems that could alert them regardless of their location, ensuring timely intervention. For example, one family member emphasized the need to be notified even when far away from the older adult, illustrating the importance of reliable, far-reaching communication capabilities.

Another expectation is for the technology to offer some form of physical support to prevent falls before they happen. Participants envisioned devices that could sense an impending fall and provide immediate physical assistance to prevent the incident. This proactive approach would not only enhance safety but also provide peace of mind for both users and their caregivers.

Real-time status updates and the ability to monitor older adults' condition remotely were also highly valued. For instance, access to visual data or images of the older adult's home environment was seen as a way to increase the sense of security and ensure timely responses to any issues. Comprehensive data on the older adult's health and activity levels could also help caregivers manage and understand their overall condition.

“If we can assist her just before she falls, that would be the ideal scenario. Being able to support her right before the fall occurs.” (1_family)
“So, if we talk about it in terms of shoes, if it can sense that a person might slip or fall, can it prevent them from falling?” (2_family)
“It might be like this. If he wears it and triggers the alarm when he's far away, like what I just mentioned, if he's in Xitou and triggers the alarm, we're in Tainan.” (6_family)
“Data, as I just mentioned, is about being able to have a more immediate and clear understanding of the progression of the condition. And assuming that there is also the capability to capture images or, in a way, for me to see their condition at home, this might make me feel more at ease.” (10_family)
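Taken together, these expectations describe a detect-then-alert flow: the device recognises a probable fall and immediately notifies family members wherever they are, ideally with enough context to act on. As a purely hypothetical sketch, the Python fragment below illustrates that flow with a toy threshold check and a notification stub; the threshold value, contact list, and notify() function are assumptions for illustration and do not describe any device discussed by the participants or evaluated in this study.

import math
from datetime import datetime, timezone

# Hypothetical acceleration magnitude (in g) treated as a hard impact.
IMPACT_THRESHOLD_G = 2.5


def detect_fall(sample_xyz) -> bool:
    """Return True when the acceleration magnitude exceeds the impact threshold."""
    magnitude = math.sqrt(sum(a * a for a in sample_xyz))
    return magnitude >= IMPACT_THRESHOLD_G


def notify(contacts, wearer: str) -> None:
    """Stand-in for a push/SMS gateway that reaches family members anywhere."""
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    for contact in contacts:
        print(f"[{stamp}] ALERT to {contact}: possible fall detected for {wearer}")


# One hypothetical accelerometer sample (x, y, z in g) from a wearable.
sample = (0.4, -2.9, 1.1)
if detect_fall(sample):
    notify(["daughter_phone", "son_phone"], wearer="older adult at home")

Real products combine richer signals (orientation, post-impact inactivity, user confirmation) and more robust models; the point of the sketch is simply how closely the desired user experience maps onto a short detect-and-notify loop.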

The adoption of fall-detection wearable devices among older individuals and their families is influenced by a complex interplay of factors, as revealed by the findings of this study, and understanding these factors is essential for the successful integration of such technologies into the lives of older adults. Participants' concerns about safety issues, such as skin irritation, dizziness, electrical leakage, and radiation, may stem from a heightened awareness of the potential risks associated with electrical products, especially wearable devices. These concerns can deter older adults from embracing wearable information and communications technology, implying that safety issues are a potential barrier. Similarly, another study has identified safety factors, including concerns related to radiation and the use of electricity [45]. To address this barrier, device designers should prioritize safety and reduce any safety-related risks; such considerations can help alleviate concerns and enhance users' confidence.

Another theme is the preference for human care over technology, with many participants believing that caregivers or family members provide more reliable support. One review study [30] emphasizes that companionship plays a crucial role as a source of support and presence in one's life. The preference for human care in looking after older adults suggests that fall-detection devices should be viewed as complementary tools rather than replacements for caregivers. This aligns with concerns about losing social connections and experiencing loneliness [46]. In other words, while technology can aid in ensuring safety, the emotional and social aspects provided by human caregivers are irreplaceable. This is an important finding: emphasizing this complementary role may lower the barriers to using fall detection technology among older adults.

Issues related to device comfort and practicality were also highlighted as significant factors influencing adoption, with stakeholders raising concerns about device weight and physical discomfort. User-friendly design is essential to mitigate these concerns [47]. Designers should aim to create lightweight, comfortable devices that integrate seamlessly into daily life, or develop fall detection technology that does not require older adults to wear a device at all. In addition, participants expressed a preference for traditional fall prevention tools, such as canes or emergency buttons, citing familiarity and trust in these established solutions. Several participants felt that a cane is more beneficial than a fall detection device because a cane supports the older adult and reduces the risk of falling, whereas they believed fall detection devices cannot effectively prevent falls. This expectation that a product should prevent falls resembles the concept behind fall prediction systems [48]. On the one hand, this finding suggests that fall detection technology must demonstrate its superiority over existing options or complement the characteristics of existing products. On the other hand, perceptions of inconvenience, unfamiliarity, and embarrassment were common attitudes among older adults [19, 32, 47]; in our study, some participants likewise described fall detection devices as troublesome. We therefore suggest making fall detection devices simple and unobtrusive to use.

The cost of fall detection devices emerged as a significant consideration for both older adults and their families. Affordability is a key factor in their decision-making process [22, 27, 30, 32, 47], highlighting the importance of exploring options for making these devices more accessible, for example through insurance coverage or subsidies. In a related study of older adults who rely on wheelchairs or scooters, which investigated the preferred specifications, perceived ease of use, and perceived usefulness of an automated fall detection device, participants expressed a belief in the utility and user-friendliness of such a device; preferred features included wireless charging, a wristwatch-like design, the option to change the emergency contact person in case of a fall, and the ability to deactivate notifications in case of false alarms [49]. In our study, participants emphasized the importance of comprehensive fall detection solutions, including remote notifications, real-time status updates on the older adult, and immediate assistance functions. The core function of fall detection technology is thus oriented toward notifying families so that they can assist immediately. Prioritizing devices that both detect falls and provide added value through additional features would therefore benefit overall safety and well-being.

Limitations

Although this study contributes to the field of fall detection technology, it has several limitations. First, the older adults were recruited from a neurology outpatient clinic, which limits the findings to this specific group and decreases their generalizability. Second, the findings are based on the opinions and experiences of the respondents and may not be fully representative of all potential users of fall detection technology; the experiences and preferences of non-respondents remain unknown and might differ from those who participated. Third, the study involved respondents with varying levels of fall risk, as they suffered from different health conditions such as acute stroke, mild to moderate dementia, impaired cognitive function, and poor balance and gait. Because fall risk factors can significantly influence the perception and acceptance of fall detection technology, the results may not fully capture the nuances of specific subgroups within the older population. Fourth, the in-depth, face-to-face interviews were conducted in the outpatient area of the hospital. Although none of the interviewees discontinued the interview due to privacy concerns, the potential influence of the setting should be considered: the outpatient waiting area is an open, public space, and participants may have been conscious of their surroundings and the presence of other individuals, possibly limiting the openness of their responses. Finally, the study focused on a specific population in Taiwan, and the findings may be influenced by cultural and regional factors unique to this context; cultural differences and healthcare practices may lead to varying perspectives on fall detection technology in other regions or countries.

Conclusion and suggestions

In this study, we examined the factors influencing the adoption of wearable fall-detection devices among older adults and their caregivers. We identified several key considerations: concerns about potential health risks associated with these devices, the preference for human care over technology, the importance of device comfort and practicality, market alternatives, cost, attitudes towards technology, and expectations of the technology. Based on our evaluation framework, it is essential to consider safety, usability, affordability, and complementarity with human care when developing fall detection products. In addition, meeting user expectations for comprehensive features such as remote notifications and immediate assistance functions can further enhance adoption. Addressing these factors and challenges is expected to enhance the safety and quality of life of older adults, thereby relieving the burden of care.

Availability of data and materials

Data is provided within the manuscript.

Abbreviations

ICT: Information and communications technology

MCI: Mild cognitive impairment

HTN: Hypertension

References

1. Viswanathan A, Sudarsky L. Balance and gait problems in the elderly. Handb Clin Neurol. 2012;103:623–34.
2. Institute of Medicine (US) Division of Health Promotion and Disease Prevention; Berg RL, Cassells JS, eds. The second fifty years: promoting health and preventing disability. Washington (DC): National Academies Press (US); 1992.
3. Centers for Disease Control and Prevention. Older adult fall prevention. https://www.cdc.gov/falls/data-research/facts-stats/?CDC_AAref_Val=https://www.cdc.gov/falls/facts.html. Accessed 22 Nov 2023.
4. Ambrose AF, Paul G, Hausdorff JM. Risk factors for falls among older adults: a review of the literature. Maturitas. 2013;75(1):51–61.
5. Moreland B, Kakara R, Henry A. Trends in nonfatal falls and fall-related injuries among adults aged ≥65 years - United States, 2012–2018. MMWR Morb Mortal Wkly Rep. 2020;69(27):875–81.
6. Florence CS, Bergen G, Atherly A, Burns E, Stevens J, Drake C. Medical costs of fatal and nonfatal falls in older adults. J Am Geriatr Soc. 2018;66(4):693–8.
7. Boongird C, Ross R. Views and expectations of community-dwelling Thai elderly in reporting falls to their primary care physicians. J Appl Gerontol. 2017;36(4):480–98.
8. Hartholt KA, van Beeck EF, Polinder S, van der Velde N, van Lieshout EM, Panneman MJ, van der Cammen TJ, Patka P. Societal consequences of falls in the older population: injuries, healthcare costs, and long-term reduced quality of life. J Trauma. 2011;71(3):748–53.
9. Dhar M, Kaeley N, Mahala P, Saxena V, Pathania M. The prevalence and associated risk factors of fear of fall in the elderly: a hospital-based, cross-sectional study. Cureus. 2022;14(3):e23479.
10. Bayen E, Nickels S, Xiong G, Jacquemot J, Subramaniam R, Agrawal P, Hemraj R, Bayen A, Miller BL, Netscher G. Reduction of time on the ground related to real-time video detection of falls in memory care facilities: observational study. J Med Internet Res. 2021;23(6):e17551.
11. Wild D, Nayak US, Isaacs B. How dangerous are falls in old people at home? Br Med J (Clin Res Ed). 1981;282(6260):266–8.
12. Lapierre N, Neubauer N, Miguel-Cruz A, Rios Rincon A, Liu L, Rousseau J. The state of knowledge on technologies and their use for fall detection: a scoping review [published correction appears in Int J Med Inform. 2018 Aug;116:9]. Int J Med Inform. 2018;111:58–71.
13. Mrozek D, Koczur A, Małysiak-Mrozek B. Fall detection in older adults with mobile IoT devices and machine learning in the cloud and on the edge. Inf Sci. 2020;537:132–47.
14. Tanwar R, Nandal N, Zamani M, Manaf AA. Pathway of trends and technologies in fall detection: a systematic review. Healthcare (Basel). 2022;10(1):172.
15. Newaz NT, Hanada E. The methods of fall detection: a literature review. Sensors (Basel). 2023;23(11):5212. https://doi.org/10.3390/s23115212.
16. Ram S. A model of innovation resistance. Adv Consum Res. 1987;14(1):208–12.
17. Ram S, Sheth JN. Consumer resistance to innovations: the marketing problem and its solutions. J Consum Mark. 1989;6(2):5–14.
18. Fischer SH, David D, Crotty BH, Dierks M, Safran C. Acceptance and use of health information technology by community-dwelling elders. Int J Med Inform. 2014;83(9):624–35.
19. Demiris G, Chaudhuri S, Thompson HJ. Older adults’ experience with a novel fall detection device. Telemed J E Health. 2016;22(9):726–32.
20. Guzman-Parra J, Barnestein-Fonseca P, Guerrero-Pertiñez G, Anderberg P, Jimenez-Fernandez L, Valero-Moreno E, Goodman-Casanova JM, Cuesta-Vargas A, Garolera M, Quintana M, García-Betances RI, Lemmens E, Sanmartin Berglund J, Mayoral-Cleries F. Attitudes and use of information and communication technologies in older adults with mild cognitive impairment or early stages of dementia and their caregivers: cross-sectional study. J Med Internet Res. 2020;22(6):e17253.
21. Wilson J, Heinsch M, Betts D, Booth D, Kay-Lambkin F. Barriers and facilitators to the use of e-health by older adults: a scoping review. BMC Public Health. 2021;21(1):1556.
22. Zaman SB, Khan RK, Evans RG, Thrift AG, Maddison R, Islam SMS. Exploring barriers to and enablers of the adoption of information and communication technology for the care of older adults with chronic diseases: scoping review. JMIR Aging. 2022;5(1):e25251.
23. Saracchini R, Catalina C, Bordoni L. A mobile augmented reality assistive technology for the elderly. Comunicar. 2015;23:23.
24. Mercer K, Giangregorio L, Schneider E, Chilana P, Li M, Grindrod K. Acceptance of commercially available wearable activity trackers among adults aged over 50 and with chronic illness: a mixed-methods evaluation. JMIR Mhealth Uhealth. 2016;4(1):e7.
25. Jain SR, Sui Y, Ng CH, Chen ZX, Goh LH, Shorey S. Patients’ and healthcare professionals’ perspectives towards technology-assisted diabetes self-management education. A qualitative systematic review. PLoS One. 2020;15(8):e0237647.
26. O’Brien J, Mason A, Cassarino M, Chan J, Setti A. Older women’s experiences of a community-led walking programme using activity trackers. Int J Environ Res Public Health. 2021;18(18):9818.
27. Kononova A, Li L, Kamp K, Bowen M, Rikard RV, Cotten S, Peng W. The use of wearable activity trackers among older adults: focus group study of tracker perceptions, motivators, and barriers in the maintenance stage of behavior change. JMIR Mhealth Uhealth. 2019;7(4):e9832.
28. Jiwani R, Dennis B, Bess C, Monk S, Meyer K, Wang J, Espinoza S. Assessing acceptability and patient experience of a behavioral lifestyle intervention using Fitbit technology in older adults to manage type 2 diabetes amid COVID-19 pandemic: a focus group study. Geriatr Nurs. 2021;42(1):57–64.
29. Ehn M, Eriksson LC, Åkerberg N, Johansson AC. Activity monitors as support for older persons’ physical activity in daily life: qualitative study of the users’ experiences. JMIR Mhealth Uhealth. 2018;6(2):e34.
30. Tsertsidis A, Kolkowska E, Hedström K. Factors influencing seniors’ acceptance of technology for ageing in place in the post-implementation stage: a literature review. Int J Med Inform. 2019;129:324–33.
31. Moore K, O’Shea E, Kenny L, Barton J, Tedesco S, Sica M, Crowe C, Alamäki A, Condell J, Nordström A, Timmons S. Older adults’ experiences with using wearable devices: qualitative systematic review and meta-synthesis. JMIR Mhealth Uhealth. 2021;9(6):e23832.
32. Chiu CJ, Liu CW. Understanding older adult’s technology adoption and withdrawal for elderly care and education: mixed method analysis from national survey. J Med Internet Res. 2017;19(11):e374.
33. Peek ST, Luijkx KG, Rijnaard MD, Nieboer ME, van der Voort CS, Aarts S, van Hoof J, Vrijhoef HJ, Wouters EJ. Older adults’ reasons for using technology while aging in place. Gerontology. 2016;62(2):226–37.
34. Perotti L, Stamm O, Mesletzky L, Vorwerg S, Fournelle M, Müller-Werdan U. Needs and attitudes of older chronic back pain patients towards a wearable for ultrasound biofeedback during stabilization exercises: a qualitative analysis. Int J Environ Res Public Health. 2023;20(6):4927.
35. Abouzahra M, Ghasemaghaei M. The antecedents and results of seniors’ use of activity tracking wearable devices. Health Policy Technol. 2020;9(2):213–7.
36. Finkelstein R, Wu Y, Brennan-Ing M. Older adults’ experiences with using information and communication technology and tech support services in New York City: findings and recommendations for post-pandemic digital pedagogy for older adults. Front Psychol. 2023;14:1129512.
37. Dickinson A, Horton K, Machen I, Bunn F, Cove J, Jain D, Maddex T. The role of health professionals in promoting the uptake of fall prevention interventions: a qualitative study of older people’s views. Age Ageing. 2011;40(6):724–30.
38. McIntosh MJ, Morse JM. Situating and constructing diversity in semi-structured interviews. Glob Qual Nurs Res. 2015;2:2333393615597674.
39. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24(2):105–12.
40. Hsu YH, Lee TH, Chung KP, Tung YC. Determining the factors influencing the selection of post-acute care models by patients and their families: a qualitative content analysis. BMC Geriatr. 2023;23(1):179.
41. Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62(1):107–15.
42. Morse JM, Black C, Oberle K, Donahue P. A prospective study to identify the fall-prone patient. Soc Sci Med. 1989;28(1):81–6.
43. Rockwood K, Theou O. Using the clinical frailty scale in allocating scarce health care resources. Can Geriatr J. 2020;23(3):210.
44. Mahoney FI, Barthel DW. Functional evaluation: the Barthel Index. Md State Med J. 1965;14:61–5.
45. Felber NA, Lipworth W, Tian YJA, Roulet Schwab D, Wangmo T. Informing existing technology acceptance models: a qualitative study with older persons and caregivers. Eur J Ageing. 2024;21(1):12.
46. Tian YJA, Felber NA, Pageau F, Schwab DR, Wangmo T. Benefits and barriers associated with the use of smart home health technologies in the care of older persons: a systematic review. BMC Geriatr. 2024;24(1):152.
47. Puri A, Kim B, Nguyen O, Stolee P, Tung J, Lee J. User acceptance of wrist-worn activity trackers among community-dwelling older adults: mixed method study. JMIR Mhealth Uhealth. 2017;5(11):e173.
48. El-Bendary N, Tan Q, Pivot F, Lam A. Fall detection and prevention for the elderly: a review of trends and challenges. Int J Smart Sensing Intellig Syst. 2013;6:1230–66.
49. Rice LA, Fliflet A, Frechette M, Brokenshire R, Abou L, Presti P, Mahajan H, Sosnoff J, Rogers WA. Insights on an automated fall detection device designed for older adult wheelchair and scooter users: a qualitative study. Disabil Health J. 2022;15(1S):101207.


Acknowledgements

This research was made possible by the support and assistance of a number of people whom we would like to thank. We are very grateful to the anonymous referees for their valuable comments and constructive suggestions on the interviews and coding, and we thank all the respondents for their valuable opinions. This research was supported by the Ministry of Technology and Science under grant numbers NSTC 112-2628-E-006-008-MY3 and NSTC 112-2627-M-006-005, and by the Medical Device Innovation Center (MDIC), National Cheng Kung University (NCKU), from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MoE) in Taiwan. This research was approved by the local Institutional Review Board of NCKUH (IRB Approval No. A-ER-110-211).

Funding

This research was supported by the National Science Council under grant numbers NSTC 112-2628-E-006-008-MY3 and NSTC 112-2627-M-006-005.

Author information

Authors and affiliations

Department of Biomedical Engineering, National Cheng Kung University, Tainan, Taiwan, ROC

Hsin-Hsiung Huang, Ming-Hao Chang & Peng-Ting Chen

Medical Device Innovation Center, National Cheng Kung University, No.138, Shengli Rd., North District, Tainan City, 704, Taiwan, ROC

Peng-Ting Chen

Department of Electrical Engineering, National Cheng Kung University, Tainan, Taiwan, ROC

Chih-Lung Lin

Department of Neurology, National Cheng Kung University Hospital, Tainan, Taiwan, ROC

Pi-Shan Sung

Department of Industrial Design, National Cheng Kung University, Tainan, Taiwan, ROC

Chien-Hsu Chen

Institute of Gerontology, National Cheng Kung University, Tainan, Taiwan, ROC

Sheng-Yu Fan


Contributions

Hsin-Hsiung Huang contributed significantly as the main interviewer, played a key role in coding, and contributed to the conception of the article.  Ming-Hao Chang participated in designing interview questions, coding, and ensuring the quality of language in the article.  Peng-Ting Chen assisted in conceptualizing research directions, overseeing the interview, coding, and the writing process, and shaped the article's concept.  Chih-Lung Lin, Pi-Shan Sung, Chien-Hsu Chen, and Sheng-Yu Fan assisted in conceptualizing research directions.

Authors' information

Hsin-Hsiung Huang is pursuing his Ph.D. in the Department of Biomedical Engineering at National Cheng Kung University, Taiwan. His main research interests lie in medical device commercialization in the elderly market.

Ming-Hao Chang is pursuing his Master's degree in the Department of Biomedical Engineering at National Cheng Kung University, Taiwan. His main research interests lie in medical device commercialization, especially in startups.

Professor Peng-Ting Chen received her Ph.D. in Technology Management from National Chiao-Tung University, Taiwan. She is a professor in the Department of Biomedical Engineering at National Cheng Kung University, Taiwan. Her current research interests include biomedical device-related business planning, strategies, and policies.

Corresponding author

Correspondence to Peng-Ting Chen .

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Institutional Review Board of NCKUH (IRB number: A-ER-110–211) before commencement. Informed consent was obtained from all subjects.

Consent for publication

 Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Huang, HH., Chang, MH., Chen, PT. et al. Exploring factors affecting the acceptance of fall detection technology among older adults and their families: a content analysis. BMC Geriatr 24 , 694 (2024). https://doi.org/10.1186/s12877-024-05262-0


Received : 06 March 2024

Accepted : 30 July 2024

Published : 20 August 2024

DOI : https://doi.org/10.1186/s12877-024-05262-0


Keywords

  • Information and Communications Technology (ICT)
  • Innovation resistance
  • Content analysis

BMC Geriatrics

ISSN: 1471-2318
