Iran J Public Health, 49(5), May 2020

What Is Analysis of Covariance (ANCOVA) and How to Correctly Report Its Results in Medical Research?

Alireza KHAMMAR

1. Department of Occupational Health Engineering, School of Health, Zabol University of Medical Sciences, Zabol, Iran

Mohammad YARAHMADI

2. Razi Herbal Medicines Research Center, Lorestan University of Medical Sciences, Khoramabad, Iran

Farzan MADADIZADEH

3. Noncommunicable Diseases Research Center, Fasa University of Medical Sciences, Fasa, Iran

4. Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran

Dear Editor-in-Chief

Sometimes in medical research, a variable that is not among the main research variables may affect the dependent variable and its relation with the independent variable; if identified, it can be included in the model and its linear effect controlled. Because it is neither the dependent nor the independent variable, this type of variable is known as a covariate ( 1 ).

To control the effect of a covariate, not only are the changes in the variance of the dependent variable examined (ANOVA), but the relationship between the dependent variable and the covariate at the different levels of a qualitative variable is also analyzed (regression) ( 2 ). The statistical method that combines ANOVA and regression to adjust for the linear effect of a covariate and give a clearer picture is called the analysis of covariance (ANCOVA) ( 1 ).

ANCOVA identifies the variation in the dependent variable that is due to changes in the covariate and separates it from the variation due to changes in the levels of the qualitative variable; it therefore reduces the unexplained variation in the dependent variable (error), yields cleaner results, and increases analytical power ( 3 ).

For example, consider examining how well students in different groups learn medical course material. Prior familiarity of some students with the topics leads to higher learning scores, so prior familiarity is a covariate. Another example of a covariate is a pretest score in an interventional study, which needs to be identified, measured, and controlled before the intervention.

For more information on the use of the ANCOVA methodology and the appropriate way of reporting the results, note the following points:

  • The dependent variable must be a continuous quantitative variable and have a normal distribution.
  • The covariate must be a continuous quantitative variable ( 2 ).
  • The levels of the qualitative variable must be independent ( 2 ).
  • There should be a linear relationship between the dependent variable and the covariate ( 3 ). If the relationship is non-linear, the multivariate ANOVA method can be used by treating the covariate as a secondary dependent variable; another option is to apply a linearizing transformation and run ANCOVA on the transformed variables ( 3 ).
  • The sign (+ or −) and size of the correlation coefficient between the dependent variable and covariate should be the same at each level of the qualitative variable ( 1 ). In other words, if we draw a regression line for the relationship between the dependent variable and covariate at each level of the qualitative variable, the slope of the regression lines should be the same at all levels (Homogeneity of regression slopes) ( 2 ).
  • The independent variable should have no relationship with the covariate and should not affect the relationship between the dependent variable and the covariate ( 4 , 5 ).

How to correctly report ANCOVA results:

  • Report the correlation coefficient and its P-value for the relationship between the dependent variable and the covariate ( 4 ).
  • Report the non-significant relationship between the covariate and the independent variable and, consequently, the equality of the slopes of the regression lines.
  • Provide a summary table of the means of the dependent variable before and after adjustment for the effect of the covariate, reporting the P-value of the comparison of means separately.

In short, ANCOVA is a type of ANOVA that controls for the linear effect of a covariate by using regression analysis.

Hopefully, by considering the notes above, researchers will not only become more familiar with the ANCOVA method but will also strengthen medical studies by reporting the results of statistical methods appropriately.

Conflict of interest

The authors declare that there is no conflict of interest.


Statistics By Jim

Making statistics intuitive

ANCOVA: Uses, Assumptions & Example

By Jim Frost

What is ANCOVA?

ANCOVA, or the analysis of covariance, is a powerful statistical method that analyzes the differences between three or more group means while controlling for the effects of at least one continuous covariate.

ANCOVA is a potent tool because it adjusts for the effects of covariates in the model. By isolating the effect of the categorical independent variable on the dependent variable, researchers can draw more accurate and reliable conclusions from their data.

In this post, learn about ANCOVA vs ANOVA, how it works, the benefits it provides, and its assumptions. Plus, we’ll work through an ANCOVA example and interpret it!

How are ANCOVA and ANOVA different?

ANCOVA is an extension of ANOVA. While ANOVA can compare the means of three or more groups, it cannot control for covariates. ANCOVA builds on ANOVA by introducing one or more covariates into the model.

In an ANCOVA model, you must specify the dependent variable (continuous outcome), at least one categorical variable that defines the comparison groups, and a covariate.

ANCOVA is simply an ANOVA model that includes at least one covariate.

Covariates are continuous independent variables that influence the dependent variable but are not of primary interest to the study. Additionally, the experimenters do not control the covariates. Instead, they only observe and record their values. In contrast, they do control the categorical factors and set them at specific values for the study.

Researchers refer to covariates as nuisance variables because they:

  • Are uncontrolled conditions in the experiment.
  • Can influence the outcome.

This unfortunate combination of attributes allows covariates to introduce both imprecision and bias into the results. You can see why they’re a nuisance!

Even though the researchers aren’t interested in these variables, they must find a way to deal with them. That’s where ANCOVA comes in!

Learn more about Independent and Dependent Variables and  Covariates: Definition & Uses .

Two-Fold Benefits for Analysis of Covariance

Fortunately, you can use an ANCOVA model to control covariates statistically. Simply put, ANCOVA removes the effects of the covariates on the dependent variable, allowing for a more accurate assessment of the relationship between the categorical factors and the outcome.

ANCOVA does the following:

  • Increases statistical power and precision by accounting for some of the within-group variability.
  • Removes confounder bias by adjusting for preexisting differences between groups.

Learn more about Statistical Power and Confounder Bias .

Let’s think through an ANCOVA example to understand the potential improvements of using this method. Then we’ll perform the analysis.

Suppose we want to determine which of three teaching methods is the best by comparing their mean test scores. We can include a pretest score as a covariate to account for participants having different starting skill levels.

How does ANCOVA use the covariate to improve the results relative to ANOVA for this example?

Power and Precision Increases

Individual differences in academic ability can significantly impact the outcome. In fact, even within a single teaching method group, there can be substantial variation in participants’ skills. This unexplained variation (error) can obscure the true impact of each method.

By including pretest scores as a covariate, the ANCOVA model can adjust for the initial skill level of each participant. This adjustment allows for a clearer and more accurate understanding of whether a participant’s success on the final test was due to the teaching method or their preexisting ability.

In the context of the F-test’s calculations, ANCOVA explains a portion of the within-group variability for each teaching method by attributing it to the pretest score. Using the covariate to reduce the error, ANCOVA can better detect differences between teaching methods (power) and provide greater precision when estimating mean test score differences between the groups (effect sizes).

Learn more about How ANOVA’s F-test Works  and Effect Sizes in Statistics .

Bias Reduction

If the groups have preexisting differences in ability, that can bias the results at the end. Imagine one group starting with more high achievers than the other groups. At the end of the study, the average test score for that group will be higher than warranted due to the early lead in skills rather than the teaching method itself.

ANCOVA models adjust for preexisting differences between the groups, creating a level playing field for unbiased comparisons of the teaching methods.

ANCOVA Example

Let’s perform an ANCOVA analysis! I’ll stick with the teaching method example we’ve been working with, but I’ll use only two groups for simplicity, methods A and B. Download the CSV dataset to try it yourself: ANCOVA .

In the model, I’ve entered Posttest Score as the dependent variable (continuous outcome), Teaching Method as the categorical factor, and Pretest Score as the covariate.

In the first set of output, we see that Teaching Method has a very low p-value . The mean difference between teaching methods is statistically significant!

The Pretest Score covariate is significant. It, too, has a relationship with the dependent variable. If it wasn’t significant, consider removing it, making it an ANOVA model.

ANCOVA statistical output.

So, how do the teaching methods compare? In the coefficient output below, Method B has a coefficient of 10, indicating its group mean is 10 points higher than Method A. That’s our estimated effect size!

Table of coefficients for our ANCOVA.

It’s always beneficial to see your data to gain a better understanding. Here’s our data with the regression line for each group.

Scatterplot with regression lines displaying our data.

The vertical shift between the two regression lines is the mean difference between the two groups, which is 10 points for our example. ANCOVA determines whether this line shift is statistically significant. Notice how the lines are parallel? More on that in the assumptions section! Learn more about Regression Lines and Their Equations .

ANCOVA Assumptions

ANCOVA is a linear model. Consequently, it has the same assumptions as ordinary least squares regression—with an addition (kind of).

Here’s a simplified list of the ANCOVA assumptions:

  • Linear relationships adequately explain the outcomes.
  • Independent variables are not correlated with the error term.
  • Observations of the error term are uncorrelated with each other.
  • Error term has a constant variance.
  • No perfect correlation between independent variables.
  • Error term follows a normal distribution.

If your model doesn’t satisfy these assumptions, the results might be untrustworthy. To learn about them in more detail, read my post Ordinary Least Squares Assumptions .

Homogeneity of Slopes Assumption

ANCOVA also has the homogeneity of regression slopes assumption. I don’t classify this issue as an assumption because it’s OK to have unequal slopes. In fact, some data require it. Instead, it’s actually about specifying a model that fits your data adequately and then knowing how to interpret the results correctly. Learn about Specifying the Correct Model .

As you saw in our example analysis, each group in an ANCOVA has a regression line. If those regressions all have the same slope, interpreting the mean differences between groups is simple because they are constant across all values of your covariate.

In the illustration below, points on the regression lines represent the group means for any given covariate value. When the slopes are equal, the differences between group means are constant across all covariate values. That’s nice and easy to interpret because there is only one mean difference for each pair of groups—just like our ANCOVA example!

Graph displaying homogeneous (parallel) regression lines for ANCOVA.

Check the homogeneity of regression slopes assumption by including an interaction term (grouping factor*covariate) in the ANCOVA model. If the interaction term is:

  • Not significant, the slopes are equal and you can remove this term.
  • Significant, the slopes are not equal.

Learn more about Testing and Comparing Regression Slopes and Interpreting Interaction Effects .

If your data naturally produce regression lines with different slopes, that’s not a problem. You just need to model it correctly using an interaction term and know how to interpret it.

When you have unequal slopes, the differences between group means are not constant across covariate values. In other words, the differences depend on the value of the covariate. Consequently, you must pick several covariate values and compare the group means at those points.

The illustration below shows how unequal slopes cause the differences between group means to change with the covariate.

Graph displaying regression lines with different slopes.


Analysis of Covariance (ANCOVA), by Bradley E. Huitema. Last reviewed: 15 January 2020. Last modified: 15 January 2020. DOI: 10.1093/obo/9780199828340-0256

The analysis of covariance (ANCOVA) is a method for testing the hypothesis of the equality of two or more population means, ideally in the context of a designed experiment. It is similar in purpose to the analysis of variance (ANOVA), but it differs in that an adjustment is made to both the dependent variable means and the error term to provide both descriptive and inferential advantages. The adjustments are made on the basis of information on one or more variables (called covariates) that are measured on each participant before treatments are applied. The advantages of incorporating the covariate information are typically (1) more meaningful outcome means and (2) a smaller error term than is associated with ANOVA. These adjustments result in more interpretable effects, narrower confidence intervals, and an increase in the statistical power of the analysis. Suppose an experiment is carried out to evaluate effects of two treatments. The randomly assigned treatment groups differ somewhat in average age, and age is correlated with the achievement measure used as the dependent variable. Differences between groups on achievement will be somewhat ambiguous to interpret because the groups differ in terms of both age and treatment condition. The analysis of covariance will provide “adjusted means” that estimate the value the outcome means would have been if the groups had been exactly the same with respect to age. At the same time, within-group variation in achievement scores predictable from the covariate (age) will be removed from the error variation to increase the precision of the test for differences between the adjusted means. The application of ANCOVA in some observational studies (rather than randomized experiments) is controversial and has led to a large literature that explores the concerns surrounding the adequacy of the analysis when used in this context. The label “analysis of covariance” is now viewed as anachronistic by some research methodologists and statisticians because this analysis can be both conceptualized and computed as a variant of the general linear model (GLM). But the term remains useful because it immediately conveys to most researchers the notion that a categorical variable (the treatment conditions) and two continuous variables (the covariate and the dependent variable) are involved in a single analysis. Researchers should be warned, however, that ANCOVA is not the same as the “analysis of covariance structures,” a term that was frequently used in the 1970s and 1980s to refer to what is currently known as a “structural equation model.” Additionally, some sources of information regarding ANCOVA subsume several analyses related to (but different from) ANCOVA under this general heading. Examples of these related analyses include the test of the significance of the covariate, the test for homogeneous regression slopes, and the Johnson-Neyman technique.

General Overviews

The following three subsections list sources containing general overviews and introductions to analysis of covariance (ANCOVA). This list begins with the most elementary sources, progresses through those that are of intermediate length and sophistication, and ends with advanced treatments in the form of journal articles and comprehensive reference works. Short elementary presentations designed for readers interested in only the general ideas on ANCOVA are found in encyclopedia articles written for beginning researchers. Several intermediate and advanced level general statistics texts also provide solid introductions to ANCOVA. More extensive coverage is presented in full chapters on the topic found in several textbooks on experimental design. These textbooks provide the main exposure to ANCOVA for most researchers in the behavioral sciences. More technical presentations are available in articles published in methodology and statistics journals.

Analysis of Covariance

  • Reference work entry
  • First Online: 01 January 2014

James J. Cochran

Introduction

The Analysis of Covariance (generally known as ANCOVA) is a statistical methodology for incorporating quantitatively measured independent observed (not controlled) variables in a designed experiment. Such a quantitatively measured independent observed variable is generally referred to as a covariate (hence the name of the methodology – analysis of covariance). Covariates are also referred to as concomitant variables or control variables.

If we denote the general linear model (GLM) associated with a completely randomized design as

$$Y_{ij} = \mu + \tau_j + \varepsilon_{ij}$$

where

$Y_{ij}$ = the $i$th observed value of the response variable at the $j$th treatment level

$\mu$ = a constant common to all observations

$\tau_j$ = the effect of the $j$th treatment level

$\varepsilon_{ij}$ = the random variation attributable to all uncontrolled influences on the $i$th observed value of the response variable at the $j$th treatment level

For this model the within group...
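The entry text is cut off at this point by the subscription preview. For reference, the standard single-covariate ANCOVA extension of this GLM is commonly written as

$$Y_{ij} = \mu + \tau_j + \beta\,(X_{ij} - \bar{X}_{\cdot\cdot}) + \varepsilon_{ij}$$

where $X_{ij}$ is the covariate value for the $i$th observation at the $j$th treatment level, $\bar{X}_{\cdot\cdot}$ is the overall covariate mean, and $\beta$ is the pooled within-group regression slope of $Y$ on $X$.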



Cite this entry: Cochran, J.J. (2011). Analysis of Covariance. In: Lovric, M. (ed.) International Encyclopedia of Statistical Science. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04898-2_115



Getting Started with Analysis of Covariance

The Analysis of Covariance, or ANCOVA, is a regression model that includes both categorical and numeric predictors, often just one of each. It is commonly used to analyze a follow-up numeric response after exposure to various treatments, controlling for a baseline measure of that same response. For example, given two subjects with the same baseline value of the study outcome, one in a treated group and the other in a control group, will the subjects have different follow-up outcomes on average?

To demonstrate, we’ll replicate an example presented in chapter 22 of Kutner, et al. (2005). A company proposes to study the effects of different types of promotion tactics on sales of a product. Three different promotions, or treatments, will be considered:

  • free samples of product for customers and regular shelf space
  • additional shelf space
  • additional display at end of aisles

Fifteen stores are selected, and five are randomized to each treatment group. Prior to implementing the different promotion treatments, existing product sales are recorded (i.e., the baseline measure). Then the new promotions are implemented for a period of time and product sales are recorded again (i.e., the follow-up measure).

Let’s read the data into R and prepare for analysis.
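The article's code blocks did not survive extraction, so a minimal sketch of this step follows. The file name is a placeholder, and the column names ("store", "trt", "baseline", "post") are assumptions based on how the variables are referred to later in the article.

```r
# Hypothetical file name standing in for the CSV linked above
d <- read.csv("ancova.csv")

# Treatment group is categorical, so store it as a factor
d$trt <- factor(d$trt)

# Inspect the data
head(d)
```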

We see that store 1 in treatment group 1 had baseline sales of 21 cases and follow-up sales, or “post” sales, of 38 cases.

Looking at mean “post” sales, it seems treatments 1 and 2 were better than treatment 3.
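A sketch of how those group means might be computed, under the same assumed column names:

```r
# Mean follow-up ("post") sales by treatment group
aggregate(post ~ trt, data = d, FUN = mean)
```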

Let’s visualize the raw “post” data along with the means (in red) using ggplot2 (Wickham, 2016).
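A possible ggplot2 call for this figure (a sketch, not the article's exact code):

```r
library(ggplot2)

# Raw follow-up sales by treatment group, with group means overlaid in red
ggplot(d, aes(x = trt, y = post)) +
  geom_point() +
  stat_summary(fun = mean, geom = "point", color = "red", size = 3)
```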

plot of raw follow-up data with follow-up means superimposed.

We see quite a bit of variability around the means.

Now let’s incorporate the baseline sales and plot a straight trendline to summarize the relationship between baseline and post sales for each treatment group. We change the color scale to use the "Set2" palette from the RColorBrewer package, a colorblind friendly palette.
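A sketch of the plot described here:

```r
# Post vs. baseline sales with a straight trend line per treatment group,
# using the colorblind-friendly "Set2" palette
ggplot(d, aes(x = baseline, y = post, color = trt)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  scale_color_brewer(palette = "Set2")
```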

plot of post versus baseline with trendlines superimposed, colored by treatment group

Notice the variability within each group is much lower compared to the variability in the previous plot. This is because follow-up sales are correlated with baseline sales. Incorporating this information allows us to make more precise estimates of treatment effects. Also notice the variability within groups appears to be similar across the groups. This is a key assumption of ANCOVA.

Let’s proceed to analyze this data using ANCOVA. For this we use the workhorse lm() function. Below we model “post” as a function of the “baseline” measure and the “trt” grouping variable. We save the model as “m” and investigate the treatment effect using the anova() function. That’s right, the anova() function, not ancova() ! Remember, ANCOVA is just a name for a special type of regression model. The base R anova() function allows us to investigate which predictors are explaining variability in our response through a series of partial F tests.
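A sketch of the model fit and ANOVA table:

```r
# ANCOVA: post sales modeled as baseline (covariate) plus treatment group
m <- lm(post ~ baseline + trt, data = d)

# Sequential (Type I) partial F tests
anova(m)
```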

The second line of the output tests the null hypothesis that “trt” has no effect on “post” after controlling for “baseline”. The small p-value provides evidence against the null. It appears that even after taking the baseline measure into account, additional variability in “post” sales can be explained by treatment group. This leads us to believe that at least one of the promotional strategies may be better than the others.

It’s worth noting that the anova() function uses Type I sums of squares, which means the order of the variables in the model matters . If we switch the order of the predictors, post ~ trt + baseline, data = d , we get different results. Observe:
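A sketch of the reordered fit:

```r
# Same terms, but with trt entered before baseline (the object name is ours, not the article's)
m_reordered <- lm(post ~ trt + baseline, data = d)
anova(m_reordered)  # with Type I SS, the trt row no longer adjusts for baseline
```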

The first line now tests the null hypothesis that “trt” has no effect on “post” without accounting for baseline sales . For this model, the appropriate function to use would be the Anova() function from the car package, which uses Type II sums of squares. Order of the variables does not matter in Type II sums of squares. Each predictor is tested assuming the other is in the model . Notice the result for “trt” matches the result in anova(m) .
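A sketch of the Type II test, assuming the car package is installed:

```r
library(car)

# Type II sums of squares: each predictor tested with the other already in the model
Anova(m)
```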

The same tests can be run using base R’s drop1() function, which compares the full model to a model without the predictor. Notice we need to specify test = "F" .
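For example:

```r
# Drop each term from the full model in turn and perform partial F tests
drop1(m, test = "F")
```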

To learn more about Type I and Type II sums of squares, we recommend the article Anova – Type I/II/III SS explained .

Before we get too excited about our results, we should run some model diagnostics to verify important assumptions: constant variance of residuals within groups and normality of residuals.

To check constant variance, we can look at residuals by treatment group. Below we extract residuals from our model, add them to our data, and create a dot plot of residuals by treatment group. The variability of the residuals looks similar between the groups, which we like to see.
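A sketch of that residual check:

```r
# Add model residuals to the data and plot them by treatment group
d$resid <- residuals(m)
ggplot(d, aes(x = trt, y = resid)) +
  geom_point()
```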

plot of residuals by treatment group

Calling plot() on our model object and specifying which = 2 produces a QQ plot of residuals. If our residuals are normally distributed, we expect to see a scatter plot that forms a straight diagonal line. (See our article, Understanding QQ Plots , for more information.) It appears there are slight departures from normality but nothing too alarming.
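The corresponding call:

```r
# QQ plot of residuals (the second of lm's built-in diagnostic plots)
plot(m, which = 2)
```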

QQ plot of residuals

When assessing a QQ plot, it can be helpful to compare it to other QQ plots created with normally distributed data. Below we do just that. The top left QQ plot with red dots is our original residual QQ plot. The other QQ plots are for random draws from a standard normal distribution. Our residual QQ plot doesn’t look that different from the other plots, which makes us feel satisfied with the normality assumption.
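One way to build such a comparison (a rough sketch; the article's exact layout code isn't shown):

```r
# Residual QQ plot (top left, in red) next to 24 QQ plots of random normal draws
op <- par(mfrow = c(5, 5), mar = c(1, 1, 1, 1))
qqnorm(residuals(m), col = "red", main = "", xlab = "", ylab = "")
qqline(residuals(m), col = "red")
for (i in 1:24) {
  z <- rnorm(nobs(m))
  qqnorm(z, main = "", xlab = "", ylab = "")
  qqline(z)
}
par(op)
```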

Plot containing 25 QQ plots, with top-left QQ plot in red to identify as original QQ plot.

Another assumption of our ANCOVA model is that the baseline effect is constant regardless of treatment group. In other words, the slopes of the linear trend lines are parallel . This seems to be the case in the second exploratory plot above. We specified this assumption in our model by making the effects additive . We have two main effects : “baseline” and “trt”. The effect of one has no effect on the other according to our model specification. But this may not be correct. We can formally test this assumption by fitting a new model where “baseline” and “trt” interact and then testing whether the interaction is warranted. This is sometimes called a Test for Parallel Slopes .

Below we specify an interaction using the formula syntax baseline * trt and save the model as “m_int”. As before we use the anova() function to test the null hypothesis that there is no interaction between “baseline” and “trt” when it comes to explaining variability in “post”.
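A sketch of the interaction model and test:

```r
# Allow the baseline slope to differ by treatment group
m_int <- lm(post ~ baseline * trt, data = d)

# The baseline:trt row tests whether the slopes differ (parallel-slopes check)
anova(m_int)
```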

The row labeled “baseline:trt” has a large p-value, which leads us to not reject the null of no interaction effect. It seems parallel linear trend lines are a safe assumption.

Now that we’ve determined “trt” seems to have some effect on “post” after controlling for “baseline”, how can we estimate those effects? Which promotion strategy is better, and how much better is it? Just because a treatment effect is “significant” doesn’t necessarily mean it’s important or meaningful.

To estimate treatment effects, we use our model to calculate expected “post” values for given “trt” and “baseline” values. For example, expected sales for all treatments assuming a baseline value of 25 can be calculated using the predict() function. (We chose a baseline of 25 since that’s the overall mean of baseline values.)
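A sketch of that predict() call:

```r
# Expected post sales for each treatment, holding baseline at 25 (its overall mean)
nd <- data.frame(trt = levels(d$trt), baseline = 25)
predict(m, newdata = nd)
```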

The emmeans package (Lenth, 2023) automates calculations such as this and provides facilities for making pairwise comparisons of means with confidence intervals on the difference. Below we specify we want to estimate expected mean sales for each treatment group and make pairwise comparisons of those means using the emmeans() function. We simply give it our model object and use the syntax pairwise ~ trt .
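The call being described:

```r
library(emmeans)

# Adjusted group means (baseline held at its overall mean) and Tukey-adjusted
# pairwise comparisons
emmeans(m, pairwise ~ trt)
```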

The expected mean values are in the section called “emmeans”, which is short for estimated marginal means. This is another way of saying we calculated expected values holding one or more predictors fixed. In this case the emmeans package held “baseline” fixed at 25, the overall mean baseline value. Notice this section includes confidence intervals on the estimated means. It appears promotional strategy one (trt = 1) leads to sales of about 37 to 41 cases.

The “contrast” section presents the pairwise comparisons. The estimates are the difference in means and the tests are for the null hypothesis that the difference is equal to 0. All differences appear reliably different from 0 based on the small p-values. Notice these are adjusted Tukey p-values which are inflated to account for the fact we’re running three tests instead of one. This helps guard against false positives (i.e., rejecting a null hypothesis in error). The chance of this happening increases when performing multiple comparisons. (See our article, Understanding Multiple Comparisons and Simultaneous Inference , for more information.)

To calculate confidence intervals on the differences in means we can pipe the output of the emmeans() function into confint() .
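For example:

```r
# Confidence intervals on the pairwise differences in adjusted means
emmeans(m, pairwise ~ trt) |> confint()
```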

It looks like promotional strategy “trt1” can be expected to sell anywhere from about 1 to 8 more cases than promotional strategy “trt2”. This difference may not be meaningful, especially if the cost of promotional strategy “trt1” is much higher than promotional strategy “trt2”.

So how did we know emmeans was holding “baseline” at 25? We called the ref_grid() function on our model object.

This shows what values emmeans will use to calculate expected means. If we want to hold “baseline” at a different value, say 20, we can use the at argument. It requires values be passed to it as a list object.
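Sketches of both calls:

```r
# Show the values emmeans holds predictors at (baseline is set to its mean, 25)
ref_grid(m)

# Hold baseline at 20 instead; the `at` argument takes a named list
emmeans(m, pairwise ~ trt, at = list(baseline = 20))
```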

Finally we might wish to visualize our ANCOVA model with an effect display. The ggpredict() function from the ggeffects package (Lüdecke, 2018) along with its plot() method make this easy. Once again we use the “Set2” palette from the RColorBrewer package by setting colors = RColorBrewer::brewer.pal(3, "Set2") . The vertical distance between the slopes reveal that promotions 1 and 2 are likely to generate more sales than promotion 3.
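A sketch of the effect display:

```r
library(ggeffects)

# Predicted post sales across baseline, by treatment group, with confidence ribbons
ggpredict(m, terms = c("baseline", "trt")) |>
  plot(colors = RColorBrewer::brewer.pal(3, "Set2"))
```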

Effect plot of additive model with predicted trend lines and associated confidence ribbons, colored by treatment group.

It’s worth noting that Kutner et al. define the ANCOVA model a little differently. In their formulation, they use sum contrasts (or deviation coding) for the categorical variables and center the numeric variable. To fully replicate their example we need to change the contrast of “trt” and center “baseline”. Below we use the contr.sum() function to change the contrast definition for the “trt” variable. We set the function argument to 3 since “trt” has three levels. Next we create a centered version of “baseline” by subtracting the mean from all values. Finally we fit the same model. Notice the test for “trt” effect is identical to our original model (F = 59.483, p < 0.00001).
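A sketch of those steps; the model name "m3" follows how the article refers to this model later:

```r
# Sum (deviation) contrasts for the three-level treatment factor
contrasts(d$trt) <- contr.sum(3)

# Center the baseline covariate at its mean
d$baseline_c <- d$baseline - mean(d$baseline)

# Refit the ANCOVA with the new coding; the trt F test is unchanged
m3 <- lm(post ~ baseline_c + trt, data = d)
anova(m3)
```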

The estimated means and contrasts are identical as well.
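Something like:

```r
# Adjusted means and pairwise contrasts from the recoded model match the originals
emmeans(m3, pairwise ~ trt)
```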

Interested readers may also wish to verify the effect display is no different, other than “baseline” being centered at 0.

So how is this new model different from the first one we fit? The changes can be seen by comparing the model coefficients. Below we use the coef() function to extract the model coefficients and then wrap them in a named list to help us identify which coefficients go with which model.
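For example:

```r
# Compare coefficients from the two parameterizations
list(treatment_contrasts = coef(m), sum_contrasts_centered = coef(m3))
```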

The coefficients in the first section are from a model using treatment contrasts and an uncentered “baseline”. The (Intercept) is the estimated mean sales of “trt1” when baseline = 0. This is probably not meaningful since it’s doubtful a product would have sales of 0 in the previous observation period. The coefficients for “trt2” and “trt3” are the expected differences from “trt1” sales assuming baseline = 0. Hence the name “treatment contrast”. In this case since “trt” and “baseline” do not interact, these coefficients are interpretable. In fact these are the contrasts estimated by emmeans (with a change in sign since the order of subtraction is reversed). The coefficient for “baseline”, 0.89, is almost 1, suggesting a one-to-one correspondence between “baseline” and “post”. That is, a one-unit increase in “baseline” sales leads to about a one-unit increase in “post” sales, all else held constant.

The coefficients in the second section are from a model using sum contrasts and a centered “baseline”. The (Intercept) is the estimated mean of the “trt” means when “baseline_c” = 0. This comes out to 33.8. Since “baseline_c” = 0 is the mean value of baseline, the (Intercept) is interpretable. The coefficients for “trt1” and “trt2” are the expected differences between the means of those groups and the mean of the “trt” means, assuming baseline is held at the mean level. The “baseline_c” coefficient is the only similarity to the previous model, since centering a variable does not change its estimated coefficient in additive models such as these. This all may be a bit confusing, so let’s show this using predicted means.

Below we use model “m3” to calculate expected means at all treatment levels assuming “baseline_c” is set to 0.
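A sketch of this calculation, carried through the next two steps as well:

```r
# Expected post sales per treatment with the centered baseline at 0 (its mean)
pm <- predict(m3, newdata = data.frame(trt = levels(d$trt), baseline_c = 0))
pm

# The mean of those predicted means reproduces the intercept of m3
mean(pm)

# Deviations of trt1 and trt2 from the mean of means reproduce the trt coefficients
pm[1:2] - mean(pm)
```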

The mean of those means is the intercept in model “m3”.

And subtracting the mean of the means from the model-predicted means of trt1 and trt2 gives us the “trt” coefficients.

Notice in a sum contrast the final level is never compared to the other levels. For more information on sum contrasts and other types of coding schemes we recommend the article R Library Contrast Coding Systems for categorical variables .

Since a model with sum contrasts is not making simple comparisons between group means, we should technically not use the Tukey adjustment when making multiple comparisons. The Tukey procedure is only appropriate when directly comparing group means. With a sum contrast we are comparing group means to a mean of group means , a subtle distinction. A general procedure for any type of contrasts involving group means is the Scheffe Multiple Comparison procedure. This is the approach presented in Kutner et al. Fortunately this is easy to implement using emmeans. Simply add the argument adjust = "scheffe" . Notice that although the estimates are the same, the confidence intervals are slightly wider.
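For example:

```r
# Scheffe-adjusted comparisons; estimates match, intervals are slightly wider
emmeans(m3, pairwise ~ trt, adjust = "scheffe") |> confint()
```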

The choice of contrast and whether or not to center predictors is subjective. Unless you have a good reason to switch, we recommend sticking with treatment contrasts. These are the default contrasts in R, Stata, SAS, and SPSS. Centering predictors makes intercepts interpretable in models without non-linear effects and can sometimes help with convergence issues in complex models. But when it comes to post-hoc analyses such as making pairwise comparisons between group means, it doesn’t matter whether your numeric data is centered or not.

Why not use differences?

Instead of ANCOVA, wouldn’t it be easier to just take the difference between the follow-up measure and baseline measure and analyze the change in sales using a one-way Analysis of Variance (ANOVA)? Let’s try that. Below we derive a new variable, “diff”, by subtracting “baseline” from “post”, and then model “diff” as a function of “trt”.
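A sketch of this alternative analysis (the model name is ours):

```r
# Change from baseline as the outcome, analyzed with a one-way ANOVA
d$diff <- d$post - d$baseline
m_diff <- lm(diff ~ trt, data = d)
anova(m_diff)
```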

It seems like “trt” explains a lot of variability in the mean differences.

Let’s estimate treatment effects for the differences and make pairwise comparisons.
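Continuing the sketch:

```r
# Estimated mean change per treatment group and pairwise comparisons
emmeans(m_diff, pairwise ~ trt)
```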

The “emmeans” section presents estimated mean differences and the “contrasts” section presents pairwise comparisons of those differences. The results in the “contrasts” section are actually not much different than what we obtained using the ANCOVA model. It turns out that if the slope on the baseline covariate is close to 1, then ANCOVA and ANOVA are basically the same. Recall above that the slope coefficient for “baseline” was 0.89, which is close to 1.

However, using change as a dependent variable can be problematic, especially if the baseline measure is used to exclude subjects from your study. For example, if subjects need to have a baseline measure higher than some threshold to be included in your study, then there’s a good chance that any change from baseline will be due to regression to the mean rather than the experimental condition. In his article, Statistical Errors in the Medical Literature , Frank Harrell lists other reasons why using “change from baseline” as a dependent variable can lead to problems. Long story short, using change from baseline requires a number of additional assumptions to be met that ANCOVA does not require.

Power and sample size considerations

ANCOVA is often used to analyze experimental data, just like the example we presented above. When it comes to designing an experiment, one of the key questions to consider is, “how much data do we need?” Experiments often cost a lot of money and/or require invasive procedures involving humans or animals. It’s in everyone’s best interest to collect just enough data to reliably answer our research question. Estimating sample sizes for ANCOVA models can be a little challenging. Fortunately, the Superpower package in R (Lakens and Caldwell, 2021) provides the power_oneway_ancova() function to help guide us.

Let’s imagine we’re designing an experiment to assess which of three different training programs best improve the length of time someone can hang from a pull-up bar. Before we begin the experiment, we’ll measure everyone’s baseline time of how long they can hang before exhaustion. We’ll then randomize participants to one of the three training programs, have them follow the program, and then measure how long they can hang at the end of the training program. How many subjects should we recruit?

To analyze this using the power_oneway_ancova() function, we need to hypothesize some quantities:

  • the hypothesized mean follow-up hang times in the three groups: mu
  • the number of covariates (i.e., numeric predictors in our model): n_cov
  • the estimated squared correlation between baseline and follow-up measures: r2
  • the significance level at which we’ll reject the null hypothesis of no treatment effect: alpha_level
  • the Type II error we’re willing to accept: beta_level

Perhaps we think the mean follow-up hang times for the three training programs will be something like 30 seconds, 40 seconds, and 50 seconds. So we set mu = c(30, 40, 50) . Next we will only use one covariate, the baseline measure, so we set n_cov = 1 . We hypothesize that baseline and follow-up hang times will have a mild positive correlation of about 0.3, so we set r2 = 0.3^2 . (“r2”, or R-squared, is correlation squared.) We imagine our ANCOVA model will have a residual standard error of about 15 seconds, so we set sd = 15 . (This is basically how precise we think our model will be; e.g., estimated mean plus/minus 15 seconds.) Our significance level to reject the null hypothesis of no treatment effect will be 0.05, so we enter alpha_level = 0.05 . Finally we want to have 0.9 probability (i.e., power) of correctly rejecting the null assuming the null truly is false, so we set our desired Type II error to 1 - 0.9: beta_level = 0.10 .
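Putting those values into the function described above (a sketch; the argument names follow the article's description of power_oneway_ancova()):

```r
library(Superpower)

# Required sample size for a one-way ANCOVA with one baseline covariate:
# hypothesized follow-up means of 30, 40, 50 s, residual SD of 15 s,
# squared baseline/follow-up correlation of 0.09, alpha = 0.05, power = 0.90
power_oneway_ancova(
  mu = c(30, 40, 50),
  n_cov = 1,
  r2 = 0.3^2,
  sd = 15,
  alpha_level = 0.05,
  beta_level = 0.10
)
```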

The result says we need about 45 subjects, or 15 in each group, if our hypothesized means, R-squared, and residual standard error are correct. How do we know if they’re correct? We don’t. We just have to do our best to estimate realistic and important values, perhaps using previous studies or pilot data. For more details on power and sample size analysis of ANCOVA models see Shieh (2020) and the vignettes that accompany the Superpower package.

Hopefully you now have a better understanding of how to plan, execute, and interpret an ANCOVA model.

  • Fox J, Weisberg S (2019). An R Companion to Applied Regression , Third edition. Sage, Thousand Oaks CA. https://socialsciences.mcmaster.ca/jfox/Books/Companion/ .
  • Harrell F (2017). Statistical Errors in the Medical Literature. https://www.fharrell.com/post/errmed/ (accessed April 14, 2023).
  • Kutner et al (2005). Applied Linear Statistical Models . McGraw-Hill. (Chapter 22)
  • Neuwirth E (2022). RColorBrewer: ColorBrewer Palettes . R package version 1.1-3, https://CRAN.R-project.org/package=RColorBrewer .
  • Lakens D & Caldwell AR. (2021). Simulation-Based Power Analysis for Factorial Analysis of Variance Designs. Advances in Methods and Practices in Psychological Science , 4(1), 251524592095150. https://doi.org/10.1177/2515245920951503 (version 0.2.0)
  • Lenth R (2023). emmeans: Estimated Marginal Means, aka Least-Squares Means . R package version 1.8.5, https://CRAN.R-project.org/package=emmeans .
  • Lüdecke D (2018). “ggeffects: Tidy Data Frames of Marginal Effects from Regression Models.” Journal of Open Source Software , 3 (26), 772. doi:10.21105/joss.00772 https://doi.org/10.21105/joss.00772 . (version 1.2.1)
  • R Library Contrast Coding Systems for categorical variables. UCLA: Statistical Consulting Group. from https://stats.oarc.ucla.edu/r/library/r-library-contrast-coding-systems-for-categorical-variables/ (accessed April 17, 2023)
  • R Core Team (2023). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/ . (version 4.2.3)
  • Shieh G. (2020). Power analysis and sample size planning in ANCOVA designs. Psychometrika , 85(1), 101-120
  • Wickham H. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016. (version 3.4.2)

Clay Ford, Statistical Research Consultant, University of Virginia Library. April 17, 2023

For questions or clarifications regarding this article, contact  [email protected] .

View the entire collection  of UVA Library StatLab articles, or learn how to cite .

