Random Assignment in Experiments | Introduction & Examples

Published on March 8, 2021 by Pritha Bhandari. Revised on June 22, 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomization.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomized designs.

Random assignment is a key part of experimental design. It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors, not research biases like sampling bias or selection bias.

Table of contents

  • Why does random assignment matter?
  • Random sampling vs random assignment
  • How do you use random assignment?
  • When is random assignment not used?
  • Other interesting articles
  • Frequently asked questions about random assignment

Why does random assignment matter?

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment and avoid biases.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

For example, you might use three groups of participants that are each given a different level of the independent variable:

  • a control group that’s given a placebo (no dosage, to control for a placebo effect),
  • an experimental group that’s given a low dosage,
  • a second experimental group that’s given a high dosage.

Random assignment helps you make sure that the treatment groups don’t differ in systematic ways at the start of the experiment, as this can seriously affect (and even invalidate) your work.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results. For example, suppose that:

  • participants recruited from cafes are placed in the control group,
  • participants recruited from local community centers are placed in the low dosage experimental group,
  • participants recruited from gyms are placed in the high dosage group.

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in healthier behaviors than people who frequent cafes or community centers, which would introduce a healthy user bias into your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalizability of your results, because it helps ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences.

For example, suppose you want to study the effects of a team-building intervention at a company with 8,000 employees. You use a simple random sample to collect data: because you have access to the whole population (all employees), you can assign each of the 8,000 employees a number and use a random number generator to select 300 of them. These 300 employees are your sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable.

In the team-building example, there are two groups:

  • a control group that receives no intervention,
  • an experimental group that takes part in a remote team-building intervention every week for a month.

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

How do you use random assignment?

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually in a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Roll a die: When you have three groups, for each number on the list, roll a die to decide which group they will be in. For example, rolling 1 or 2 lands them in the control group, 3 or 4 in the first experimental group, and 5 or 6 in the second experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.
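
To make these methods concrete, here is a minimal Python sketch of simple random assignment with a random number generator: the numbered participant list is shuffled and then dealt out across the groups. The function name and group labels are illustrative placeholders rather than a prescribed procedure. Note that dealing out a shuffled list also keeps group sizes roughly equal, which a per-participant coin flip or die roll does not guarantee.

```python
import random

def simple_random_assignment(participant_ids, groups, seed=None):
    """Shuffle the numbered participant list, then deal it out across groups.

    Every participant has an equal chance of landing in any group, and the
    groups end up (nearly) equal in size.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    assignment = {group: [] for group in groups}
    for position, participant in enumerate(ids):
        assignment[groups[position % len(groups)]].append(participant)
    return assignment

# Example: 12 numbered participants, three dosage conditions
print(simple_random_assignment(range(1, 13), ["control", "low dose", "high dose"], seed=42))
```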

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power .

For example, a randomized block design involves placing participants into blocks based on a shared characteristic (e.g., college students versus graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design , you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.
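
For a rough sense of how blocking and within-block randomization can be implemented, here is a minimal Python sketch; the participant data structure, the blocking key, and the function name are assumptions made for the example, not a standard recipe.

```python
import random
from collections import defaultdict

def block_random_assignment(participants, blocking_key, conditions, seed=None):
    """Group participants into blocks by a shared characteristic, then
    randomly assign participants to conditions within each block."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for person in participants:
        blocks[blocking_key(person)].append(person)

    assignment = {}
    for block_label, members in blocks.items():
        rng.shuffle(members)  # randomize order within the block
        for position, person in enumerate(members):
            assignment[person["id"]] = (block_label, conditions[position % len(conditions)])
    return assignment

# Example: block on student status, then assign treatment vs. control within each block
sample = [{"id": i, "status": "college" if i % 2 else "graduate"} for i in range(1, 9)]
print(block_random_assignment(sample, lambda p: p["status"], ["treatment", "control"], seed=1))
```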

When is random assignment not used?

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing men and women or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women, etc.). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviors, it’s often not ethical to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t randomly assign participants to groups, you can instead conduct a quasi-experimental study. In a quasi-experiment, you study the outcomes of pre-existing groups that receive treatments you may not have any control over (e.g., heavy drinkers and social drinkers). These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.


Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

Frequently asked questions about random assignment

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.

Random Assignment in Psychology: Definition & Examples

By Julia Simkus, BA (Hons) Psychology, Princeton University; edited by Saul McLeod, PhD, and Olivia Guy-Evans, MSc (Simply Psychology).

In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.

In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization. 

Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.

The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.

When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the onset of the study. 

In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.

Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.

Group B participants read a 200-page book that explains the benefits of weight loss.

The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.

Importance 

Random assignment helps ensure that each group in the experiment is comparable before the independent variable is applied.

In experiments , researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. Random assignment increases the likelihood that the treatment groups are the same at the onset of a study.

Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.

Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.

Random Selection vs. Random Assignment 

Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.

On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups. 

Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups. 

Random assignment is only used in between-subjects experimental designs, while random selection can be used in a variety of study designs.

Random Assignment vs Random Sampling

Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.

Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.

This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.

When to Use Random Assignment

Random assignment is used in experiments with a between-groups or independent measures design.

In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.

There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the onset of the study.

How to Use Random Assignment

There are a variety of ways to assign participants into study groups randomly. Here are a handful of popular methods: 

  • Random Number Generator : Give each member of the sample a unique number; use a computer program to randomly generate a number from the list for each group.
  • Lottery : Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
  • Flipping a Coin : Flip a coin for each participant to decide if they will be in the control group or the experimental group (this method can only be used when you have just two groups).
  • Rolling a Die : For each number on the list, roll a die to decide which group they will be in. For example, rolling 1, 2, or 3 places them in the control group, and rolling 4, 5, or 6 places them in the experimental group.
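
As a quick illustration of the die-roll method for two groups, the sketch below assigns each participant based on a simulated roll (1–3 control, 4–6 experimental). The group labels and seed are arbitrary; because each participant is assigned independently, the two groups can end up with unequal sizes.

```python
import random

rng = random.Random(7)
groups = {"control": [], "experimental": []}

# Die-roll method for two groups: 1, 2, or 3 -> control; 4, 5, or 6 -> experimental.
# Unlike dealing out a shuffled list, independent rolls can leave the groups unequal in size.
for participant_id in range(1, 21):
    roll = rng.randint(1, 6)
    groups["control" if roll <= 3 else "experimental"].append(participant_id)

print({name: len(members) for name, members in groups.items()})
```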

When is Random Assignment not used?

  • When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects. 
  • When answering non-causal questions : If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment. 
  • When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.

Drawbacks of Random Assignment

While randomization assures an unbiased assignment of participants to groups, it does not guarantee the equality of these groups. There could still be extraneous variables that differ between groups or group differences that arise from chance. Additionally, there is still an element of luck with random assignments.

Thus, researchers cannot produce perfectly equal groups for each specific study. Differences between the treatment group and control group might still exist, and the results of a randomized trial may sometimes be wrong, but this is absolutely okay.

Scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when data is aggregated in a meta-analysis.

Additionally, external validity (i.e., the extent to which the researcher can use the results of the study to generalize to the larger population) is compromised with random assignment.

Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level. 

Random assignment can also be more costly than simple observational studies, where an investigator is just observing events without intervening with the population.

Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations. 

What is the difference between random sampling and random assignment?

Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.

Does random assignment increase internal validity?

Yes, random assignment ensures that there are no systematic differences between the participants in each group, enhancing the study’s internal validity .

Does random assignment reduce sampling error?

Yes, with random assignment, participants have an equal chance of being assigned to either a control group or an experimental group, resulting in a sample that is, in theory, representative of the population.

Random assignment does not completely eliminate sampling error because a sample only approximates the population from which it is drawn. However, random sampling is a way to minimize sampling errors. 

When is random assignment not possible?

Random assignment is not possible when the experimenters cannot control the treatment or independent variable.

For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.

Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.

Does random assignment eliminate confounding variables?

Random assignment reduces the influence of confounding variables on the treatment because it distributes them at random among the study groups. On average, randomization breaks any systematic relationship between a confounding variable and the treatment, although chance imbalances can still occur in any single study.

Why is random assignment of participants to treatment conditions in an experiment used?

Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.

Further Reading

  • Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2), 295-328.
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59(7), 751-766.


Statistics By Jim


Random Assignment in Experiments

By Jim Frost

Random assignment uses chance to assign subjects to the control and treatment groups in an experiment. This process helps ensure that the groups are equivalent at the beginning of the study, which makes it safer to assume the treatments caused any differences between groups that the experimenters observe at the end of the study.


That might come as a surprise: statistical significance by itself does not establish that a treatment caused an effect. At this point, you might be wondering about all of those studies that use statistics to assess the effects of different treatments. There’s a critical separation between significance and causality:

  • Statistical procedures determine whether an effect is significant.
  • Experimental designs determine how confidently you can assume that a treatment causes the effect.

In this post, learn how using random assignment in experiments can help you identify causal relationships.

Correlation, Causation, and Confounding Variables

Random assignment helps you separate causation from correlation and rule out confounding variables. As a critical component of the scientific method , experiments typically set up contrasts between a control group and one or more treatment groups. The idea is to determine whether the effect, which is the difference between a treatment group and the control group, is statistically significant. If the effect is significant, group assignment correlates with different outcomes.

However, as you have no doubt heard, correlation does not necessarily imply causation. In other words, the experimental groups can have different mean outcomes, but the treatment might not be causing those differences even though the differences are statistically significant.

The difficulty in definitively stating that a treatment caused the difference is due to potential confounding variables or confounders. Confounders are alternative explanations for differences between the experimental groups. Confounding variables correlate with both the experimental groups and the outcome variable. In this situation, confounding variables can be the actual cause for the outcome differences rather than the treatments themselves. As you’ll see, if an experiment does not account for confounding variables, they can bias the results and make them untrustworthy.

Related posts : Understanding Correlation in Statistics , Causation versus Correlation , and Hill’s Criteria for Causation .

Example of Confounding in an Experiment


Suppose we conduct an experiment on vitamin supplement consumption with two groups:

  • Control group: does not consume vitamin supplements.
  • Treatment group: regularly consumes vitamin supplements.

Imagine we measure a specific health outcome. After the experiment is complete, we perform a 2-sample t-test to determine whether the mean outcomes for these two groups are different. Assume the test results indicate that the mean health outcome in the treatment group is significantly better than the control group.

Why can’t we assume that the vitamins improved the health outcomes? After all, only the treatment group took the vitamins.

Related post : Confounding Variables in Regression Analysis

Alternative Explanations for Differences in Outcomes

The answer to that question depends on how we assigned the subjects to the experimental groups. If we let the subjects decide which group to join based on their existing vitamin habits, it opens the door to confounding variables. It’s reasonable to assume that people who take vitamins regularly also tend to have other healthy habits. These habits are confounders because they correlate with both vitamin consumption (experimental group) and the health outcome measure.

Random assignment prevents this self-sorting of participants and reduces the likelihood that the groups start with systematic differences.

In fact, studies have found that supplement users are more physically active, have healthier diets, have lower blood pressure, and so on compared to those who don’t take supplements. If subjects who already take vitamins regularly join the treatment group voluntarily, they bring these healthy habits disproportionately to the treatment group. Consequently, these habits will be much more prevalent in the treatment group than the control group.

The healthy habits are the confounding variables—the potential alternative explanations for the difference in our study’s health outcome. It’s entirely possible that these systematic differences between groups at the start of the study might cause the difference in the health outcome at the end of the study—and not the vitamin consumption itself!

If our experiment doesn’t account for these confounding variables, we can’t trust the results. While we obtained statistically significant results with the 2-sample t-test for health outcomes, we don’t know for sure whether the vitamins, the systematic difference in habits, or some combination of the two caused the improvements.

Learn why many randomized clinical experiments use a placebo to control for the Placebo Effect .

Experiments Must Account for Confounding Variables

Your experimental design must account for confounding variables to avoid their problems. Scientific studies commonly use the following methods to handle confounders:

  • Use control variables to keep them constant throughout an experiment.
  • Statistically control for them in an observational study.
  • Use random assignment to reduce the likelihood that systematic differences exist between experimental groups when the study begins.

Let’s take a look at how random assignment works in an experimental design.

Random Assignment Can Reduce the Impact of Confounding Variables

Note that random assignment is different than random sampling. Random sampling is a process for obtaining a sample that accurately represents a population .


Random assignment uses a chance process to assign subjects to experimental groups. Using random assignment requires that the experimenters can control the group assignment for all study subjects. For our study, we must be able to assign our participants to either the control group or the supplement group. Clearly, if we don’t have the ability to assign subjects to the groups, we can’t use random assignment!

Additionally, the process must have an equal probability of assigning a subject to any of the groups. For example, in our vitamin supplement study, we can use a coin toss to assign each subject to either the control group or supplement group. For more complex experimental designs, we can use a random number generator or even draw names out of a hat.

Random Assignment Distributes Confounders Equally

The random assignment process distributes confounding properties amongst your experimental groups equally. In other words, randomness helps eliminate systematic differences between groups. For our study, flipping the coin tends to equalize the distribution of subjects with healthier habits between the control and treatment group. Consequently, these two groups should start roughly equal for all confounding variables, including healthy habits!

Random assignment is a simple, elegant solution to a complex problem. For any given study area, there can be a long list of confounding variables that you could worry about. However, using random assignment, you don’t need to know what they are, how to detect them, or even measure them. Instead, use random assignment to equalize them across your experimental groups so they’re not a problem.
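
To see this balancing effect in action, here is a small, purely illustrative simulation: the "healthy habits" scores and all of the numbers are made up for the example. When subjects self-select into the supplement group, the groups start out unbalanced on the confounder; when a coin toss assigns them, the group means come out roughly equal.

```python
import random
from statistics import mean

rng = random.Random(0)

# Hypothetical subjects: each has a "healthy habits" score (higher = healthier lifestyle).
habits = [rng.gauss(50, 10) for _ in range(10_000)]

# Self-selection: subjects with healthier habits are more likely to join the supplement group.
self_selected = [score > 50 + rng.gauss(0, 10) for score in habits]

# Random assignment: a coin toss decides the group, ignoring habits entirely.
randomized = [rng.random() < 0.5 for _ in habits]

def group_means(in_treatment):
    treatment = [score for score, chosen in zip(habits, in_treatment) if chosen]
    control = [score for score, chosen in zip(habits, in_treatment) if not chosen]
    return round(mean(treatment), 1), round(mean(control), 1)

print("Self-selection (treatment, control habit means):", group_means(self_selected))
print("Random assignment (treatment, control habit means):", group_means(randomized))
```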

Because random assignment helps ensure that the groups are comparable when the experiment begins, you can be more confident that the treatments caused the post-study differences. Random assignment helps increase the internal validity of your study.

Comparing the Vitamin Study With and Without Random Assignment

Let’s compare two scenarios involving our hypothetical vitamin study. We’ll assume that the study obtains statistically significant results in both cases.

Scenario 1: We don’t use random assignment and, unbeknownst to us, subjects with healthier habits disproportionately end up in the supplement treatment group. The experimental groups differ by both healthy habits and vitamin consumption. Consequently, we can’t determine whether it was the habits or vitamins that improved the outcomes.

Scenario 2: We use random assignment and, consequently, the treatment and control groups start with roughly equal levels of healthy habits. The intentional introduction of vitamin supplements in the treatment group is the primary difference between the groups. Consequently, we can more confidently assert that the supplements caused an improvement in health outcomes.

For both scenarios, the statistical results could be identical. However, the methodology behind the second scenario makes a stronger case for a causal relationship between vitamin supplement consumption and health outcomes.

How important is it to use the correct methodology? Well, if the relationship between vitamins and health outcomes is not causal, then consuming vitamins won’t cause your health outcomes to improve regardless of what the study indicates. Instead, it’s probably all the other healthy habits!

Learn more about Randomized Controlled Trials (RCTs) that are the gold standard for identifying causal relationships because they use random assignment.

Drawbacks of Random Assignment

Random assignment helps reduce the chances of systematic differences between the groups at the start of an experiment and, thereby, mitigates the threats of confounding variables and alternative explanations. However, the process does not always equalize all of the confounding variables. Its random nature tends to eliminate systematic differences, but it doesn’t always succeed.

Sometimes random assignment is impossible because the experimenters cannot control the treatment or independent variable. For example, if you want to determine how individuals with and without depression perform on a test, you cannot randomly assign subjects to these groups. The same difficulty occurs when you’re studying differences between genders.

In other cases, there might be ethical issues. For example, in a randomized experiment, the researchers would want to withhold treatment for the control group. However, if the treatments are vaccinations, it might be unethical to withhold the vaccinations.

Other times, random assignment might be possible, but it is very challenging. For example, with vitamin consumption, it’s generally thought that if vitamin supplements cause health improvements, it’s only after very long-term use. It’s hard to enforce random assignment with a strict regimen for usage in one group and non-usage in the other group over the long-run. Or imagine a study about smoking. The researchers would find it difficult to assign subjects to the smoking and non-smoking groups randomly!

Fortunately, if you can’t use random assignment to help reduce the problem of confounding variables, there are different methods available. The other primary approach is to perform an observational study and incorporate the confounders into the statistical model itself. For more information, read my post Observational Studies Explained .

Read About Real Experiments that Used Random Assignment

I’ve written several blog posts about studies that have used random assignment to make causal inferences. Read studies about the following:

  • Flu Vaccinations
  • COVID-19 Vaccinations



Reader Interactions


November 13, 2019 at 4:59 am

Hi Jim, I have a question of randomly assigning participants to one of two conditions when it is an ongoing study and you are not sure of how many participants there will be. I am using this random assignment tool for factorial experiments. http://methodologymedia.psu.edu/most/rannumgenerator It asks you for the total number of participants but at this point, I am not sure how many there will be. Thanks for any advice you can give me, Floyd


May 28, 2019 at 11:34 am

Jim, can you comment on the validity of using the following approach when we can’t use random assignments. I’m in education, we have an ACT prep course that we offer. We can’t force students to take it and we can’t keep them from taking it either. But we want to know if it’s working. Let’s say that by senior year all students who are going to take the ACT have taken it. Let’s also say that I’m only including students who have taking it twice (so I can show growth between first and second time taking it). What I’ve done to address confounders is to go back to say 8th or 9th grade (prior to anyone taking the ACT or the ACT prep course) and run an analysis showing the two groups are not significantly different to start with. Is this valid? If the ACT prep students were higher achievers in 8th or 9th grade, I could not assume my prep course is effecting greater growth, but if they were not significantly different in 8th or 9th grade, I can assume the significant difference in ACT growth (from first to second testing) is due to the prep course. Yes or no?


May 26, 2019 at 5:37 pm

Nice post! I think the key to understanding scientific research is to understand randomization. And most people don’t get it.


May 27, 2019 at 9:48 pm

Thank you, Anoop!

I think randomness in an experiment is a funny thing. The issue of confounding factors is a serious problem. You might not even know what they are! But, use random assignment and, voila, the problem usually goes away! If you can’t use random assignment, suddenly you have a whole host of issues to worry about, which I’ll be writing about in more detail in my upcoming post about observational experiments!



Institution for Social and Policy Studies

Why Randomize?

About Randomized Field Experiments

Randomized field experiments allow researchers to scientifically measure the impact of an intervention on a particular outcome of interest.

What is a randomized field experiment?

In a randomized experiment, a study sample is divided into one group that will receive the intervention being studied (the treatment group) and another group that will not receive the intervention (the control group). For instance, a study sample might consist of all registered voters in a particular city. This sample will then be randomly divided into treatment and control groups. Perhaps 40% of the sample will be on a campaign’s Get-Out-the-Vote (GOTV) mailing list and the other 60% of the sample will not receive the GOTV mailings. The outcome measured – voter turnout – can then be compared in the two groups. The difference in turnout will reflect the effectiveness of the intervention.

What does random assignment mean?

The key to randomized experimental research design is in the random assignment of study subjects – for example, individual voters, precincts, media markets or some other group – into treatment or control groups. Randomization has a very specific meaning in this context. It does not refer to haphazard or casual choosing of some and not others. Randomization in this context means that care is taken to ensure that no pattern exists between the assignment of subjects into groups and any characteristics of those subjects. Every subject is as likely as any other to be assigned to the treatment (or control) group.

Randomization is generally achieved by employing a computer program containing a random number generator. Randomization procedures differ based upon the research design of the experiment. Individuals or groups may be randomly assigned to treatment or control groups. Some research designs stratify subjects by geographic, demographic or other factors prior to random assignment in order to maximize the statistical power of the estimated effect of the treatment (e.g., GOTV intervention). Information about the randomization procedure is included in each experiment summary on the site.

What are the advantages of randomized experimental designs?

Randomized experimental design yields the most accurate analysis of the effect of an intervention (e.g., a voter mobilization phone drive or a visit from a GOTV canvasser) on voter behavior. By randomly assigning subjects to be in the group that receives the treatment or to be in the control group, researchers can measure the effect of the mobilization method regardless of other factors that may make some people or groups more likely to participate in the political process.

To provide a simple example, say we are testing the effectiveness of a voter education program on high school seniors. If we allow students from the class to volunteer to participate in the program, and we then compare the volunteers’ voting behavior against those who did not participate, our results will reflect something other than the effects of the voter education intervention. This is because there are, no doubt, qualities about those volunteers that make them different from students who do not volunteer. And, most important for our work, those differences may very well correlate with propensity to vote. Instead of letting students self-select, or even letting teachers select students (as teachers may have biases in who they choose), we could randomly assign all students in a given class to be in either a treatment or control group. This would ensure that those in the treatment and control groups differ solely due to chance.

The value of randomization may also be seen in the use of walk lists for door-to-door canvassers. If canvassers choose which houses they will go to and which they will skip, they may choose houses that seem more inviting or they may choose houses that are placed closely together rather than those that are more spread out. These differences could conceivably correlate with voter turnout. Or if house numbers are chosen by selecting those on the first half of a ten page list, they may be clustered in neighborhoods that differ in important ways from neighborhoods in the second half of the list. Random assignment controls for both known and unknown variables that can creep in with other selection processes to confound analyses.

Randomized experimental design is a powerful tool for drawing valid inferences about cause and effect. The use of randomized experimental design should allow a degree of certainty that the research findings cited in studies that employ this methodology reflect the effects of the interventions being measured and not some other underlying variable or variables.
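
A minimal sketch of the 40/60 split described above might look like the following Python snippet; the function name, the voter IDs, and the fixed seed are illustrative assumptions rather than an actual study procedure.

```python
import random

def assign_gotv_sample(voter_ids, treatment_share=0.4, seed=2024):
    """Randomly place a fixed share of the study sample on the GOTV mailing
    list (treatment group); everyone else forms the control group."""
    rng = random.Random(seed)
    ids = list(voter_ids)
    rng.shuffle(ids)
    n_treatment = round(len(ids) * treatment_share)
    return {"treatment": ids[:n_treatment], "control": ids[n_treatment:]}

groups = assign_gotv_sample(range(1, 1001))
print(len(groups["treatment"]), len(groups["control"]))  # 400 600
```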


The Definition of Random Assignment According to Psychology


Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same opportunity to be assigned to any given group in a study. This helps eliminate any potential bias in the experiment at the outset. Participants are randomly assigned to different groups, such as the treatment group versus the control group. In clinical research, randomized clinical trials are known as the gold standard for meaningful results.

Simple random assignment techniques might involve tactics such as flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to a list of participants. It is important to note that random assignment differs from random selection .

While random selection refers to how participants are randomly chosen from a target population as representatives of that population, random assignment refers to how those chosen participants are then assigned to experimental groups.

Random Assignment In Research

To determine if changes in one variable will cause changes in another variable, psychologists must perform an experiment. Random assignment is a critical part of the experimental design that helps ensure the reliability of the study outcomes.

Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some predictable impact on another variable.

The variable that the experimenters will manipulate in the experiment is known as the independent variable , while the variable that they will then measure for different outcomes is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to get a clear idea if there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

Random Selection

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 60% female and 40% male, then the sample should reflect those same percentages.

Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen to minimize any bias. Once a pool of participants has been selected, it is time to assign them to groups.

By randomly assigning the participants into groups, the experimenters can be fairly sure that each group will have the same characteristics before the independent variable is applied.

Participants might be randomly assigned to the control group , which does not receive the treatment in question. The control group may receive a placebo or receive the standard treatment. Participants may also be randomly assigned to the experimental group , which receives the treatment of interest. In larger studies, there can be multiple treatment groups for comparison.

There are simple methods of random assignment, like rolling a die. However, there are more complex techniques that involve random number generators to remove any human error.

There can also be random assignment to groups with pre-established rules or parameters. For example, if you want to have an equal number of men and women in each of your study groups, you might separate your sample into two groups (by sex) before randomly assigning each of those groups into the treatment group and control group.
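
A rough sketch of this kind of constrained (stratified) random assignment, assuming a simple dictionary per participant and a 50/50 split within each stratum, might look like this:

```python
import random
from collections import defaultdict

def stratified_assignment(participants, stratum_key, seed=None):
    """Split the sample into strata (e.g., by sex), then randomly assign half
    of each stratum to the treatment group and half to the control group."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in participants:
        strata[stratum_key(person)].append(person)

    assignment = {"treatment": [], "control": []}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        assignment["treatment"].extend(members[:half])
        assignment["control"].extend(members[half:])
    return assignment

# Example: 20 women and 20 men end up evenly split between the two groups.
sample = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(1, 41)]
result = stratified_assignment(sample, lambda p: p["sex"], seed=3)
print(len(result["treatment"]), len(result["control"]))
```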

Random assignment is essential because it increases the likelihood that the groups are the same at the outset. With all characteristics being equal between groups, other than the application of the independent variable, any differences found between group outcomes can be more confidently attributed to the effect of the intervention.

Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group.

The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test.

Participants in both groups then take the test, and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, but it also makes it easier to generalize the results of a tested sample of participants to a larger population.

Random assignment helps ensure that members of each group in the experiment are the same, which means that the groups are also likely more representative of what is present in the larger population of interest. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.


By Kendra Cherry, MSEd. Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Elements of Research

                                                                                   

Random assignment is a procedure used in experiments to create multiple study groups that include participants with similar characteristics so that the groups are equivalent at the beginning of the study. The procedure involves assigning individuals to an experimental treatment or program at random, or by chance (like the flip of a coin). This means that each individual has an equal chance of being assigned to either group. Usually in studies that involve random assignment, participants will receive a new treatment or program, will receive nothing at all or will receive an existing treatment. When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned.

The benefit of using random assignment is that it “evens the playing field.” This means that the groups will differ only in the program or treatment to which they are assigned. If both groups are equivalent except for the program or treatment that they receive, then any change that is observed after comparing information collected about individuals at the beginning of the study and again at the end of the study can be attributed to the program or treatment. This way, the researcher has more confidence that any changes that might have occurred are due to the treatment under study and not to the characteristics of the group.

A potential problem with random assignment is the temptation to ignore the random assignment procedures. For example, it may be tempting to assign an overweight participant to the treatment group that includes participation in a weight-loss program. Ignoring random assignment procedures in this study limits the ability to determine whether or not the weight loss program is effective because the groups will not be randomized. Research staff must follow random assignment protocol, if that is part of the study design, to maintain the integrity of the research. Failure to follow procedures used for random assignment prevents the study outcomes from being meaningful and applicable to the groups represented.

                                

                                                                                                          

 


Chapter 6: Data Collection Strategies

6.1.1 Random Assignation

As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control group are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignation is the primary way that researchers attempt to control extraneous variables across conditions. Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria.  One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands on the heads side, the participant is assigned to Condition A, and if it lands on the tails side, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and, if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the computer program often handles the random assignment, which is obviously much easier. You can also find programs online to help you randomize your random assignation. For example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions.
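
As an informal illustration of block randomization (not the Research Randomizer’s actual implementation), the following Python sketch generates a condition sequence in which each condition appears once per block, in random order, before any condition repeats; each new participant would then receive the next condition in the sequence.

```python
import random

def block_randomized_sequence(conditions, n_participants, seed=None):
    """Build a condition sequence in which every condition appears once per
    block, in random order, before any condition repeats."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        rng.shuffle(block)  # random order within this block
        sequence.extend(block)
    return sequence[:n_participants]

# Example: a block-randomized sequence of three conditions for the first nine participants.
print(block_randomized_sequence(["A", "B", "C"], 9, seed=5))
```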

Random assignation is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignation with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods for the Social Sciences: An Introduction Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Logo for Open Library Publishing Platform

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

As previously mentioned, one of the characteristics of a true experiment is that researchers use a random process to decide which participants are tested under which conditions. Random assignation is a powerful research technique that addresses the assumption of pre-test equivalence – that the experimental and control group are equal in all respects before the administration of the independent variable (Palys & Atchison, 2014).

Random assignation is the primary way that researchers attempt to control extraneous variables across conditions. Random assignation is associated with experimental research methods. In its strictest sense, random assignment should meet two criteria.  One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus, one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands on the heads side, the participant is assigned to Condition A, and if it lands on the tails side, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and, if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested.

However, one problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible.

One approach is block randomization. In block randomization, all of the conditions occur once in the sequence before any of them is repeated; then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. When the procedure is computerized, the program often handles the random assignment, which is obviously much easier. You can also find online tools that will generate an assignment sequence for you. For example, the Research Randomizer website will generate block randomization sequences for any number of participants and conditions (Research Randomizer).
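
Block randomization is also easy to script yourself. The sketch below, again with made-up condition labels and a made-up participant count, builds the kind of pre-generated sequence described above:

```python
import random

# Block randomization: the sequence is built in blocks, and each block contains
# every condition exactly once in a shuffled order. Group sizes stay as equal
# as possible while the order of conditions remains unpredictable.
conditions = ["A", "B", "C"]
n_participants = 12

sequence = []
while len(sequence) < n_participants:
    block = conditions[:]      # one copy of each condition
    random.shuffle(block)      # random order within this block
    sequence.extend(block)

sequence = sequence[:n_participants]  # trim in case n is not a multiple of 3
print(sequence)
```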

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are several reasons this may not be a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this confound is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design. Note: Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population; we will talk about this in Chapter 7.

Research Methods, Data Collection and Ethics Copyright © 2020 by Valerie Sheppard is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Random Assignment in Psychology (Definition + 40 Examples)


Have you ever wondered how researchers discover new ways to help people learn, make decisions, or overcome challenges? A hidden hero in this adventure of discovery is a method called random assignment, a cornerstone in psychological research that helps scientists uncover the truths about the human mind and behavior.

Random Assignment is a process used in research where each participant has an equal chance of being placed in any group within the study. This technique is essential in experiments as it helps to eliminate biases, ensuring that the different groups being compared are similar in all important aspects.

By doing so, researchers can be confident that any differences observed are likely due to the variable being tested, rather than other factors.

In this article, we’ll explore the intriguing world of random assignment, diving into its history, principles, real-world examples, and the impact it has had on the field of psychology.

History of Random Assignment


Stepping back in time, we delve into the origins of random assignment, which finds its roots in the early 20th century.

The pioneering mind behind this innovative technique was Sir Ronald A. Fisher, a British statistician and biologist. Fisher introduced the concept of random assignment in the 1920s, aiming to improve the quality and reliability of experimental research.

His contributions laid the groundwork for the method's evolution and its widespread adoption in various fields, particularly in psychology.

Fisher’s groundbreaking work on random assignment was motivated by his desire to control for confounding variables – those pesky factors that could muddy the waters of research findings.

By assigning participants to different groups purely by chance, he realized that the influence of these confounding variables could be minimized, paving the way for more accurate and trustworthy results.

Early Studies Utilizing Random Assignment

Following Fisher's initial development, random assignment started to gain traction in the research community. Early studies adopting this methodology focused on a variety of topics, from agriculture (which was Fisher’s primary field of interest) to medicine and psychology.

The approach allowed researchers to draw stronger conclusions from their experiments, bolstering the development of new theories and practices.

One notable early study utilizing random assignment was conducted in the field of educational psychology. Researchers were keen to understand the impact of different teaching methods on student outcomes.

By randomly assigning students to various instructional approaches, they were able to isolate the effects of the teaching methods, leading to valuable insights and recommendations for educators.

Evolution of the Methodology

As the decades rolled on, random assignment continued to evolve and adapt to the changing landscape of research.

Advances in technology introduced new tools and techniques for implementing randomization, such as computerized random number generators, which offered greater precision and ease of use.

The application of random assignment expanded beyond the confines of the laboratory, finding its way into field studies and large-scale surveys.

Researchers across diverse disciplines embraced the methodology, recognizing its potential to enhance the validity of their findings and contribute to the advancement of knowledge.

From its humble beginnings in the early 20th century to its widespread use today, random assignment has proven to be a cornerstone of scientific inquiry.

Its development and evolution have played a pivotal role in shaping the landscape of psychological research, driving discoveries that have improved lives and deepened our understanding of the human experience.

Principles of Random Assignment

Delving into the heart of random assignment, we uncover the theories and principles that form its foundation.

The method is steeped in the basics of probability theory and statistical inference, ensuring that each participant has an equal chance of being placed in any group, thus fostering fair and unbiased results.

Basic Principles of Random Assignment

Understanding the core principles of random assignment is key to grasping its significance in research. There are three principles: equal probability of selection, reduction of bias, and ensuring representativeness.

The first principle, equal probability of selection, ensures that every participant has an identical chance of being assigned to any group in the study. This randomness is crucial as it mitigates the risk of bias and establishes a level playing field.

The second principle focuses on the reduction of bias . Random assignment acts as a safeguard, ensuring that the groups being compared are alike in all essential aspects before the experiment begins.

This similarity between groups allows researchers to attribute any differences observed in the outcomes directly to the independent variable being studied.

Lastly, ensuring representativeness is a vital principle. When participants are assigned randomly, the resulting groups are more likely to be representative of the larger population.

This characteristic is crucial for the generalizability of the study’s findings, allowing researchers to apply their insights broadly.

Theoretical Foundation

The theoretical foundation of random assignment lies in probability theory and statistical inference.

Probability theory deals with the likelihood of different outcomes, providing a mathematical framework for analyzing random phenomena. In the context of random assignment, it helps in ensuring that each participant has an equal chance of being placed in any group.

Statistical inference, on the other hand, allows researchers to draw conclusions about a population based on a sample of data drawn from that population. It is the mechanism through which the results of a study can be generalized to a broader context.

Random assignment enhances the reliability of statistical inferences by reducing biases and ensuring that the sample is representative.

Differentiating Random Assignment from Random Selection

It’s essential to distinguish between random assignment and random selection, as the two terms, while related, have distinct meanings in the realm of research.

Random assignment refers to how participants are placed into different groups in an experiment, aiming to control for confounding variables and help determine causes.

In contrast, random selection pertains to how individuals are chosen to participate in a study. This method is used to ensure that the sample of participants is representative of the larger population, which is vital for the external validity of the research.

While both methods are rooted in randomness and probability, they serve different purposes in the research process.

Understanding the theories, principles, and distinctions of random assignment illuminates its pivotal role in psychological research.

This method, anchored in probability theory and statistical inference, serves as a beacon of reliability, guiding researchers in their quest for knowledge and ensuring that their findings stand the test of validity and applicability.

Methodology of Random Assignment


Implementing random assignment in a study is a meticulous process that involves several crucial steps.

The initial step is participant selection, where individuals are chosen to partake in the study. This stage is critical to ensure that the pool of participants is diverse and representative of the population the study aims to generalize to.

Once the pool of participants has been established, the actual assignment process begins. In this step, each participant is allocated randomly to one of the groups in the study.

Researchers use various tools, such as random number generators or computerized methods, to ensure that this assignment is genuinely random and free from biases.

Monitoring and adjusting form the final step in the implementation of random assignment. Researchers need to continuously observe the groups to ensure that they remain comparable in all essential aspects throughout the study.

If any significant discrepancies arise, adjustments might be necessary to maintain the study’s integrity and validity.

Tools and Techniques Used

The evolution of technology has introduced a variety of tools and techniques to facilitate random assignment.

Random number generators, both manual and computerized, are commonly used to assign participants to different groups. These generators ensure that each individual has an equal chance of being placed in any group, upholding the principle of equal probability of selection.

In addition to random number generators, researchers often use specialized computer software designed for statistical analysis and experimental design.

These software programs offer advanced features that allow for precise and efficient random assignment, minimizing the risk of human error and enhancing the study’s reliability.
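
As a rough illustration of what such tools do under the hood, the following sketch uses Python's standard library with a fixed seed so the allocation can be reproduced and audited later. The seed value, participant IDs, and group labels are invented placeholders, not features of any particular software package:

```python
import random

# Generic computerized assignment with a fixed seed: rerunning the script
# reproduces exactly the same allocation, which supports auditing.
rng = random.Random(20240501)

participant_ids = [f"P{i:03d}" for i in range(1, 21)]
groups = ["treatment", "control"]

allocation = {pid: rng.choice(groups) for pid in participant_ids}
for pid, group in allocation.items():
    print(pid, "->", group)
```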

Ethical Considerations

The implementation of random assignment is not devoid of ethical considerations. Informed consent is a fundamental ethical principle that researchers must uphold.

Informed consent means that every participant should be fully informed about the nature of the study, the procedures involved, and any potential risks or benefits, ensuring that they voluntarily agree to participate.

Beyond informed consent, researchers must conduct a thorough risk and benefit analysis. The potential benefits of the study should outweigh any risks or harms to the participants.

Safeguarding the well-being of participants is paramount, and any study employing random assignment must adhere to established ethical guidelines and standards.

Conclusion of Methodology

The methodology of random assignment, while seemingly straightforward, is a multifaceted process that demands precision, fairness, and ethical integrity. From participant selection to assignment and monitoring, each step is crucial to ensure the validity of the study’s findings.

The tools and techniques employed, coupled with a steadfast commitment to ethical principles, underscore the significance of random assignment as a cornerstone of robust psychological research.

Benefits of Random Assignment in Psychological Research

The impact and importance of random assignment in psychological research cannot be overstated. It is fundamental to a study's soundness: it allows researchers to determine whether the manipulation, rather than some other factor, actually caused the results they observed, and it helps ensure that the findings can be applied to the real world.

Facilitating Causal Inferences

When participants are randomly assigned to different groups, researchers can be more confident that the observed effects are due to the independent variable being changed, and not other factors.

This ability to determine the cause is called causal inference .

This confidence allows for the drawing of causal relationships, which are foundational for theory development and application in psychology.

Ensuring Internal Validity

One of the foremost impacts of random assignment is its ability to enhance the internal validity of an experiment.

Internal validity refers to the extent to which a researcher can assert that changes in the dependent variable are solely due to manipulations of the independent variable, and not due to confounding variables.

By ensuring that each participant has an equal chance of being in any condition of the experiment, random assignment helps control for participant characteristics that could otherwise complicate the results.

Enhancing Generalizability

Beyond internal validity, random assignment also plays a crucial role in enhancing the generalizability of research findings.

When done correctly, it helps ensure that the sample groups are representative of the larger population, which allows researchers to apply their findings more broadly.

This representative nature is essential for the practical application of research, impacting policy, interventions, and psychological therapies.

Limitations of Random Assignment

Potential for Implementation Issues

While the principles of random assignment are robust, the method can face implementation issues.

One of the most common problems is logistical constraints. Some studies, due to their nature or the specific population being studied, find it challenging to implement random assignment effectively.

For instance, in educational settings, logistical issues such as class schedules and school policies might prevent the random allocation of students to different teaching methods.

Ethical Dilemmas

Random assignment, while methodologically sound, can also present ethical dilemmas.

In some cases, withholding a potentially beneficial treatment from one of the groups of participants can raise serious ethical questions, especially in medical or clinical research where participants' well-being might be directly affected.

Researchers must navigate these ethical waters carefully, balancing the pursuit of knowledge with the well-being of participants.

Generalizability Concerns

Even when implemented correctly, random assignment does not always guarantee generalizable results.

The types of people in the participant pool, the specific context of the study, and the nature of the variables being studied can all influence the extent to which the findings can be applied to the broader population.

Researchers must be cautious in making broad generalizations from studies, even those employing strict random assignment.

Practical and Real-World Limitations

In the real world, many variables cannot be manipulated for ethical or practical reasons, limiting the applicability of random assignment.

For instance, researchers cannot randomly assign individuals to different levels of intelligence, socioeconomic status, or cultural backgrounds.

This limitation necessitates the use of other research designs, such as correlational or observational studies , when exploring relationships involving such variables.

Response to Critiques

In response to these critiques, proponents of random assignment argue that the method, despite its limitations, remains one of the most reliable ways to establish cause and effect in experimental research.

They acknowledge the challenges and ethical considerations but emphasize the rigorous frameworks in place to address them.

The ongoing discussion around the limitations and critiques of random assignment contributes to the evolution of the method, making sure it is continuously relevant and applicable in psychological research.

While random assignment is a powerful tool in experimental research, it is not without its critiques and limitations. Implementation issues, ethical dilemmas, generalizability concerns, and real-world limitations can pose significant challenges.

However, the continued discourse and refinement around these issues underline the method's enduring significance in the pursuit of knowledge in psychology.

By being careful with how we do things and doing what's right, random assignment stays a really important part of studying how people act and think.

Real-World Applications and Examples


Random assignment has been employed in many studies across various fields of psychology, leading to significant discoveries and advancements.

Here are some real-world applications and examples illustrating the diversity and impact of this method:

  • Medicine and Health Psychology: Randomized Controlled Trials (RCTs) are the gold standard in medical research. In these studies, participants are randomly assigned to either the treatment or control group to test the efficacy of new medications or interventions.
  • Educational Psychology: Studies in this field have used random assignment to explore the effects of different teaching methods, classroom environments, and educational technologies on student learning and outcomes.
  • Cognitive Psychology: Researchers have employed random assignment to investigate various aspects of human cognition, including memory, attention, and problem-solving, leading to a deeper understanding of how the mind works.
  • Social Psychology: Random assignment has been instrumental in studying social phenomena, such as conformity, aggression, and prosocial behavior, shedding light on the intricate dynamics of human interaction.

Let's get into some specific examples. You'll need to know one term though, and that is "control group." A control group is a set of participants in a study who do not receive the treatment or intervention being tested , serving as a baseline to compare with the group that does, in order to assess the effectiveness of the treatment.

  • Smoking Cessation Study: Researchers used random assignment to put participants into two groups. One group received a new anti-smoking program, while the other did not. This helped determine if the program was effective in helping people quit smoking.
  • Math Tutoring Program: A study on students used random assignment to place them into two groups. One group received additional math tutoring, while the other continued with regular classes, to see if the extra help improved their grades.
  • Exercise and Mental Health: Adults were randomly assigned to either an exercise group or a control group to study the impact of physical activity on mental health and mood.
  • Diet and Weight Loss: A study randomly assigned participants to different diet plans to compare their effectiveness in promoting weight loss and improving health markers.
  • Sleep and Learning: Researchers randomly assigned students to either a sleep extension group or a regular sleep group to study the impact of sleep on learning and memory.
  • Classroom Seating Arrangement: Teachers used random assignment to place students in different seating arrangements to examine the effect on focus and academic performance.
  • Music and Productivity: Employees were randomly assigned to listen to music or work in silence to investigate the effect of music on workplace productivity.
  • Medication for ADHD: Children with ADHD were randomly assigned to receive either medication, behavioral therapy, or a placebo to compare treatment effectiveness.
  • Mindfulness Meditation for Stress: Adults were randomly assigned to a mindfulness meditation group or a waitlist control group to study the impact on stress levels.
  • Video Games and Aggression: A study randomly assigned participants to play either violent or non-violent video games and then measured their aggression levels.
  • Online Learning Platforms: Students were randomly assigned to use different online learning platforms to evaluate their effectiveness in enhancing learning outcomes.
  • Hand Sanitizers in Schools: Schools were randomly assigned to use hand sanitizers or not to study the impact on student illness and absenteeism.
  • Caffeine and Alertness: Participants were randomly assigned to consume caffeinated or decaffeinated beverages to measure the effects on alertness and cognitive performance.
  • Green Spaces and Well-being: Neighborhoods were randomly assigned to receive green space interventions to study the impact on residents’ well-being and community connections.
  • Pet Therapy for Hospital Patients: Patients were randomly assigned to receive pet therapy or standard care to assess the impact on recovery and mood.
  • Yoga for Chronic Pain: Individuals with chronic pain were randomly assigned to a yoga intervention group or a control group to study the effect on pain levels and quality of life.
  • Flu Vaccines Effectiveness: Different groups of people were randomly assigned to receive either the flu vaccine or a placebo to determine the vaccine’s effectiveness.
  • Reading Strategies for Dyslexia: Children with dyslexia were randomly assigned to different reading intervention strategies to compare their effectiveness.
  • Physical Environment and Creativity: Participants were randomly assigned to different room setups to study the impact of physical environment on creative thinking.
  • Laughter Therapy for Depression: Individuals with depression were randomly assigned to laughter therapy sessions or control groups to assess the impact on mood.
  • Financial Incentives for Exercise: Participants were randomly assigned to receive financial incentives for exercising to study the impact on physical activity levels.
  • Art Therapy for Anxiety: Individuals with anxiety were randomly assigned to art therapy sessions or a waitlist control group to measure the effect on anxiety levels.
  • Natural Light in Offices: Employees were randomly assigned to workspaces with natural or artificial light to study the impact on productivity and job satisfaction.
  • School Start Times and Academic Performance: Schools were randomly assigned different start times to study the effect on student academic performance and well-being.
  • Horticulture Therapy for Seniors: Older adults were randomly assigned to participate in horticulture therapy or traditional activities to study the impact on cognitive function and life satisfaction.
  • Hydration and Cognitive Function: Participants were randomly assigned to different hydration levels to measure the impact on cognitive function and alertness.
  • Intergenerational Programs: Seniors and young people were randomly assigned to intergenerational programs to study the effects on well-being and cross-generational understanding.
  • Therapeutic Horseback Riding for Autism: Children with autism were randomly assigned to therapeutic horseback riding or traditional therapy to study the impact on social communication skills.
  • Active Commuting and Health: Employees were randomly assigned to active commuting (cycling, walking) or passive commuting to study the effect on physical health.
  • Mindful Eating for Weight Management: Individuals were randomly assigned to mindful eating workshops or control groups to study the impact on weight management and eating habits.
  • Noise Levels and Learning: Students were randomly assigned to classrooms with different noise levels to study the effect on learning and concentration.
  • Bilingual Education Methods: Schools were randomly assigned different bilingual education methods to compare their effectiveness in language acquisition.
  • Outdoor Play and Child Development: Children were randomly assigned to different amounts of outdoor playtime to study the impact on physical and cognitive development.
  • Social Media Detox: Participants were randomly assigned to a social media detox or regular usage to study the impact on mental health and well-being.
  • Therapeutic Writing for Trauma Survivors: Individuals who experienced trauma were randomly assigned to therapeutic writing sessions or control groups to study the impact on psychological well-being.
  • Mentoring Programs for At-risk Youth: At-risk youth were randomly assigned to mentoring programs or control groups to assess the impact on academic achievement and behavior.
  • Dance Therapy for Parkinson’s Disease: Individuals with Parkinson’s disease were randomly assigned to dance therapy or traditional exercise to study the effect on motor function and quality of life.
  • Aquaponics in Schools: Schools were randomly assigned to implement aquaponics programs to study the impact on student engagement and environmental awareness.
  • Virtual Reality for Phobia Treatment: Individuals with phobias were randomly assigned to virtual reality exposure therapy or traditional therapy to compare effectiveness.
  • Gardening and Mental Health: Participants were randomly assigned to engage in gardening or other leisure activities to study the impact on mental health and stress reduction.

Each of these studies exemplifies how random assignment is utilized in various fields and settings, shedding light on the multitude of ways it can be applied to glean valuable insights and knowledge.

Real-world Impact of Random Assignment


Random assignment is like a key tool in the world of learning about people's minds and behaviors. It’s super important and helps in many different areas of our everyday lives. It helps make better rules, creates new ways to help people, and is used in lots of different fields.

Health and Medicine

In health and medicine, random assignment has helped doctors and scientists make lots of discoveries. It’s a big part of tests that help create new medicines and treatments.

By putting people into different groups by chance, scientists can really see if a medicine works.

This has led to new ways to help people with all sorts of health problems, like diabetes, heart disease, and mental health issues like depression and anxiety.

Education

Schools and education have also learned a lot from random assignment. Researchers have used it to look at different ways of teaching, what kind of classrooms are best, and how technology can help learning.

This knowledge has helped make better school rules, develop what we learn in school, and find the best ways to teach students of all ages and backgrounds.

Workplace and Organizational Behavior

Random assignment helps us understand how people act at work and what makes a workplace good or bad.

Studies have looked at different kinds of workplaces, how bosses should act, and how teams should be put together. This has helped companies make better rules and create places to work that are helpful and make people happy.

Environmental and Social Changes

Random assignment is also used to see how changes in the community and environment affect people. Studies have looked at community projects, changes to the environment, and social programs to see how they help or hurt people’s well-being.

This has led to better community projects, efforts to protect the environment, and programs to help people in society.

Technology and Human Interaction

In our world where technology is always changing, studies with random assignment help us see how tech like social media, virtual reality, and online stuff affect how we act and feel.

This has helped make better and safer technology and rules about using it so that everyone can benefit.

The effects of random assignment go far and wide, way beyond just a science lab. It helps us understand lots of different things, leads to new and improved ways to do things, and really makes a difference in the world around us.

From making healthcare and schools better to creating positive changes in communities and the environment, the real-world impact of random assignment shows just how important it is in helping us learn and make the world a better place.

So, what have we learned? Random assignment is like a super tool in learning about how people think and act. It's like a detective helping us find clues and solve mysteries in many parts of our lives.

From creating new medicines to helping kids learn better in school, and from making workplaces happier to protecting the environment, it’s got a big job!

This method isn’t just something scientists use in labs; it reaches out and touches our everyday lives. It helps make positive changes and teaches us valuable lessons.

Whether we are talking about technology, health, education, or the environment, random assignment is there, working behind the scenes, making things better and safer for all of us.

In the end, the simple act of putting people into groups by chance helps us make big discoveries and improvements. It’s like throwing a small stone into a pond and watching the ripples spread out far and wide.

Thanks to random assignment, we are always learning, growing, and finding new ways to make our world a happier and healthier place for everyone!


Random Assignment


Gideon J. Mellenbergh


A substantial part of behavioral research is aimed at the testing of substantive hypotheses. In general, a hypothesis testing study investigates the causal influence of an independent variable (IV) on a dependent variable (DV). The discussion is restricted to IVs that can be manipulated by the researcher, such as experimental (E-) and control (C-) conditions. Association between the IV and DV does not imply that the IV has a causal influence on the DV. The association can be spurious because it is caused by another variable (OV). OVs that cause spurious associations come from (1) the participant, (2) the research situation, and (3) the reactions of the participants to the research situation. If participants select their own (E- or C-) condition, or others select a condition for them, the assignment to conditions is usually biased (e.g., males prefer the E-condition and females the C-condition), and participant variables (e.g., participants’ sex) may cause a spurious association between the IV and DV. This selection bias is a systematic error of a design. It is counteracted by random assignment of participants to conditions. Random assignment guarantees that all participant variables are related to the IV by chance, and turns systematic error into random error. Random errors decrease the precision of parameter estimates. Random error variance is reduced by including auxiliary variables in the randomized design. A randomized block design uses an auxiliary variable to divide the participants into relatively homogeneous blocks, and randomly assigns participants to the conditions within each block. A covariate is an auxiliary variable that is used in the statistical analysis of the data to reduce the error variance. Cluster randomization randomly assigns clusters (e.g., classes of students) to conditions, which yields specific problems. Random assignment should not be confused with random selection. Random assignment controls for selection bias, whereas random selection makes it possible to generalize study results from a sample to the population.




About this chapter

Mellenbergh, G.J. (2019). Random Assignment. In: Counteracting Methodological Errors in Behavioral Research. Springer, Cham. https://doi.org/10.1007/978-3-030-12272-0_4



Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs .

Random assignment is a key part of experimental design . It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.

Table of contents

  • Why does random assignment matter
  • Random sampling vs random assignment
  • How do you use random assignment
  • When is random assignment not used
  • Frequently asked questions about random assignment

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

You use three groups of participants that are each given a different level of the independent variable:

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment helps you make sure that the treatment groups don’t differ in systematic or biased ways at the start of the experiment.

If you don’t use random assignment, you may not be able to rule out alternative explanations for your results.

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.


Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.

Random sample vs random assignment

Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences .

You use a simple random sample to collect data. Because you have access to the whole population (all employees), you can assign all 8,000 employees a number and use a random number generator to select 300 employees. These 300 employees are your full sample.

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable.

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Use a dice: When you have three groups, for each number on the list, roll a die to decide which of the groups they will be in. For example, assume that rolling 1 or 2 lands them in a control group; 3 or 4 in an experimental group; and 5 or 6 in a second control or experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.
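
For readers who prefer to script these manual methods, here is a minimal Python simulation of the coin-flip and die-roll approaches listed above. The ten-person roster is invented, and the die-to-group mapping follows the illustrative 1–2 / 3–4 / 5–6 example in the list:

```python
import random

participants = [f"Participant {i}" for i in range(1, 11)]

# Coin flip: two groups, each participant assigned independently.
coin_assignment = {
    p: ("control" if random.random() < 0.5 else "experimental") for p in participants
}

# Die roll: three groups, using the mapping from the example above.
def roll_to_group(roll):
    if roll in (1, 2):
        return "control"
    if roll in (3, 4):
        return "experimental"
    return "second group"

die_assignment = {p: roll_to_group(random.randint(1, 6)) for p in participants}

print(coin_assignment)
print(die_assignment)
```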

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power .

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.

In an experimental matched design, you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.
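
A small sketch of how blocking followed by within-block random assignment might look in code, using an invented roster and the “college student vs graduate” blocking variable from the example above:

```python
import random

# Randomised block design sketch: group participants into blocks by a shared
# characteristic, then randomly assign to conditions within each block.
participants = [
    ("Ana", "college student"), ("Ben", "college student"),
    ("Cleo", "college student"), ("Dev", "college student"),
    ("Eli", "graduate"), ("Fay", "graduate"),
    ("Gus", "graduate"), ("Hana", "graduate"),
]
conditions = ["control", "treatment"]

blocks = {}
for name, level in participants:
    blocks.setdefault(level, []).append(name)

assignment = {}
for level, members in blocks.items():
    random.shuffle(members)
    # Deal the shuffled members alternately into conditions so each block is balanced.
    for i, name in enumerate(members):
        assignment[name] = conditions[i % len(conditions)]

for name, condition in assignment.items():
    print(name, "->", condition)
```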

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can’t assign participants to groups, you can also conduct a quasi-experimental study . In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling, is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment, assign a unique number to every member of your study’s sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.

Cite this Scribbr article


Bhandari, P. (2023, February 13). Random Assignment in Experiments | Introduction & Examples. Scribbr. Retrieved 21 August 2024, from https://www.scribbr.co.uk/research-methods/random-assignment-experiments/


Explore Psychology

What Is Random Assignment in Psychology?


Random assignment means that every participant has the same chance of being chosen for the experimental or control group. It involves using procedures that rely on chance to assign participants to groups. Doing this means that every participant in a study has an equal opportunity to be assigned to any group.

For example, in a psychology experiment, participants might be assigned to either a control or experimental group. Some experiments might only have one experimental group, while others may have several treatment variations.

Using random assignment means that each participant has the same chance of being assigned to any of these groups.


How to Use Random Assignment

So what type of procedures might psychologists utilize for random assignment? Strategies can include the following (a brief simulation of one of these appears after the list):

  • Flipping a coin
  • Assigning random numbers
  • Rolling dice
  • Drawing names out of a hat
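
Below is a brief simulation of the last strategy, drawing names out of a hat. The names are made up for illustration:

```python
import random

# "Drawing names out of a hat," simulated: shuffle the names, then deal them
# alternately into the two groups so the group sizes stay equal.
names = ["Aisha", "Bruno", "Chen", "Dana", "Emeka", "Farah", "Gita", "Hugo"]
random.shuffle(names)

groups = {"control": [], "experimental": []}
for i, name in enumerate(names):
    groups["control" if i % 2 == 0 else "experimental"].append(name)

print(groups)
```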

How Does Random Assignment Work?

A psychology experiment aims to determine if changes in one variable lead to changes in another variable. Researchers will first begin by coming up with a hypothesis. Once researchers have an idea of what they think they might find in a population, they will come up with an experimental design and then recruit participants for their study.

Once they have a pool of participants representative of the population they are interested in looking at, they will randomly assign the participants to their groups.

  • Control group: Some participants will end up in the control group, which serves as a baseline and does not receive the experimental treatment (the manipulation of the independent variable).
  • Experimental group: Other participants will end up in one of the experimental groups, which receive some form of the independent variable manipulation.

By using random assignment, the researchers make it more likely that the groups are equal at the start of the experiment. Since the groups are the same on other variables, it can be assumed that any changes that occur are the result of varying the independent variables.

After a treatment has been administered, the researchers will then collect data in order to determine if the independent variable had any impact on the dependent variable.

Random Assignment vs. Random Selection

It is important to remember that random assignment is not the same thing as random selection , also known as random sampling.

Random selection instead involves how people are chosen to be in a study. Using random selection, every member of a population stands an equal chance of being chosen for a study or experiment.

So random sampling affects how participants are chosen for a study, while random assignment affects how participants are then assigned to groups.

Examples of Random Assignment

Imagine that a psychology researcher is conducting an experiment to determine if getting adequate sleep the night before an exam results in better test scores.

Forming a Hypothesis

They hypothesize that participants who get 8 hours of sleep will do better on a math exam than participants who only get 4 hours of sleep.

Obtaining Participants

The researcher starts by obtaining a pool of participants. They find 100 participants from a local university. Half of the participants are female, and half are male.

Randomly Assign Participants to Groups

The researcher then assigns random numbers to each participant and uses a random number generator to randomly assign each number to either the 4-hour or 8-hour sleep groups.
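
One way this assignment step could be scripted is shown below. The fixed seed and the even 50/50 split are illustrative assumptions, not details given in the example:

```python
import random

# Shuffle the participant numbers, then split them into two equal groups of 50.
rng = random.Random(42)  # fixed seed so the allocation is reproducible

participant_numbers = list(range(1, 101))  # 100 participants
rng.shuffle(participant_numbers)

four_hour_group = sorted(participant_numbers[:50])
eight_hour_group = sorted(participant_numbers[50:])

print("4-hour sleep group:", four_hour_group)
print("8-hour sleep group:", eight_hour_group)
```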

Conduct the Experiment

Those in the 8-hour sleep group agree to sleep for 8 hours that night, while those in the 4-hour group agree to wake up after only 4 hours. The following day, all of the participants meet in a classroom.

Collect and Analyze Data

Everyone takes the same math test. The test scores are then compared to see if the amount of sleep the night before had any impact on test scores.

Why Is Random Assignment Important in Psychology Research?

Random assignment is important in psychology research because it helps improve a study’s internal validity. This means that researchers can be more confident that the study demonstrates a cause-and-effect relationship between the independent and dependent variables.

Random assignment improves the internal validity by minimizing the risk that there are systematic differences in the participants who are in each group.

Key Points to Remember About Random Assignment

  • Random assignment in psychology involves each participant having an equal chance of being chosen for any of the groups, including the control and experimental groups.
  • It helps control for potential confounding variables, reducing the likelihood of pre-existing differences between groups.
  • This method enhances the internal validity of experiments, allowing researchers to draw more reliable conclusions about cause-and-effect relationships.
  • Random assignment is crucial for creating comparable groups and increasing the scientific rigor of psychological studies.

Purpose and Limitations of Random Assignment

In an experimental study, random assignment is a process by which participants are assigned, with the same chance, to either a treatment or a control group. The goal is to assure an unbiased assignment of participants to treatment options.

Random assignment is considered the gold standard for achieving comparability across study groups, and therefore is the best method for inferring a causal relationship between a treatment (or intervention or risk factor) and an outcome.

Representation of random assignment in an experimental study

Random assignment of participants produces groups that are comparable in their initial characteristics, so that any difference detected between the treatment and the control group at the end of the study can be attributed to the effect of the treatment alone.

How does random assignment produce comparable groups?

1. Random assignment prevents selection bias

Randomization works by removing the researcher’s and the participant’s influence on the treatment allocation. So the allocation can no longer be biased since it is done at random, i.e. in a non-predictable way.

This is in contrast with the real world, where for example, the sickest people are more likely to receive the treatment.

2. Random assignment prevents confounding

A confounding variable is one that is associated with both the intervention and the outcome, and thus can affect the outcome in 2 ways:

Causal diagram representing how confounding works

Either directly:

Direct influence of confounding on the outcome

Or indirectly through the treatment:

Indirect influence of confounding on the outcome

This indirect relationship between the confounding variable and the outcome can cause the treatment to appear to have an influence on the outcome while in reality the treatment is just a mediator of that effect (as it happens to be on the causal pathway between the confounder and the outcome).

Random assignment eliminates the influence of the confounding variables on the treatment since it distributes them at random between the study groups, therefore, ruling out this alternative path or explanation of the outcome.

How random assignment protects from confounding

3. Random assignment also eliminates other threats to internal validity

By distributing all threats (known and unknown) at random between study groups, participants in both the treatment and the control group become equally subject to the effect of any threat to validity. Therefore, comparing the outcome between the 2 groups will bypass the effect of these threats and will only reflect the effect of the treatment on the outcome.

These threats include:

  • History: This is any event that co-occurs with the treatment and can affect the outcome.
  • Maturation: This is the effect of time on the study participants (e.g. participants becoming wiser, hungrier, or more stressed with time) which might influence the outcome.
  • Regression to the mean: This happens when the participants’ outcome score is exceptionally good on a pre-treatment measurement, so the post-treatment measurement scores will naturally regress toward the mean — in simple terms, regression happens since an exceptional performance is hard to maintain. This effect can bias the study since it represents an alternative explanation of the outcome.

Note that randomization does not prevent these effects from happening, it just allows us to control them by reducing their risk of being associated with the treatment.

What if random assignment produced unequal groups?

Question: What should you do if after randomly assigning participants, it turned out that the 2 groups still differ in participants’ characteristics? More precisely, what if randomization accidentally did not balance risk factors that can be alternative explanations between the 2 groups? (For example, if one group includes more male participants, or sicker, or older people than the other group).

Short answer: This is perfectly normal, since randomization only assures an unbiased assignment of participants to groups, i.e. it produces comparable groups, but it does not guarantee the equality of these groups.

A more complete answer: Randomization will not and cannot create 2 groups that are equal on each and every characteristic, because an element of luck is always involved. If you want 2 perfectly equal groups, you would have to match them manually, as is done in a matched pairs design (for more information, see my article on matched pairs design).

This is similar to throwing a die: if you throw it 10 times, the proportion of throws showing a specific face will usually not be exactly 1/6. But it will approach 1/6 if you repeat the experiment a very large number of times and average the results.
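The die analogy is easy to verify with a short simulation (an illustrative sketch, not part of the original text): with only 10 throws the observed share of any given face is usually far from 1/6, but it converges toward 1/6 as the number of throws grows.

```python
import random

random.seed(0)

def proportion_of_sixes(n_throws):
    """Throw a fair die n_throws times and return the observed share of sixes."""
    throws = [random.randint(1, 6) for _ in range(n_throws)]
    return throws.count(6) / n_throws

# Few throws give a noisy estimate; many throws approach the true 1/6 ≈ 0.1667.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9,} throws: proportion of sixes = {proportion_of_sixes(n):.4f}")
```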

So randomization will not produce perfectly equal groups for each specific study, especially if the study has a small sample size. But do not forget that scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when a meta-analysis aggregates the results of a large number of randomized studies.

For each individual study, then, differences between the treatment and control group will exist and will influence the study results. This means that the results of a randomized trial will sometimes be wrong, and that is expected.

BOTTOM LINE:

Although the results of a particular randomized study are unbiased, they will still be affected by sampling error due to chance. The full benefit of random assignment emerges when data are aggregated in a meta-analysis.

Limitations of random assignment

Randomized designs can suffer from:

1. Ethical issues:

Randomization is ethical only if the researcher has no evidence that one treatment is superior to the other.

Also, it would be unethical to randomly assign participants to harmful exposures such as smoking or dangerous chemicals.

2. Low external validity:

With random assignment, external validity (i.e. the generalizability of the study results) is compromised because the results represent what would happen under “ideal” experimental conditions, which are generally very different from what happens at the population level.

In the real world, people who take the treatment might be very different from those who don’t – so the assignment of participants is not a random event, but rather under the influence of all sorts of external factors.

External validity can also be jeopardized when not all participants are eligible for, or willing to accept, the terms of the study.

3. Higher cost of implementation:

An experimental design with random assignment is typically more expensive than an observational study, in which the investigator’s role is simply to observe events without intervening.

Experimental designs also typically take a lot of time to implement, and therefore are less practical when a quick answer is needed.

4. Impracticality when answering non-causal questions:

A randomized trial is our best bet when the goal is to estimate the causal effect of a treatment or a risk factor.

Sometimes, however, the researcher is just interested in predicting the probability of an event or a disease given some risk factors. In that case, the causal relationship between these variables is not important, and observational designs are more suitable for such problems.

5. Impracticality when studying the effect of variables that cannot be manipulated:

The usual objective of studying the effects of risk factors is to propose recommendations that involve changing the level of exposure to these factors.

However, some risk factors cannot be manipulated, so it does not make sense to study them in a randomized trial. For example, it would be impossible to randomly assign participants to age categories, gender, or genetic factors.

6. Difficulty controlling participants:

These difficulties include:

  • Participants refusing to receive the assigned treatment.
  • Participants not adhering to recommendations.
  • Differential loss to follow-up between those who receive the treatment and those who don’t.

All of these issues might occur in a randomized trial, but might not affect an observational study.

  • Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference . 2nd edition. Cengage Learning; 2001.
  • Friedman LM, Furberg CD, DeMets DL, Reboussin DM, Granger CB. Fundamentals of Clinical Trials . 5th ed. 2015 edition. Springer; 2015.

Further reading

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Randomized Block Design


6.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment , which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
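When the procedure is computerized, the coin-flip and random-integer procedures described above take only a few lines. The following Python sketch is one possible implementation (the function names are ours, not the chapter's).

```python
import random

def coin_flip_assignment(participant_ids):
    """Assign each participant independently to Condition A or B (fair coin)."""
    return {pid: random.choice(["A", "B"]) for pid in participant_ids}

def three_condition_assignment(participant_ids):
    """Assign each participant independently to Condition A, B, or C."""
    return {pid: random.choice(["A", "B", "C"]) for pid in participant_ids}

if __name__ == "__main__":
    random.seed(1)
    print(coin_flip_assignment(range(1, 7)))       # e.g. {1: 'A', 2: 'B', ...}
    print(three_condition_assignment(range(1, 7)))
```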

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization . In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website ( http://www.randomizer.org ) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions

Participant | Condition
4 | B
5 | C
6 | A
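A block randomization sequence like the one in Table 6.2 can also be generated in code. The sketch below is our own illustration (not the Research Randomizer's algorithm, although it produces the same kind of sequence): every condition appears once per block, in a random order within each block.

```python
import random

def block_randomization(conditions, n_blocks):
    """Return a sequence in which each condition occurs once per block,
    with the order shuffled independently within every block."""
    sequence = []
    for _ in range(n_blocks):
        block = list(conditions)
        random.shuffle(block)
        sequence.extend(block)
    return sequence

if __name__ == "__main__":
    random.seed(3)
    # Nine participants, three conditions: three blocks of A, B, and C.
    print(block_randomization(["A", "B", "C"], n_blocks=3))
```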

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that, just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition , in which they receive the treatment, or a control condition , in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial .

There are different types of control conditions. In a no-treatment control condition , participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo” ), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” ) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions

Fortunately, there are several solutions to this problem. One is to include a placebo control condition , in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” .

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition , in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery.

Within-Subjects Experiments

In a within-subjects experiment , each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect , where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect , where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect . For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of being randomly assigned to conditions, participants are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.
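Counterbalancing can be scripted in the same spirit. The following sketch (our illustration, with hypothetical function names) lists all possible orders of three conditions and randomly assigns each participant to one of them.

```python
import itertools
import random

def assign_to_orders(conditions, participant_ids):
    """Randomly assign each participant to one complete order of the conditions."""
    all_orders = list(itertools.permutations(conditions))  # ABC, ACB, BAC, BCA, CAB, CBA
    return {pid: random.choice(all_orders) for pid in participant_ids}

if __name__ == "__main__":
    random.seed(7)
    for pid, order in assign_to_orders(["A", "B", "C"], range(1, 7)).items():
        print(f"Participant {pid}: {''.join(order)}")
```

A stricter variant would cycle through the six orders in blocks so that each order is used equally often.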

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
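One simple way to implement this mixed-sequence approach is to shuffle the combined stimulus list separately for each participant, as in this illustrative sketch (the stimulus labels are placeholders).

```python
import random

def mixed_stimulus_order(attractive, unattractive, seed=None):
    """Interleave two stimulus types in a fresh random order for one participant."""
    rng = random.Random(seed)
    stimuli = list(attractive) + list(unattractive)
    rng.shuffle(stimuli)
    return stimuli

if __name__ == "__main__":
    attractive = [f"attractive_defendant_{i}" for i in range(1, 11)]
    unattractive = [f"unattractive_defendant_{i}" for i in range(1, 11)]
    # A different random order of all 20 defendants for each participant.
    for participant_id in range(1, 4):
        print(participant_id, mixed_stimulus_order(attractive, unattractive, seed=participant_id))
```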

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog ) are recalled better than abstract nouns (e.g., truth ).
Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Preference in Random Assignment: Implications for the Interpretation of Randomized Trials

Cathaleene Macias, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA

Paul B. Gold, Department of Counseling and Personnel Services, University of Maryland, College Park, MD 20742, USA

William A. Hargreaves, Department of Psychiatry, University of California, San Francisco, CA, USA

Elliot Aronson, Department of Psychology, University of California, Santa Cruz, CA, USA

Leonard Bickman, Center for Evaluation and Program Improvement, Vanderbilt University, Nashville, TN, USA

Paul J. Barreira, Harvard University Health Services, Harvard University, Boston, MA, USA

Danson R. Jones, Institutional Research, Wharton County Junior College, Wharton, TX 77488, USA

Charles F. Rodican, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA

William H. Fisher, Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA

Random assignment to a preferred experimental condition can increase service engagement and enhance outcomes, while assignment to a less-preferred condition can discourage service receipt and limit outcome attainment. We examined randomized trials for one prominent psychiatric rehabilitation intervention, supported employment, to gauge how often assignment preference might have complicated the interpretation of findings. Condition descriptions, and greater early attrition from services-as-usual comparison conditions, suggest that many study enrollees favored assignment to new rapid-job-placement supported employment, but no study took this possibility into account. Reviews of trials in other service fields are needed to determine whether this design problem is widespread.

The validity of research in any field depends on the extent to which studies rule out alternative explanations for findings and provide meaningful explanations of how and why predicted outcomes were attained (e.g., Bickman 1987 ; Lewin 1943 ; Shadish et al. 2002 ; Trist and Sofer 1959 ). In mental health services research, participants’ expectations about the pros and cons of being randomly assigned to each experimental intervention can offer post hoc explanations for study findings that rival the explanations derived from study hypotheses. Unlike most drug studies that can ‘blind’ participants to their condition assignment, studies that evaluate behavioral or psychosocial interventions typically tell each participant his or her experimental assignment soon after randomization, and being assigned to a non-preferred intervention could be disappointing, or even demoralizing ( Shapiro et al. 2002 ), and thereby reduce participants’ interest in services or motivation to pursue service goals ( Cook and Campbell 1979 ; Shadish 2002 ). On the other hand, if most participants randomly assigned to one experimental condition believe they are fortunate, this condition may have an unfair advantage in outcome comparisons.

Reasons for preferring assignment to a particular experimental condition can be idiosyncratic and diverse, but as long as each condition is assigned the same percentage of participants who are pleased or displeased with their condition assignment, then there will be no overall pattern of condition preferences that could explain differences in outcomes. The greater threat to a valid interpretation of findings occurs when most study enrollees share a general preference for random assignment to one particular condition. Greater preference for one experimental condition over another could stem from general impressions of relative service model effectiveness, or from information that is tangential, e.g., program location on a main bus route or in a safer area of town. Even if random assignment distributes service preferences in equal proportions across conditions, the less attractive experimental condition will receive a higher percentage of participants who are mismatched to their preference, and the more attractive condition will receive a higher percentage of participants matched to their preference. For example, if 60% of all study enrollees prefer condition A and 40% prefer condition B, then, with true equivalence across conditions, service A would have 60% pleased and 40% disappointed assignees, while service B would have 40% pleased and 60% disappointed assignees.
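The arithmetic in this example can be checked with a brief simulation (ours, purely for illustration): given a 60/40 split in preferences and a fair randomization, about 60% of the people assigned to service A are matched to their preference, versus about 40% of those assigned to service B.

```python
import random

random.seed(11)
n = 100_000

# 60% of enrollees prefer condition A, 40% prefer condition B.
preferences = ["A" if random.random() < 0.60 else "B" for _ in range(n)]
# Random assignment ignores those preferences entirely.
assignments = [random.choice(["A", "B"]) for _ in range(n)]

for condition in ("A", "B"):
    assigned_prefs = [p for p, a in zip(preferences, assignments) if a == condition]
    matched = sum(1 for p in assigned_prefs if p == condition) / len(assigned_prefs)
    print(f"Condition {condition}: {matched:.0%} of assignees received their preferred condition")
```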

There is potential to engender a general preference for assignment to a particular experimental intervention whenever a study’s recruitment posters, information sheets, or consent documents depict one intervention as newer or seemingly better, even if no evidence yet supports a difference in intervention effectiveness. For instance, in a supported housing study, if a comparison condition is described as offering familiar ‘services-as-usual’ help with moving into supervised housing, participants might reasonably prefer assignment to a more innovative experimental intervention designed to help individuals find their own independent apartments.

Methodologists have proposed protocol adaptations to the typical randomized trial to measure and monitor the impact of participants’ intervention preferences on study enrollment and engagement in assigned experimental conditions ( Braver and Smith 1996 ; Corrigan and Salzer 2003 ; Lambert and Wood 2000 ; Marcus 1997 ; Staines et al. 1999 ; TenHave et al. 2003 ). Nevertheless, few mental health service studies have adopted these design modifications, and even fewer have followed recommendations to measure, report, and, if necessary, statistically control for enrollees’ expressed preferences for assignment to a particular condition ( Halpern 2002 ; King et al. 2005 ; Shapiro et al. 2002 ; Torgerson et al. 1996 ).

In this article, we begin by describing several ways that participants’ preferences for random assignment to a specific service intervention can complicate the interpretation of findings. We then review one field of services research to estimate the prevalence of some of these problems. Obstacles to a valid interpretation of findings include the likelihood that (a) less-preferred conditions will show lower service engagement and/or greater early attrition; (b) people who refuse or leave a non-preferred program will resemble one another, producing condition differences in the types of retained participants, and, even when all randomized participants do receive assigned services, those who preferred assignment to a certain condition may be unique in ways (e.g., functioning, motivation) that predict outcomes over and above the impact of services; (c) certain program designs may ameliorate or intensify the effects of disappointment in service assignment; and (d) preference for assignment to one condition over another may reflect a clash between program characteristics (e.g., attendance requirements) and participants’ situational considerations (e.g., time constraints, residential location), so that participants assigned to a non-preferred condition tend to encounter similar difficulties in attaining outcomes and to choose the same alternative activities. We now discuss each of these issues.

How Participants’ Service Preferences Can Influence Outcomes

Impact of Assignment Preference on Service Engagement and Retention

Research participants who are disappointed in their random assignment to a non-preferred experimental condition may refuse to participate, or else withdraw from assigned services or treatment early in the study ( Hofmann et al. 1998 ; Kearney and Silverman 1998 ; Laengle et al. 2000 ; Macias et al. 2005 ; Shadish et al. 2000 ; Wahlbeck et al. 2001 ). If this occurs more often for one experimental condition than another, such differential early attrition can quickly transform a randomized controlled trial into a quasi-experiment ( Corrigan and Salzer 2003 ; Essock et al. 2003 ; West and Sagarin 2000 ). Unless participants’ preferences for assignment to experimental interventions are measured prior to randomization, it will be impossible to distinguish the emotional impact on participants of being matched or mismatched to intervention preference from each intervention’s true ability to engage and retain its assigned participants. If participants who reject their service assignments tend to refuse research interviews, the least-preferred intervention may also have a disproportionately higher incidence of ‘false negatives’ (undetected positive outcomes), and this can further bias the interpretation of findings.

Researchers can statistically control for intervention preferences if these attitudes are measured prior to randomization and one intervention is not greatly preferred over another. Even if a study is unable to measure and statistically control participants’ pre-existing preferences for assignment to experimental conditions, statistically adjusting for differential attrition from assigned services can help to rule out disappointment or satisfaction with random assignment as an alternative explanation for findings. However, rather than statistically controlling (erasing) the impact of intervention preferences on service retention and outcomes, it may be far more informative to investigate whether preference in random assignment might have modified a program’s potential to engage and motivate participants ( Sosin 2002 ). For instance, a statistically significant ‘program assignment-by-program preference’ interaction term in a regression analysis ( Aguinis 2004 ; Aiken and West 1991 ) might reveal a demoralization effect (e.g., a combination of less effort, lower service satisfaction, poorer outcomes) for participants randomly assigned to a comparison condition that was not their preference. A more complex program-by-preference interaction analysis might reveal that an assertive program is better at engaging and retaining consumers who are disappointed in their service assignment, while a less assertive program, when it is able to hang onto its disappointed assignees, is better at helping them attain service goals ( Delucchi and Bostrom 2004 ; Lachenbruch 2002 ). Ability to engage and retain participants is a prerequisite for effectiveness, but, in the same way that medication compliance is distinguished from medication efficacy in pharmaceutical trials, service retention should not be confused with the impact of services received ( Little and Rubin 2000 ).
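As a rough illustration of the kind of ‘program assignment-by-program preference’ interaction model described above (not the authors’ actual analysis), the sketch below fits an ordinary least squares regression with an interaction term on simulated data; the variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# Hypothetical data: program assignment, whether the assignment matched the
# participant's stated preference, and a continuous outcome.
program = rng.choice(["PACT", "clubhouse"], size=n)
matched = rng.integers(0, 2, size=n)
outcome = (
    0.5 * (program == "clubhouse")
    + 1.0 * matched
    + 0.8 * matched * (program == "clubhouse")   # assumed interaction effect
    + rng.normal(0, 1, size=n)
)
df = pd.DataFrame({"program": program, "matched": matched, "outcome": outcome})

# 'program * matched' expands to both main effects plus the
# program-by-preference interaction term discussed in the text.
model = smf.ols("outcome ~ program * matched", data=df).fit()
print(model.summary().tables[1])
```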

Similarities Between People with the Same Preferences

Even if rates of early attrition are comparable across study conditions, experimental groups may differ in the types of people who decide to accept assigned services ( Magidson 2000 ). If participants who reject a service intervention resemble one another in some way, then the intervention samples of service-active participants will likely differ on these same dimensions.

As yet, we know very little about the effectiveness of different types of community interventions for engaging various types of consumers ( Cook 1999a , b ; Mark et al. 1992 ), but mobile services and programs that provide assertive community outreach appear to have stronger engagement and retention, presumably because staff schedule and initiate most service contacts on a routine basis ( McGrew et al. 2003 ). If these program characteristics match participants’ reasons for preferring one experimental condition over another, then a bias can exist whether or not intervention preference is balanced across conditions. For instance, consumers who are physically disabled, old, or agoraphobic may prefer home-based service delivery and are likely to be disappointed if assigned to a program that requires regular attendance. Greater retention of these more disabled individuals could put a mobile intervention at a disadvantage in a direct evaluation of service outcomes, like employment, that favor able-bodied, younger, or less anxious individuals. On the other hand, in rehabilitation fields like supported housing, education, or employment that depend strongly on consumer initiative and self-determination, higher functioning or better educated consumers may drop out of control conditions because they are not offered needed opportunities soon enough ( Shadish 2002 ). This was evident in a recent study of supported housing ( McHugo et al. 2004 ), which reported a higher proportion of ‘shelter or street’ homeless participants in the control condition relative to a focal supported housing condition, presumably because participants who were more familiar with local services (e.g., those temporarily homeless following eviction or hospital discharge) considered the control condition services inadequate and sought housing on their own outside the research project.

Service model descriptions and intervention theories suggest many interactions between program characteristics and participant preferences that could be tested as research hypotheses if proposed prior to data analysis. Unfortunately, such hypotheses are rarely formulated and tested.

It is also rare for a randomized trial to compare experimental interventions on sample characteristics at a point in time later than baseline, after every participant has had an opportunity to accept or reject his or her experimental assignment, so that sample differences that emerge early in the project can be statistically controlled in outcome analyses.

Interaction Between Responses to Service Assignment and Service Characteristics

A more subtle threat to research validity exists whenever participants disappointed in their intervention assignment do not drop out of services, but instead remain half-heartedly engaged ( Corrigan and Salzer 2003 ). Participants randomized to a preferred intervention are likely to be pleased and enthusiastic, ready to engage with service providers, while those randomized to a non-preferred intervention are more likely to be disappointed and less motivated to succeed. However, the strength of participant satisfaction or disappointment in service assignment can vary greatly depending on service program characteristics ( Brown et al. 2002 ; Calsyn et al. 2000 ; Grilo et al. 1998 ; Macias et al. 2005 ; Meyer et al. 2002 ). For instance, in a randomized comparison of assertive community treatment (PACT) to a certified clubhouse ( Macias et al. 2009 ), we found that being randomly assigned to the less preferred program decreased service engagement more often in the clubhouse condition than in PACT. However, clubhouse members who had not wanted this service assignment, but nevertheless used clubhouse services to find a job, ended up employed longer and were more satisfied with services than other study enrollees. Study hypotheses based on program differences in staff assertiveness (PACT) and consumer self-determination (clubhouse) predicted this rare three-way interaction prior to data collection, and offer a theory-based (dissonance theory; Aronson 1999 ; Festinger 1957 ) explanation of the complex finding. Presumably, clubhouse members not wanting assignment to this service needed to rationalize their voluntary participation in a non-preferred program by viewing the clubhouse as a means-to-an-end. They tried harder than usual to get a job and stay employed, and gave the clubhouse some credit for their personal success. By contrast, PACT participants who had not wanted this service assignment could credit assertive program staff for keeping them involved, so they experienced less cognitive dissonance and had less need to justify their continued receipt of a non-preferred service. Whether being assigned to a non-preferred program turns out to have negative or positive consequences can depend on a complex interplay between participant motivation and program characteristics. The generation of useful hypotheses for any mental health service trial depends on thoughtful reflection on experimental program differences, as well as familiarity with research in disciplines that study human motivation, such as psychiatry, social psychology, and advertising ( Krause and Howard 2003 ).

Alternative Outcomes Related to Service Preferences

If participants who prefer a certain service condition share similar characteristics, they may also share similar life circumstances and make similar life choices. Individuals who have the same personal responsibilities or physical limitations may prefer not to be assigned to a particular intervention because they cannot fully comply with the requirements for participation, even if they try to do so. For instance, some research participants may have difficulty with regular program attendance because they have competing time commitments, such as caring for an infant or seriously ill relative, or attending school to pursue work credentials ( Collins et al. 2000 ; Mowbray et al. 1999 ; Wolf et al. 2001 ). These productive alternative activities could also compete with the research study’s targeted outcomes, and be misinterpreted as outcome ‘failures.’ For instance, in supported employment trials, unemployment should not be considered a negative outcome if the individual is attending college or pursuing job-related training, or if she has chosen to opt out of the job market for a while to take care of small children or an ill or handicapped relative. These alternative pursuits will be coded simply as ‘unemployed,’ and interpreted as program failure, unless they are tracked and reported as explanations for why work was not obtained. For this reason, it is important to examine relationships between participant circumstances and service preferences at the outset of a study to identify what additional life events and occupations might need to be documented to fully explain intervention outcome differences.

Scope of the Assignment Preference Problem

Regardless of the reason for research participant preference in random assignment, condition differences in service attractiveness can be statistically controlled if (a) preference is measured prior to randomization and (b) if there is sufficient variability in preferences so that the vast majority of study enrollees do not prefer the same intervention. Unfortunately, most randomized service trials have neither measured pre-randomization service preference nor taken it into account when comparing intervention outcomes. Therefore, it is important to assess whether undetected participant preference in random assignment might have existed in published randomized trials, and, if so, whether it might have compromised the interpretation of findings.

As a case example, we review the empirical support for one evidence-based practice, supported employment for adults with severe mental illness, to obtain a qualitative estimate of the extent to which unmeasured service preference for a focal intervention might offer an alternative explanation for published findings. Supported employment offers an ideal starting point for our inquiry given its extensive body of research, which includes a $20 million multi-site randomized study (EIDP, Cook et al. 2002 ), and consensus among psychiatric rehabilitation stakeholders that supported employment is an evidence-based practice ready for dissemination and implementation (Bond et al. 2001). Consumer receptivity and participation in supported employment have been studied in depth through ethnographies ( Alverson et al. 1998 ; Alverson et al. 1995 ; Quimby et al. 2001 ), structured interviews ( McQuilken et al. 2003 ; Secker et al. 2002 ), and personal essays ( Honey 2000 ), and these publications suggest that most consumers know what they need and should expect from a quality vocational program. For this reason, consumer service preferences should be a salient consideration in the design of supported employment research.

Sample of Randomized Trials of Supported Employment

The evidence base for supported employment effectiveness consists of a series of randomized controlled studies of the Individual Placement and Support (IPS) service model (Bond et al. 1997, 2001). One reason research on supported employment has been restricted to a single service delivery model is the ready availability of standardized IPS training and fidelity measures ( Bond et al. 2002 ; McGrew and Griss 2005 ). As a result of a substantial body of research evidence that IPS produces good employment outcomes, this service model has become synonymous with ‘supported employment’ in much of the psychiatric rehabilitation literature (Bond et al. 1997; Crowther et al. 2001 ; Drake et al. 2003 ), and many state departments of mental health in the United States now endorse a definition of supported employment as Individual Placement and Support (IPS).

Table 1 presents a recently published list of all randomized controlled trials of supported employment programs recognized as having high fidelity to Individual Placement and Support (IPS) by the designers of this service delivery model (Bond et al. 2008). Every study has the IPS model as its focal intervention, and IPS experts provided staff training and verified the fidelity of each focal intervention using a supported employment (IPS) fidelity scale (Bond et al. 2008). Research study eligibility was generally limited to unemployed individuals with severe mental illness who had an expressed interest in finding a mainstream job. Most study samples had a mean age of about 40, except that the Twamley et al. (2008) sample was older ( M = 50 years) and the Killackey et al. (2008) sample was younger ( M = 21 years). Except for the study by Lehman et al. (2002) , all studies discouraged enrollment of participants who had major physical limitations or substance use problems.

Table 1 Randomized trials of high-fidelity IPS supported employment: indicators of possible participant preference in condition assignment

E = experimental (focal IPS) condition; C = comparison condition.

  • New Hampshire, USA. Comparison condition: job skills training (Boston ‘choose-get-keep’ model; ‘pre-employment skills training in a group format’). Vocational service retention (√, 2 months): E 100%, C 62%. Research study retention (18 months): E 99%, C 97%.
  • Washington, DC, USA. Comparison condition: sheltered workshop (‘several well-established agencies’; ‘primarily paid work adjustment training in a sheltered workshop’). Vocational service retention (2 months): E 95%, C 84%. Research study retention (18 months): 99% of total sample.
  • Maryland, USA. Comparison condition: psychosocial rehabilitation program (‘in-house skill training, sheltered work, factory enclaves’; ‘socialization, education, housing’). Vocational service retention (√, any voc service): E 93%, C 33%. Research study retention (24 months): E 74%, C 60%.
  • Connecticut, USA. Comparison conditions (multiple sites): 1. ‘standard vocational services’; 2. typical ‘PSR center’ providing ‘social, recreational, educational, & vocational’ services, e.g., skills training, program-owned jobs. Vocational service retention (√, a few weeks): E 90%, C 50%. Research study retention (24 months): E 96%, C 98%.
  • South Carolina, USA. Comparison condition: sheltered workshop (‘traditional vocational rehabilitation’; ‘staff-supervised set-aside jobs’). Vocational service retention (6 months): E 86%, C 83%. Research study retention (24 months): E 82%, C 70%.
  • Quebec, Canada. Comparison condition: traditional vocational services (‘sheltered workshop, creative workshops, client-run boutique and horticulture’; ‘job-finding skills training’; government-sponsored set-aside jobs). Vocational service retention (√, 6 months): E 91%, C 30%. Research study retention (12 months): E 79%, C 89%.
  • Indiana, USA. Comparison condition: ‘diversified placement’ at Thresholds, Inc. (‘existing Thresholds services’; ‘prevocational work crews’; ‘groups’; temporary set-aside work). Vocational service retention (√, 6 months): E 82%, C 65%. Research study retention (24 months): 97% of total sample.
  • Six nations, Europe. Comparison condition: traditional, ‘typical and dominant’ vocational rehabilitation service (daily ‘structured training combating deficits’, ‘time structuring’, and computer skills, usually provided in a ‘day centre’). Vocational service retention (√, any voc service): E 100%, C 76%. Research study retention (18 months): E 100%, C 100%.
  • Hong Kong, China. Comparison condition: stepwise conventional vocational services (‘Occupational Therapy Department of local hospital’; ‘work groups in a simulated environment’). Vocational service retention (18 months): E 100%, C 100%. Research study retention (18 months): E 100%, C 98%.
  • California, USA. Comparison condition: conventional vocational rehabilitation referrals (Dept of Rehab referral to ‘job readiness coaching’ and ‘prevocational classes’). Vocational service retention (√, any voc service): E 100%, C 41%. Research study retention (12 months): E 79%, C 77%.
  • Victoria, Australia. Comparison condition: traditional vocational services (‘treatment-as-usual’ referral to voc agency with ‘vocationally oriented group programme’). Vocational service retention (√, 6 months): E 95%, C 76%. Research study retention (6 months): E 100%, C 100%.

As reported in the IPS review article by Bond et al. (2008), or in these original study publications

Possible Indicators of Differential Service Preference

Table 1 lists verbatim service descriptions of the comparison condition in each of the eleven original study publications, along with condition labels derived from the Bond et al. (2008) review of these same randomized trials. Although we do not know the language used to describe the service interventions in recruitment flyers or induction meetings, we assumed there was a strong possibility that most study enrollees would prefer assignment to a new IPS program whenever the comparison condition was an existing sheltered workshop, traditional psychosocial rehabilitation program, or conventional vocational rehabilitation that had been routinely offered by the state or a local authority over several previous years. Since all study enrollees had an expressed interest in obtaining competitive work, we also assumed the possibility of greater preference for IPS if the comparison condition were designed primarily to provide non-competitive jobs, or if program activities delayed entry into competitive employment. Most studies (8 of 11) reported mandatory attendance of all study applicants at one or more research project induction groups in which the experimental conditions were described and questions answered ( Drake et al. 1994 ).

Next, we documented whether each study reported greater early service attrition, or lower service engagement, for its comparison condition. We report the percentage of study enrollees who were ever active in their assigned services at the earliest post-randomization point reported in the original publication or in the summary review by Bond et al. (2008). We chose the earliest reporting period so that low service contact could reasonably be attributed to disappointment in service assignment. Because early service attrition can also stem from service ineffectiveness (e.g., poor outreach, slow development of staff-client relationships, or a lack of immediate efforts to help participants get a job), we treat lower engagement in comparison services as a probable, but not conclusive, indication that a comparison condition was generally less appealing than IPS. Our assumption that disappointment in service assignment is a reasonable explanation for early service attrition rests on a demonstrated temporal relationship between random assignment to a non-preferred intervention and subsequently low rates of service engagement within two very different supported employment interventions that had comparable employment rates (Macias et al. 2005).

We also provide research study retention rates for each condition at the latest measurement point as a check on the possibility that loss of participants from services was attributable to the same causes that prevented participation in research interviews or the tracking of study outcomes. If research retention at a later time point is as good as, or better than, service retention at an earlier time point, we assume that the factors that typically restrict or enhance research participation (e.g., program differences in outcome tracking, deaths, hospitalizations, residential mobility) do not account for early differential attrition from the experimental and control conditions.

We consider a study to be at high risk for misinterpretation of findings if the condition labels or descriptions were less favorable for the comparison condition(s) and if there was greater early attrition from comparison services despite high research retention.
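
To make these screening criteria concrete, the sketch below applies them to figures of the kind reported in Table 1. It is a minimal illustration in Python, not the analysis used in any of the reviewed studies; the record layout, the column names, the 15% attrition-gap cut-off, and the flag_high_risk helper are our own assumptions, and only the two rows of retention figures are taken from Table 1.

```python
# Minimal sketch (not from the reviewed studies) of the three screening checks
# described above, applied to retention figures of the kind shown in Table 1.
import pandas as pd

# Two illustrative rows using retention figures reported in Table 1.
studies = pd.DataFrame([
    {"site": "New Hampshire, USA", "service_E": 100, "service_C": 62,
     "research_E": 99, "research_C": 97, "comparison_less_favorable": True},
    {"site": "Hong Kong, China", "service_E": 100, "service_C": 100,
     "research_E": 100, "research_C": 98, "comparison_less_favorable": False},
])

def flag_high_risk(row, attrition_gap_threshold=15):
    """Flag a study as high risk for misinterpretation if (1) the comparison
    condition was described less favorably, (2) early service retention favored
    IPS by at least the chosen threshold, and (3) research retention in the
    comparison arm was at least as high as its service retention, so that
    research-side losses cannot explain the service-side gap."""
    service_gap = row["service_E"] - row["service_C"]
    research_ok = row["research_C"] >= row["service_C"]
    return bool(row["comparison_less_favorable"]
                and service_gap >= attrition_gap_threshold
                and research_ok)

studies["high_risk"] = studies.apply(flag_high_risk, axis=1)
print(studies[["site", "high_risk"]])
```

Any cut-off for "greater early attrition" is a judgment call rather than a value fixed by the original studies; the 15% figure here simply mirrors the smallest condition difference discussed in the findings below.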

Review Findings

Descriptions of comparison conditions.

The comparison condition in every study listed in Table 1 was a pre-existing conventional or traditional vocational rehabilitation service that would have been familiar to many participants and that did not prioritize rapid placement into mainstream employment. By contrast, each IPS program was a new intervention, introduced to the local service system through the research project, that was designed to offer fast entry into mainstream work. Although no study recorded participants' service assignment preferences prior to research enrollment or randomization, we might reasonably assume that, in some studies, satisfaction with assignment to IPS, or disappointment in assignment to the non-supported-employment comparison condition, contributed to differences in mainstream employment rates between experimental conditions.

Differential Early Attrition/Retention

Six of the eleven studies reported a 20% or greater advantage in service retention for the focal IPS intervention within the first 8 weeks following randomization. Two other studies that assessed service retention at the six-month point reported differences of 17% and 19% in favor of IPS. Only the South Carolina and Hong Kong studies (Gold et al. 2006; Wong et al. 2008) reported comparably high rates of service retention across experimental interventions, possibly because both studies required all participants to be active in a local mental health program at the time of research enrollment.

Overall, the majority of participants remained active in each research study for the duration of the trial, with comparable research retention across study conditions. This comparability suggests that factors known to increase research attrition (e.g., residential mobility, chronic illness) cannot explain early differential attrition from services.

IPS interventions may have had better service retention rates in eight of these eleven randomized trials because IPS programs conducted more assertive outreach, provided more useful services, or collaborated more closely with clinicians than the comparison programs did (Bond et al. 2008; Gold et al. 2006; McGurk et al. 2007). However, greater intensity or quality of IPS services cannot reasonably account for the very low service retention rates of most comparison conditions relative to research project retention, so disappointment in assignment remains a credible additional explanation for the greater early attrition from comparison services.

Only the South Carolina study statistically controlled for variation in participant exposure to vocational services, which might be considered a proxy for the effects of differential attrition attributable to service preference. No study reported whether early attrition resulted in the loss of different types of people from each study condition, and every study compared study conditions on participant characteristics only at baseline.

Our review of research in one dominant field of adult psychiatric rehabilitation reveals that every randomized controlled trial of high-fidelity supported employment had a ‘services-as-usual’ comparison condition that might have predisposed work-interested participants to prefer random assignment to the new ‘rapid-job-placement’ IPS intervention. We cannot be certain that IPS was preferred by most participants over comparison conditions in any of these studies because no study measured participants’ pre-randomization service preferences or satisfaction with condition assignment. However, neither does any original publication offer evidence that would refute our assumption of greater preference for IPS. Eight of these 11 studies reported 15% or greater service attrition from the comparison condition early in the project that could reflect disappointment in service assignment, but no study reporting early differential attrition statistically controlled for exposure to services, examined how attrition might have changed sample characteristics, or distinguished between service retention and outcome attainment in data analyses.

We cannot conclude that the outcomes of any of these eleven studies would differ from the reported findings if service preference, service receipt, or the effects of early attrition on sample characteristics had been measured and, assuming sufficient variability in these measures, intervention differences had been statistically controlled. Moreover, design factors other than the program descriptions provided in study advertisements, research induction sessions, or consent documents might have engendered a general preference for assignment to IPS. For instance, in the Bond et al. (2007) study, IPS services were located at the same health center that provided most participants' clinical care, while comparison services were off-site, so condition differences in service convenience could also explain the better retention rates and outcomes for IPS. Regardless, the published labels and descriptions of comparison interventions presented in Table 1, and the early condition differences in service retention rates, suggest that outcome differences consistently favoring IPS might be partially explained by corresponding differences in participant expectations about services and, ultimately, by satisfaction or disappointment in service assignment. If these same research design problems are prevalent in other fields of mental health services research, we need to consider what widespread impact such alternative explanations may have had on the interpretation of research findings.

Variability in Impact of Participant Preferences on Outcomes

Unmeasured participant preference in random assignment may not pose the same threat in other service trials, even if informed consent procedures are similar to those used in these supported employment trials, and even if service descriptions inadvertently induce a general preference for one intervention over another. The direct impact of service preference on outcomes may depend a great deal on whether the primary study outcome is measured subjectively or objectively, and on the type of intervention under evaluation, including its frequency, intensity, or duration (Torgerson and Moffett 2005). Moreover, if study outcomes do not depend on participant attitudes or motivation, then disappointment in service assignment may have no impact on outcomes at all.

A mismatch to service preference is likely to have the strongest impact on study outcomes whenever participants are expected to improve their own lives in observable ways that demand strong commitment and self-determination, as is the case for supported employment. By contrast, the impact of a mismatch to service preference on outcomes is probably least discernible when participation is passive or condition assignment remains unknown, as is the case in most drug and medical treatment trials (King et al. 2005; Leykin et al. 2007). Whether disappointment in service assignment reduces or enhances outcomes may also depend on prevailing attitudes toward cooperation with service professionals (Nichols and Maner 2008) or on perceived pressure from program staff to participate (Macias et al. 2009). However, the impact of service preference on outcomes should almost always be strong when the reason for preferring an intervention is an expectation of relative efficacy, since even medication trials have shown better outcomes when participants believe a drug will be efficacious (Krell et al. 2004), as well as worse outcomes when they suspect a drug is a placebo (Sneed et al. 2008).

Research reviews are needed to estimate the potential impact of unmeasured service preference in other service fields, and to identify moderating variables that deserve further study. Until the relative threat of participant service preference can be determined for a specific field, pre-randomization service preference should be measured routinely in every randomized controlled trial and, if there is sufficient variability in preference measurement, condition differences in preference should be statistically controlled and tests of interaction effects conducted to identify moderating variables. Examples of statistical control for service preference in logistic regression and event history analysis can be found in reports on a supported employment randomized trial that compared two SE interventions (Macias et al. 2005; Macias et al. 2006). A third publication from this same trial illustrates a theory-driven test of moderation effects (Macias et al. 2009). However, whenever one experimental condition is greatly preferred over another, there is no statistical remedy that will allow an unbiased comparison of outcomes.
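
As a rough illustration of the kind of adjustment and moderation test recommended here, the sketch below fits a logistic regression of a competitive-employment indicator on assigned condition, a pre-randomization preference-match indicator, and their interaction. It is a hypothetical example using simulated data and the statsmodels package, not the analysis reported in the Macias et al. trials; all variable names and effect sizes are invented for illustration.

```python
# Hypothetical sketch: controlling for pre-randomization service preference and
# testing a preference-by-condition interaction in a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=0)
n = 400
condition = rng.integers(0, 2, size=n)   # 1 = IPS, 0 = comparison (randomly assigned)
pref_match = rng.integers(0, 2, size=n)  # 1 = participant received the preferred service

# Simulated outcome: employment is more likely with IPS and with a preference match
# (the coefficients below are invented purely for illustration).
log_odds = -0.5 + 0.8 * condition + 0.6 * pref_match + 0.4 * condition * pref_match
employed = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))

df = pd.DataFrame({"employed": employed,
                   "condition": condition,
                   "pref_match": pref_match})

# 'condition * pref_match' expands to both main effects plus their interaction;
# the interaction coefficient tests whether the condition effect is moderated by
# whether participants received their preferred service.
model = smf.logit("employed ~ condition * pref_match", data=df).fit(disp=False)
print(model.summary())
```

As the preceding paragraph cautions, when nearly all participants prefer the same condition there is little variability in the preference measure, the interaction term is poorly estimated, and no adjustment of this kind can rescue the comparison.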

New Directions for Employment Research

The body of research on supported employment (SE) offers compelling evidence that most adults with severe mental illness do not find prevocational training or standard vocational rehabilitation to be attractive routes to mainstream employment (Cook 1999a, b; Noble et al. 1997). It may be time to relinquish 'SE vs. no SE' research designs that evoke a preference for assignment to SE, and to move on to comparing different ways of delivering the same high-quality SE job-hunting services and on-the-job supports (Bickman 2002; Lavori 2000). Comparisons of alternative modalities of the same service typically provide less striking, statistically weaker contrasts in outcomes, but they preserve the ethical principle of equipoise and help to ensure that all participants receive adequate care and comparable opportunities for life improvement (Lavori et al. 2001; Lilford and Jackson 1995; Schwartz and Sprangers 1999).

We would learn more about why supported employment is effective, and about which aspects of SE are most attractive to prospective research participants, if studies provided more detailed descriptions of service implementation so that the same key concepts (e.g., rapid job placement, service integration, frequent contact) could be compared across separate studies and in meta-analyses (Campbell and Fiske 1959; Sechrest et al. 2000; TenHave et al. 2003). Such studies would also help to justify specificity in fidelity measurement during the dissemination and implementation of evidence-based practices (Rosenheck 2001; West et al. 2002). It would be especially advantageous to compare ways to increase access to mainstream work in specific service environments, since the heterogeneity in IPS employment rates, internationally and across the USA, suggests that social, political, economic, and organizational factors are far stronger predictors of the work attainment of disabled individuals than is receipt of employment services, or even disability itself.

Conclusions

The randomized controlled trial is still the gold standard of research designs (Cook 1999a, b), and randomization greatly strengthens causal inference (Abelson 1995; Beveridge 1950). However, cause-effect inference depends on the measurement of all plausibly potent causal factors, including study participants' attitudes toward their assigned interventions. Ironically, while consumer advocates champion the individual's right to choose services, researchers rarely examine the contribution of consumer self-direction to the outcomes considered indicative of service effectiveness. It may well be a legitimate responsibility of institutional review boards to assess the potential impact of study designs and research enrollment documents on participants' preferences in random assignment and, hence, on their eventual well-being as research participants and their role in determining study outcomes (Adair et al. 1983).

Our review of one prominent field of mental health services research suggests a general need to reexamine published randomized controlled trials to gauge the extent to which research protocols or descriptions of experimental conditions might have predisposed participants to prefer assignment to one particular condition over another, and whether participant responses to these research design elements might have moderated, or even mediated, service effectiveness.

Acknowledgments

Work on this article was funded by National Institute of Mental Health grants to the first and second authors (MH62628; MH01903). We are indebted to Ann Hohmann, Ph.D. for her supportive monitoring of the NIMH research grant that fostered this interdisciplinary collaboration, and to anonymous reviewers who offered invaluable insights during manuscript preparation.

Contributor Information

Cathaleene Macias, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA, cmacias@mclean.harvard.edu.

Paul B. Gold, Department of Counseling and Personnel Services, University of Maryland, College Park, MD 20742, USA, pgold@umd.edu.

William A. Hargreaves, Department of Psychiatry, University of California, San Francisco, CA, USA, billharg@comcast.net.

Elliot Aronson, Department of Psychology, University of California, Santa Cruz, CA, USA, elliot@CATS.ucsc.edu.

Leonard Bickman, Center for Evaluation and Program Improvement, Vanderbilt University, Nashville, TN, USA, [email protected].

Paul J. Barreira, Harvard University Health Services, Harvard University, Boston, MA, USA, pbarreira@uhs.harvard.edu.

Danson R. Jones, Institutional Research, Wharton County Junior College, Wharton, TX 77488, USA, jonesd@wcjc.edu.

Charles F. Rodican, Community Intervention Research, McLean Hospital, Belmont, MA 02478, USA, crodican@mclean.harvard.edu.

William H. Fisher, Department of Psychiatry, University of Massachusetts Medical School, Worcester, MA, USA, [email protected].

  • Abelson RP. Statistics as principled argument. Hillsdale, NJ: Lawrence Erlbaum; 1995.
  • Adair JG, Lindsay RCL, Carlopio J. Social artifact research and ethical regulation: Their impact on the teaching of experimental methods. Teaching of Psychology. 1983;10:159–162. doi: 10.1207/s15328023top1003_10.
  • Aguinis H. Regression analysis for categorical moderators. New York: Guilford Press; 2004.
  • Aiken LS, West SG. Multiple regression: Testing and interpreting interactions. Newbury Park, CA: Sage; 1991.
  • Alverson H, Alverson M, Drake RE, Becker DR. Social correlates of competitive employment among people with severe mental illness. Psychosocial Rehabilitation Journal. 1998;22(1):34–40.
  • Alverson M, Becker DR, Drake RE. An ethnographic study of coping strategies used by people with severe mental illness participating in supported employment. Psychosocial Rehabilitation Journal. 1995;18(4):115–127.
  • Aronson E. The power of self-persuasion. The American Psychologist. 1999;54(11):873–875. doi: 10.1037/h0088188.
  • Beveridge WIB. The art of scientific investigation. New York: Vintage Books; 1950.
  • Bickman L. The functions of program theory. In: Bickman L, editor. Using program theory in evaluation. San Francisco: Jossey-Bass; 1987.
  • Bickman L. The death of treatment as usual: An excellent first step on a long road. Clinical Psychology: Science and Practice. 2002;9(2):195–199. doi: 10.1093/clipsy/9.2.195.
  • Bond GR, Becker DR, Drake RE, Rapp C, Meisler N, Lehman AF. Implementing supported employment as an evidence-based practice. Psychiatric Services. 2001a;52(3):313–322. doi: 10.1176/appi.ps.52.3.313.
  • Bond GR, Becker DR, Drake RE, Vogler K. A fidelity scale for the individual placement and support model of supported employment. Rehabilitation Counseling Bulletin. 1997a;40(4):265–284.
  • Bond GR, Campbell K, Evans LJ, Gervey R, Pascaris A, Tice S, et al. A scale to measure quality of supported employment for persons with severe mental illness. Journal of Vocational Rehabilitation. 2002;17(4):239–250.
  • Bond GR, Drake R, Becker D. An update on randomized controlled trials of evidence-based supported employment. Psychiatric Rehabilitation Journal. 2008a;31(4):280–290. doi: 10.2975/31.4.2008.280.290.
  • Bond GR, Drake RE, Mueser KT, Becker DR. An update on supported employment for people with severe mental illness. Psychiatric Services. 1997b;48(3):335–346.
  • Bond GR, McHugo GJ, Becker D, Rapp CA, Whitley R. Fidelity of supported employment: Lessons learned from the national evidence-based practice project. Psychiatric Rehabilitation Journal. 2008b;31(4):300–305. doi: 10.2975/31.4.2008.300.305.
  • Bond GR, Salyers MP, Roudebush RL, Dincin J, Drake RE, Becker DR, et al. A randomized controlled trial comparing two vocational models for persons with severe mental illness. Journal of Consulting and Clinical Psychology. 2007;75(6):968–982. doi: 10.1037/0022-006X.75.6.968.
  • Bond GR, Vogler K, Resnick SG, Evans L, Drake R, Becker D. Dimensions of supported employment: Factor structure of the IPS fidelity scale. Journal of Mental Health. 2001b;10(4):383–393. doi: 10.1080/09638230120041146.
  • Braver SL, Smith MC. Maximizing both external and internal validity in longitudinal true experiments with voluntary treatments: The 'combined modified' design. Evaluation and Program Planning. 1996;19:287–300. doi: 10.1016/S0149-7189(96)00029-8.
  • Brown TG, Seraganian P, Tremblay J, Annis H. Matching substance abuse aftercare treatments to client characteristics. Addictive Behaviors. 2002;27:585–604. doi: 10.1016/S0306-4603(01)00195-2.
  • Burns T, Catty J, Becker T, Drake RE, Fioritti A, Knapp M, et al. The effectiveness of supported employment for people with severe mental illness: A randomised controlled trial. Lancet. 2007;370:1146–1152. doi: 10.1016/S0140-6736(07)61516-5.
  • Calsyn R, Winter J, Morse G. Do consumers who have a choice of treatment have better outcomes? Community Mental Health Journal. 2000;36(2):149–160. doi: 10.1023/A:1001890210218.
  • Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait–multimethod matrix. Psychological Bulletin. 1959;56:81–105. doi: 10.1037/h0046016.
  • Collins ME, Mowbray C, Bybee D. Characteristics predicting successful outcomes of participants with severe mental illness in supported education. Psychiatric Services. 2000;51(6):774–780. doi: 10.1176/appi.ps.51.6.774.
  • Cook JA. Understanding the failure of vocational rehabilitation: What do we need to know and how can we learn it? Journal of Disability Policy Studies. 1999a;10(1):127–132.
  • Cook TD. Considering the major arguments against random assignment: An analysis of the intellectual culture surrounding evaluation in American schools of education. Paper presented at the Harvard Faculty Seminar on Experiments in Education; Cambridge, MA. 1999b.
  • Cook TD, Campbell DT. Quasi-experimentation: Design & analysis issues for field settings. Boston: Houghton Mifflin; 1979.
  • Cook JA, Carey MA, Razzano L, Burke J, Blyler CR. The pioneer: The employment intervention demonstration program. New Directions for Evaluation. 2002;94:31–44. doi: 10.1002/ev.49.
  • Corrigan PW, Salzer MS. The conflict between random assignment and treatment preference: Implications for internal validity. Evaluation and Program Planning. 2003;26:109–121. doi: 10.1016/S0149-7189(03)00014-4.
  • Crowther RE, Marshall M, Bond GR, Huxley P. Helping people with severe mental illness to obtain work: Systematic review. BMJ: British Medical Journal. 2001;322(7280):204–208. doi: 10.1136/bmj.322.7280.204.
  • Delucchi KL, Bostrom A. Methods for analysis of skewed data distributions in psychiatric clinical studies: Working with many zero values. The American Journal of Psychiatry. 2004;161(7):1159–1168. doi: 10.1176/appi.ajp.161.7.1159.
  • Drake RE, Becker DR, Anthony WA. A research induction group for clients entering a mental health services research project. Hospital & Community Psychiatry. 1994;45(5):487–489.
  • Drake RE, Becker D, Bond GR. Recent research on vocational rehabilitation for persons with severe mental illness. Current Opinion in Psychiatry. 2003;16:451–455. doi: 10.1097/00001504-200307000-00012.
  • Drake RE, McHugo GJ, Bebout RR, Becker DR, Harris M, Bond GR, et al. A randomized clinical trial of supported employment for inner-city patients with severe mental disorders. Archives of General Psychiatry. 1999;56:627–633. doi: 10.1001/archpsyc.56.7.627.
  • Drake RE, McHugo GJ, Becker D, Anthony WA, Clark RE. The New Hampshire study of supported employment for people with severe mental illness. Journal of Consulting and Clinical Psychology. 1996;64(2):391–399. doi: 10.1037/0022-006X.64.2.391.
  • Essock SM, Drake R, Frank RG, McGuire TG. Randomized controlled trials in evidence-based mental health care: Getting the right answer to the right question. Schizophrenia Bulletin. 2003;29(1):115–123.
  • Festinger L. A theory of cognitive dissonance. Stanford, CA: Stanford University Press; 1957.
  • Gold PB, Meisler N, Santos AB, Carnemolla MA, Williams OH, Keleher J. Randomized trial of supported employment integrated with assertive community treatment for rural adults with severe mental illness. Schizophrenia Bulletin. 2006;32(2):378–395. doi: 10.1093/schbul/sbi056.
  • Grilo CM, Money R, Barlow DH, Goddard AW, Gorman JM, Hofmann SG, et al. Pretreatment patient factors predicting attrition from a multicenter randomized controlled treatment study for panic disorder. Comprehensive Psychiatry. 1998;39(6):323–332. doi: 10.1016/S0010-440X(98)90043-8.
  • Halpern SD. Prospective preference assessment: A method to enhance the ethics and efficiency of randomized controlled trials. Controlled Clinical Trials. 2002;23:274–288. doi: 10.1016/S0197-2456(02)00191-5.
  • Hofmann SG, Barlow DH, Papp LA, Detweiler MF, Ray SE, Shear MK, et al. Pretreatment attrition in a comparative treatment outcome study on panic disorder. The American Journal of Psychiatry. 1998;155(1):43–47.
  • Honey A. Psychiatric vocational rehabilitation: Where are the customers' views. Psychiatric Rehabilitation Journal. 2000;23(3):270–279.
  • Kearney C, Silverman W. A critical review of pharmacotherapy for youth with anxiety disorders: Things are not as they seem. Journal of Anxiety Disorders. 1998;12(2):83–102. doi: 10.1016/S0887-6185(98)00005-X.
  • Killackey E, Jackson HJ, McGorry PD. Vocational intervention in first-episode psychosis: Individual placement and support versus treatment as usual. The British Journal of Psychiatry. 2008;193:114–120. doi: 10.1192/bjp.bp.107.043109.
  • King M, Nazareth I, Lampe F, Bower P, Chandler M, Morou M, et al. Impact of participant and physician intervention preferences on randomized trials: A systematic review. Journal of the American Medical Association. 2005;293(9):1089–1099. doi: 10.1001/jama.293.9.1089.
  • Krause MS, Howard KI. What random assignment does and does not do. Journal of Clinical Psychology. 2003;59:751–766. doi: 10.1002/jclp.10170.
  • Krell HV, Leuchter AF, Morgan M, Cook IA, Abrams M. Subject expectations of treatment effectiveness and outcome of treatment with an experimental antidepressant. The Journal of Clinical Psychiatry. 2004;65(9):1174–1179.
  • Lachenbruch PA. Analysis of data with excess zeros. Statistical Methods in Medical Research. 2002;11:297–302. doi: 10.1191/0962280202sm289ra.
  • Laengle G, Welte W, Roesger U, Guenthner A, U'Ren R. Chronic psychiatric patients without psychiatric care: A pilot study. Social Psychiatry and Psychiatric Epidemiology. 2000;35(10):457–462. doi: 10.1007/s001270050264.
  • Lambert MF, Wood J. Incorporating patient preferences into randomized trials. Journal of Clinical Epidemiology. 2000;53:163–166. doi: 10.1016/S0895-4356(99)00146-8.
  • Latimer EA, LeCompte MD, Becker DR, Drake RE, Duclos I, Piat M, et al. Generalizability of the individual placement and support model of supported employment: Results of a Canadian randomised controlled trial. The British Journal of Psychiatry. 2006;189:65–73. doi: 10.1192/bjp.bp.105.012641.
  • Lavori PW. Placebo control groups in randomized treatment trials: A statistician's perspective. Biological Psychiatry. 2000;47:717–723. doi: 10.1016/S0006-3223(00)00838-6.
  • Lavori PW, Rush AJ, Wisniewski SR, Alpert J, Fava M, Kupfer DJ, et al. Strengthening clinical effectiveness trials: Equipoise-stratified randomization. Biological Psychiatry. 2001;50(10):792–801. doi: 10.1016/S0006-3223(01)01223-9.
  • Lehman AF, Goldberg RW, Dixon LB, McNary S, Postrado L, Hackman A, et al. Improving employment outcomes for persons with severe mental illnesses. Archives of General Psychiatry. 2002;59(2):165–172. doi: 10.1001/archpsyc.59.2.165.
  • Lewin K. Forces behind food habits and methods of change. Bulletin of the National Research Council. 1943;108:35–65.
  • Leykin Y, DeRubeis RJ, Gallop R, Amsterdam JD, Shelton RC, Hollon SD. The relation of patients' treatment preferences to outcome in a randomized clinical trial. Behavior Therapy. 2007;38:209–217. doi: 10.1016/j.beth.2006.08.002.
  • Lilford R, Jackson J. Equipoise and the ethics of randomisation. Journal of the Royal Society of Medicine. 1995;88:552–559.
  • Little RJ, Rubin DB. Causal effects in clinical and epidemiological studies via potential outcomes: Concepts and analytical approaches. Annual Review of Public Health. 2000;21:121–145. doi: 10.1146/annurev.publhealth.21.1.121.
  • Macias C, Aronson E, Hargreaves W, Weary G, Barreira P, Harvey JH, et al. Transforming dissatisfaction with services into self-determination: A social psychological perspective on community program effectiveness. Journal of Applied Social Psychology. 2009;39(7).
  • Macias C, Barreira P, Hargreaves W, Bickman L, Fisher WH, Aronson E. Impact of referral source and study applicants' preference for randomly assigned service on research enrollment, service engagement, and evaluative outcomes. The American Journal of Psychiatry. 2005;162(4):781–787. doi: 10.1176/appi.ajp.162.4.781.
  • Macias C, Rodican CF, Hargreaves WA, Jones DR, Barreira PJ, Wang Q. Supported employment outcomes of a randomized controlled trial of assertive community treatment and clubhouse models. Psychiatric Services. 2006;57(10):1406–1415. doi: 10.1176/appi.ps.57.10.1406.
  • Magidson J. On models used to adjust for preexisting differences. In: Bickman L, editor. Research design. Vol. 2. Thousand Oaks, CA: Sage; 2000.
  • Marcus SM. Assessing non-consent bias with parallel randomized and nonrandomized clinical trials. Journal of Clinical Epidemiology. 1997;50(7):823–828. doi: 10.1016/S0895-4356(97)00068-1.
  • Mark MM, Hofmann DA, Reichardt CS. Testing theories in theory-driven evaluations: Tests of moderation in all things. In: Chen H, Rossi PH, editors. Using theory to improve program and policy evaluations. New York: Greenwood Press; 1992.
  • McGrew JH, Griss ME. Concurrent and predictive validity of two scales to assess the fidelity of implementation of supported employment. Psychiatric Rehabilitation Journal. 2005;29(1):41–47. doi: 10.2975/29.2005.41.47.
  • McGrew JH, Pescosolido BA, Wright E. Case managers' perspectives on critical ingredients of Assertive Community Treatment and on its implementation. Psychiatric Services. 2003;54(3):370–376. doi: 10.1176/appi.ps.54.3.370.
  • McGurk S, Mueser K, Feldman K, Wolfe R, Pascaris A. Cognitive training for supported employment: 2–3 year outcomes of a randomized controlled trial. The American Journal of Psychiatry. 2007;164:437–441. doi: 10.1176/appi.ajp.164.3.437.
  • McHugo GJ, Bebout RR, Harris M, Cleghorn S, Herring G, Xie H, et al. A randomized controlled trial of integrated versus parallel housing services for homeless adults with severe mental illness. Schizophrenia Bulletin. 2004;30(4):969–982.
  • McQuilken M, Zahniser JH, Novak J, Starks RD, Olmos A, Bond GR. The work project survey: Consumer perspectives on work. Journal of Vocational Rehabilitation. 2003;18:59–68.
  • Meyer B, Pilkonis PA, Krupnick JL, Egan MK, Simmens SJ, Sotsky SM. Treatment expectancies, patient alliance and outcome: Further analyses from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology. 2002;70(4):1051–1055. doi: 10.1037/0022-006X.70.4.1051.
  • Mowbray CT, Collins M, Bybee D. Supported education for individuals with psychiatric disabilities: Long-term outcomes from an experimental study. Social Work Research. 1999;23(2):89–100.
  • Mueser KT, Clark RE, Haines M, Drake RE, McHugo GJ, Bond GR, et al. The Hartford study of supported employment for persons with severe mental illness. Journal of Consulting and Clinical Psychology. 2004;72(3):479–490. doi: 10.1037/0022-006X.72.3.479.
  • Nichols AL, Maner JK. The good-subject effect: Investigating participant demand characteristics. The Journal of General Psychology. 2008;135(2):151–165. doi: 10.3200/GENP.135.2.151-166.
  • Noble JH, Honberg RS, Hall LL, Flynn LM. A legacy of failure: The inability of the federal-state vocational rehabilitation system to serve people with severe mental illness. Arlington, VA: National Alliance for the Mentally Ill; 1997.
  • Quimby E, Drake R, Becker D. Ethnographic findings from the Washington, DC vocational services study. Psychiatric Rehabilitation Journal. 2001;24(4):368–374.
  • Rosenheck RA. Organizational process: A missing link between research and practice. Psychiatric Services. 2001;52(12):1607–1612. doi: 10.1176/appi.ps.52.12.1607.
  • Schwartz CE, Sprangers M. Methodological approaches for assessing response shift in longitudinal quality of life research. Social Science & Medicine. 1999;48:1531–1548. doi: 10.1016/S0277-9536(99)00047-7.
  • Sechrest L, Davis M, Stickle T, McKnight P. Understanding 'method' variance. In: Bickman L, editor. Research design. Thousand Oaks, CA: Sage; 2000.
  • Secker J, Membrey H, Grove B, Seebohm P. Recovering from illness or recovering your life? Implications of clinical versus social models of recovery from mental health problems for employment support services. Disability & Society. 2002;17(4):403–418. doi: 10.1080/09687590220140340.
  • Shadish WR. Revisiting field experimentation: Field notes for the future. Psychological Methods. 2002;7(1):3–18. doi: 10.1037/1082-989X.7.1.3.
  • Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. New York: Houghton Mifflin; 2002.
  • Shadish WR, Matt GE, Navarro AM, Phillips G. The effects of psychological therapies under clinically representative conditions: A meta-analysis. Psychological Bulletin. 2000;126(4):512–529. doi: 10.1037/0033-2909.126.4.512.
  • Shapiro SL, Figueredo AJ, Caspi O, Schwartz GE, Bootzin RR, Lopez AM, et al. Going quasi: The premature disclosure effect in a randomized clinical trial. Journal of Behavioral Medicine. 2002;25(6):605–621. doi: 10.1023/A:1020693417427.
  • Sneed JR, Rutherford BR, Rindskopf D, Lane DT, Sackeim HA, Roose SP. Design makes a difference: A meta-analysis of antidepressant response rates in placebo-controlled versus comparator trials in late-life depression. The American Journal of Geriatric Psychiatry. 2008;16:65–73. doi: 10.1097/JGP.0b013e3181256b1d.
  • Sosin MR. Outcomes and sample selection: The case of a homelessness and substance abuse intervention. The British Journal of Mathematical and Statistical Psychology. 2002;55(1):63–92. doi: 10.1348/000711002159707.
  • Staines G, McKendrick K, Perlis T, Sacks S, DeLeon G. Sequential assignment and treatment as usual: Alternatives to standard experimental designs in field studies of treatment efficacy. Evaluation Review. 1999;23(1):47–76. doi: 10.1177/0193841X9902300103.
  • TenHave T, Coyne J, Salzer M, Katz I. Research to improve the quality of care for depression: Alternatives to the simple randomized clinical trial. General Hospital Psychiatry. 2003;25:115–123. doi: 10.1016/S0163-8343(02)00275-X.
  • Torgerson D, Klaber Moffett JA, Russell IT. Including patient preferences in randomized clinical trials. Journal of Health Services Research & Policy. 1996;1:194–197.
  • Torgerson D, Moffett JK. Patient preference and validity of randomized controlled trials: Letter to the editor. Journal of the American Medical Association. 2005;294(1):41. doi: 10.1001/jama.294.1.41-b.
  • Trist E, Sofer C. Exploration in group relations. Leicester: Leicester University Press; 1959.
  • Twamley EW, Narvaez JM, Becker DR, Bartels SJ, Jeste DV. Supported employment for middle-aged and older people with schizophrenia. American Journal of Psychiatric Rehabilitation. 2008;11(1):76–89. doi: 10.1080/15487760701853326.
  • Wahlbeck K, Tuunainen A, Ahokas A, Leucht S. Dropout rates in randomised antipsychotic drug trials. Psychopharmacology. 2001;155(3):230–233. doi: 10.1007/s002130100711.
  • West SG, Aiken LS, Todd M. Probing the effects of individual components in multiple component prevention programs. In: Revenson T, D'Agostino RB, editors. Ecological research to promote social change: Methodological advances from community psychology. New York, NY: Kluwer; 2002.
  • West SG, Sagarin BJ. Participant selection and loss in randomized experiments. In: Bickman L, editor. Research design: Donald Campbell's legacy. Vol. II. Thousand Oaks, CA: Sage; 2000. pp. 117–154.
  • Wolf J, Coba C, Cirella M. Education as psychosocial rehabilitation: Supported education program partnerships with mental health and behavioral healthcare certificate programs. Psychiatric Rehabilitation Skills. 2001;5(3):455–476.
  • Wong KK, Chiu R, Tang B, Mak D, Liu J, Chiu SN. A randomized controlled trial of a supported employment program for persons with long-term mental illness in Hong Kong. Psychiatric Services. 2008;59(1):84–90. doi: 10.1176/appi.ps.59.1.84.

