
Controversial and Unethical Psychology Experiments

There have been a number of famous psychology experiments that are considered controversial, inhumane, unethical, and even downright cruel—here are five examples. Thanks to ethical codes and institutional review boards, most of these experiments could never be performed today.

At a Glance

Some of the most controversial and unethical experiments in psychology include Harlow's monkey experiments, Milgram's obedience experiments, Zimbardo's prison experiment, Watson's Little Albert experiment, and Seligman's learned helplessness experiment.

These and other controversial experiments led to the formation of rules and guidelines for performing ethical and humane research studies.

Harlow's Pit of Despair

Psychologist Harry Harlow performed a series of experiments in the 1960s designed to explore the powerful effects that love and attachment have on normal development. In these experiments, Harlow isolated young rhesus monkeys, depriving them of their mothers and keeping them from interacting with other monkeys.

The experiments were often shockingly cruel, and the results were just as devastating.

The Experiment

The infant monkeys in some experiments were separated from their real mothers and raised by surrogate mothers. One surrogate was made purely of wire: it provided food but offered no softness or comfort.

The other surrogate was made of wire covered in cloth, offering some degree of comfort to the infant monkeys.

Harlow found that while the monkeys would go to the wire mother for nourishment, they preferred the soft, cloth mother for comfort.

Some of Harlow's experiments involved isolating the young monkey in what he termed a "pit of despair." This was essentially an isolation chamber. Young monkeys were placed in the isolation chambers for as long as 10 weeks.

Other monkeys were isolated for as long as a year. Within just a few days, the infant monkeys would begin huddling in the corner of the chamber, remaining motionless.

The Results

Harlow's distressing research resulted in monkeys with severe emotional and social disturbances. They lacked social skills and were unable to play with other monkeys.

They were also incapable of normal sexual behavior, so Harlow devised yet another horrifying device, which he referred to as a "rape rack." The isolated monkeys were tied down in a mating position to be bred.

Not surprisingly, the isolated monkeys also ended up being incapable of taking care of their offspring, neglecting and abusing their young.

Harlow's experiments were finally halted in 1985, when the American Psychological Association passed rules governing the treatment of people and animals in research.

Milgram's Shocking Obedience Experiments


If someone told you to deliver a painful, possibly fatal shock to another human being, would you do it? The vast majority of us would say that we absolutely would never do such a thing, but one controversial psychology experiment challenged this basic assumption.

Social psychologist Stanley Milgram conducted a series of experiments to explore the nature of obedience. Milgram's premise was that people will often go to great, sometimes dangerous or even immoral, lengths to obey an authority figure.

The Experiments

In Milgram's experiment, subjects were ordered to deliver increasingly strong electrical shocks to another person. While the person in question was simply an actor who was pretending, the subjects themselves fully believed that the other person was actually being shocked.

The voltage levels started out at 30 volts and increased in 15-volt increments up to a maximum of 450 volts. The switches were also labeled with phrases including "slight shock," "medium shock," and "danger: severe shock." The maximum shock level was simply labeled with an ominous "XXX."

The results of the experiment were nothing short of astonishing. Many participants were willing to deliver the maximum level of shock, even when the person pretending to be shocked was begging to be released or complaining of a heart condition.

Milgram's experiment revealed stunning information about the lengths that people are willing to go in order to obey, but it also caused considerable distress for the participants involved.

Zimbardo's Simulated Prison Experiment


Psychologist Philip Zimbardo went to high school with Stanley Milgram and had an interest in how situational variables contribute to social behavior.

In his famous and controversial experiment, he set up a mock prison in the basement of the psychology department at Stanford University. Participants were then randomly assigned to be either prisoners or guards. Zimbardo himself served as the prison warden.

The researchers attempted to make a realistic situation, even "arresting" the prisoners and bringing them into the mock prison. Prisoners were placed in uniforms, while the guards were told that they needed to maintain control of the prison without resorting to force or violence.

When the prisoners began to ignore orders, the guards began to utilize tactics that included humiliation and solitary confinement to punish and control the prisoners.

While the experiment was originally scheduled to last two full weeks, it had to be halted after just six days. Why? Because the prison guards had started abusing their authority and were treating the prisoners cruelly. The prisoners, on the other hand, started to display signs of anxiety and emotional distress.

It wasn't until a graduate student (and Zimbardo's future wife) Christina Maslach visited the mock prison that it became clear that the situation was out of control and had gone too far. Maslach was appalled at what was going on and voiced her distress. Zimbardo then decided to call off the experiment.

Zimbardo later suggested that "although we ended the study a week earlier than planned, we did not end it soon enough."

Watson and Rayner's Little Albert Experiment

If you have ever taken an Introduction to Psychology class, then you are probably at least a little familiar with Little Albert.

Behaviorist John Watson and his assistant Rosalie Rayner conditioned a boy to fear a white rat, and this fear even generalized to other white objects, including stuffed toys and Watson's own beard.

Obviously, this type of experiment is considered very controversial today. Frightening an infant and purposely conditioning the child to be afraid is clearly unethical.

As the story goes, the boy and his mother moved away before Watson and Rayner could decondition the child, so many people have wondered if there might be a man out there with a mysterious phobia of furry white objects.

Controversy

Some researchers have suggested that the boy at the center of the study was actually cognitively impaired and died of hydrocephalus when he was just six years old. If this is true, it makes Watson's study even more disturbing and controversial.

However, more recent evidence suggests that the real Little Albert was actually a boy named William Albert Barger.

Seligman's Look Into Learned Helplessness

During the late 1960s, psychologists Martin Seligman and Steven F. Maier conducted experiments that involved conditioning dogs to expect an electrical shock after hearing a tone. Seligman and Maier observed some unexpected results.

When initially placed in a shuttle box in which one side was electrified, the dogs would quickly jump over a low barrier to escape the shocks. Next, the dogs were strapped into a harness where the shocks were unavoidable.

After being conditioned to expect a shock that they could not escape, the dogs were once again placed in the shuttle box. This time, instead of jumping over the low barrier, the dogs made no effort to escape the box.

They simply lay down, whined, and whimpered. Since they had previously learned that no escape was possible, they made no effort to change their circumstances. The researchers called this behavior learned helplessness.

Seligman's work is considered controversial because of the mistreatment of the animals involved in the study.

Impact of Unethical Experiments in Psychology

Many of the psychology experiments performed in the past simply would not be possible today, thanks to ethical guidelines that direct how studies are performed and how participants are treated. While these controversial experiments are often disturbing, we can still learn some important things about human and animal behavior from their results.

Perhaps most importantly, some of these controversial experiments led directly to the formation of rules and guidelines for performing psychology studies.

Blum D. Love at Goon Park: Harry Harlow and the Science of Affection. New York: Basic Books; 2011.

Sperry L. Mental Health and Mental Disorders: An Encyclopedia of Conditions, Treatments, and Well-Being. Santa Barbara, CA: Greenwood/ABC-CLIO; 2016.

Marcus S. Obedience to Authority: An Experimental View, by Stanley Milgram. The New York Times.

Le Texier T. Debunking the Stanford Prison Experiment. Am Psychol. 2019;74(7):823-839. doi:10.1037/amp0000401

Fridlund AJ, Beck HP, Goldie WD, Irons G. Little Albert: A neurologically impaired child. Hist Psychol. 2012;15(4):302-327. doi:10.1037/a0026720

Powell RA, Digdon N, Harris B, Smithson C. Correcting the record on Watson, Rayner, and Little Albert: Albert Barger as "psychology's lost boy." Am Psychol. 2014;69(6):600-611. doi:10.1037/a0036854

Seligman ME. Learned helplessness. Annu Rev Med. 1972;23:407-412. doi:10.1146/annurev.me.23.020172.002203

By Kendra Cherry, MSEd. Kendra Cherry is a psychosocial rehabilitation specialist, psychology educator, and author of "The Everything Psychology Book."


Mad scientists: 10 most unethical social experiments gone horribly wrong.


Curiosity is the fuel that drives social experiments performed in the world of science. Today, experiments must abide by the American Psychological Association's (APA) Code of Conduct, which pertains to everything from confidentiality to consent to overall beneficence. However, the standards weren't always so high. In their latest video, "10 Evil Social Experiments," Alltime10s highlights some of the most famous and disturbing experiments from around the world that could never happen today.

Adults, children, and even animals were a part of the inhumane practices of several mad scientists. In 1939, psychologist Wendell Johnson at the University of Iowa performed "The Monster Study," a stuttering experiment on 22 orphaned children. The children were divided into two groups. The first received positive speech therapy, in which the children's successes were praised. The other group received negative therapy and were told off for every mistake they made.

The effects on the children who had negative speech therapy were horrible. Their schoolwork suffered, their behavior became more timid, and they developed speech impediments. In 2007, six of the children were awarded $925,000 for life-long psychological damage.

In the 1970s and 1980s, during the Apartheid era, the South African army forced suspected gay and lesbian soldiers to undergo sex-change operations, chemical castration, electric shocks, and other unethical "treatments" in an attempt to cure their homosexuality, which was illegal at the time. The program became known as "The Aversion Project."

Inevitably, the project was psychologically damaging to the roughly 900 individuals who underwent reassignment operations in South African military hospitals throughout this period. The patients were often abandoned and unable to pay for the hormones needed to maintain their new identity, leading some to commit suicide.

Animals could not escape the wrath of mad scientist Dr. Harry Harlow in his experiment “The Pit of Despair.” Harlow experimented on baby rhesus monkeys to study social interaction and isolation in the 1970s. He would select baby monkeys who had bonded with their mother and separate them, placing the infants in little steel chambers with no contact with anything else. He kept them in there for up to a year, causing irreparable psychosis in many of the monkeys.

The monkeys were later returned to a group, but were bullied; others starved themselves to death. When the test subjects later became mothers, they would chew off the fingers of their offspring or crush their heads. Not only were Harlow’s experiments extreme, they revealed nothing new about social interactions.

Luckily, the APA's Code of Conduct brought ethical standards to psychological experiments. Review boards now enforce these standards to prevent experiments like the ones listed above from occurring.

View the rest of Alltime10s' video to see the most disturbing social experiments of the 20th century.


Ugly past of U.S. human experiments uncovered

Shocking as it may seem, U.S. government doctors once thought it was fine to experiment on disabled people and prison inmates. Such experiments included giving hepatitis to mental patients in Connecticut, squirting a pandemic flu virus up the noses of prisoners in Maryland, and injecting cancer cells into chronically ill people at a New York hospital.

Much of this horrific history is 40 to 80 years old, but it is the backdrop for a meeting in Washington this week by a presidential bioethics commission. The meeting was triggered by the government's apology last fall for federal doctors infecting prisoners and mental patients in Guatemala with syphilis 65 years ago.

U.S. officials also acknowledged there had been dozens of similar experiments in the United States — studies that often involved making healthy people sick.

An exhaustive review by The Associated Press of medical journal reports and decades-old press clippings found more than 40 such studies. At best, these were a search for lifesaving treatments; at worst, some amounted to curiosity-satisfying experiments that hurt people but provided no useful results.

Inevitably, they will be compared to the well-known Tuskegee syphilis study. In that episode, U.S. health officials tracked 600 black men in Alabama who already had syphilis but didn't give them adequate treatment even after penicillin became available.

These studies were worse in at least one respect — they violated the concept of "first do no harm," a fundamental medical principle that stretches back centuries.

"When you give somebody a disease — even by the standards of their time — you really cross the key ethical norm of the profession," said Arthur Caplan, director of the University of Pennsylvania's Center for Bioethics.

Attitude similar to Nazi experiments

Some of these studies, mostly from the 1940s to the '60s, apparently were never covered by news media. Others were reported at the time, but the focus was on the promise of enduring new cures, while glossing over how test subjects were treated.

Attitudes about medical research were different then. Infectious diseases killed many more people years ago, and doctors worked urgently to invent and test cures. Many prominent researchers felt it was legitimate to experiment on people who did not have full rights in society — people like prisoners, mental patients, poor blacks. It was an attitude in some ways similar to that of Nazi doctors experimenting on Jews.

"There was definitely a sense — that we don't have today — that sacrifice for the nation was important," said Laura Stark, a Wesleyan University assistant professor of science in society, who is writing a book about past federal medical experiments.

The AP review of past research found:

  • A federally funded study begun in 1942 injected experimental flu vaccine in male patients at a state insane asylum in Ypsilanti, Mich., then exposed them to flu several months later. It was co-authored by Dr. Jonas Salk, who a decade later would become famous as inventor of the polio vaccine.

Some of the men weren't able to describe their symptoms, raising serious questions about how well they understood what was being done to them. One newspaper account mentioned the test subjects were "senile and debilitated." Then it quickly moved on to the promising results.

  • In federally funded studies in the 1940s, noted researcher Dr. W. Paul Havens Jr. exposed men to hepatitis in a series of experiments, including one using patients from mental institutions in Middletown and Norwich, Conn. Havens, a World Health Organization expert on viral diseases, was one of the first scientists to differentiate types of hepatitis and their causes.

A search of various news archives found no mention of the mental patients study, which made eight healthy men ill but broke no new ground in understanding the disease.

  • Researchers in the mid-1940s studied the transmission of a deadly stomach bug by having young men swallow unfiltered stool suspension. The study was conducted at the New York State Vocational Institution, a reformatory prison in West Coxsackie. The point was to see how well the disease spread that way as compared to spraying the germs and having test subjects breathe it. Swallowing it was a more effective way to spread the disease, the researchers concluded. The study doesn't explain if the men were rewarded for this awful task.
  • A University of Minnesota study in the late 1940s injected 11 public service employee volunteers with malaria, then starved them for five days. Some were also subjected to hard labor, and those men lost an average of 14 pounds. They were treated for malarial fevers with quinine sulfate. One of the authors was Ancel Keys, a noted dietary scientist who developed K-rations for the military and the Mediterranean diet for the public. But a search of various news archives found no mention of the study.
  • For a study in 1957, when the Asian flu pandemic was spreading, federal researchers sprayed the virus in the noses of 23 inmates at Patuxent prison in Jessup, Md., to compare their reactions to those of 32 virus-exposed inmates who had been given a new vaccine.
  • Government researchers in the 1950s tried to infect about two dozen volunteering prison inmates with gonorrhea using two different methods in an experiment at a federal penitentiary in Atlanta. The bacteria was pumped directly into the urinary tract through the penis, according to their paper.

The men quickly developed the disease, but the researchers noted this method wasn't comparable to how men normally got infected — by having sex with an infected partner. The men were later treated with antibiotics. The study was published in the Journal of the American Medical Association, but there was no mention of it in various news archives.

Though people in the studies were usually described as volunteers, historians and ethicists have questioned how well these people understood what was to be done to them and why, or whether they were coerced.

Victims for science

Prisoners have long been victimized for the sake of science. In 1915, the U.S. government's Dr. Joseph Goldberger — today remembered as a public health hero — recruited Mississippi inmates to go on special rations to prove his theory that the painful illness pellagra was caused by a dietary deficiency. (The men were offered pardons for their participation.)

But studies using prisoners were uncommon in the first few decades of the 20th century, and usually performed by researchers considered eccentric even by the standards of the day. One was Dr. L.L. Stanley, resident physician at San Quentin prison in California, who around 1920 attempted to treat older, "devitalized men" by implanting in them testicles from livestock and from recently executed convicts.

Newspapers wrote about Stanley's experiments, but the lack of outrage is striking.

"Enter San Quentin penitentiary in the role of the Fountain of Youth — an institution where the years are made to roll back for men of failing mentality and vitality and where the spring is restored to the step, wit to the brain, vigor to the muscles and ambition to the spirit. All this has been done, is being done ... by a surgeon with a scalpel," began one rosy report published in November 1919 in The Washington Post.

Around the time of World War II, prisoners were enlisted to help the war effort by taking part in studies that could help the troops. For example, a series of malaria studies at Stateville Penitentiary in Illinois and two other prisons was designed to test antimalarial drugs that could help soldiers fighting in the Pacific.

It was at about this time that prosecution of Nazi doctors in 1947 led to the "Nuremberg Code," a set of international rules to protect human test subjects. Many U.S. doctors essentially ignored them, arguing that they applied to Nazi atrocities — not to American medicine.

The late 1940s and 1950s saw huge growth in the U.S. pharmaceutical and health care industries, accompanied by a boom in prisoner experiments funded by both the government and corporations. By the 1960s, at least half the states allowed prisoners to be used as medical guinea pigs.

But two studies in the 1960s proved to be turning points in the public's attitude toward the way test subjects were treated.

The first came to light in 1963. Researchers injected cancer cells into 19 old and debilitated patients at a Jewish Chronic Disease Hospital in the New York borough of Brooklyn to see if their bodies would reject them.

The hospital director said the patients were not told they were being injected with cancer cells because there was no need — the cells were deemed harmless. But the experiment upset a lawyer named William Hyman who sat on the hospital's board of directors. The state investigated, and the hospital ultimately said any such experiments would require the patient's written consent.

At nearby Staten Island, from 1963 to 1966, a controversial medical study was conducted at the Willowbrook State School for children with mental retardation. The children were intentionally given hepatitis orally and by injection to see if they could then be cured with gamma globulin.

Those two studies — along with the Tuskegee experiment revealed in 1972 — proved to be a "holy trinity" that sparked extensive and critical media coverage and public disgust, said Susan Reverby, the Wellesley College historian who first discovered records of the syphilis study in Guatemala.

'My back is on fire!'

By the early 1970s, even experiments involving prisoners were considered scandalous. In widely covered congressional hearings in 1973, pharmaceutical industry officials acknowledged they were using prisoners for testing because they were cheaper than chimpanzees.

Holmesburg Prison in Philadelphia made extensive use of inmates for medical experiments. Some of the victims are still around to talk about it. Edward "Yusef" Anthony, featured in a book about the studies, says he agreed to have a layer of skin peeled off his back, which was coated with searing chemicals to test a drug. He did that for money to buy cigarettes in prison.

"I said 'Oh my God, my back is on fire! Take this ... off me!'" Anthony said in an interview with The Associated Press, as he recalled the beginning of weeks of intense itching and agonizing pain.

The government responded with reforms. Among them: The U.S. Bureau of Prisons in the mid-1970s effectively excluded all research by drug companies and other outside agencies within federal prisons.

As the supply of prisoners and mental patients dried up, researchers looked to other countries.

It made sense. Clinical trials could be done more cheaply and with fewer rules. And it was easy to find patients who were taking no medication, a factor that can complicate tests of other drugs.

Additional sets of ethical guidelines have been enacted, and few believe that another Guatemala study could happen today. "It's not that we're out infecting anybody with things," Caplan said.

Still, in the last 15 years, two international studies sparked outrage.

One was likened to Tuskegee. U.S.-funded doctors failed to give the AIDS drug AZT to all the HIV-infected pregnant women in a study in Uganda even though it would have protected their newborns. U.S. health officials argued the study would answer questions about AZT's use in the developing world.

The other study, by Pfizer Inc., gave an antibiotic named Trovan to children with meningitis in Nigeria, although there were doubts about its effectiveness for that disease. Critics blamed the experiment for the deaths of 11 children and the disabling of scores of others. Pfizer settled a lawsuit with Nigerian officials for $75 million but admitted no wrongdoing.

Last year, the U.S. Department of Health and Human Services' inspector general reported that between 40 and 65 percent of clinical studies of federally regulated medical products were done in other countries in 2008, and that proportion probably has grown. The report also noted that U.S. regulators inspected fewer than 1 percent of foreign clinical trial sites.

Monitoring research is complicated, and rules that are too rigid could slow new drug development. But it's often hard to get information on international trials, sometimes because of missing records and a paucity of audits, said Dr. Kevin Schulman, a Duke University professor of medicine who has written on the ethics of international studies.

Syphilis study

These issues were still being debated when, last October, the Guatemala study came to light.

In the 1946-48 study, American scientists infected prisoners and patients in a mental hospital in Guatemala with syphilis, apparently to test whether penicillin could prevent some sexually transmitted diseases. The study came up with no useful information and was hidden for decades.


The Guatemala study nauseated ethicists on multiple levels. Beyond infecting patients with a terrible illness, it was clear that people in the study did not understand what was being done to them or were not able to give their consent. Indeed, though it happened at a time when scientists were quick to publish research that showed frank disinterest in the rights of study participants, this study was buried in file drawers.

"It was unusually unethical, even at the time," said Stark, the Wesleyan researcher.

"When the president was briefed on the details of the Guatemalan episode, one of his first questions was whether this sort of thing could still happen today," said Rick Weiss, a spokesman for the White House Office of Science and Technology Policy.

That it occurred overseas was an opening for the Obama administration to have the bioethics panel seek a new evaluation of international medical studies. The president also asked the Institute of Medicine to further probe the Guatemala study, but the IOM relinquished the assignment in November, after reporting its own conflict of interest: In the 1940s, five members of one of the IOM's sister organizations played prominent roles in federal syphilis research and had links to the Guatemala study.

So the bioethics commission gets both tasks. To focus on federally funded international studies, the commission has formed an international panel of about a dozen experts in ethics, science and clinical research. Regarding the look at the Guatemala study, the commission has hired 15 staff investigators and is working with additional historians and other consulting experts.

The panel is to send a report to Obama by September. Any further steps would be up to the administration.

Some experts say that given such a tight deadline, it would be a surprise if the commission produced substantive new information about past studies. "They face a really tough challenge," Caplan said.


The Stanford Prison Experiment was massively influential. We just learned it was a fraud.

The most famous psychological studies are often wrong, fraudulent, or outdated. Textbooks need to catch up.

by Brian Resnick


The Stanford Prison Experiment, one of the most famous and compelling psychological studies of all time, told us a tantalizingly simple story about human nature.

The study took paid participants and assigned them to be “inmates” or “guards” in a mock prison at Stanford University. Soon after the experiment began, the “guards” began mistreating the “prisoners,” implying evil is brought out by circumstance. The authors, in their conclusions, suggested innocent people, thrown into a situation where they have power over others, will begin to abuse that power. And people who are put into a situation where they are powerless will be driven to submission, even madness.

The Stanford Prison Experiment has been included in many, many introductory psychology textbooks and is often cited uncritically. It's the subject of movies, documentaries, books, television shows, and congressional testimony.

But its findings were wrong. Very wrong. And not just due to its questionable ethics or lack of concrete data — but because of deceit.


A new exposé published by Medium, based on previously unpublished recordings of Philip Zimbardo, the Stanford psychologist who ran the study, and interviews with his participants, offers convincing evidence that the guards in the experiment were coached to be cruel. It also shows that the experiment's most memorable moment — of a prisoner descending into a screaming fit, proclaiming, "I'm burning up inside!" — was the result of the prisoner acting. "I took it as a kind of an improv exercise," one of the guards told reporter Ben Blum. "I believed that I was doing what the researchers wanted me to do."

The findings have long been subject to scrutiny — many think of them as more of a dramatic demonstration, a sort of academic reality show, than a serious bit of science. But these new revelations incited an immediate response. "We must stop celebrating this work," personality psychologist Simine Vazire tweeted in response to the article. "It's anti-scientific. Get it out of textbooks." Many other psychologists have expressed similar sentiments.

(Update: Since this article was published, the journal American Psychologist has published a thorough debunking of the Stanford Prison Experiment that goes beyond what Blum found in his piece. There's even more evidence that the "guards" knew the results that Zimbardo wanted to produce and were trained to meet his goals. It also provides evidence that the conclusions of the experiment were predetermined.)

Many of the classic show-stopping experiments in psychology have lately turned out to be wrong, fraudulent, or outdated. And in recent years, social scientists have begun to reckon with the truth that their old work needs a redo, the "replication crisis." But there's been a lag — in the popular consciousness and in how psychology is taught by teachers and textbooks. It's time to catch up.

Many classic findings in psychology have been reevaluated recently


The Zimbardo prison experiment is not the only classic study that has been recently scrutinized, reevaluated, or outright exposed as a fraud. Recently, science journalist Gina Perry found that the infamous "Robbers Cave" experiment in the 1950s — in which young boys at summer camp were essentially manipulated into joining warring factions — was a do-over from a failed previous version of an experiment, which the scientists never mentioned in an academic paper. That's a glaring omission. It's wrong to throw out data that refutes your hypothesis and only publicize data that supports it.

Perry has also revealed inconsistencies in another major early work in psychology: the Milgram electroshock test, in which participants were told by an authority figure to deliver seemingly lethal doses of electricity to an unseen hapless soul. Her investigations show some evidence of researchers going off the study script and possibly coercing participants to deliver the desired results. (Somewhat ironically, the new revelations about the prison experiment also show the power an authority figure — in this case Zimbardo himself and his “warden” — has in manipulating others to be cruel.)


Other studies have been reevaluated for more honest methodological snafus. Recently, I wrote about the "marshmallow test," a series of studies from the early '90s that suggested the ability to delay gratification at a young age is correlated with success later in life. New research finds that if the original marshmallow test authors had a larger sample size and greater research controls, their results would not have been the showstoppers they were in the '90s. I can list so many more textbook psychology findings that have either not replicated or are currently in the midst of a serious reevaluation.

  • Social priming: People who read “old”-sounding words (like “nursing home”) were more likely to walk slowly — showing how our brains can be subtly “primed” with thoughts and actions.
  • The facial feedback hypothesis: Merely activating muscles around the mouth caused people to become happier — demonstrating how our bodies tell our brains what emotions to feel.
  • Stereotype threat: Minorities and maligned social groups don’t perform as well on tests due to anxieties about becoming a stereotype themselves.
  • Ego depletion: The idea that willpower is a finite mental resource.

Alas, the past few years have brought about a reckoning for these ideas and social psychology as a whole.

Many psychological theories have been debunked or diminished in rigorous replication attempts. Psychologists are now realizing it's more likely that false positives will make it through to publication than inconclusive results. And they've realized that experimental methods commonly used just a few years ago aren't rigorous enough. For instance, it used to be commonplace for scientists to publish experiments that sampled about 50 undergraduate students. Today, scientists realize this is a recipe for false positives, and strive for sample sizes in the hundreds and ideally from a more representative subject pool.
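
To make the sample-size point concrete, here is a minimal, hypothetical sketch (not from the article) of a statistical power calculation. It assumes a two-group comparison, a modest true effect size (Cohen's d = 0.3), a 0.05 significance threshold, and a normal approximation; the function name and the numbers are illustrative only. Low power at small samples is one ingredient, alongside publication bias, in why small studies feed false positives into the published literature.

```python
# Illustrative sketch only: rough statistical power of a two-group comparison
# under an assumed effect size (Cohen's d = 0.3) and alpha = 0.05, using a
# normal approximation. Low power at ~25-50 people per group is one reason
# small published "positive" findings are disproportionately likely to be flukes.
from scipy.stats import norm

def approx_power(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided, two-sample test (normal approximation)."""
    z_crit = norm.ppf(1 - alpha / 2)                  # two-sided critical value
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return float(1 - norm.cdf(z_crit - noncentrality))

for n in (25, 50, 125, 250):
    print(f"{n:3d} per group -> power ~ {approx_power(0.3, n):.2f}")
# Under these assumptions: ~0.18 at 25 per group vs ~0.92 at 250 per group.
```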

Nevertheless, in so many of these cases, scientists have moved on and corrected errors, and are still doing well-intentioned work to understand the heart of humanity. For instance, work on one of psychology’s oldest fixations — dehumanization, the ability to see another as less than human — continues with methodological rigor, helping us understand the modern-day maltreatment of Muslims and immigrants in America.

In some cases, time has shown that flawed original experiments offer worthwhile reexamination. The original Milgram experiment was flawed. But at least its study design — which brings in participants to administer shocks (not actually carried out) to punish others for failing at a memory test — is basically repeatable today with some ethical tweaks.

And it seems like Milgram's conclusions may hold up: In a recent study, many people found demands from an authority figure to be a compelling reason to shock another. However, it's possible, due to something known as the file-drawer effect, that failed replications of the Milgram experiment have not been published. Replication attempts at the Stanford prison study, on the other hand, have been a mess.

In science, too often, the first demonstration of an idea becomes the lasting one — in both pop culture and academia. But this isn’t how science is supposed to work at all!

Science is a frustrating, iterative process. When we communicate it, we need to get beyond the idea that a single, stunning study ought to stand the test of time. Scientists know this as well, but their institutions have often discouraged them from replicating old work in favor of pursuing new, exciting, attention-grabbing studies. (Journalists are part of the problem too, imbuing small, insignificant studies with more importance and meaning than they're due.)

Thankfully, there are researchers thinking very hard, and very earnestly, on trying to make psychology a more replicable, robust science. There’s even a whole Society for the Improvement of Psychological Science devoted to these issues.

Follow-up results tend to be less dramatic than original findings , but they are more useful in helping discover the truth. And it’s not that the Stanford Prison Experiment has no place in a classroom. It’s interesting as history. Psychologists like Zimbardo and Milgram were highly influenced by World War II. Their experiments were, in part, an attempt to figure out why ordinary people would fall for Nazism. That’s an important question, one that set the agenda for a huge amount of research in psychological science, and is still echoed in papers today.

Textbooks need to catch up

Psychology has changed tremendously over the past few years. Many studies used to teach the next generation of psychologists have been intensely scrutinized, and found to be in error. But troublingly, the textbooks have not been updated accordingly .

That’s the conclusion of a 2016 study in Current Psychology. “ By and large,” the study explains (emphasis mine):

introductory textbooks have difficulty accurately portraying controversial topics with care or, in some cases, simply avoid covering them at all. ... readers of introductory textbooks may be unintentionally misinformed on these topics.

The study authors — from Texas A&M and Stetson universities — gathered a stack of 24 popular introductory psych textbooks and began looking for coverage of 12 contested ideas or myths in psychology.

The ideas — like stereotype threat, the Mozart effect , and whether there’s a “narcissism epidemic” among millennials — have not necessarily been disproven. Nevertheless, there are credible and noteworthy studies that cast doubt on them. The list of ideas also included some urban legends — like the one about the brain only using 10 percent of its potential at any given time, and a debunked story about how bystanders refused to help a woman named Kitty Genovese while she was being murdered.

The researchers then rated the texts on how they handled these contested ideas. The results found a troubling amount of “biased” coverage on many of the topic areas.


But why wouldn’t these textbooks include more doubt? Replication, after all, is a cornerstone of any science.

One idea is that textbooks, in the pursuit of covering a wide range of topics, aren’t meant to be authoritative on these individual controversies. But something else might be going on. The study authors suggest these textbook authors are trying to “oversell” psychology as a discipline, to get more undergraduates to study it full time. (I have to admit that it might have worked on me back when I was an undeclared undergraduate.)

There are some caveats to mention with the study: One is that the 12 topics the authors chose to scrutinize are completely arbitrary. “And many other potential issues were left out of our analysis,” they note. Also, the textbooks included were printed in the spring of 2012; it’s possible they have been updated since then.

Recently, I asked on Twitter how intro psychology professors deal with inconsistencies in their textbooks. Their answers were simple. Some say they decided to get rid of textbooks (which save students money) and focus on teaching individual articles. Others have another solution that’s just as simple: “You point out the wrong, outdated, and less-than-replicable sections,” Daniël Lakens , a professor at Eindhoven University of Technology in the Netherlands, said. He offered a useful example of one of the slides he uses in class.

Anecdotally, Illinois State University professor Joe Hilgard said he thinks his students appreciate “the ‘cutting-edge’ feeling from knowing something that the textbook didn’t.” (Also, who really, earnestly reads the textbook in an introductory college course?)

And it seems this type of teaching is catching on. A (not perfectly representative) recent survey of 262 psychology professors found more than half said replication issues impacted their teaching. On the other hand, 40 percent said they hadn't. So whether students are exposed to the recent reckoning is all up to the teachers they have.

If it’s true that textbooks and teachers are still neglecting to cover replication issues, then I’d argue they are actually underselling the science. To teach the “replication crisis” is to teach students that science strives to be self-correcting. It would instill in them the value that science ought to be reproducible.

Understanding human behavior is a hard problem. Finding out the answers shouldn’t be easy. If anything, that should give students more motivation to become the generation of scientists who get it right.

"Textbooks may be missing an opportunity for myth busting," the Current Psychology study's authors write. That's, ideally, what young scientists ought to learn: how to bust myths and find the truth.

Further reading: Psychology’s “replication crisis”

  • The replication crisis, explained. Psychology is currently undergoing a painful period of introspection. It will emerge stronger than before.
  • The “marshmallow test” said patience was a key to success. A new replication tells us s’more.
  • The 7 biggest problems facing science, according to 270 scientists
  • What a nerdy debate about p-values shows about science — and how to fix it
  • Science is often flawed. It’s time we embraced that.


10 Psychological Experiments That Could Never Happen Today

The Chronicle of Higher Education

Nowadays, the American Psychological Association has a Code of Conduct in place when it comes to ethics in psychological experiments. Experimenters must adhere to various rules pertaining to everything from confidentiality to consent to overall beneficence. Review boards are in place to enforce these ethics. But the standards were not always so strict, which is how some of the most famous studies in psychology came about. 

1. The Little Albert Experiment

At Johns Hopkins University in 1920, John B. Watson conducted a study of classical conditioning, a learning process in which a neutral stimulus is repeatedly paired with an unconditioned stimulus until the neutral stimulus alone produces the same response. This type of conditioning can create a response in a person or animal toward an object or sound that was previously neutral. Classical conditioning is commonly associated with Ivan Pavlov, who rang a bell every time he fed his dog until the mere sound of the bell caused his dog to salivate.

Watson tested classical conditioning on a 9-month-old baby he called Albert B. The young boy started the experiment loving animals, particularly a white rat. Watson started pairing the presence of the rat with the loud sound of a hammer hitting metal. Albert began to develop a fear of the white rat as well as most animals and furry objects. The experiment is considered particularly unethical today because Albert was never desensitized to the phobias that Watson produced in him. (The child died of an unrelated illness at age 6, so doctors were unable to determine if his phobias would have lasted into adulthood.)

2. Asch Conformity Experiments

Solomon Asch tested conformity at Swarthmore College in 1951 by putting a participant in a group of people whose task was to match line lengths. Each individual was expected to announce which of three lines was the closest in length to a reference line. But the participant was placed in a group of actors, who were all told to give the correct answer twice and then switch to giving the same incorrect answer. Asch wanted to see whether the participant would conform and start to give the wrong answer as well, knowing that he would otherwise be a single outlier.

Thirty-seven of the 50 participants agreed with the incorrect group despite physical evidence to the contrary. Asch used deception in his experiment without getting informed consent from his participants, so his study could not be replicated today.

3. The Bystander Effect

Some psychological experiments that were designed to test the bystander effect are considered unethical by today’s standards. In 1968, John Darley and Bibb Latané developed an interest in crime witnesses who did not take action. They were particularly intrigued by the murder of Kitty Genovese , a young woman whose murder was witnessed by many, but still not prevented.

The pair conducted a study at Columbia University in which they would give a participant a survey and leave him alone in a room to fill out the paper. Harmless smoke would start to seep into the room after a short amount of time. The study showed that the solo participant was much faster to report the smoke than participants who had the exact same experience, but were in a group.

The studies became progressively unethical by putting participants at risk of psychological harm. Darley and Latané played a recording of an actor pretending to have a seizure in the headphones of a person, who believed he or she was listening to an actual medical emergency that was taking place down the hall. Again, participants were much quicker to react when they thought they were the sole person who could hear the seizure.

4. The Milgram Experiment

Yale psychologist Stanley Milgram hoped to further understand how so many people came to participate in the cruel acts of the Holocaust. He theorized that people are generally inclined to obey authority figures, posing the question, "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?" In 1961, he began to conduct experiments on obedience.

Participants were under the impression that they were part of a study of memory . Each trial had a pair divided into “teacher” and “learner,” but one person was an actor, so only one was a true participant. The drawing was rigged so that the participant always took the role of “teacher.” The two were moved into separate rooms and the “teacher” was given instructions. He or she pressed a button to shock the “learner” each time an incorrect answer was provided. These shocks would increase in voltage each time. Eventually, the actor would start to complain followed by more and more desperate screaming. Milgram learned that the majority of participants followed orders to continue delivering shocks despite the clear discomfort of the “learner.”

Had the shocks been real and delivered at the labeled voltages, the majority of participants would have killed the "learner" in the next room. Having this fact revealed to them after the study concluded would be a clear example of psychological harm.

5. Harlow’s Monkey Experiments

In the 1950s, Harry Harlow of the University of Wisconsin tested infant dependency using rhesus monkeys in his experiments rather than human babies. The infant monkey was removed from its actual mother, which was replaced with two surrogate "mothers," one made of cloth and one made of wire. The cloth "mother" served no purpose other than its comforting feel, whereas the wire "mother" fed the monkey through a bottle. The monkey spent the majority of its day next to the cloth "mother" and only around one hour a day next to the wire "mother," despite the association between the wire model and food.

Harlow also used intimidation to prove that the monkey found the cloth "mother" to be superior. He would scare the infants and watch as the monkey ran towards the cloth model. Harlow also conducted experiments which isolated monkeys from other monkeys in order to show that those who did not learn to be part of the group at a young age were unable to assimilate and mate when they got older. Harlow's experiments ceased in 1985 due to APA rules against the mistreatment of animals as well as humans. However, Ned H. Kalin, M.D., chair of the Department of Psychiatry at the University of Wisconsin School of Medicine and Public Health, has recently begun similar experiments that involve isolating infant monkeys and exposing them to frightening stimuli. He hopes to gather data on human anxiety, but is meeting with resistance from animal welfare organizations and the general public.

6. Learned Helplessness

The ethics of Martin Seligman's experiments on learned helplessness would also be called into question today due to his mistreatment of animals. In 1965, Seligman and his team used dogs as subjects to study how a sense of control over one's circumstances is learned. The group would place a dog on one side of a box that was divided in half by a low barrier. Then they would administer a shock, which was avoidable if the dog jumped over the barrier to the other half. Dogs quickly learned how to prevent themselves from being shocked.

Seligman's group then harnessed a group of dogs and randomly administered shocks, which were completely unavoidable. The next day, these dogs were placed in the box with the barrier. Despite new circumstances that would have allowed them to escape the painful shocks, these dogs did not even try to jump over the barrier; they only cried, demonstrating learned helplessness.

7. Robbers Cave Experiment

Muzafer Sherif conducted the Robbers Cave Experiment in the summer of 1954, testing group dynamics in the face of conflict. A group of preteen boys were brought to a summer camp, but they did not know that the counselors were actually psychological researchers. The boys were split into two groups, which were kept very separate. The groups only came into contact with each other when they were competing in sporting events or other activities.

The experimenters orchestrated increased tension between the two groups, particularly by keeping competitions close in points. Then, Sherif created problems, such as a water shortage, that would require both teams to unite and work together in order to achieve a goal. After a few such cooperative tasks, the groups became amicable and the division between them dissolved.

Though the experiment seems simple and perhaps harmless, it would still be considered unethical today because Sherif used deception as the boys did not know they were participating in a psychological experiment. Sherif also did not have informed consent from participants.

8. The Monster Study

At the University of Iowa in 1939, Wendell Johnson and his team hoped to discover the cause of stuttering by attempting to turn orphans into stutterers. There were 22 young subjects, 12 of whom were non-stutterers. Half of the group experienced positive teaching, whereas the other group received negative treatment: the teachers continually told them that they had stutters. No one in either group became a stutterer by the end of the experiment, but those who received the negative treatment did develop many of the self-esteem problems that stutterers often show. Perhaps Johnson's interest in this phenomenon had to do with his own stutter as a child, but this study would never pass with a contemporary review board.

Johnson's reputation as an unethical psychologist has not caused the University of Iowa to remove his name from its Speech and Hearing Clinic.

9. Blue Eyed versus Brown Eyed Students

Jane Elliott was not a psychologist, but in 1968 she developed one of the most famous and controversial classroom exercises by dividing her students into a blue-eyed group and a brown-eyed group. Elliott was an elementary school teacher in Iowa who wanted to give her students hands-on experience with discrimination the day after Martin Luther King Jr. was shot, and the exercise still has significance to psychology today. It even transformed Elliott’s career into one centered on diversity training.

After dividing the class into groups, Elliott cited phony scientific research claiming that one group was superior to the other, and throughout the day each group was treated accordingly. Elliott found that it took only a day for the “superior” group to turn crueler and the “inferior” group to become more insecure. The blue-eyed and brown-eyed groups then switched roles so that all of the students endured the same prejudices.

Elliott’s exercise (which she repeated in 1969 and 1970) received plenty of public backlash, which is probably why it would not be replicated in a psychological experiment or classroom today. The main ethical concerns would be deception and consent, though some of the original participants still regard the exercise as life-changing.

10. The Stanford Prison Experiment

In 1971, Philip Zimbardo of Stanford University conducted his famous prison experiment, which aimed to examine group behavior and the importance of roles. Zimbardo and his team picked a group of 24 male college students who were considered “healthy,” both physically and psychologically. The men had signed up to participate in a “psychological study of prison life,” which would pay them $15 per day. Half were randomly assigned to be prisoners and the other half were assigned to be prison guards. The experiment played out in the basement of the Stanford psychology department where Zimbardo’s team had created a makeshift prison. The experimenters went to great lengths to create a realistic experience for the prisoners, including fake arrests at the participants’ homes.

The prisoners were given a fairly standard introduction to prison life, which included being deloused and assigned an embarrassing uniform. The guards were given vague instructions that they should never be violent with the prisoners, but needed to stay in control. The first day passed without incident, but the prisoners rebelled on the second day by barricading themselves in their cells and ignoring the guards. This behavior shocked the guards and presumably led to the psychological abuse that followed. The guards started separating “good” and “bad” prisoners, and doled out punishments including push ups, solitary confinement, and public humiliation to rebellious prisoners.

Zimbardo explained, “In only a few days, our guards became sadistic and our prisoners became depressed and showed signs of extreme stress.” Two prisoners dropped out of the experiment; one eventually became a psychologist and a consultant for prisons. The experiment was originally supposed to last for two weeks, but it ended early when Zimbardo’s future wife, psychologist Christina Maslach, visited the experiment on the fifth day and told him, “I think it’s terrible what you’re doing to those boys.”

Despite the unethical experiment, Zimbardo is still a working psychologist today. He was even honored by the American Psychological Association with a Gold Medal Award for Life Achievement in the Science of Psychology in 2012.

Another 14 Iconic Psychology Experiments Have Failed Replication Attempts


A lot of what we think we know about psychology might be wrong.

A major research initiative, the second of its kind, tried to reproduce 28 classic psychology experiments.

But only 14 of those 28 experiments yielded the same results, according to research published Monday in the journal Advances in Methods and Practices in Psychological Science.

For the past several years, world leaders in psychology research have been scrambling to investigate a looming scandal in their field: many key findings from landmark psychological experiments, in spite of many scientists' best efforts, have never been replicated.

In other words, the insights about the mind uncovered by those experimenters may be totally invalid — but it's difficult to tell who was wrong about what.

That's why many scientists are working to reproduce classic experiments — recreating their conditions and methodologies to see if they arrive at the same results — and reinvestigating the reasons why some studies led to new discoveries and others didn't.

The new paper lays out the 28 studies one after another, comparing and contrasting the original findings with what contemporary scientists discovered.

For example, a 2007 study on the trolley problem held up to scrutiny — just as before, most people found it impermissible to push one person onto the tracks in order to stop a trolley from running over five people.

A common critique of modern psychological research is that participants tend to be WEIRD — a term researchers use to describe subjects drawn from Western, Educated, Industrialized, Rich, and Democratic societies; many professors recruit their own undergraduates to participate in their studies.

But this new massive attempt to replicate existing scientific literature, which began back in 2014 and involved over 60 labs around the world, found little difference among different samples or groups of participants.

Failed Exam

When scientists couldn't replicate a study — which, again, happened for half of the experiments they analyzed — the failure held regardless of where the replication was conducted or who made up the sample pool.

If replicability truly depended on the WEIRDness of the participants, then there would have been a random smattering of successes and failures among different labs.

But because every attempt to reach the same conclusions as those 14 studies failed, their findings, and the things scientists claimed to have discovered about the human mind, may be totally invalid.

This article was originally published by Futurism. Read the original article.


Science News

12 reasons research goes wrong


FIXING THE NUMBERS   Massaging data, small sample sizes and other issues can affect the statistical analyses of studies and distort the results, and that's not all that can go wrong.

Justine Hirshfeld/ Science News


By Tina Hesman Saey

January 13, 2015 at 2:23 pm

For more on reproducibility in science, see SN’s feature “Is redoing scientific research the best way to find truth?”

Barriers to research replication are based largely in a scientific culture that pits researchers against each other in competition for scarce resources. Any or all of the factors below, plus others, may combine to skew results.

Pressure to publish

Research funds are tighter than ever and good positions are hard to come by. To get grants and jobs, scientists need to publish, preferably in big-name journals. That pressure may lead researchers to publish many low-quality studies instead of aiming for a smaller number of well-done studies. To convince administrators and grant reviewers of the worthiness of their work, scientists have to be cheerleaders for their research; they may not be as critical of their results as they should be.

Impact factor mania

For scientists, publishing in a top journal — such as Nature, Science or Cell — with high citation rates or “impact factors” is like winning a medal. Universities and funding agencies award jobs and money disproportionately to researchers who publish in these journals. Many researchers say the science in those journals isn’t better than studies published elsewhere; it’s just splashier and tends not to reflect the messy reality of real-world data. Mania linked to publishing in high-impact journals may encourage researchers to do just about anything to publish there, sacrificing the quality of their science as a result.

Tainted cultures

Experiments can get contaminated and cells and animals may not be as advertised. In hundreds of instances since the 1960s, researchers misidentified cells they were working with. Contamination led to the erroneous report that the XMRV virus causes chronic fatigue syndrome, and a recent report suggests that bacterial DNA in lab reagents can interfere with microbiome studies.

Statistical errors

Do the wrong kinds of statistical analyses and results may be skewed. Some researchers accuse colleagues of “p-hacking,” massaging data to achieve particular statistical criteria. Small sample sizes and improper randomization of subjects or “blinding” of the researchers can also lead to statistical errors. Data-heavy studies require multiple convoluted steps to analyze, with lots of opportunity for error. Researchers can often find patterns in their mounds of data that have no biological meaning.
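As a rough illustration of how small samples and selective analysis can distort results, here is a minimal simulation sketch in Python (the sample sizes, number of outcome measures, and significance threshold are illustrative assumptions, not values from any particular study):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Sketch of one form of "p-hacking": measure several outcomes, none of
# which has a real effect, and report only the most favorable p-value.
n_simulations = 2000
n_per_group = 10      # small samples make each comparison noisy
n_outcomes = 5        # the "researcher" peeks at five different measures
hacked_hits = 0

for _ in range(n_simulations):
    p_values = []
    for _ in range(n_outcomes):
        a = rng.normal(0, 1, n_per_group)
        b = rng.normal(0, 1, n_per_group)   # identical populations: no true effect
        _, p = stats.ttest_ind(a, b)
        p_values.append(p)
    if min(p_values) < 0.05:                # report only the "best" outcome
        hacked_hits += 1

print(f"False-positive rate with cherry-picking: {hacked_hits / n_simulations:.1%}")
# Expect roughly 1 - 0.95**5, or about 23%, far above the nominal 5% level.
```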

Sins of omission

To thwart their competition, some scientists may leave out important details. One study found that 54 percent of research papers fail to properly identify resources, such as the strain of animals or types of reagents or antibodies used in the experiments. Intentional or not, the result is the same: Other researchers can’t replicate the results.

Biology is messy

Variability among and between people, animals and cells means that researchers never get exactly the same answer twice. Unknown variables abound and make replicating in the life and social sciences extremely difficult.

Peer review doesn’t work

Peer reviewers are experts in their field who evaluate research manuscripts and determine whether the science is strong enough to be published in a journal. A sting conducted by Science found some journals that don’t bother with peer review, or use a rubber stamp review process. Another study found that peer reviewers aren’t very good at spotting errors in papers. A high-profile case of misconduct concerning stem cells revealed that even when reviewers do spot fatal flaws, journals sometimes ignore the recommendations and publish anyway ( SN: 12/27/14, p. 25 ).

confidential file cartoon

Some scientists don’t share

Collecting data is hard work and some scientists see a competitive advantage to not sharing their raw data. But selfishness also makes it impossible to replicate many analyses, especially those involving expensive clinical trials or massive amounts of data.

Research never reported

Journals want new findings, not repeats or second-place finishers. That gives researchers little incentive to check previously published work or to try to publish those findings if they do. False findings go unchallenged and negative results — ones that show no evidence to support the scientist’s hypothesis — are rarely published. Some people fear that scientists may leave out important, correct results that don’t fit a given hypothesis and publish only experiments that do.

Poor training produces sloppy scientists

Some researchers complain that young scientists aren’t getting proper training to conduct rigorous work and to critically evaluate their own and others’ studies.

Mistakes happen

Scientists are human, and therefore, fallible. Of 423 papers retracted due to honest error between 1979 and 2011, more than half were pulled because of mistakes, such as measuring a drug incorrectly.

Fraud

Researchers who make up data or manipulate it produce results no one can replicate. However, fraud is responsible for only a tiny fraction of results that can’t be replicated.


Online Psychology Degrees

10 Bizarre Psychology Experiments That Completely Crossed the Line

OPD Editor

  • Published Apr 11, 2014
  • Updated Nov 22, 2023
  • In Psychology & Pop Culture


Experimental psychology and psychological experiments can be key to understanding what makes people tick. Cognitive dissonance, the false consensus effect, and classical conditioning are all important topics in experimental psychology. However, some researchers have gone about their famous psychology experiments in rather unusual, and sometimes morally dubious, ways. Their findings may have added to our knowledge of human behavior, but the methods that a number of psychologists have used to test their theories have at times overstepped ethical boundaries; some might even appear somewhat sadistic. Those taking part in such studies have not always escaped unscathed. In fact, some have suffered lasting emotional damage, or worse, as a result. Here are ten bizarre psychology experiments that totally crossed the line.

10. Milgram Experiment (1961)

milgram experiment

The Milgram Experiment is one of the most controversial experiments in psychology. Yale University social psychology professor Stanley Milgram embarked on his now infamous series of experiments in 1961. Prompted by the trial of high-ranking Nazi and Holocaust-coordinator Adolf Eichmann, Milgram wished to assess whether people really would carry out acts that clashed with their conscience if so directed by an authority figure. For each test, Milgram lined up three people, split into the roles of “experimenter” (the authority figure), “teacher” (the actual subject), and “learner” (actually an actor). The teacher was separated from the learner and told to comply with the experimenter while attempting to tutor the learner in sets of word pairs. The penalty for wrong answers by the learner was shocking in more ways than one: the learner pretended to receive painful and increasingly strong jolts of electricity that the teacher believed they were delivering. Even though no real shocks were inflicted, the ethics of the experiment came under close scrutiny owing to the severe psychological stress placed on its volunteer subjects.

9. Little Albert Experiment (1920)

little albert experiment

The Little Albert Experiment is one of psychology’s best-known experiments gone wrong. Things were different in 1920. Back then, you could take a healthy baby and scare it silly in the name of science. That is exactly what American psychologist John B. Watson did at Johns Hopkins University. Watson wanted to learn whether he could condition a child to fear something ordinary by pairing it with something he supposed triggered inborn fear. Watson borrowed eight-month-old baby Albert for the experiment. First, Watson introduced the child to a white rat. Observing that it didn’t scare Albert, Watson then reintroduced the rat, only this time together with a sudden loud noise. Naturally, the noise frightened Albert. Watson then deliberately got Albert to associate the rat with the noise, until the baby couldn’t even see the rat without bursting into tears. Essentially, the psychologist gave Albert a pretty unpleasant phobia. Moreover, Watson went on to make the infant distressed when seeing a rabbit, a dog, and even the furry white beard of Santa Claus. By the end of the experiment, Albert might well have been traumatized for life!

8. Stanford Prison Experiment (1971)

stanford prison experiment

In August 1971, Stanford University psychology professor Philip Zimbardo decided to test the theory that conflict and ill-treatment involving prisoners and prison guards is chiefly down to individuals’ personality traits. This experiment came to be known as the Stanford Prison Experiment. Zimbardo and his team set up a simulated prison in the Stanford psychology building and gave 24 volunteers the roles of either prisoner or guard. The participants were then dressed according to their assigned roles. Zimbardo gave himself the part of superintendent. While Zimbardo had steered the guards towards creating “a sense of powerlessness” among the mock prisoners, what happened was pretty disturbing. Around four of the dozen prison guards became actively sadistic. Prisoners were stripped and humiliated, left in unsanitary conditions and forced to sleep on concrete floors. One was shut in a cupboard. Zimbardo himself was so immersed in his role that he did not notice the severity of what was going on. After six days, his girlfriend’s protests persuaded him to halt the experiment, but not before at least five of the prisoners had suffered emotional trauma.

7. Monkey Drug Trials (1969)

monkey drug trials

The Monkey Drug Trials are another psychology experiment gone wrong. While their findings may have shed light on the psychological aspect of drug addiction, three researchers at the University of Michigan Medical School arguably overstepped the mark completely in 1969 by getting macaque monkeys hooked on illegal substances. G.A. Deneau, T. Yanagita and M.H. Seevers injected the primates with drugs including cocaine, amphetamines, morphine, LSD, and alcohol in order to see whether the animals would then go on to freely administer doses of the psychoactive and, in some cases, potentially deadly substances themselves. Many of the monkeys did, which the researchers claimed established a link between drug abuse and psychological dependence. Still, given that the conclusions cannot necessarily be applied to humans, the experiment may have had questionable scientific value. Moreover, even if a link was established, the method was quite possibly unethical and undoubtedly cruel, especially since some of the monkeys became a danger to themselves and died.

6. Bobo Doll Experiment (1961, 1963)

bobo doll experiment

In the early 1960s, Stanford University psychologist Albert Bandura set out to demonstrate that human behavior can be learned through the observation of reward and punishment. To do this, he recruited 72 nursery-age children and a large, inflatable toy known as a Bobo doll. He then had a subset of the children watch an aggressive model of behavior: an adult violently beat and verbally abused the toy for around ten minutes. Alarmingly, Bandura found that many of the two dozen children who witnessed this display went on to imitate the behavior. Left alone in the room with the Bobo doll once the adult had gone, the children exposed to the violence became verbally and physically aggressive towards the doll, attacking it with an intensity arguably frightening to see in ones so young. In 1963 Bandura carried out another Bobo doll experiment that yielded similar results. Nevertheless, the research has since come under fire on ethical grounds, seeing as its subjects were essentially trained to act aggressively, with possible longer-term consequences for healthy childhood development.

5. Homosexual Aversion Therapy (1967)

Aversion therapy to “cure” homosexuality was once a prominent subject of research at various universities. A study detailing attempts at “treating” one group of 43 homosexual men was published in the British Medical Journal in 1967. The study recounted researchers M.J. MacCulloch and M.P. Feldman’s experiments in aversion therapy at Manchester, U.K.’s Crumpsall Hospital. The participants watched slides of men that they were told to keep looking at for as long as they considered it appealing. After eight seconds of such a slide being shown, however, the test subjects were given an electric shock. Slides showing women were also presented, and the volunteers were able to look at them without any punishment involved. Although the researchers suggested that the trials had some success in “curing” their participants, in 1994 the American Psychological Association deemed homosexual aversion therapy dangerous and ineffective.

4. The Third Wave (1967)

“How was the Holocaust allowed to happen?” It’s one of history’s burning questions. And when Ron Jones, a teacher at Palo Alto’s Cubberley High School, was struggling to answer it for his sophomore students in 1967, he resolved to show them instead. On the first day of his social experiment, Jones created an authoritarian atmosphere in his class, positioning himself as a sort of World War II-style supreme leader. But as the week progressed, Jones’ one-man brand of fascism turned into a school-wide club. Students came up with their own insignia and adopted a Nazi-like salute. They were taught to firmly obey Jones’ commands and become anti-democratic to the core, even “informing” on one another. Jones’ new ideology was dubbed “The Third Wave” and spread like wildfire. By the fourth day, the teacher was concerned that the movement he had unleashed was getting out of hand. He brought the experiment to a halt. On the fifth day, he told the students that they had invoked a similar feeling of supremacy to that of the German people under the Nazi regime. Thankfully, there were no repercussions.

3. UCLA Schizophrenia Medication Experiment (1983–1994)

The UCLA Schizophrenia Medication Experiment is another famous, and troubling, psychological study. Beginning in 1983, psychologist Keith H. Nuechterlein and psychiatrist Michael Gitlin of the UCLA Medical Center conducted a now controversial study of the mental processes of schizophrenia, looking specifically at the ways in which sufferers of the disorder relapse and trying to find out whether there are any predictors of psychosis. To achieve this, they had schizophrenic patients, from a group of hundreds involved in the program, taken off their medication; such medication is not without its nasty side effects. The research may hold important findings about the condition. Nevertheless, the experiment has been criticized for not sufficiently protecting the patients in the event of schizophrenic symptoms returning; nor did it clearly determine the point at which the patients should be treated again. This had tragic consequences in 1991, when former program participant Antonio Lamadrid killed himself by jumping from nine floors up, despite having been open about his suicidal state of mind while supposedly under the study’s watch.

2. The Monster Study (1939)

Appropriately branded the “Monster Study” by its contemporaries, psychologist Wendell Johnson’s speech therapy experiment was at first kept secret in case it damaged his professional reputation; it is now one of the field’s most notorious experiments. The University of Iowa’s Johnson drafted in graduate student Mary Tudor to carry out the 1939 experiment for her master’s thesis, with Johnson himself supervising. Twenty-two orphaned children, ten of whom had issues with stuttering, were put into two groups, each containing a mix of those with and without speech disorders. One of the two groups was given positive, encouraging feedback about their verbal communication, while the other was utterly disparaged for their (sometimes non-existent) speech problems, and the findings were recorded. This six-month study had a major impact on the subjects, including those who had no prior talking difficulties, making some insecure and withdrawn. In 2007, half a dozen of the former subjects were given a large payout by the state of Iowa for what they had endured, with the claimants reporting “lifelong psychological and emotional scars.”

1. David Reimer (1967–1977)

Canadian David Reimer’s life was changed drastically on account of one Johns Hopkins University professor and one of these infamous studies. After a botched circumcision procedure left Reimer with disfiguring genital damage at six months old, his parents took him to be seen by John Money, a professor of medical psychology and pediatrics who advocated the theory of gender neutrality. Money argued that gender identity is first and foremost learned socially from a young age. He suggested that although Reimer’s penis could not be repaired, the boy could and should undergo sex reassignment surgery and be raised as a female. In 1967 Reimer began the treatment that would turn him into “Brenda.” However, despite further visits to Money over the next ten years, Reimer was never really able to identify as female and lived as a male from the age of 14. He would go on to have treatment to undo the sex reassignment, but the ongoing experiment had prompted extreme depression in him, an underlying factor that contributed to his 2004 suicide. John Money, meanwhile, remained mired in controversy.

These shocking psychological experiments were bizarre and damaging. Psychological research studies examine possible cause-and-effect relationships between variables. Experimental research involves careful manipulation of one variable (the independent variable) and measurement of changes in another variable (the dependent variable). The simplest experimental design uses a control group and an experimental group: the experimental group experiences whatever treatment or condition is under investigation, while the control group does not (a minimal sketch of this two-group logic follows below). Even when these guidelines are followed, ethics must remain a central part of any psychology study.
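To make the control-versus-experimental logic concrete, here is a minimal, hypothetical sketch in Python; the group sizes, the outcome scale, and the assumed five-point treatment effect are illustrative assumptions, not data from any of the studies above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical two-group experiment: the independent variable is whether
# a participant receives the treatment; the dependent variable is a score.
control = rng.normal(loc=50, scale=10, size=30)       # no treatment
experimental = rng.normal(loc=55, scale=10, size=30)  # assumed +5 effect

# Compare the groups with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the group difference is unlikely to be due to
# chance, which is the core inference this two-group design supports.
```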


Richard Contrada Ph.D.

Why Some Famous Psychology Experiments Could Be Wrong

Be wary of psychological research findings that seem to resemble actual events.

Posted December 30, 2022 | Reviewed by Tyler Woods

  • Heuristic thinking can be an efficient means of arriving at judgments, but accuracy is not guaranteed.
  • Heuristics may have biased the interpretation of famous studies in social psychology because of their resemblance to real-life events.
  • The perceived generalizability and intuitive appeal of research may misdirect and constrain analysis and interpretation by activating heuristics.
  • Science literacy requires questioning the mental associations and resulting heuristic thinking that research activates in the mind of the reader.

" Preemptive constructs" are mental structures that favor a particular explanation over others . Used by lay person and scientist alike, they can be self-serving, or biased in other ways. A sports fan might believe a game was won entirely by their favorite player, without crediting the rest of the team; a scientist might hold their own theory as the only good explanation for a particular finding, when other explanations are equally plausible.

Preemptive constructs can be valid, but when incorrect or incomplete, they can create interpretive bias. They puff up the preferred interpretation, and more subtly, bias the thinking even of those who disagree, who may focus solely on rejecting the original proposition, rather than seeking alternatives.

Heuristics are another source of interpretive bias: mental shortcuts or rules of thumb that can be efficient in promoting judgments that come quickly but are not necessarily correct. One example, the availability heuristic, occurs when something that happens is interpreted in terms of its apparent similarity to another event to which it may bear only a superficial resemblance. This may be especially likely when the earlier event was recent and vivid.

To illustrate, in a Health Psychology textbook, Shelley Taylor described something that happened shortly after Jim Henson, the creator of the Muppets, died, apparently after delaying medical treatment for a respiratory infection, an event that received considerable media coverage. She, like many other mothers, took her child to a doctor for what were only symptoms of the common cold. The doctor referred to them as “Jim Henson mothers.”

Heuristic thinking is common, can have non-obvious influences, and may influence the interpretation of research findings. It may provide an explanation for some of the discourse that has followed the publication of well-known psychology studies.

True or False: Milgram’s Famous Studies Were About Obedience To Authority

For some time, discussion around the Milgram studies, in which ordinary people were convinced to administer seemingly severe electric shocks to their fellow humans, centered on certain questions to the exclusion of others, for example: Did “Teachers” really deliver what they thought were electric shocks to “Learners” simply out of obedience to authority? Much of it involved opinions of “Yea” or “Nay” to this proposition. The experiments are referred to as “Obedience Studies” even by those who dispute the conclusion that obedience was involved.


But Nissani (1990) offered an interpretation in which “Teachers” persisted, sometimes even in the face of “Learners’” apparent pain, because of the difficult cognitive restructuring that was required to overcome the strong, pre-existing belief that scientific researchers would not ask them to do something harmful.

This is not a final answer and does not rule out a role for obedience to authority. But it is plausible, and it suggests the operation of heuristic thinking. Milgram and others had drawn parallels between his studies and the atrocities perpetrated during World War II. The iconic “Experimenter-Teacher-Learner” scenario may have acquired a strong mental association with thoughts and images of soldiers “just following orders” to perform brutal acts.

This connection channeled debate about Milgram’s studies to focus on whether or not they reflected obedience to authority, to the exclusion of other factors, such as the one suggested by Nissani, a double-whammy involving two instances of belief persistence.

We Are Prisoners of Our Own (Mental) Devices.


The famous Stanford Prison Study similarly may have been subject to a prematurely narrowed set of construals. For some, the results supported the researchers’ thesis that mere assignment of participants to the role of prison guards would lead them to abuse those in the role of prisoners. For others, this conclusion was dismissed in light of methodological criticisms.


Here again, there has emerged an interesting alternative perspective: Beyond the effects of role assignments, Haslam et al. (2019) discuss how the experimenters may have played a more active role than had previously been thought. They describe a complex set of processes involving persuasion, group identification, guards coming to share experimenter goals, and some pushback from the guards against brutality.

Why might the original interpretation have been resistant to alternatives? Well, life in prison actually can be quite harsh, and this is often emphasized and exaggerated in fictional depictions that feature cruel prison guards. Learning about the study may activate this thinking, and it provides a suitable explanation that may eclipse other possibilities.

If You Can Make It There...

A third example comes from efforts to elucidate the Kitty Genovese murder, which was characterized in the news as an instance in which 38 bystanders witnessed a woman being attacked but did not intervene, even just by calling the police. Research inspired by this event examined potential explanations in terms of a “Bystander Effect,” hypothesized to result from a diminished sense of individual responsibility, stemming from the knowledge that there are other witnesses, that leads to failure to intervene in emergencies.


As documented by Manning et al. (2007), the initial news account of the Kitty Genovese incident was flawed. Evidently, there were fewer than 38 witnesses, they had not watched the attack in its entirety, and some attempted to intervene. There is now reason to believe that, depending on circumstances, bystanders to such events may, in fact, take action. Research on the Bystander Effect has yielded replicable effects identifying many variables that can influence bystander behavior, and it should not be conflated with the Kitty Genovese incident, as it has been in textbooks. Nor should her murder be viewed solely as a parable regarding the failure of bystanders to intervene (Manning et al., 2007).

How might heuristic thinking have shaped discussion of the Kitty Genovese incident and the bystander effect? The Manning et al. (2007) analysis incorporates many factors in accounting for the persistence of the original news story in individual minds, textbooks, and the larger culture. For present purposes, I wish to highlight the points they make about crowds and other collectives having the capacity to promote inactivity, which they link to the psychology of urban settings. Perhaps the availability heuristic has played a role, in part, because the incident occurred in New York City, where residents are sometimes viewed stereotypically as cold and indifferent to the plight of others.

Implications for Research Consumption

Being aware of the possible operation of heuristics in one’s own thinking, as well as in the author’s, can help us to keep an open mind about research. It is an aspect of scientific literacy, as is wariness regarding the use of preemptive construals of findings. When a particular scientific explanation is described as “intuitive,” this is typically seen as a good thing, but that’s not always the case. Intuition should be questioned; not necessarily discarded, but maybe not swallowed whole without further consideration.

Another implication concerns perceived generalizability, which also is often touted as a good thing. In addition to the commonly expressed caution that generalizability must be demonstrated empirically, there is another: Take notice of your (and what seem to be the author’s) mental associations and generalizations regarding research findings, especially to widely known, real-world events and conditions. And be sure that their mental availability, recency, and vividness are not having undue influence on your thinking.

Copyright 2022 Richard J. Contrada

Haslam, S. A., Reicher, S. D., & Van Bavel, J. J. (2019). Rethinking the nature of cruelty: The role of identity leadership in the Stanford Prison Experiment. American Psychologist, 74, 809–822. doi:10.1037/amp0000443

Nissani, M. (1990). A cognitive reinterpretation of Stanley Milgram's observations on obedience to authority. American Psychologist, 45(12), 1384–1385. https://doi.org/10.1037/0003-066X.45.12.1384

Manning, R., Levine, M., & Collins, A. (2007). The Kitty Genovese murder and the social psychology of helping: The parable of the 38 witnesses. American Psychologist, 62, 555–562. doi:10.1037/0003-066x.62.6.555

Taylor, S. E., & Stanton, A. L. (2021). Health Psychology. New York: McGraw-Hill.

Richard Contrada Ph.D.

Richard Contrada, Ph.D., is a Professor in the Department of Psychology at Rutgers, the State University of New Jersey. His primary research areas lie at the interface of psychology and health and include psychological stress, cognitive and emotional self-regulation, and health-related stigma.

20 Most Unethical Experiments in Psychology

Humanity often pays a high price for progress and understanding — at least, that seems to be the case in many famous psychological experiments. Human experimentation is a very interesting topic in the world of human psychology. While some famous experiments in psychology have left test subjects temporarily distressed, others have left their participants with life-long psychological issues. In either case, it’s easy to ask the question: “What’s ethical when it comes to science?” Then there are the experiments that involve children, animals, and test subjects who are unaware they’re being experimented on. How far is too far, if the result means a better understanding of the human mind and behavior? We think we’ve found 20 answers to that question with our list of the most unethical experiments in psychology.

  • Emma Eckstein
  • Electroshock Therapy on Children
  • Operation Midnight Climax
  • The Monster Study
  • Project MKUltra
  • The Aversion Project
  • Unnecessary Sexual Reassignment
  • Stanford Prison Experiment
  • Milgram Experiment
  • The Monkey Drug Trials
  • Facial Expressions Experiment
  • Little Albert
  • Bobo Doll Experiment
  • The Pit of Despair
  • The Bystander Effect
  • Learned Helplessness Experiment
  • Racism Among Elementary School Students
  • UCLA Schizophrenia Experiments
  • The Good Samaritan Experiment
  • Robbers Cave Experiment



Harvard’s Gary King (pictured) is one of a cohort of researchers rebutting a consortium of 270 scientists known as the Open Science Collaboration, which made worldwide headlines last year when it claimed that it could not replicate the results of more than half of 100 published psychology studies.

File photo by Stephanie Mitchell/Harvard Staff Photographer

Study that undercut psych research got it wrong

Peter Reuell

Harvard Staff Writer

Widely reported analysis that said much research couldn’t be reproduced is riddled with its own replication errors, researchers say

According to two Harvard professors and their collaborators, a widely reported study released last year that said more than half of all psychology studies cannot be replicated is itself wrong.

In an attempt to determine the “replicability” of psychological science, a consortium of 270 scientists known as the Open Science Collaboration (OSC) tried to reproduce the results of 100 published studies. More than half of them failed, creating sensational headlines worldwide about the “replication crisis” in psychology.

But an in-depth examination of the data by Daniel Gilbert, the Edgar Pierce Professor of Psychology at Harvard, Gary King, the Albert J. Weatherhead III University Professor at Harvard, Stephen Pettigrew, a Ph.D. student in the Department of Government at Harvard, and Timothy Wilson, the Sherrell J. Aston Professor of Psychology at the University of Virginia, has revealed that the OSC made some serious mistakes that make its pessimistic conclusion completely unwarranted.

The methods of many of the replication studies turn out to be remarkably different from the originals and, according to the four researchers, these “infidelities” had two important consequences.

First, the methods introduced statistical error into the data, which led the OSC to significantly underestimate how many of their replications should have failed by chance alone. When this error is taken into account, the number of failures in their data is no greater than one would expect if all 100 of the original findings had been true.

Second, Gilbert, King, Pettigrew, and Wilson discovered that the low-fidelity studies were four times more likely to fail than were the high-fidelity studies, suggesting that when replicators strayed from the original methods of conducting research, they caused their own studies to fail.

Finally, the OSC used a “low-powered” design. When the four researchers applied this design to a published data set that was known to have a high replication rate, it too showed a low replication rate, suggesting that the OSC’s design was destined from the start to underestimate the replicability of psychological science.

Individually, Gilbert and King said, each of these problems would be enough to cast doubt on the conclusion that most people have drawn from this study, but taken together, they completely repudiate it. The flaws are described in a commentary to be published Friday in Science.

Like most scientists who read the OSC’s article when it appeared, Gilbert, King, Pettigrew, and Wilson were shocked and chagrined. But when they began to scrutinize the methods and reanalyze the raw data, they immediately noticed problems, which started with how the replicators had selected the 100 original studies.

“If you want to estimate a parameter of a population,” said King, “then you either have to randomly sample from that population or make statistical corrections for the fact that you didn’t. The OSC did neither.”

‘Arbitrary list of sampling rules’

“What they did,” added Gilbert, “is create an idiosyncratic, arbitrary list of sampling rules that excluded the majority of psychology’s subfields from the sample, that excluded entire classes of studies whose methods are probably among the best in science from the sample, and so on. Then they proceeded to violate all of their own rules.

“Worse yet, they actually allowed some replicators to have a choice about which studies they would try to replicate. If they had used these same methods to sample people instead of studies, no reputable scientific journal would have published their findings. So the first thing we realized was that no matter what they found — good news or bad news — they never had any chance of estimating the reproducibility of psychological science, which is what the very title of their paper claims they did.”


“And that was just the beginning,” King said. “If you are going to replicate 100 studies, some will fail by chance alone. That’s basic sampling theory. So you have to use statistics to estimate how many of the studies are expected to fail by chance alone because otherwise the number that actually do fail is meaningless.”

According to King, the OSC did this, but made a critical error.

“When they did their calculations, they failed to consider the fact that their replication studies were not just new samples from the same population. They were often quite different from the originals in many ways, and those differences are a source of statistical error. So we did the calculation the right way and then applied it to their data. And guess what? The number of failures they observed was just about what you should expect to observe by chance alone — even if all 100 of the original findings were true. The failure of the replication studies to match the original studies was a failure of the replications, not of the originals.”
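As a rough sketch of the kind of calculation King describes, the snippet below asks how many of 100 replications should be expected to fail by chance alone if every original finding were true; the 60 percent per-study replication probability is an assumed figure for illustration, not a number taken from the OSC paper or from the critique:

```python
from scipy import stats

# Assumed probability that a true effect reaches significance in a single
# replication attempt (illustrative only, given added error and modest power).
n_studies = 100
p_success = 0.60

expected_failures = n_studies * (1 - p_success)
low, high = stats.binom.interval(0.95, n_studies, 1 - p_success)
print(f"Expected failures by chance alone: {expected_failures:.0f} "
      f"(95% range about {int(low)}-{int(high)} of {n_studies})")
# If the observed failure count falls inside this range, the failures by
# themselves do not show that the original findings were false.
```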

Gilbert noted that most people assume that a replication is a “replica” of the original study.

“Readers surely assumed that if a group of scientists did 100 replications, then they must have used the same methods to study the same populations. In this case, that assumption would be quite wrong. Replications always vary from originals in minor ways, of course. But if you read the reports carefully, as we did, you discover that many of the replication studies differed in truly astounding ways — ways that make it hard to understand how they could even be called replications.”

As an example, Gilbert described an original study that involved showing white students at Stanford University a video of four other Stanford students discussing admissions policies at their university. Three of those talking were white and one was black. During the discussion, a white student made offensive comments about affirmative action, and the researchers found that the observers looked significantly longer at the black student when they believed he could hear other comments than when they believed he could not.

“So how did they do the replication? With students at the University of Amsterdam!” Gilbert said. “They had Dutch students watch a video of Stanford students, speaking in English, about affirmative action policies at a university more than 5,000 miles away.”

In other words, unlike the participants in the original study, participants in the replication study watched students at a foreign university speaking in a foreign language about an issue of no relevance to them.

But according to Gilbert, that was not the most troubling part of the methodology.


“If you dive deep into the data, you discover something else,” Gilbert said. “The replicators realized that doing this study in the Netherlands might have been a problem, so they wisely decided to run another version of it in the U.S. And when they did, they basically replicated the original result. And yet, when the OSC estimated the reproducibility of psychological science, they excluded the successful replication and included only the one from the University of Amsterdam that failed. So the public hears that ‘Yet another psychology study doesn’t replicate’ instead of ‘Yet another psychology study replicates just fine if you do it right, and not if you do it wrong,’ which isn’t a very exciting headline. Some of the replications were quite faithful to the originals, but anyone who carefully reads all the replication reports will find many more examples like this one.”

‘They introduce additional error’

“These infidelities were a problem for another reason,” King added, “namely, that they introduce additional error into the data set. That error can be calculated, and when we do, it turns out that the number of replication studies that actually failed is about what we should expect if every single one of the original findings had been true. Now, one could argue about how best to make this calculation, but the fact is that OSC didn’t make it at all. They simply ignored this potent source of error, and that caused them to draw the wrong conclusions from their data. That doesn’t mean that all 100 studies were true, of course, but it does mean that this article provides no evidence to the contrary.”

“So we now know that the infidelities created statistical noise,” said Gilbert, “but was that all they did? Or were the infidelities of a certain kind? In other words, did they just tend to change the original result, or did they tend to change it in a particular way?”

“To find out,” said King, “we needed a measure of how faithful each of the 100 replications was. Luckily, the OSC supplied it.”

Before each replication began, the OSC asked the original authors to examine the planned replication study and say whether they would endorse it as a faithful replication of their work, and about 70 percent did so.

“We used this as a rough index of fidelity, and when we did, we discovered something important: The low-fidelity replications were an astonishing four times more likely to fail,” King said. “What that suggests is that the infidelities did not just create random statistical noise — they actually biased the studies toward failure.”

In their “technical comment,” Gilbert, King, Pettigrew, and Wilson also note that the OSC used a “low-powered” design. They replicated each of the 100 studies once, using roughly the number of subjects used in the original studies. But according to King, this method artificially depresses the replication rate.

“To show how this happens, we took another published article that had examined the replicability of a group of classic psychology studies,” said King. “The authors of that paper had used a very high-powered design — they replicated each study with more than 30 times the original number of participants — and that high-powered design produced a very high replication rate. So we asked a simple question: What would have happened if these authors had used the low-powered design that was used by the OSC? The answer is that the replication rate would have been even lower than the replication rate found by the OSC.”
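A small simulation makes the point about low statistical power concrete. The effect size, sample sizes, and significance threshold below are illustrative assumptions, not values taken from the OSC project or from the Gilbert and King critique:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Why a low-powered replication can fail even when the original effect is real:
# run many simulated one-shot replications of a true effect at several sample sizes.
true_effect = 0.4            # assumed true standardized effect (Cohen's d)
n_replications = 2000

for n_per_group in (20, 50, 200, 600):
    successes = 0
    for _ in range(n_replications):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treated, control)
        if p < 0.05:
            successes += 1
    print(f"n = {n_per_group:4d} per group -> true effect replicates "
          f"{successes / n_replications:.0%} of the time")
# With small samples, many genuinely true findings still fail a single
# replication attempt, so a low-powered design underestimates replicability.
```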

Despite uncovering serious problems with the landmark study, Gilbert and King emphasized that their critique does not suggest wrongdoing and is simply part of the normal process of scientific inquiry.

“Let’s be clear,” Gilbert said. “No one involved in this study was trying to deceive anyone. They just made mistakes, as scientists sometimes do. Many of the OSC members are our friends, and the corresponding author, Brian Nosek, is actually a good friend who was both forthcoming and helpful to us as we wrote our critique. In fact, Brian is the one who suggested one of the methods we used for correcting the OSC’s error calculations. So this is not a personal attack, this is a scientific critique.

“We all care about the same things: doing science well and finding out what’s true. We were glad to see that in their response to our comment, the OSC quibbled about a number of minor issues but conceded the major one, which is that their paper does not provide evidence for the pessimistic conclusions that most people have drawn from it.”

“I think the big takeaway point here is that meta-science must obey the rules of science,” King said. “All the rules about sampling and calculating error and keeping experimenters blind to the hypothesis — all of those rules must apply whether you are studying people or studying the replicability of a science. Meta-science does not get a pass. It is not exempt. And those doing meta-science are not above the fray. They are part of the scientific process. If you violate the basic rules of science, you get the wrong answer, and that’s what happened here.”

“This [OSC] paper has had extraordinary impact,” Gilbert said. “It was Science magazine’s No. 3 ‘ Breakthrough of the Year’ across all fields of science. It led to changes in policy at many scientific journals, changes in priorities at funding agencies, and it seriously undermined public perceptions of psychology. So it is not enough now, in the sober light of retrospect, to say that mistakes were made. These mistakes had very serious repercussions. We hope the OSC will now work as hard to correct the public misperceptions of their findings as they did to produce the findings themselves.”

The OSC’s reply to “technical comments” by Gilbert and others, and Gilbert and others’ response to that reply, can be found here.


The 10 Most Ridiculous Scientific Studies


Important news from the world of science: if you happen to suffer a traumatic brain injury, don’t be surprised if you experience headaches as a result. In other breakthrough findings: knee surgery may interfere with your jogging, alcohol has been found to relax people at parties, and there are multiple causes of death in very old people. Write the Nobel speeches, people, because someone’s going to Oslo!

Okay, maybe not. Still, every one of those not-exactly jaw-dropping studies is entirely real—funded, peer-reviewed, published, the works. And they’re not alone. Here—with their press release headlines unchanged—are the ten best from science’s recent annals of “duh.”

Study shows beneficial effect of electric fans in extreme heat and humidity: You know that space heater you’ve been firing up every time the temperature climbs above 90º in August? Turns out you’ve been going about it all wrong. If you don’t have air conditioning, it seems that “fans” (which move “air” with the help of a cunning arrangement of rotating “blades”) can actually make you feel cooler. That, at least, was the news from a study in the Journal of the American Medical Association (JAMA) last February. Still to come: “Why Snow-Blower Use Declines in July.”

Study shows benefit of higher quality screening colonoscopies: Don’t you just hate those low-quality colonoscopies? You know, the ones when the doctor looks at your ears, checks your throat and pronounces, “That’s one fine colon you’ve got there, friend”? Now there’s a better way to go about things, according to JAMA, and that’s to be sure to have timely, high quality screenings instead. That may be bad news for “Colon Bob, Your $5 Colonoscopy Man,” but it’s good news for the rest of us.

Holding on to the blues: Depressed individuals may fail to decrease sadness: This one apparently came as news to the folks at the Association for Psychological Science and they’ve got the body of work to stand behind their findings. They’re surely the same scientists who discovered that short people often fail to increase inches, grouchy people don’t have enough niceness and folks who wear dentures have done a terrible job of hanging onto their teeth. The depression findings in particular are good news, pointing to exciting new treatments based on the venerable “Turn that frown upside down” method.

Quitting smoking after heart attack reduces chest pain, improves quality of life: Looks like you can say goodbye to those friendly intensive care units that used to hand out packs of Luckies to post-op patients hankering for a smoke. Don’t blame the hospitals, though; blame those buzz-kill folks at the American Heart Association who are responsible for this no-fun finding. Next in the nanny-state crosshairs: the Krispy Kreme booth at the diabetes clinic.

Older workers bring valuable knowledge to the job: Sure they bring other things too: incomprehensible jokes, sensible shoes, the last working Walkman in captivity. But according to a study in the Journal of Applied Psychology, they also bring what the investigators call “crystallized knowledge,” which comes from “knowledge born of experience.” So yes, the old folks in your office say corny things like “Show up on time,” “Do an honest day’s work,” and “You know that plan you’ve got to sell billions of dollars worth of unsecured mortgages, bundle them together, chop them all up and sell them to investors? Don’t do that.” But it doesn’t hurt to humor them. They really are adorable sometimes.

Being homeless is bad for your health: Granted, there’s the fresh air, the lean diet, the vigorous exercise (no sitting in front of the TV for you!). But living on the street is not the picnic it seems. Studies like the one in the Journal of Health Psychology show it’s not just the absence of a fixed address that hurts, but the absence of luxuries like, say, walls and a roof. That’s especially true in winter—and spring, summer and fall too, follow-up studies have found. So quit your bragging, homeless people. You’re no healthier than the rest of us.

The more time a person lives under a democracy, the more likely she or he is to support democracy: It’s easy to fall for a charming strongman—that waggish autocrat who promises you stability, order and no silly distractions like civil liberties and an open press. Soul-crushing annihilation of personal freedoms? Gimme some of that, big boy. So it came as a surprise that a study in Science found that when you give people even a single taste of the whole democracy thing, well, it’s like what they say about potato chips: you want to eat the whole bag. But hey, let’s keep this one secret. Nothing like a peevish dictator to mess up a weekend.

Statistical analysis reveals Mexican drug war increased homicide rates: That’s the thing about any war—the homicide part is kind of the whole point. Still, as a paper in The American Statistician showed, it’s always a good idea to crunch the numbers. So let’s run the equation: X – Y = Z, where X is the number of people who walked into the drug war alive, Y is the number who walked out and Z is, you know, the dead guys. Yep, looks like it adds up. (Don’t forget to show your work!)

Middle-aged congenital heart disease survivors may need special care: Sure, but they may not, too. Yes, you could always baby them, like the American Heart Association recommends. But you know what they say: A middle-aged congenital heart disease survivor who gets special care is a lazy middle-aged congenital heart disease survivor. Heck, when I was a kid, our middle-aged congenital heart disease survivors worked for their care—and they thanked us for it too. This is not the America I knew.

Scientists Discover a Difference Between the Sexes: Somewhere, in the basement warrens of Northwestern University, dwell the scientists who made this discovery—androgynous beings, reproducing by cellular fission, they toiled in darkness, their light-sensitive eye spots needing only the barest illumination to see. Then one day they emerged blinking into the light, squinted about them and discovered that the surface creatures seemed to come in two distinct varieties. Intrigued, they wandered among them—then went to a kegger and haven’t been seen since. Spring break, man; what are you gonna do?



An N.Y.U. Study Gone Wrong, and a Top Researcher Dismissed


By Benedict Carey

June 27, 2016

New York University’s medical school has quietly shut down eight studies at its prominent psychiatric research center and parted ways with a top researcher after discovering a series of violations in a study of an experimental, mind-altering drug.

A subsequent federal investigation found lax oversight of study participants, most of whom had serious mental issues. The Food and Drug Administration investigators also found that records had been falsified and researchers had failed to keep accurate case histories.

In one of the shuttered studies, people with a diagnosis of post-traumatic stress caused by childhood abuse took a relatively untested drug intended to mimic the effects of marijuana, to see if it relieved symptoms.

“I think their intent was good, and they were considerate to me,” said one of those subjects, Diane Ruffcorn, 40, of Seattle, who said she was sexually abused as a child. “But what concerned me, I was given this drug, and all these tests, and then it was goodbye, I was on my own. There was no follow-up.”

It’s a critical time for two important but still controversial areas of psychiatry: the search for a blood test or other biological sign of post-traumatic stress disorder, which has so far come up empty, and the use of recreational drugs like ecstasy and marijuana to treat it.

At least one trial of marijuana, and one using ecstasy, are in the works for traumatized veterans, and some psychiatrists and many patients see this work as having enormous promise to reshape and improve treatment for trauma. But obtaining approval to use the drugs in experiments is still politically sensitive. Doctors who have done studies with these drugs say that their uncertain effects on traumatic memory make close supervision during treatment essential.



Top 10 Clinical Trials That Went Horribly Wrong

Clinical trials are a crucial step in getting a drug approved by the FDA; without them, no one would know whether a medicine is safe. The vast majority of the time, these trials are conducted without serious harm to participants. But every once in a while, a clinical trial goes horribly wrong. Keep reading to learn about 10 of the most infamous examples.


10. The University of Minnesota Seroquel Experiment


“My son Dan died almost five years ago in a clinical study at the University of Minnesota, a study he lacked any diagnosis for, and a study that I tried unsuccessfully to get him out of for five months.” Ever since her son’s untimely death, Mary Weiss has been trying to spread this message to the world.

In 2003, her son, Dan Markingson, was suffering from delusions when he was diagnosed with schizophrenia and admitted to the University of Minnesota Medical Center, Fairview. Shortly after, he was enrolled in a clinical trial comparing three schizophrenia medications: Seroquel, Risperdal, and Zyprexa. Very quickly, his daily 800 mg doses of Seroquel began to worsen his delusions.

In response, his mother frantically sent letters and emails and called the study coordinators, trying to get her son out of the program. But the administrators refused to let Dan leave the study, threatening to commit him to a mental facility if he tried to drop out. Weiss was shocked by this until she learned a key fact about the program: her son’s participation was worth $15,000 to the school. [1]

Unable to leave the program, Markingson grew more and more delusional until he eventually took his own life, stabbing himself to death in the shower. A suicide note read, “I went through this experience smiling!” Devastated, his mother sued the school, which refused to take responsibility for its actions. Markingson was one of five trial subjects to attempt suicide, and one of two who succeeded in taking their own lives.

9. French Biotrial Tragedy


In January 2016, the French company Biotrial recruited 128 healthy volunteers to take part in a clinical trial of a new drug designed to combat anxiety related to conditions such as cancer and Parkinson’s disease. At low doses, the participants reported no side effects. But when the doses began to escalate after the first week, problems started to surface: six of the participants became seriously ill and were immediately sent to the ER.

One of these patients, a healthy man in his late 20s, was declared brain dead just one week after being admitted to the hospital and two weeks after starting the trial. The five other hospitalized patients remained in stable condition, but doctors predicted that some would be left with irreversible brain damage and lasting neurological impairment.

Even though this was the first time the drug had been tested on humans, the trial administrators knew there were serious issues with it. One French news outlet uncovered preclinical testing in which the drug had similar effects on dogs, killing several and leaving others with brain damage. [2] Yet the trial went ahead in humans, with horrible results.

8. The Thalidomide Trials


The drug thalidomide was first manufactured in Germany, primarily to treat respiratory infections, and it was soon prescribed widely as a sedative and a remedy for morning sickness. Today, many people know the drug for its devastating effects on pregnancy: more than 10,000 children born in the late 1950s and early 1960s suffered serious impairments, such as missing limbs and cleft palates, as a result.

Unlike the other trials on this list, the thalidomide trials are eerie because everything seemingly went right. During the patenting and approval phase, researchers tested the drug on animals but neglected to observe the effects on their offspring. Because it was effectively impossible to die from an overdose of the medicine, it was deemed safe, and it hit the shelves in 1956. [3]

It was not until 1961 that the Australian doctor William McBride discovered the link between thalidomide and the deformities. Until then, every clinical trial had concluded that thalidomide was a safe over-the-counter medicine, and more than 10,000 people paid the price.

7. Gene Therapy Clinical Trial


Jesse Gelsinger was 18 when he entered a study testing the safety of gene therapy in people with a rare genetic liver disorder. Like the other participants, he had been born with a condition called OTC deficiency, which prevented his liver from clearing enough ammonia from his blood; the researchers tried to correct it by infusing a corrective gene carried by a modified cold virus. But one high dose of the treatment would be Gelsinger’s last. On September 17, 1999, his symptoms quickly spiraled from jaundice, to organ failure, to brain death. [4]

The FDA dug into his death and found several irresponsible actions on the part of the administrators. First, Gelsinger was in the final group of patients, and volunteers in every earlier group had suffered adverse reactions to the treatment, yet the study continued. Second, Gelsinger’s ammonia levels were high enough that they should have disqualified him from the trial in the first place. He was originally intended as an alternate, but when a patient dropped out, he was hastily included in the study.

6. Anil Potti’s Miracle Cancer Drug


Throughout the 2000s, Anil Potti was an up-and-coming medical star. He promised cancer treatments with an 80 percent cure rate, and some medical professionals believed his discoveries could save 10,000 lives a year. In 2015, this all changed: Potti was found guilty of including false data in a manuscript, nine papers, and a grant application, and the results of his studies were voided.

One woman particularly affected by this fraud was Joyce Shoffner, [5] patient No. 1 in a July 2008 trial run by Potti. Assured that Potti’s therapy cured 80 percent of cancers, Shoffner eagerly signed up for the study to help treat her breast cancer. She underwent a painful biopsy, in which doctors took tissue samples by inserting a long needle under her arm and up into her neck, and then went through a regimen of Adriamycin-Cytoxan (AC) chemotherapy, only to be told two years later that the study’s results had been voided because of Potti’s misconduct. Today, Shoffner does not have breast cancer, but she lives with the blood clots and diabetes caused by the AC regimen, as well as post-traumatic stress disorder resulting from the trial itself.


5. Stem Cell Vision Treatment


In January 2017, three women entered a study with their vision and left without it. Aged 72 to 88, all three suffered from macular degeneration, an age-related eye disease. Each paid $5,000 to have both eyes treated with stem cell therapy, a process several ophthalmology experts described as “both atypical and unsafe.” [6]

Just days after the procedure, all three women reported severe complications, including bleeding and retinal detachment. One patient lost her eyesight entirely, while the other two lost most of theirs; none are expected to recover their sight. But scientists could have known this trial was flawed from the beginning. First and foremost, the patients were required to pay for their own procedures, a red flag for illegitimate research. Additionally, the trial’s public record has since been obscured: the government listing of the trial online says only that the study was “withdrawn prior to enrollment,” which clearly was not the case.

4. Leukemia CAR-T Trial


In July 2016, three adult leukemia patients died in a trial of a new cell-based treatment from Juno Therapeutics. Known as CAR-T (chimeric antigen receptor T-cell) therapy, Juno’s treatment engineered patients’ own immune cells to attack malignant cells until the cancer appeared to have vanished. [7] The technology was an up-and-coming phenomenon that many researchers called the “fifth pillar” of cancer treatment, but hopes were dashed by the results of the 2016 study.

The cause of death for the three patients was swelling in the brain, medically known as cerebral edema. Representatives from Juno, the trial’s sponsor, acknowledged that cerebral edema is a known risk for patients who receive CAR-T treatments, as are severe immune reactions and neurological toxicity.

After news of the deaths was released, Juno’s stock fell 27 percent. The company’s practices are under FDA review, and it is unclear whether it will be allowed to continue its studies.

3. New York Lidocaine Disaster


In 1996, Hoi Yan “Nicole” Wan, a healthy sophomore at the University of Rochester, needed some pocket money. So she signed up, without her parents’ permission, for a clinical trial that paid $150. [8] The researchers inserted a tube down her throat and into her lungs, a common procedure called a bronchoscopy, to study the effects of pollution on the respiratory system.

But what Nicole did not know was that the researchers took far more cell samples than originally outlined in the protocol. And as they took more samples from her lungs, they increased the dose of her anesthetic, lidocaine, far above the levels approved by the FDA. She was released feeling extremely weak and in enormous pain, and two days later she was found dead. An autopsy revealed that the lethal levels of lidocaine administered during the study had caused her heart to stop.

2. Johns Hopkins Asthma Trial


Ellen Roche, a technician at Johns Hopkins Hospital, volunteered to take part in an asthma study on healthy individuals. The trial’s goal was to discover what mechanism keeps healthy people from developing the symptoms of asthma, so the researchers had participants inhale hexamethonium, a drug that blocks the nerve reflexes thought to protect the airways, before provoking a mild asthmatic reaction.

At first, inhaling the drug simply gave Roche a cough. But as the days progressed, her lung tissue broke down, her kidneys began to fail, and she was put on a ventilator. She died a month later, on June 2, 2001. [9] Officials from the trial acknowledged that the hexamethonium “was either solely responsible for the subject’s illness or played an important contributory role.” To make matters worse, participants learned after the trial that hexamethonium is not an FDA-approved drug, a fact that was not included in the consent form, and Johns Hopkins was forced to take full responsibility for Roche’s death.

1. The Elephant Man Trial


One of the most infamous clinical trials of all time, the Elephant Man Trial took place in London in 2006. The trial, which tested TGN1412, a new immune-stimulating drug intended to treat leukemia and autoimmune disease, seemed harmless to the eight men who took part; medical professionals had assured them that the worst symptoms would be a headache and nausea.

But the results were far more gruesome than that. Shortly after they were given their doses, the patients began writhing in pain and vomiting. [10] One participant lost his fingers and toes, while another had to have his foot partially amputated. The trial earned its nickname because one participant’s head swelled so severely that his girlfriend teased him about looking like an elephant.

No one is completely sure what went wrong, but the patients have a few theories. One suggests that the speed of the dosing made it dangerous: researchers spent 90 minutes slowly infusing the drug into animals but took a mere six minutes to infuse it into the human subjects. Another claims that the preliminary animal testing was misleading because, instead of testing on a bonobo, whose DNA is about a 98 percent match to a human’s, the company cut costs and used a macaque, whose DNA is only about a 94 percent match. These men may never know exactly what went wrong that day, or how it will continue to affect their lives.



Psychologists Confront Rash of Invalid Studies


In the wake of several scandals in psychology research, scientists are asking themselves just how much of their research is valid.

In the past 10 years, dozens of studies in the psychology field have been retracted, and several high-profile studies have not stood up to scrutiny when outside researchers tried to replicate the research.

By selectively excluding study subjects or amending the experimental procedure after designing the study, researchers in the field may be subtly biasing studies to get more positive findings. And once research results are published, journals have little incentive to publish replication studies, which try to check the results.

That means the psychology literature may be littered with effects, or conclusions, that aren't real.

The problem isn't unique to psychology, but the field is going through some soul-searching right now. Researchers are creating new initiatives to encourage replication studies, improve research protocols, and make data more transparent.

"People have started doing replication studies to figure out, 'OK, how solid, really, is the foundation of the edifice that we're building?'" said Rolf Zwaan, a cognitive psychologist at Erasmus University in the Netherlands. "How solid is the research that we're building our research on?"

Storm brewing


In a 2010 study in the Journal of Personality and Social Psychology, researchers detailed experiments that they said suggested people could predict the future.

Other scientists questioned how the study, which used questionable methodology such as changing the procedure partway through the experiment, got published; the journal editors expressed skepticism about the effect, but said the study followed established rules for doing good research.

That made people wonder, "Maybe there's something wrong with the rules," said University of Virginia psychology professor Brian Nosek.

But an even bigger scandal was brewing. In late 2011, Diederik Stapel, a psychologist in the Netherlands, was fired from Tilburg University for falsifying or fabricating data in dozens of studies, some of which were published in high-profile journals.

And in 2012, a study in PLOS ONE failed to replicate a landmark 1996 psychology study that suggested making people think of words associated with the elderly — such as Florida, gray or retirement — made them walk more slowly.

Motivated reasoning

The high-profile cases are prompting psychologists to do some soul-searching about the incentive structure in their field.

The push to publish can lead to several questionable practices.

Outright fraud is probably rare. But "adventurous research strategies" are probably common, Nosek told LiveScience.

Because psychologists are so motivated to get flashy findings published, they can use reasoning that may seem perfectly logical to them and, say, throw out research subjects who don't fit with their findings. But this subtle self-delusion can result in scientists seeing an effect where none exists, Zwaan told LiveScience.

Another way to skew the results is to change the experimental procedure or research question after the study has already begun. These changes may seem harmless to the researcher, but from a statistical standpoint, they make it much more likely that psychologists see an underlying effect where none exists, Zwaan said.

For instance, if scientists set up an experiment to find out if stress is linked to risk of cancer, and during the study they notice stressed people seem to get less sleep, they might switch their question to study sleep. The problem is the experiment wasn't set up to account for confounding factors associated with sleep, among other things.
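
To make the statistical point concrete, here is a minimal simulation sketch; it is not from the article, and the scenario, variable names, and parameters are all hypothetical. It compares a researcher who sticks to a single pre-registered outcome with one who quietly measures several outcomes and reports whichever happens to look "significant." Even when no real effect exists, the flexible approach produces false positives far more often than the nominal 5 percent.

# Hypothetical illustration of outcome switching; not code from the article.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS = 2000        # simulated "studies"
N_PER_GROUP = 30     # participants per group
N_OUTCOMES = 5       # outcomes peeked at (e.g., cancer risk, sleep, mood...)

strict_hits = 0      # significant on the single pre-registered outcome
flexible_hits = 0    # significant on at least one of several outcomes

for _ in range(N_SIMS):
    # Null world: the "stressed" and "control" groups come from the same
    # distribution, so any significant result is a false positive.
    p_values = []
    for _ in range(N_OUTCOMES):
        stressed = rng.normal(0.0, 1.0, N_PER_GROUP)
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        p_values.append(stats.ttest_ind(stressed, control).pvalue)

    strict_hits += p_values[0] < 0.05      # stick with the original question
    flexible_hits += min(p_values) < 0.05  # switch to whatever "worked"

print(f"False-positive rate, pre-registered outcome: {strict_hits / N_SIMS:.1%}")
print(f"False-positive rate, best of {N_OUTCOMES} outcomes: {flexible_hits / N_SIMS:.1%}")

With five independent outcomes, the best-of-five strategy can be expected to cross the 0.05 threshold in roughly one of every four or five null studies (about 1 − 0.95^5, or 23 percent), versus about 5 percent for the pre-registered outcome — the kind of inflation Zwaan describes.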

Fight fire with psychology

In response, psychologists are trying to flip the incentives by using their knowledge of transparency, accountability and personal gain.

For instance, right now there's no incentive for researchers to share their data, and a 2006 study found that of 141 researchers who had previously agreed to share their data, only 38 did so when asked.

But Nosek and his colleagues hope to encourage such sharing by making it standard practice. They are developing a project called the Open Science Framework, and one goal is to encourage researchers to publicly post their data and to have journals require such transparency in their published studies. That should make researchers less likely to tweak their data.

"We know that behavior changes as a function of accountability, and the best way to increase accountability is to create transparency," Nosek said.

One journal, Social Psychology, is dangling the lure of guaranteed publication to motivate replication studies. Researchers send proposals for replication studies to the journal, and if they're approved, the authors are guaranteed publication in advance. That would encourage less fiddling with the protocol after the fact.

And the Laura and John Arnold Foundation now offers grant money specifically for replication studies, Nosek said.




When experiments go wrong: the U.S. perspective

Affiliation: World Health Organization, Geneva.

PMID: 15202353

The view that once prevailed in the U.S.--that research is no more dangerous than the activities of daily life--no longer holds in light of recent experience. Within the past few years, a number of subjects (including normal volunteers) have been seriously injured or killed in research conducted at prestigious institutions. Plainly, when we are talking about research going wrong, we're talking about something very important. We have seen that experiments can go wrong in several ways. Subjects can be injured--physically, mentally, or by having other interests violated. Investigators can commit fraud in data collection or can abuse subjects. And review mechanisms--such as IRBs--don't always work. The two major issues when research goes wrong in any of these ways are, first: What will be done for subjects who have suffered an injury or other wrong? and second: How will future problems be prevented? The present system in the U.S. is better at the second task than the first one. Part of the difficulty in addressing the first lies in knowing what "caused" an apparent injury. Moreover, since until recently the problem of research-related injuries was thought to be a small one, there was considerable resistance to setting up a non-fault compensation system, for fear that it would lead to payment in many cases where such compensation was not deserved. Now, with a further nudge from the NBAC there is renewed interest in developing a formal system to compensate for research injuries. Finally, I have tried to show that our system of local oversight is only partially effective in improving the design of experiments and the consent process in light of "unexpected (adverse) results." As many observers, including the federal General Accounting Office (GAO), have reported, the requirement for "continuing review" of approved research projects is the weak point in the IRB system. The probable solution would be to more strictly apply the requirement that investigators report back any adverse results, de-emphasizing the "screen" introduced by the present language about "unexpected" findings. Yet, despite its weaknesses, there are good aspects to the local basis of our oversight system, and when problems become severe enough, OHRP is likely to evaluate a system and insist on local improvements. Thus, while the U.S. system is far from perfect in responding when research goes wrong, our experience may be useful to others in crafting a system appropriate to their own circumstances. One of the major tasks will be to adequately define what triggers oversight--that is, who reports what to whom and when? The setting of this trigger needs to balance appropriate incentives and penalties. Any system, including our own, will, in my opinion, work much better once an accreditation process is in place, which will offer much more current and detailed information on how each IRB is functioning and what steps are needed to help avoid "experiments going wrong."



'If You Can Keep It': The Realities Of Ranked Choice Voting


An election judge holds "I Voted" stickers while collecting drive-thru ballots outside the Highland Recreation Center in Denver, Colorado. (Marc Piscotty/Getty Images)

Both major political parties have now wrapped up their conventions. In one way, they're advertisements for why Americans should vote for each party's candidates, from president all the way down the ticket.

But this year, voters in five states will see another question on their ballot: whether to use a different method to elect their representatives.

The system is known broadly as ranked choice voting. There are different flavors of it. In some cases, it's called "instant runoff voting" or "final five voting."

In all cases, they describe a way of electing candidates that's different from what most Americans are used to. As a voter, you get to rank your preferred candidates. So, you don't just choose one name. You may have a first, second, and third preference for who represents you.

When voting is over, a process of elimination takes place. The candidate with the fewest first-choice votes is eliminated, and each of that candidate's ballots is transferred to the voter's next-ranked remaining choice. The process repeats until one candidate holds a majority. Ranked choice voting systems are already in place for some races in Alaska, Maine, and cities like Minneapolis and New York City.
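
For readers who want to see the mechanics, here is a minimal counting sketch; it is not from the NPR segment, and the function name and example ballots are hypothetical. It implements the elimination-and-transfer loop described above: count each ballot for its highest-ranked surviving candidate, eliminate the candidate with the fewest votes, and repeat until someone holds a majority of the ballots still counting.

# Hypothetical instant-runoff counting sketch, not tied to any real election system.
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of rankings; each ranking lists candidate names, favorite first."""
    remaining = {candidate for ballot in ballots for candidate in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > total or len(remaining) == 1:
            return leader  # majority reached, or only one candidate left
        # Otherwise eliminate the last-place candidate and recount.
        remaining.discard(min(tallies, key=tallies.get))

# Example: first-round tallies are A=2, B=2, C=1, so no majority; C is
# eliminated, C's ballot transfers to B, and B wins 3-2.
ballots = [["A", "B"], ["A", "C"], ["B", "A"], ["B", "C"], ["C", "B"]]
print(instant_runoff(ballots))  # prints "B"

Real ranked choice rules add details this sketch ignores, such as how to break ties for last place and how to handle ballots that run out of ranked choices.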

What's driving reformers to push for these ranked choice voting systems in more states? And how are voters responding?


