06 April 2024

Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results

  • Dan Garisto


Physicist Ranga Dias was once a rising star in the field of superconductivity research. Credit: Lauren Petracca/New York Times/Redux/eyevine

Ranga Dias, the physicist at the centre of the room-temperature superconductivity scandal, committed data fabrication, falsification and plagiarism, according to an investigation commissioned by his university. Nature’s news team discovered the bombshell investigation report in court documents.


Nature 628, 481–483 (2024)

doi: https://doi.org/10.1038/d41586-024-00976-y

The author of this story is related to Robert Garisto, the chief editor of PRL. The two have had no contact about this story.


Research Integrity

US Office of Research Integrity received 269 allegations of research misconduct last fiscal year

Office closed 36 cases and released nine findings of research misconduct during the period

by Dalmeet Singh Chawla, special to C&EN, February 24, 2023


The US Office of Research Integrity (ORI) received a total of 269 complaints of alleged research misconduct between Oct. 1, 2021 and Sept. 30, 2022, a new report released by the agency reveals.

During the period, the agency closed 42 cases and released nine findings of research misconduct (one involving a single person but two institutions); 10 other investigated cases yielded no such findings, and the ORI declined to pursue the remaining 22 cases. Of the nine cases with findings, seven involved both falsification and fabrication, one involved falsification alone, and one involved plagiarism.

The ORI defines research misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.” In two of the nine cases, the researchers were banned from federal research funding for a set period, and retraction or correction has been requested for four papers.

In the last fiscal year, the ORI continued 33 cases from previous years and opened 38 new ones. The ORI has also awarded three grants totaling just under $450,000 to researchers conducting studies in the area of research integrity.

In September 2022, the ORI released a request for information, asking institutions, funders, and concerned individuals for their views on the ORI’s plans to revise the 2005 Public Health Service Policies on Research Misconduct. In the new report, the ORI reveals that 31 institutions, organizations, and individuals submitted comments, which the agency will use to develop a notice for public comment.


Findings of Research Misconduct

A Notice by the Health and Human Services Department on 03/17/2022


Department of Health and Human Services


AGENCY: Office of the Secretary, HHS.

SUMMARY: Findings of research misconduct have been made against Shuo Chen, Ph.D. (Respondent), formerly a postdoctoral researcher, Department of Physics, University of California, Berkeley (UCB). Respondent engaged in research misconduct in research reported in a grant application submitted for U.S. Public Health Service (PHS) funds, specifically National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health (NIH), grant application K99 NS116562-01. The administrative actions, including supervision for a period of one (1) year, were implemented beginning on February 28, 2022, and are detailed below.

FOR FURTHER INFORMATION CONTACT: Wanda K. Jones, Dr.P.H., Acting Director, Office of Research Integrity, 1101 Wootton Parkway, Suite 240, Rockville, MD 20852, (240) 453-8200.

SUPPLEMENTARY INFORMATION: Notice is hereby given that the Office of Research Integrity (ORI) has taken final action in the following case:

Shuo Chen, Ph.D., University of California, Berkeley: Based on the report of an investigation conducted by UCB and additional analysis conducted by ORI in its oversight review, ORI found that Dr. Shuo Chen, formerly a postdoctoral researcher, Department of Physics, UCB, engaged in research misconduct in research reported in a grant application submitted for PHS funds, specifically NINDS, NIH, grant application K99 NS116562-01.

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, and/or recklessly falsifying data and methods by altering, reusing, and relabeling source two-photon microscopy and electrophysiological data to represent images of mouse hippocampal neurons in the following grant application:

  • K99 NS116562-01, “Investigation into network dynamics of hippocampal replay sequences by ultrafast voltage imaging,” submitted to NINDS, NIH, on June 25, 2019.

ORI found that Respondent intentionally, knowingly, and/or recklessly falsified two-photon microscopy and in vivo electrophysiological activity images, figure legends, and text descriptions of hippocampal neurons from a mouse running on a treadmill in a head-fixed virtual reality (VR) set up. Specifically:

  • Respondent reused an image of visual cortex neurons to represent fluorescence calcium imaging of hippocampal neurons in Figure 6d and its associated text and figure legend of K99 NS116562-01.
  • Respondent reused in vivo electrophysiological data from control mice of spatial receptive fields for all recorded place cells during linear track exploration sessions from Supplemental Figure 1b from Nat Neurosci. 2018 Jul;21(7):996-1003 (doi: 10.1038/s41593-018-0163-8) to represent several sessions of two-photon hippocampal calcium imaging of progressive place fields, obtained from multiple mice running on a treadmill in a head-fixed VR set up, in Figure 6e and its associated text and figure legend of K99 NS116562-01.

Respondent neither admits nor denies ORI's findings of research misconduct. The parties entered into a Voluntary Settlement Agreement (Agreement) to conclude this matter without further expenditure of time, finances, or other resources. The settlement is not an admission of liability on the part of the Respondent.

Respondent voluntarily agreed to the following:

(1) Respondent will have his research supervised for a period of one (1) year beginning on February 28, 2022 (the “Supervision Period”). Prior to the submission of an application for PHS support for a research project on which Respondent's participation is proposed and prior to Respondent's participation in any capacity in PHS-supported research, Respondent will submit a plan for supervision of Respondent's duties to ORI for approval. The supervision plan must be designed to ensure the integrity of Respondent's research. Respondent will not participate in any PHS-supported research until such a supervision plan is approved by ORI. Respondent will comply with the agreed-upon supervision plan.

(2) The requirements for Respondent's supervision plan are as follows:

i. A committee of 2-3 senior faculty members at the institution who are familiar with Respondent's field of research, but not including Respondent's supervisor or collaborators, will provide oversight and guidance during the Supervision Period. The committee will review primary data from Respondent's laboratory on a quarterly basis and submit a report to ORI at six (6) month intervals setting forth the committee meeting dates and Respondent's compliance with appropriate research standards and confirming the integrity of Respondent's research.

ii. The committee will conduct an advance review of each application for PHS funds, or report, manuscript, or abstract involving PHS-supported research in which Respondent is involved. The review will include a discussion with Respondent of the primary data represented in those documents and will include a certification to ORI that the data presented in the proposed application, report, manuscript, or abstract is supported by the research record.

(3) During the Supervision Period, Respondent will ensure that any institution employing him submits, in conjunction with each application for PHS funds, or report, manuscript, or abstract involving PHS-supported research in which Respondent is involved, a certification to ORI that the data provided by Respondent are based on actual experiments or are otherwise legitimately derived and that the data, procedures, and methodology are accurately reported in the application, report, manuscript, or abstract.

(4) If no supervision plan is provided to ORI, Respondent will provide certification to ORI at the conclusion of the Supervision Period that his participation was not proposed on a research project for which an application for PHS support was submitted and that he has not participated in any capacity in PHS-supported research.

(5) During the Supervision Period, Respondent will exclude himself voluntarily from serving in any advisory or consultant capacity to PHS including, but not limited to, service on any PHS advisory committee, board, and/or peer review committee.

Dated: March 14, 2022.

Wanda K. Jones,

Acting Director, Office of Research Integrity, Office of the Assistant Secretary for Health.

[FR Doc. 2022-05659 Filed 3-16-22; 8:45 am]

BILLING CODE 4150-31-P

Top Harvard Medical School Neuroscientist Accused of Research Misconduct

Khalid Shah, a prominent neuroscientist at Brigham and Women's Hospital, is accused of falsifying data and plagiarizing images across 21 papers.

Top Harvard Medical School neuroscientist Khalid Shah allegedly falsified data and plagiarized images across 21 papers, data manipulation expert Elisabeth M. Bik said.

In an analysis shared with The Crimson, Bik alleged that Shah, the vice chair of research in the department of neurosurgery at Brigham and Women’s Hospital, presented images from other scientists’ research as his own original experimental data.

Though Bik alleged 44 instances of data falsification in papers spanning 2001 to 2023, she said the “most damning” concerns appeared in a 2022 paper by Shah and 32 other authors in Nature Communications, for which Shah was the corresponding author.

Shah is the latest prominent scientist to have his research face scrutiny by Bik, who has emerged as a leading figure among scientists concerned with research integrity.

She contributed to data falsification allegations against four top scientists at the Dana-Farber Cancer Institute — leading to the retraction of six and correction of 31 papers — and independently reviewed research misconduct allegations reported by the Stanford Daily against former Stanford president Marc T. Tessier-Lavigne, which played a part in his resignation last summer.

Bik said that after being notified of the allegations by one of Shah’s former colleagues, she used the AI software ImageTwin and reverse image searches to identify duplicates across papers. Bik said she plans to detail the specific allegations in a forthcoming blog post.
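ImageTwin’s algorithms are proprietary, but the general idea behind automated duplicate detection can be illustrated with a perceptual “difference hash”: an image is reduced to a small grayscale grid, adjacent pixels are compared, and near-identical images yield near-identical bit strings. The toy Python sketch below is a generic illustration under that assumption, not the tool’s actual implementation (the image-shrinking step is omitted, and the images are plain 2D lists rather than real image files):

```python
def dhash(pixels, hash_size=8):
    """Difference hash: compare each pixel to its right-hand neighbour.

    `pixels` is assumed to be a 2D list of grayscale values with
    hash_size rows and hash_size + 1 columns (in practice the image
    would first be shrunk to that size).
    """
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            bits.append(1 if pixels[row][col] < pixels[row][col + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; small distances suggest duplicates."""
    return sum(x != y for x, y in zip(a, b))

# Toy "images": a horizontal gradient and an exact copy of it.
img = [[col * 10 for col in range(9)] for _ in range(8)]
copy = [row[:] for row in img]

# Identical content produces identical hashes (distance 0).
assert hamming(dhash(img), dhash(copy)) == 0
```

In practice a screening tool would hash every figure panel in a corpus and flag pairs whose hash distance falls below a threshold, which is robust to re-encoding and mild rescaling but still requires expert review of each flagged pair.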

In interviews, Matthew S. Schrag, an assistant professor of neurology at Vanderbilt University Medical Center, and Mike Rossner, the president of Image Data Integrity — who reviewed the allegations at The Crimson’s request — said they had merit and raised serious concerns about the integrity of the papers in question.

Shah did not respond to a request for comment for this article.

In an emailed statement Wednesday, Paul J. Anderson, the chief academic officer of Mass General Brigham, which oversees Brigham and Women’s Hospital, did not comment on the specific allegations against Shah but said the hospital “is committed to preserving the highest standards of biomedical research and fostering scientific innovation.”

“We take very seriously any questions, concerns, or allegations regarding research conducted at our hospitals and undertake a robust and confidential process to assess and respond to any claims that are brought to our attention in accordance with hospital policy and federal regulations,” Anderson wrote.

Bik said the 2022 paper contained lifted images from seven papers authored by other scientists and the websites of two scientific vendors.

The 2022 paper contains an image that Bik said was taken from R&D Systems, a company that manufactures antibodies for scientific research. An apparently identical image appears in a 2018 R&D Systems catalog entry obtained by The Crimson.

R&D Systems is not credited for the image in the 2022 article.

Schrag said that not only were the images repeated, but that “the vendor is saying this is a different antibody than the one that the authors are saying it is.”

“This is a really unusual sort of thing that I cannot imagine how this happens by accident,” Schrag added.

Rossner added that he had never seen an allegation of duplication of this sort in 22 years.

“If I were either a research integrity officer or a journal editor, I would want to see the source data,” Rossner said.

Nature Communications, which published the 2022 article, did not respond to a request for comment.

The allegation against the earliest paper — published in 2001, on which Shah is listed as first author — claims that two blots have been copied, magnified, and pasted into two other blots within the same figure.

According to Schrag, this manipulation would change the findings of the study, as it suggests production of a larger abundance of proteins.

The remaining 19 papers contain blot and image duplications within figures in the same paper or repeated from earlier papers authored by Shah, Bik alleged.

In an emailed statement Wednesday, HMS spokesperson Ekaterina D. Pesheva declined to comment on the allegations against Shah, citing a policy against commenting on individual research integrity concerns due to federal and institutional regulations.

She wrote that a research integrity officer will typically respond to concerns of research misconduct to determine whether it “warrants a formal inquiry” led by HMS and the “respective affiliated institution” under federal research misconduct regulations.

“Please note that until proven otherwise, any and all concerns remain simply concerns, and it is critical for the review process to unfold as intended,” Pesheva wrote.

Of the 18 scientific journals that published the articles questioned by Bik, spokespeople for seven — Oncogene, Biophysical Journal, PLOS One, Proceedings of the National Academy of Sciences, Cancer Biology and Therapy, Nature Scientific Reports, and Clinical Cancer Research — said they were aware of and investigating the allegations.

The other eleven journals, including Nature Communications, did not respond to requests for comment.

Correction: February 2, 2024

A previous version of this article misspelled the name of Harvard Medical School spokesperson Ekaterina D. Pesheva.

—Staff writer Veronica H. Paulus can be reached at [email protected] . Follow her on X @VeronicaHPaulus .

—Staff writer Akshaya Ravi can be reached at [email protected] . Follow her on X @akshayaravi22 .



May 9, 2019

In Fraud We Trust: Top 5 Cases of Misconduct in University Research

There’s a thin line between madness and immorality. The idea of the “mad scientist” has taken on a charming, even glorified, image in popular culture. From the campy portrayal of Nikola Tesla in the first issue of Superman, to Dr. Frankenstein, to Dr. Emmett Brown of Back to the Future, there’s no question Hollywood has softened the idea of the mad scientist. So I will not paint the scientists involved in these five cases of research fraud as such. The immoral actions of these researchers didn’t just affect their own lives, but also the lives and careers of innocent students, patients, and colleagues. Academic fraud is not only a crime; it is a threat to the intellectual integrity upon which the evolution of knowledge rests. It also compromises the institution itself, which takes a blow to its reputation for allowing academic misconduct to go unnoticed under its watch. Here you will find the five most notorious cases of fraud in university research from only the last few years.

Fraud in Psychology Research

In 2011, Dutch psychologist Diederik Stapel was found to have committed academic fraud in numerous publications over the course of ten years, spanning three different universities: the University of Groningen, the University of Amsterdam, and Tilburg University.

Among the dozens of studies in question, he most notably falsified data in a study that analyzed racial stereotyping and the effects of advertisements on personal identity. The journal Science published the study, which claimed that members of one racial group stereotyped and discriminated against another more in a chaotic, messy environment than in an organized, structured one. Stapel produced another study claiming that the average person judges job applicants to be more competent if they have a male voice. Both studies were found to be contaminated with false, manipulated data.

Psychologists discovered Stapel’s falsified work and reported that it did not stand up to scrutiny. They concluded that Stapel had taken advantage of a loose system in which researchers could work in almost total secrecy and quietly manipulate data to reach their conclusions with little fear of being contested. Newspapers all over the world had covered Stapel’s research. He had also supervised more than a dozen doctoral theses, all of which have been rendered invalid, compromising the integrity of his former students’ degrees.

“I have failed as a scientist and a researcher. I feel ashamed for it and have great regret,” lamented Stapel to the New York Times. You can read the particulars of this fraud case here.

Duke University Cancer Research Fraud

In 2010, Dr. Anil Potti left Duke University after allegations of research fraud surfaced. The fraud came in waves. First, Dr. Potti lied about being a Rhodes Scholar to obtain hundreds of thousands of dollars in grant money from the American Cancer Society. Then he was caught outright falsifying data in his research after one of his theories for personalized cancer treatment was disproven. The theory was meant to justify clinical trials involving over a hundred patients; once it was disproven, the trials could no longer take place. Dr. Potti falsified data in order to continue the trials and obtain further funding.

Over a dozen papers that he published were retracted from various medical journals, including the New England Journal of Medicine.

Dr. Potti had been working on a personalized cancer treatment he hailed as “the holy grail of cancer.” Many patients’ bodies fail to respond to traditional cancer treatments; personalized treatments offer hope because they are tailored to each patient’s unique physiology and tumor type. Because of this, patients flocked to Duke to register for the trials, and some were told there was an 80% chance that the right drug would be found for them. Trusting Dr. Potti’s trials and drugs as renewed hope for their treatment, many of these patients suffered unusual side effects such as blood clots and damaged joints. Participants later filed a lawsuit against Duke, alleging that the institution had administered ill-conceived chemotherapy.

Duke settled these lawsuits with the families of the patients. You can read details of the case here.

Plagiarism in Kansas

Mahesh Visvanathan and Gerald Lushington, two computer scientists at the University of Kansas, admitted to accusations of plagiarism. They had copied large chunks of their research from the works of other scientists in their field. The plagiarism was so pervasive that even the summary statement of their presentation was lifted from another scientist’s article in a renowned journal.

Visvanathan and Lushington oversaw a program at the University of Kansas in which researchers reviewed and processed large amounts of data for DNA analysis. In this case, Visvanathan committed the plagiarism and Lushington knowingly failed to report it to the university. Learn more about this case here.

Columbia University Research Misconduct

In 2010, Bengü Sezen was finally caught falsifying data after ten years of continuous fraud. Her fraudulent activity was so blatant that she even invented fake people and organizations to support her research results. Sezen was found guilty of more than 20 acts of research misconduct, and about ten of her research papers were retracted for plagiarism and outright fabrication.

Sezen’s doctoral thesis was fabricated entirely to produce her desired results. Her misconduct also damaged the careers of other young scientists who worked with her and spent large portions of their graduate careers trying to reproduce her fabricated results.

Columbia University moved to revoke her Ph.D. in chemistry. Sezen fled the country during the investigation. Read further details about this case here.

Penn State Fraud

In 2012, Craig Grimes pleaded guilty to wire fraud, money laundering, and making false statements to obtain grant money, defrauding the U.S. government of roughly $3 million.

Grimes had persuaded the National Institutes of Health (NIH) and the National Science Foundation (NSF) to grant him $1.2 million for research on blood gases, which helps detect disorders in infants. The U.S. Attorney’s Office revealed that Grimes never carried out this research and instead spent the majority of the funds on personal expenditures. In addition to that $1.2 million, Grimes falsified information to obtain another $1.9 million in grant money through the American Recovery and Reinvestment Act. A federal judge sentenced Grimes to 41 months in prison and ordered him to repay over $660,000 to Penn State, the NIH, and the NSF.

Check out the details about this case here.


Stanford University’s President Investigated Over Research Misconduct Allegations


Thursday December 1, 2022 10:46 am



Betty Márquez Rosales

Stanford University’s president, Marc Tessier-Lavigne, is the focus of an investigation into allegations that manipulated images appeared in at least four neurobiology papers he co-authored.

The investigation was announced soon after The Daily, the university student newspaper, reported the allegations, which have been raised repeatedly over several years and most recently highlighted by Elizabeth Bik, a biologist who also investigates science misconduct.

The university’s Board of Trustees, of which Tessier-Lavigne is a member, is overseeing the investigation, but a Stanford spokeswoman confirmed that he “will not be involved in the Board of Trustees’ oversight of the review,” according to The Chronicle of Higher Education.

In 2015, Tessier-Lavigne submitted corrections to Science , where two of the papers in question were published. Science, however, did not publish them “due to an error on our part,” the Chronicle confirmed.

The timeline for the investigation remains unclear.


Retraction Watch

Tracking retractions as a window into the scientific process

A U.S. federal science watchdog made just three findings of misconduct in 2021. We asked them why.


Retraction Watch readers are likely familiar with the U.S. Office of Research Integrity (ORI), the agency that oversees institutional investigations into misconduct in research funded by the NIH, as well as focusing on education programs.

Earlier this month, ORI released data on its case closures dating back to 2006. We’ve charted those data in the graphics below. In 2021, ORI made just 3 findings of misconduct, a drop from 10 in 2020, which is roughly the average over the past 15 years. Such cases can take years.

As the first chart makes clear, a similar dip in ORI findings of misconduct occurred in 2016. That was then-director Kathy Partin’s first year in the role, and a time of some turmoil at the agency. In an interview with us then , Partin referred multiple times to the agency being short-staffed. Partin was removed from the post in 2017 and became intramural research integrity officer at the NIH in 2018 .

ORI — as has often been the case over the past two decades — is once again without a permanent director. The most recent permanent director, Elisabeth (Lis) Handley, became Principal Deputy Assistant Secretary for Health in the Office of the Assistant Secretary for Health in July 2021.

We asked ORI to explain what’s behind the figures. A spokesperson responded on their behalf.


Data: ORI; graphic by Retraction Watch

What is ORI’s explanation for the fact that there were so few findings in 2021?

After ORI conducts a thorough oversight review for cases in which the institution has submitted its final investigation report, ORI may or may not concur with the institutional findings (that research misconduct occurred or did not occur). ORI also must decide whether to pursue or decline to pursue (DTP) separate administrative actions. ORI may conclude that the misconduct lacks significance (e.g., no published papers or grants) or that PHS funds were not involved and closes the case as a DTP.  ORI’s DTP closure of a case is not an exoneration of the respondent, and an institution can implement its own administrative actions based on its determination of the respondent’s research, scientific, or professional misconduct. In CY 2021, ORI closed 41 cases (Finding of Research Misconduct, No Finding of Research Misconduct, and DTP), compared to 48 in CY 2020 (Finding of Research Misconduct, No Finding of Research Misconduct, and DTP).


Per the ORI , a declined to pursue (DTP) closure “involves a case in which an institution found that research misconduct occurred and may implement administrative actions against the respondent, but during its oversight review, ORI determined that a separate PHS finding of research misconduct was not warranted.” A “no-misconduct closure involves a case in which the institution conducted an investigation and determined that research misconduct did not occur. Based on a preponderance of the evidence during its oversight review, ORI concurred with the institution’s determination.” And an accession “involves a case that was resolved during the assessment or inquiry stage of the institutional proceeding. Generally, during its assessment or inquiry into the allegation(s), the institution determined that there was insufficient evidence to warrant proceeding to an investigation. Based on its subsequent review of the institution’s assessment or inquiry report, ORI concurred with the institution.” Accessions include cases in which ORI decided the case was outside of its jurisdiction.

What is ORI’s explanation for why accessions nearly doubled, but findings declined by more than half?

As nearly every labor sector has experienced, institutional workflows were altered over the past two years of COVID-related building closures and staffing issues. These closures in part affected the ability of some institutions to complete their proceedings within the time limitations as specified in the federal regulations. Although the number of allegations received over that time has changed little, the delays in receiving completed institutional reports meant that ORI could focus on the full range of potential accession closures. ORI notes that some allegations may involve many published papers or grant applications with multiple respondents from various institutions, further enumerating the complexity of a case and completing its review. The number of accessions for any calendar year does not necessarily reflect the year in which ORI received the allegation, when the institution started or completed its research misconduct proceeding, or when ORI initiated its oversight review. Generally, an accession closure involves an institutional proceeding that did not progress to an investigation, and ORI’s oversight review concurred with the institution’s determination that there was insufficient evidence to warrant proceeding to an investigation.  In other cases, ORI may not have jurisdiction (does not involve PHS funded-research or is outside the 42 C.F.R. Part 93 definition of research misconduct). ORI would close such a case while the institution proceeds to an investigation under its own (or other funding agency’s) authority and relevant regulations. ORI closed 52 accessions in CY 2021 after thorough oversight review of the associated allegations and institutional outcomes. The increase in accession closures reflects tireless work carried out at ORI and by institutions.

Is ORI concerned about how long cases typically take to be adjudicated?

ORI recognizes the importance of and is focused on fully addressing allegations of potential research misconduct in the most effective and efficient way possible in accordance with 42 C.F.R. Part 93. ORI also recognizes that ensuring due process and a full, fair, and independent examination of allegations is in the best interest of all involved. It is important to remember that institutional processes take time. Sometimes the investigation must be expanded in scope to consider other possible research misconduct committed by the same or additional respondents or to examine additional papers and grant applications that were not part of the initial allegations. A thorough oversight review, which ORI undertakes when the institutions complete their work, also takes time. ORI hopes to expand its use of technology for file submission, reporting of allegations, and information processing to improve overall processes, efficiencies, and case closure rates in the coming years.  



22 thoughts on “A U.S. federal science watchdog made just three findings of misconduct in 2021. We asked them why.”

An example of the type of thing ORI does not consider misconduct is self-plagiarism, such as the case I reported here… https://psblab.org/?p=611 Essentially, according to ORI, it’s perfectly OK to just publish the exact same data multiple times across different journals, thus gaming the metrics system by which all academics are judged. 42 C.F.R. Part 93.103 needs to be rewritten!

Clearly ORI needs more funding and more staff. I am wondering if some sort of a tax on institutions receiving NIH grants could be levied to fund it?

Even cases that have been resolved by the DOJ are still pending at ORI. One example would be the Sam W. Lee case (misconduct in grant proposals), which was settled with the DOJ, but there is still no finding of misconduct from ORI. Another case that I have been waiting to see resolved is the Xianglin Shi/Zhuo Zhang case at the University of Kentucky, which will be interesting reading because, to my knowledge, ORI has never found two respondents responsible for the same research misconduct.

Scotus (re the interesting funding idea): ORI was (is?) actually funded out of a fixed percentage of the annual NIH budget, nice because that’s generally politically immunized. But irrespective of congressional intent, ORI is administratively within – and its budget controlled by – OASH (Office of Assist Secretary of Health) . . . . which is at its own separate mercy of congressional appropriation process. So OASH controls ORI spending, and you can figure out the rest!

Having read the spokesperson’s comments, MEGO (My Eyes Glaze Over).

This RW story is based on the ORI spokesperson’s account, so one wouldn’t think that this had anything to do with the fact that -in its infinite wisdom (and just before the pandemic) – HHS downgraded ORI’s facilities by moving out of its 7th floor space down to the 2nd floor, eliminating and replacing individual offices with open carrels in that process. (Or so I was told then and had asked RW to look into at the time.)

Hard enough to work on confidential files, and d**n near impossible to review physical case evidence which must be secured in the file from. OK, so let’s add Covid to the mix, and it is not hard to see how the geniuses at HHS really screwed over ORI investigations. The facts are easier to understand than the spokesperson’s fine explanation

Fascinating. Thank you for the insight into the weaponization of open-plan offices.

I had heard of the idea of open offices before I left in 2013, but no one then thought it would be implemented by HHS since it was deemed so crazy and unworkable. (BTW: for clarification, my “file from” is ‘iPadese’ for “file room”.). But this had to be a short sighted decision from upstairs at HHS, fully independently from the troops doing investigations.

Given that it was made during the Trump administration, I think there’s a good chance it was done for the sole purpose of frustrating and demoralizing government workers.

Honestly the shocking thing to me about this is the idea that anybody in the government not in senior leadership still had individual offices.

It’s been almost a decade since I left federal service but at the time the cubicles were disappearing in favor of a completely open office, and there was serious talk of hot-desking.

Jake, what do you mean by “hot-desking,” and would that, in an open environment, protect confidential phone conversations or the need to spread the original physical evidence across your desk (and complete office) when reviewing these cases? Would a “hot desk” protect the scientist-investigator from even a less-than-clever respondent’s lawyer impeaching their credibility in an appeals hearing: “So Dr, how can we be sure my client’s data hasn’t been altered, since he swears those are not accurate”? Finally (and I don’t know the answer to this question), do the professional scientific staff recruited at NIH work in cubicles, and at what GS level?

Hot-desking is where nobody has an assigned desk. First-come, first-serve at the start of every day. It could be combined with open-plan, cubicles, OR even private offices, but let’s face it, it’s most likely to be combined with open plan. During the workday, it’s no less private (could anything get LESS private than open-plan?), but the lack of assigned spaces means that you can’t just lock up your desk drawer or filing cabinet at the end of the day. I have no idea where confidential materials would be stored overnight. Maybe take them home? Yeah, sounds good, I can’t see anything going wrong…

Thanks Adele, I had not heard that term, which describes a come-and-go, pick-up-and-leave, portable work space. I don’t know how they have managed to make things work in the ‘new’ ORI facilities, but at my time (1993-2013) we (all 8-10 of us) worked on multiple case files at one time, confidential records spread, strewn, and stashed all over our individual secure and locked office spaces. I was told the only individual space was the computer forensics room we established. I can fully understand why the HHS spokesperson was unnamed. RW should look into who pushed those decisions downtown.

NSF moved into a new building in 2018. The initial plan was for open office but that was nixed, the argument being that program officers needed privacy when conducting panels, telephone calls with PIs, etc. The support/administrative staff, however, were put into cubicles. And – and I’m not making this up – low grade levels got cubicles with 66” barriers, high grade levels got 80” barriers.

Can somebody explain how come Fazlul Sarkar, one of the most disgraced “scientist” due to the extent of his research misconduct and retractions, cannot be found in the ORI summaries? At the time his papers were flagged, he had at least 4 R01s and also DOD funding. No investigation of Sarkar from ORI? No punishment for the millions of dollars that were wasted in fraudulent research? No accountability? How is this possible? Something doesn’t add up.

http://retractionwatch.com/2020/08/07/cancer-researcher-hit-with-10-year-ban-on-federal-us-funding-for-nearly-100-faked-images/

Wang was initially Sarkar’s PhD student and then he became his postdoc. Why was he targeted by ORI and not the advisor? The PI of the R01 and several DOD grants was Sarkar, and Zhang was not a co-author in many of the retracted papers. I don’t believe Zhang was held responsible by the investigation at Wayne, only Sarkar. It seems that Sarkar got away with the crime and the student paid the price. All very strange.

I spoke with a staffer with the House Investigations and Oversight Committee on Science, Space, and Technology late in 2021 and while he didn’t share any details, my impression was that they might be working on legislation aimed at addressing some of this and on research integrity in general.

In my discussion with him, I shared a list of ideas that might help with research funded by NIH and other US agencies. As I’m not a researcher (which he knew), the list may be misguided, but here it is:

More clear-cut guidance from the ORI about ethics in publishing. Perhaps adopt these as a code of ethics: https://www.nature.com/nature-portfolio/editorial-policies . While this may only be enforced on NIH-funded papers, having this as US policy would nudge US institutions (at least) to follow suit.

Require research to be published in open-access journals; public pays for it, public should get access

Require data retention for 10 years with a custodian (can be institution or for-profit data warehouse) pre-identified in grant requests and published paper

Require data transparency – with anonymization for patient data

Agreement that funded researchers must submit to audit or examination upon request, or grant money is clawed back

Whistleblower protections in writing before grants awarded

Perhaps adopt the “Bik scale” https://scienceintegritydigest.com/2020/01/08/oops-i-did-it-again/

In my experience working on ORI images with Journals, an institutional data retention requirement (by the funders such as the NIH) would be a key first step. But start with the key motivation in ‘science.’ That of course is “retraction” (of occasional discussion by RW watchers), and that solely is the purview of academic/research institutions and (IMHO) not a call for government. But a funding requirement for data retention would make the key step much easier for Journals.

Specifically, Journals can already exercise common sense (and one in their own self interest), by simply requiring the corresponding author to accept, pre-publication, an immediate retraction ‘for cause’, namely if 1) a legitimate question is raised in post publication peer review and 2) the data is then found to be ‘missing.’ No debate about the results, easy-peasey, and all within academic prerogatives. If the journals stepped up to the plate, 70% of ORI’s investigative work (that involving images cases) would evaporate.

Such an institutional data retention requirement by funders would facilitate what publishers can already be doing to protect their investment and inspire coauthors to review the primary data.

The NIH has a new policy on data management that will go into effect in 2023

https://grants.nih.gov/grants/guide/notice-files/NOT-OD-21-013.html

The policy will require “storing data of sufficient quality to validate and replicate scientific findings” with an expectation that data will be digital or digitized.

Use of a data management system would establish the provenance of data and make it much simpler to identify fraud. Based on the amount of gel and western blot fraud I can only imagine how many bogus graphs and tables of data the fraudsters are churning out.

There are literally hundreds, if not thousands, of fake, fraudulent, or otherwise “misconducted” research papers published every year. Everybody knows this. It is also clear that many of these cases are extremely obvious, even if others might be harder to identify. The ORI finds three (3) cases in a whole year… That seems like an institution that has no reason to exist if it can’t work more effectively than that.

I suggest you (re?)read the federal regulations defining the limits of ORI’s jurisdiction and authority.

(Indulge my aggressive mock cynicism here: Historically, the ORI was created to sweep up after the annual NIH appropriation funding parade passed each year, much to the then dismay of the academic research community. Are you saying it is ORI’s (and government’s) task to enforce failures to take the really simple steps that academics, research institutions, and journals could painlessly and easily implement to ensure their standards keep up with research practices in the digital age?)

In addition, last year ORI lost one of its top scientist-investigators of 12 years, described by the former ORI Director as having “brought incredible tenacity, a willingness to teach others and to share her knowledge, strategic understanding of the importance of working with others like our Office of General Counsel and RIOs on the successful outcome of cases, and a keen and strong intellect to our work. The outcomes she achieved with her cases are something I wish the public knew about public servants like her. She fought hard to make sure that biomedical research results can be counted upon, by painstakingly chasing down fabrication, falsification and plagiarism in hundreds of cases across multiple disciplines and with thousands and thousands of images.” https://ori.hhs.gov/blog/personnel-announcement-ori-director

Thanks, Anne.


Deceiving scientific research, misconduct events are possibly a more common practice than foreseen

Alonzo Alfaro-Núñez

1 Department of Clinical Biochemistry, Naestved Hospital, Ringstedgade 57a, 4700 Naestved, Denmark

2 Section for Evolutionary Genomics, GLOBE Institute, University of Copenhagen, Øster Farimagsgade 5, 1353 Copenhagen K, Denmark

Associated Data

Not applicable.

Today, scientists and academic researchers experience enormous pressure to publish innovative, ground-breaking results in prestigious journals. This pressure can distort the general understanding of how scientific research should be done: the basic rules of transparency may be neglected, and data duplication and co-authorship rights may be compromised. As such, misconduct may occur more frequently than foreseen, because these experiences are rarely shared or discussed openly among researchers.

While there are concerns about the health and transparency implications of the pressure routinely imposed on researchers, there is a general acceptance that researchers must endure it in order to survive in the competitive world of science. This is even more the case for junior and mid-career researchers who have only recently begun their adventure as independent investigators. Only the slightest fraction manage, after many years of fierce rivalry, to obtain a long-term or, even less probably, a permanent position. There is a vicious circle: an excellent record of good publications is needed to obtain research funding, but how can one produce pioneering research during those first years without funding? Many may argue this is a necessary process to ensure good-quality scientific investigation; perhaps, but perseverance and resilience may not be the only values needed when rejection arrives year after year.

There is a general culture in which scientists rarely share previous bad experiences, particularly those associated with misconduct, as these may not be considered relevant or topical to the scientific community. In what follows, a recent misconduct experience is shared, along with a few reflections and suggestions, in the hope that other researchers might be spared unnecessary and unpleasant times.

Scientists are under great pressure to publish not only high-quality research but also a large number of publications, the more the merrier, within the first years of their careers in order to survive in the competitive world of science. This pressure might lead young, less experienced researchers to take “shortcuts” that can result in acts of misconduct. The aim of this article is to report a case of misconduct not only to the concerned stakeholders but also to the research community as a whole, in the hope that other researchers might avoid similar experiences. Moreover, based on existing literature and the present experience, some basic recommendations are shared as a reminder of the rules of transparency, data duplication, and authorship rights that can help prevent misconduct.

Welcoming collaboration

During the first months of 2021, already in the second year of the COVID-19 pandemic, with most research institutes and labs in Europe and around the world still in lockdown [ 1 ], I received an email from a young researcher overseas. This young fellow was based in Bangladesh, South Asia, a country in which I had never collaborated before. He was interested in a potential collaboration, had many ideas, and proved to be a very energetic person, writing to me daily and even several times a day during the first weeks.

There were obviously some suspicions about the nature of this collaboration, but a basic background check was done, and this fellow seemed to be legitimate. Thus, after a few weeks of discussing research ideas back and forth, I welcomed the collaboration. For the first few months many ideas were elaborated and discussed, and we began to draft two review manuscripts simultaneously. In no time, it felt as if a promising, long-standing collaboration had been born. However, it also required additional time because of the linguistic and cultural barrier: sometimes the main message was lost in translation, and this was reflected in the text of the various manuscript versions. We repeatedly discussed the importance of transparency, the correct use of previously published data, and the general rules of authorship and citation, especially when producing a new review. These errors were corrected, he assured me he fully understood, and I trusted him.

After some time, the enthusiasm started to decline, and the highly motivated collaborator began rushing to complete the work regardless of quality, especially once a third manuscript came into play. I was not willing to sacrifice quality, so I started using more of my personal time to complete the different manuscripts; I felt committed. After six months or so, the first of the three manuscripts was ready and was submitted to a high-impact peer-reviewed journal, for a special issue to which I had been invited months earlier. A few months later, the second manuscript followed the same steps.

By mid-April 2022, the first manuscript had just been accepted, the second was in its second round of review, and the third and last was ready for submission. I cannot deny the satisfaction I felt at a job properly done in record time (by my personal standards).

Deceptive surprise

In the final hours before submitting our last manuscript, I carried out the mandatory final inspection. I noticed something odd: two new citations had been added at the last minute, and I had not approved that change. More curious still, both citations carried the new collaborator's name. I immediately searched for the two mysterious documents and found a book chapter and a peer-reviewed publication. To my surprise, the titles of these two works were nearly identical to the topic we had just finished, and his name appeared as first author. Neither document was open access, and both had been published recently, one of them less than a week earlier. Furthermore, our manuscript, the very document I was supposed to submit that same day, contained six figures and four tables, all generated by our collaborative work. The book chapter had exactly the same figures and tables, merely in a different order, and the data and content were nearly identical. The wording was different, and there were some other co-authors from his region, but the content and the underlying idea were the same.

Over the next hours, I went back to the other two manuscripts. All my fears were confirmed: my new collaborator had been systematically committing fraud, replicating our manuscripts with the same data and publishing them on his own, using my very ideas and sentences.

I confronted him, copying all the other co-authors into these communications; I wanted an explanation, a reason for these actions. The three manuscripts had been built on an international collaboration in which other parties had actively participated, and now we were all compromised. His first reaction was to claim that he had not known this was an illicit act; then, silence. No satisfactory answer was ever received and, more troubling still, it seemed that some of the other co-authors neither cared nor were surprised.

The aftermath of deception

Over the following days, I drafted several letters describing the misconduct to the editors of the different journals, to the preprint services and, especially, to the main affiliations of this fraudulent person. The two manuscripts were withdrawn from their respective journals right away; together with the third, none of the documents will ever be published. There is a long history of documentation showing that withdrawals and retractions of scientific manuscripts may be the most common form of silently reporting scientific misconduct [2, 3], and now I was part of it. Editors of the journals and publishing houses where the duplicated documents had appeared responded that they would investigate the case. However, after several months of waiting, and despite multiple letters of complaint providing all the evidence needed to prove the misconduct, no official sanctions have been imposed by any of the journals, and the documents remain available online. Editors have a responsibility to pursue suspected misconduct in submitted or published manuscripts; they are not, however, responsible for conducting the investigation or deciding whether misconduct occurred [4].

The preprint services' response was clear and conclusive: regardless of the evidence provided, documents published in preprint format cannot and will not be removed. Our names will now remain associated with this person for posterity; another wonderful discovery. The early release of results as preprints, without peer review, is a long-standing and well-known concern [5–7]. For the past few years I had favoured and embraced early preprint publication; this experience has made me reconsider and entirely reverse that position. I find it unacceptable that, despite all the evidence of research misconduct, fraud and, especially, duplication of data, most preprint services make retracting a preprint impossible.

As for the consequences or sanctions imposed on this "researcher" by his own institutions, these too remain unknown: no reply has been received to date. Additionally, some of his personal collaborators, who had been added as co-authors during the editing of the manuscripts because they had purportedly "intellectually contributed" to the study, contacted me in the first weeks after the withdrawal. They were unhappy with the decision and complained: "Is it really necessary to retract the documents entirely, in particular one manuscript already accepted and a second in review? Why was this decision not put to a vote among the co-authors?" They did not consider the duplication sufficient reason for withdrawal and claimed, "It was a rushed and wrong decision." My answer was simple: this was a clear act of research misconduct, the data had been duplicated and misused, and my decision could not be clouded by the grief of losing three publications. Besides, I was the last and corresponding author on all three manuscripts, so the responsibility and the final decision rested with me. As a curious additional detail, the editors of the journals in which the two duplicated manuscripts appeared all come from the same region as this person. Taken together, these facts lead me to conclude that misconduct may be relatively more common in some parts of the world, and that research culture may play an important role in such practices, yet we are still afraid to discuss it [8]. There are no rigorous or systematic controls to prevent a single person from manipulating a dataset, duplicating it with slight modifications to the text, and publishing it in different journals, especially if the time between submissions is short. There are thousands of journals, with many more thousands of editors, on a seemingly infinite number of online platforms.
Decisions over whether to retract or modify a study are more likely to take years than months; in that time, the work can harmfully misinform [9] and damage the reputation of researchers [3], if any sanction is imposed at all in the end [10]. On this basis, the author who duplicated our work and published it on his own may simply get away with it: two fraudulent copy-and-paste publications and zero consequences.

Hundreds of hours of work and nearly a year of effort were lost in an instant. Like many others, I believe I work and interact with researchers who share values of honesty, openness and accountability while striving to establish themselves as independent researchers producing good science. Yet every aspect of science, from the framing of a research idea to the publication of a manuscript, is susceptible to influences that can lead to misconduct [11]. By withdrawing three manuscripts at once, now associated with misconduct, my research colleagues and I will suffer the consequences of academia's current "publish or perish" culture [12].

Recommendations to avoid unpleasant research events

With two official retractions at the editorial offices of two major journals, and three preprint documents that I cannot get removed, all associated with fraud and scientific misconduct, I am probably the least qualified person, with the least authority, to offer any feedback, let alone a short list of recommendations for preventing misconduct in research. Nevertheless, here I am. There are many general guidelines and basic rules for preventing, avoiding and reporting misconduct [3, 13–15]; interested readers can consult the reference list below to explore the subject more deeply. Using these guidelines as a backbone, I present three main recommendations below.

The first, and possibly most important, recommendation: despite the experience shared above, always welcome collaboration, but only after a thorough background check. This may sound contradictory, but contemporary science is built on collaboration and the interdisciplinary combination of fields [16]; one bad experience and one "rotten apple" cannot be allowed to disrupt the development of scientific research. Of course, it is mandatory to stay vigilant and to carefully investigate the background, interests [9] and history of each new door that opens along the way. Welcome collaboration, but cautiously.

The second recommendation: investigate the institution and location of any incoming collaboration. As stated above, the cultural background [8], and thus the location of the collaborating institution, may strongly influence the final outcome. Most countries in Europe and the United States have well-defined guidelines [3, 10], which vary considerably in their principles and are ultimately enforced through each institution's research policies. In some regions of the world, however, the policies and regulations concerning misconduct, and its implications and consequences, are not yet well established [17]. Avoid those.

My third recommendation, possibly the most relevant of all: do not take it for granted that other researchers are fully aware that certain actions constitute misconduct. My biggest mistake was to believe that other researchers knew, or cared, about the basic rules on data duplication, transparency and respect for authorship rights. Ignorance still accounts for a large share of research misconduct [11, 18]. Never assume that others know and respect the broad spectrum of actions that constitute misconduct.

Two additional personal recommendations. Stay away from review manuscripts and book chapters; avoid them at all costs. And consider very carefully whether to share your results as an early-release preprint.

Conclusions

There is much to change in the existing research environment if situations like this are to stop happening. Young scientists need to be inspired and motivated to lead by example, grounded in principles of integrity, ethical values, transparency and respect, rather than driven by the current climate of rejection and extreme pressure. The pressure to secure external funding and to publish in top-tier journals stands among the most common stressors contributing to research misconduct [15, 19]. The same research culture that creates this pressure to publish and to obtain funds also fosters the culture of silence that leads us to ignore and avoid the topic of misconduct in research. While there is general concern, and scientific journals attempt to take situations like this seriously, there should also be a more open space in which to inform junior, and even senior, researchers about these predatory, thieving research practices.

Manipulating and duplicating data to inflate one's academic record is a desperate and shameless act, and it truly represents scientific misconduct and fraud. Unfortunately, there is a general upward trend in research misconduct [13], which now accounts for the majority of withdrawals of modern scientific publications [20]. I would like to believe that even good people can do bad things under extreme pressure. Nevertheless, would that justify misconduct and fraud? Never!

Acknowledgements

Special thanks to Esther Agnete Jensen, Therese Kronevald, Stina Christensen, Aksel Skovgaard, Morten Juel, Jesper Clausager Madsen and Alonso A. Aguirre for their support and advice. The author would also like to thank the three anonymous reviewers for their comments, feedback and improvements.

Author contributions

The author read and approved the final manuscript.

Authors’ information

Web of Science Researcher ID H-2972-2019.

This research received support from the Department of Clinical Biochemistry at Naestved Hospital, Region Sjaelland.

Availability of data and materials

Declarations

The author gives full consent for publication.

The author declares no competing interests.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A compilation of articles regarding Research Misconduct issues. 

This page offers newsworthy topics for the Responsible Conduct of Research and Research Misconduct. Note: Due to the nature of web page evolution, some links may be broken.

Research Misconduct News



Science stands on shaky shoulders with research misconduct

Research misconduct poisons the well of scientific literature, but finding systemic ways to change the current “publish or perish” culture will help.

July 4, 2024 Drug Discovery News (DDN) Stephanie DeMarco, PhD

I distinctly remember the day I saw a western blot band stretched, rotated, and pasted into another panel. Zoomed out, it looked like a perfectly normal blot; the imposter band sat amongst the others like it had always been there.

Sitting at a long table with the other graduate students on my training grant, I watched as our professor showed us example after example of images from published scientific papers that had been manipulated to embellish the data. I really appreciate that course and the other research integrity courses I took during my research training for teaching me and my peers how to spot bad science and what to do when we encounter it. It made me a better scientist when I was in the lab, and now, it makes me a better journalist.

When bad science infiltrates the publication record, researchers unwittingly build their own research programs around shaky science. Not only does this waste researchers’ time and money, but it affects real people’s lives. 



Implementing statcheck during peer review is related to a steep decline in statistical reporting inconsistencies

June 20, 2024 PsyArXiv Preprints Michele B. Nuijten and Jelte Wicherts

We investigated whether statistical reporting inconsistencies could be avoided if journals implement the tool statcheck in the peer review process. In a preregistered pretest-posttest quasi-experiment covering over 7000 articles and over 147,000 extracted statistics, we compared the prevalence of reported p-values that were inconsistent with their degrees of freedom and test statistic in two journals that implemented statcheck in their peer review process (Psychological Science and Journal of Experimental and Social Psychology) and two matched control journals (Journal of Experimental Psychology: General and Journal of Personality and Social Psychology, respectively), before and after statcheck was implemented. Preregistered multilevel logistic regression analyses showed that the decrease in both inconsistencies and decision inconsistencies around p = .05 is considerably steeper in statcheck journals than in control journals, offering preliminary support for the notion that statcheck can be a useful tool for journals to avoid statistical reporting inconsistencies in published articles. We discuss limitations and implications of these findings.
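The core of such a check is recomputability: a reported test statistic and its degrees of freedom determine the p-value, so the reported p can be recomputed and compared at its reported precision. statcheck itself is an R package covering t, F, r, χ² and z tests; as a rough, hypothetical illustration of the idea for a z statistic only (this is not statcheck's actual code):

```python
import math

def consistent_z(z, reported_p, decimals=3):
    """Recompute a two-tailed p-value from a z statistic and check
    whether it matches the reported p at its reported precision."""
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    # Two-tailed p-value: P(|Z| >= |z|)
    recomputed = 2.0 * (1.0 - phi)
    # Consistent if both values agree after rounding to the reported precision
    return round(recomputed, decimals) == round(reported_p, decimals)

# z = 2.10 gives p ≈ .0357, so "p = .036" is consistent,
# while a reported "p = .02" would be flagged as an inconsistency.
print(consistent_z(2.10, 0.036))             # True
print(consistent_z(2.10, 0.02, decimals=2))  # False
```

A "decision inconsistency", the more serious error type the study tracks, is the special case where the reported and recomputed p-values fall on opposite sides of the .05 significance threshold.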

Interviews with staff and students at three elite Chinese universities revealed a sense of pressure to publish. Credit: Hao Qunying/Costfoto/Sipa USA via Alamy

Elite researchers in China say they had ‘no choice’ but to commit misconduct

Anonymous interviewees say they engaged in unethical behaviour to protect their jobs — although others say the study presents an overly negative view.

June 11, 2024 Nature Smriti Mallapaty

“I had no choice but to commit [research] misconduct,” admits a researcher at an elite Chinese university. The shocking revelation is documented in a collection of several dozen anonymous, in-depth interviews offering rare, first-hand accounts of researchers who engaged in unethical behaviour — and describing what tipped them over the edge. An article based on the interviews was published in April in the journal  Research Ethics 1 .

The interviewer, sociologist Zhang Xinqu, and his colleague Wang Peng, a criminologist, both at the University of Hong Kong, suggest that researchers felt compelled, and even encouraged, to engage in misconduct to protect their jobs. This pressure, they conclude, ultimately came from a Chinese programme to create globally recognized universities. The programme prompted some Chinese institutions to set ambitious publishing targets, they say.

The article offers “a glimpse of the pain and guilt that researchers felt” when they engaged in unethical behaviour, says Elisabeth Bik, a scientific-image sleuth and consultant in San Francisco, California.

Karen Ashe of the University of Minnesota Twin Cities stands by the conclusions of her team's 2006 paper. JERRY HOLT/STAR TRIBUNE VIA GETTY IMAGES

Researchers plan to retract landmark Alzheimer’s paper containing doctored images

Senior author acknowledges manipulated figures in study tying a form of amyloid protein to memory impairment

June 4, 2024 Science Charles Pillar

Authors of a landmark  Alzheimer’s disease research paper  published in  Nature  in 2006 have agreed to retract the study in response to allegations of image manipulation. University of Minnesota (UMN) Twin Cities neuroscientist Karen Ashe, the paper’s senior author, acknowledged in a  post  on the journal discussion site PubPeer that the paper contains doctored images. The study has been cited nearly 2500 times, and would be the most cited paper ever to be retracted, according to  Retraction Watch data .

“Although I had no knowledge of any image manipulations in the published paper until it was brought to my attention two years ago,” Ashe wrote on PubPeer, “it is clear that several of the figures in Lesné et al. (2006) have been manipulated … for which I as the senior and corresponding author take ultimate responsibility.”

Biomedical paper retractions have quadrupled in 20 years — why?

Unreliable data, falsification and other issues related to misconduct are driving a growing proportion of retractions.

May 31, 2024 Nature Holly Else

The retraction rate for European biomedical-science papers increased fourfold between 2000 and 2021, a study of thousands of retractions has found.

Two-thirds of these papers were withdrawn for reasons relating to research misconduct, such as data and  image manipulation  or  authorship fraud . These factors accounted for an increasing proportion of retractions over the roughly 20-year period, the analysis suggests.

“Our findings indicate that research misconduct has become more prevalent in Europe over the last two decades,” write the authors, led by Alberto Ruano‐Ravina, a public-health researcher at the University of Santiago de Compostela in Spain.

Copy-and-Paste: How Allegations of Plagiarism Became the Culture War’s New Frontier

Harvard had already found itself in the crossfires of the culture war. But with new software at their disposal and a trove of unscrutinized scholarship to dive into, the plagiarism allegations against Claudine Gay had opened up a new frontier.

May 23, 2024 The Harvard Crimson Angelina J. Parker and Neil H. Shah

Plagiarism is a cardinal offense for academics. In December, it also became the latest cudgel in the conservative culture war on Harvard and diversity, equity, and inclusion.

The development could not have come at a worse time for the University. Harvard was  struggling to navigate public fallout  from former President Claudine Gay’s now-infamous  congressional hearing . The University was under a national microscope like never before, and politicians, alumni, and Harvard affiliates were  calling for Gay’s resignation .

And amidst it all — as the Harvard Corporation met to discuss Gay’s future at the University — right-wing activist Christopher F. Rufo and journalist Christopher Brunet hit publish on a piece that would add a new element to the controversy:  allegations  that Gay had plagiarized large sections of her Ph.D. dissertation at Harvard.

Why Scientific Fraud Is Suddenly Everywhere

May 21, 2024 Intelligencer Kevin T. Dugan

Junk science has been forcing a reckoning among scientific and medical researchers for the past year, leading to thousands of retracted papers. Last year, Stanford president Marc Tessier-Lavigne resigned amid  reporting  that some of his most high-profile work on Alzheimer’s disease was at best inaccurate. (A probe commissioned by the university’s board of trustees later  exonerated  him of manipulating the data).

But the problems around credible science appear to be getting worse. Last week, scientific publisher Wiley  decided to shutter  19 scientific journals after retracting 11,300 sham papers. There is a large-scale industry of so-called “paper mills” that sell fictive research, sometimes written by artificial intelligence, to researchers who then publish it in peer-reviewed journals — which are sometimes edited by people who had been placed by those sham groups. Among the institutions exposing such practices is Retraction Watch, a 14-year-old organization co-founded by journalists Ivan Oransky and Adam Marcus. I spoke with Oransky about why there has been a surge in fake research and whether fraud accusations against the presidents of Harvard and Stanford are actually good for academia.

Pay researchers to spot errors in published papers

Borrowing the idea of ‘bug bounties’ from the technology industry could provide a systematic way to detect and correct the errors that litter the scientific literature.

May 21, 2024 Nature Malte Elson

In 2023, Google awarded a total of US$10 million to researchers who found vulnerabilities in its products. Why? Because allowing errors to go undetected could be much costlier. Data breaches could lead to refund claims, reduced customer trust or legal liability.

It’s not just private technology companies that invest in such ‘bug bounty’ programmes. Between 2016 and 2021, the US Department of Defense awarded more than US$650,000 to people  who found weaknesses in its networks .

Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In our industry, too, the costs of undetected errors are staggering.

‘Grimpact’: psychological researchers should do more to prevent widespread harm

Researchers carefully evaluate ethics for study participants, but Alon Zivony argues we need to consider wider guidelines for socially responsible science.

May 17, 2024 The British Psychological Society (the psychologist) Dr. Alon Zivony

In a recent study, US researchers claimed to have found that Black and Hispanic Americans are not more likely to be fatally shot by police officers than White Americans. Unsurprisingly, this study got a lot of attention with over 100 news outlets covering it and millions of people  discussing it  on social media. Meanwhile, scientists criticised the study for its analyses, claiming they were so flawed that they invalidated the conclusions entirely. At first, the authors rejected the criticisms.

But then, something almost unprecedented happened: in response to the public debate, the authors decided to retract their paper due to 'the continued use' of the work to 'support for the idea that there are no racial biases in fatal shootings, or policing in general'. In other words, this highly visible paper was retracted, not because of flaws in the methodology, but because of ethical concerns about its adverse impacts on society.

Scientists Possess Inflated Views of Their Own Ethics

Scientists are many things. Being unbiased isn’t one of them.

May 6, 2024 Psychology Today Matt Grawitch, Ph.D.

A recent  Psychology Today  post by  Miller (2024)  discussed the results of a research study 1  that included a sample of more than 10,000 researchers from Sweden. Respondents were provided with a description of ethical research practices (Figure 1) and asked to rate (1) how well they applied ethical research practices relative to others in their field and (2) how well researchers in their field applied ethical research practices relative to those in other fields.

The study itself was not overly complex (in fact, each rating was just a single item). When it came to rating their own application of research ethics, 55 percent rated themselves as equal to their peers, close to 45 percent rated themselves as better, and less than 1 percent rated themselves as worse. When it came to assessing others in their field, 63 percent rated their field as similar to others, 29 percent rated their field as better, and close to 8 percent rated their field as worse.

How reliable is this research? Tool flags papers discussed on PubPeer

Browser plug-in alerts users when studies — or their references — have been posted on a site known for raising integrity concerns.

April 29, 2024 Nature Dalmeet Singh Chawla

A free online tool released earlier this month alerts researchers if a paper cites studies that are mentioned on the website  PubPeer , a forum scientists often use to raise integrity concerns surrounding published papers.

Studies are usually flagged on PubPeer when readers have suspicions, for example about  image manipulation ,  plagiarism , data fabrication or  artificial intelligence (AI)-generated text . PubPeer already offers its own browser plug-in that alerts users if a study that they are reading has been posted on the site. The new tool, a plug-in  released on 13 April by RedacTek , based in Oakland, California, goes further — it searches through reference lists for papers that have been flagged. The software pulls information from many sources, including PubPeer’s database; data from the digital-infrastructure organization Crossref, which assigns  digital object identifiers  to articles; and  OpenAlex , a free index of hundreds of millions of scientific documents.
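The screening step described above reduces to a lookup: collect the DOIs in a paper's reference list and intersect them with the set of DOIs known to have PubPeer threads. A minimal offline sketch of that idea (hypothetical illustration only; the real plug-in queries PubPeer, Crossref and OpenAlex live):

```python
def flag_references(reference_dois, flagged_dois):
    """Return the cited DOIs that appear in a set of flagged DOIs."""
    # DOI names are case-insensitive by specification, so normalise case
    # before comparing.
    flagged = {doi.lower() for doi in flagged_dois}
    return [doi for doi in reference_dois if doi.lower() in flagged]

if __name__ == "__main__":
    # Hypothetical example DOIs, not real papers.
    refs = ["10.1000/example.1", "10.1000/example.2", "10.1000/example.3"]
    flagged = {"10.1000/EXAMPLE.2"}
    print(flag_references(refs, flagged))  # ['10.1000/example.2']
```

In practice the hard part is not this comparison but assembling the inputs: resolving a paper's reference list to DOIs (via Crossref or OpenAlex metadata) and keeping the flagged-DOI set current against PubPeer's database.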

So you’ve found research fraud. Now what?

Harvard dishonesty researcher Francesca Gino faked her research. But she still has a lot to teach us.

April 26, 2024 Vox Kelsey Piper

When it is alleged that a scientist has manipulated data behind their published papers, there’s an important but miserable project ahead: looking through the  rest  of their published work to see if any of that is fabricated as well. 

After dishonesty researcher Francesca Gino was placed on leave at Harvard Business School last fall following allegations that  four of her papers contained manipulated data , the people who’d co-authored other papers with her scrambled to start double-checking their published works. 


One Scientist Neglected His Grant Reports. Now U.S. Agencies Are Withholding Grants for an Entire University.

April 10, 2024 The Chronicle of Higher Education Francie Diep

The National Institutes of Health, the Office of Naval Research, and the U.S. Army are withholding all of their grants from the University of California at San Diego because one scientist failed to turn in required final reports for two of his grants, according to a message sent to the campus community on Tuesday.

“This action is the result of one Principal Investigator’s extended non-submission of final technical reports for two awards,” Corinne Peek-Asa, vice chancellor for research and innovation, wrote in the message. “If you are a PI receiving a new or continuing award from one of these agencies, you will receive a notice that the award will be delayed.”

Students rally on the campus of UCLA in 2016 to protest the handling of a sexual-harassment case. Credit: LUIS SINCO, LOS ANGELES TIMES, GETTY IMAGES

One Way to Stop ‘Passing the Harasser’? Require Colleges to Ask About It.

April 8, 2024 The Chronicle of Higher Education Michael Vasquez

Higher education has long been dogged by the “pass-the-harasser” phenomenon, in which employees found responsible for sexual misconduct have been allowed to quietly depart their colleges, only to be hired by other campuses who knew nothing of their misdeeds. Sometimes the misconduct continues.

That is slowly changing. California is the latest state to consider enacting a law that would require colleges to contact job applicants’ current or past employers to ask about policy violations. Assembly Bill 810, part of  a larger package of anti-harassment legislation , has been passed by the state’s lower chamber and is now in the Senate.

Physicist Ranga Dias was once a rising star in the field of superconductivity research. Credit: Lauren Petracca/New York Times/Redux/eyevine

Exclusive: official investigation reveals how superconductivity physicist faked blockbuster results

The confidential 124-page report from the University of Rochester, disclosed in a lawsuit, details the extent of Ranga Dias’s scientific misconduct.

April 6, 2024 Nature Dan Garisto

Ranga Dias, the physicist at the centre of the room-temperature superconductivity scandal, committed data fabrication, falsification and plagiarism, according to an investigation commissioned by his university. Nature’s news team discovered the bombshell investigation report in court documents.

The ten-month investigation, which concluded on 8 February, was carried out by an independent group of scientists recruited by the University of Rochester in New York. They examined 16 allegations against Dias and concluded that it was more likely than not that in each case, the physicist had committed scientific misconduct. The university is now attempting to fire Dias, who is a tenure-track faculty member at Rochester, before his contract expires at the end of the 2024–25 academic year.


Why I foster multiple lines of communication with students in my lab

April 4, 2024 Science Denis Meuthen

When I started my faculty position, I was excited to be leading my own lab—and nervous. I’m legally deaf and rely on lip-reading for verbal communication. I had managed fine as a graduate student and postdoc, though not without misunderstandings and challenges. But leading a team was different. I worried about whether I would be able to communicate effectively with my lab members, and also whether they would respect me. My pronunciation is sometimes off because of my disability, which leads some people to judge my intelligence as lacking. I set out unsure how to navigate these uncertain waters. But after almost 2 years in the position, I’ve come up with a set of solutions for how best to communicate with my trainees. Many would be useful for any lab head, disabled or not.


Publishing negative results is good for science

April 2, 2024 Microbiology Society Elisabeth M. Bik

Scientists face challenges in publishing negative results, because most scientific journals are biased towards accepting positive and novel findings. Despite their importance, negative results often go unpublished, leading to duplication of effort, biased meta-analyses, and ethical concerns regarding animal and human studies. In this light, the initiative by Access Microbiology to collect and publish negative results in the field of microbiology is a very important and valuable contribution towards unbiased science.

Bik, E. M. (2024). Publishing negative results is good for science. Access Microbiology , 6 (4). https://doi.org/10.1099/acmi.0.000792

Getty Images

Universities Oppose Federal Plan to Bolster Research Misconduct Oversight

The Office of Research Integrity is considering stronger regulations for institutional investigations of alleged research misconduct. Universities say it’s too prescriptive.

April 2, 2024 Inside Higher Ed Kathryn Palmer

The federal Office of Research Integrity (ORI) is  proposing changes  that would give the government more oversight of investigations of research misconduct at colleges and universities.

But scores of university and research hospital leaders and the organizations representing them are opposed and say the proposed rules would be burdensome to institutions and could potentially deter people from reporting alleged research misconduct, among other perceived negative consequences.

An unpleasant surprise awaited scientists who surveyed 1,035 journal articles to prepare a review about a test commonly carried out on rats. Credit: Oleksandr Bushko/Alamy

How papers with doctored images can affect scientific reviews

Scientists compiling a review scan more than 1,000 papers and find troubling images in some 10%.

March 28, 2024 Nature Sumeet Kulkarni

It was in just the second article of more than 1,000 that Otto Kalliokoski was screening that he spotted what he calls a “Photoshop masterpiece”.

The paper showed images from western blots — a technique used to analyse protein composition — for two samples. But Kalliokoski, an animal behaviourist at the University of Copenhagen, found that the images were identical down to the pixel, which he says is clearly not supposed to happen.
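The pixel-for-pixel match Kalliokoski describes is straightforward to check by machine: hash the raw pixel buffer rather than the file, so differences in metadata or compression cannot hide a duplicate. A minimal sketch of the idea (the byte strings below stand in for decoded image data; this is illustrative, not the tool the reviewers used):

```python
import hashlib

def pixel_fingerprint(pixels: bytes) -> str:
    """Hash the raw pixel buffer, ignoring any file-level metadata."""
    return hashlib.sha256(pixels).hexdigest()

# Two "blot" images with identical pixels share a fingerprint,
# even if the files themselves differ in metadata or compression.
blot_a = bytes([10, 200, 30, 40] * 16)
blot_b = bytes([10, 200, 30, 40] * 16)
blot_c = bytes([10, 201, 30, 40] * 16)  # a single value differs

print(pixel_fingerprint(blot_a) == pixel_fingerprint(blot_b))  # True
print(pixel_fingerprint(blot_a) == pixel_fingerprint(blot_c))  # False
```

In practice a screening tool would decode each figure panel to a pixel array first (e.g. with an imaging library) and compare fingerprints across all panels in a corpus; identical digests flag exact duplicates for human review.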

Image manipulation in scientific studies is a known and widespread problem. All the same, Kalliokoski and his colleagues were startled to come across more than 100 studies with questionable images while compiling a systematic review about a widely used test of laboratory rats' moods.

Institutions such as Harvard Medical School argue their processes are adequate and adjustments would constitute government overreach. PHOTO: CHARLES KRUPA/ASSOCIATED PRESS

The Feds Want More Oversight of Scientific Research. Universities Are Fighting Back.

Research institutions are pushing back against proposed changes to misconduct, plagiarism investigations

March 28, 2024 The Wall Street Journal Melissa Korn and Nidhi Subbaraman

Research universities and hospitals are pushing back against a federal agency’s proposal to boost oversight of investigations related to fraud and plagiarism, even as many face questions over the credibility of their scientists’ work.

The Office of Research Integrity, part of the U.S. Department of Health and Human Services, oversees more than $40 billion in research funds and is calling for more transparency in research-misconduct investigations. The recommended changes come amid  high-profile cases  at schools including Stanford University, Harvard Medical School and the University of Rochester. 

Morteza Mahmoudi is the co-founder of the Academic Parity Movement, an organization that aims to end bullying in academia.Credit: Haniyeh Aghaverdi

Bullied in science: I quit my job and launched an advocacy non-profit

Ahead of the Academic Parity Movement’s annual conference, co-founder Morteza Mahmoudi describes how it supports whistle-blowers.

March 12, 2024 Nature Morteza Mahmoudi

I experienced a wide spectrum of academic bullying and eventually had to quit a job because of it. It was a heart-wrenching decision. Since my departure, I’ve found peace in a supportive work environment. I was determined to use all the available means to prevent others from facing similar situations.

So, alongside my scientific work, I study the root causes of academic bullying and harassment and seek solutions to them. I forgave my bully last year, but I still find it challenging to forgive those who protected the bully and ultimately forced my departure.

Nature

Automatically listing senior members of departments as co-authors is highly prevalent in health sciences: meta-analysis of survey research

March 11, 2024 Nature Reint A. Meursinge Reynders, David Cavagnetto, Gerben ter Riet, Nicola Di Girolamo, & Mario Malicki

A systematic review with meta-analysis was conducted to assess the prevalence of automatically listing (a) senior member(s) of a department as co-author(s) on all submitted articles in health sciences, and the prevalence of degrees of support on a 5-point justification scale. Survey research was searched in PubMed, Lens.org, and Dimensions.ai until January 5, 2023. We assessed the methodological quality of studies and conducted quantitative syntheses. We identified 15 eligible surveys that provided 67 results, all of which were rated as having low quality. A pooled estimate of 20% [95% CI 16–25] (10 surveys, 3619 respondents) of researchers in various health sciences reported that a senior member of their department was automatically listed as an author on all submitted articles. Furthermore, 28% [95% CI 22–34] of researchers (10 surveys, 2180 respondents) felt that this practice was 'never' justified, 24% [95% CI 22–27] 'rarely', 25% [95% CI 23–28] 'sometimes', 13% [95% CI 9–17] 'most of the time', and 8% [95% CI 6–9] 'always'. The practice of automatically assigning senior members of departments as co-authors on all submitted manuscripts may be common in the health sciences, with those admitting to this practice finding it unjustified in most cases.

Meursinge Reynders, R.A., Cavagnetto, D., ter Riet, G.  et al.  Automatically listing senior members of departments as co-authors is highly prevalent in health sciences: meta-analysis of survey research.  Sci Rep   14 , 5883 (2024). https://doi.org/10.1038/s41598-024-55966-x
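The pooled percentages quoted in the abstract come from combining estimates across surveys. As a rough illustration of the underlying idea, here is a simplified fixed-effect, inverse-variance pooling of survey proportions with made-up numbers (the authors' actual random-effects method is more involved):

```python
from math import sqrt

def pool_proportions(studies):
    """Fixed-effect inverse-variance pooling of proportions.

    studies: list of (events, sample_size) tuples.
    Returns (pooled_p, lo95, hi95).
    """
    weights, estimates = [], []
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n          # binomial variance of p-hat
        weights.append(1 / var)        # precision = inverse variance
        estimates.append(p)
    w_sum = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, estimates)) / w_sum
    se = sqrt(1 / w_sum)               # standard error of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Three hypothetical surveys reporting how many respondents observed the practice
pooled, lo, hi = pool_proportions([(60, 300), (90, 400), (50, 250)])
print(f"{pooled:.3f} [{lo:.3f}, {hi:.3f}]")  # → 0.210 [0.184, 0.236]
```

Note that larger, more precise surveys get proportionally more weight; a random-effects model (as used in the paper) would additionally widen the interval to account for between-survey heterogeneity.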

Ranga Dias and his team at the University of Rochester compressed materials in a device called a diamond anvil cell to explore superconductivity. Credit: Lauren Petracca/New York Times/Redux/eyevine

Superconductivity scandal: the inside story of deception in a rising star’s physics lab

Ranga Dias claimed to have discovered the first room-temperature superconductors, but the work was later retracted. An investigation by  Nature ’s news team reveals new details about what happened — and how institutions missed red flags.

March 8, 2024 Nature Dan Garisto

In 2020, Ranga Dias was an up-and-coming star of the physics world. A researcher at the University of Rochester in New York, Dias achieved widespread recognition for his  claim to have discovered the first room-temperature superconductor , a material that conducts electricity without resistance at ambient temperatures. Dias published that finding in a landmark  Nature  paper 1 .

Nearly two years later, that  paper was retracted . But not long after, Dias announced an even bigger result, also published in  Nature : another room-temperature superconductor 2 . Unlike the previous material, the latest one supposedly worked at relatively modest pressures, raising the enticing possibility of applications such as superconducting magnets for medical imaging and powerful computer chips.

Medscape

Peer Review and Scientific Publishing Are Faltering

March 7, 2024 Medscape Robert Villa, MD

A drawing of a rat with four testicles and a giant penis was included in  a scientific paper  and recently circulated on social media and in online publications. It graphically represents the outcome of disregarding the quality of science produced each year in favor of its quantity.

For many years, there has been talk of  paper mills : publishers who print scientific journals and articles for a fee without caring about the reliability of their research. These publishers of what are called predatory journals sometimes seem not to care whether their authors even exist. The business pleases publishing groups paid by researchers, researchers who can increase the number of their publications (which is crucial for their professional evaluation), institutions that can boast of researchers who publish a lot, and sometimes, even interest groups outside academia or research centers that exploit the system to give scientific legitimacy to their demands (as has sometimes happened within antivaccine movements). Serious scientists and, above all, trust in science suffer.

Lightspring/Shutterstock

Science integrity sleuths welcome legal aid fund for whistleblowers

Investor has pledged $1 million over 4 years

March 5, 2024 Science Holly Else

A Silicon Valley investor has pledged $1 million to help pay the legal costs of scientists being sued for flagging fraudulent research. Yun-Fang Juan, an engineer and data scientist by background, hopes the new Scientific Integrity Fund—the first of its kind—will make speaking up about wrongdoing less intimidating. The fund comes after a spate of cases in which high-profile scientists have retracted papers after whistleblowers made allegations of research fraud.

“As scientists, we need to be able to ask questions and raise concerns about other researchers’ work, without the risk of being sued, or going bankrupt because we have to hire a lawyer,” says prominent science sleuth Elisabeth Bik, an adviser to the fund.

Trends in US public confidence in science and opportunities for progress

March 4, 2024 PNAS Arthur Lupia, David B. Allison, Kathleen Hall Jamieson, Jennifer Heimberg, Magdalena Skipper, and Susan Wolf

In recent years, many questions have been raised about whether public confidence in science is changing. To clarify recent trends in the public’s confidence and factors that are associated with these feelings, an effort initiated by the National Academies’ Strategic Council for Research Excellence, Integrity, and Trust (the Strategic Council) analyzed findings from multiple survey research organizations. The Strategic Council’s effort, which began in 2022, found that U.S. public confidence in science, the scientific community, and leaders of scientific communities is high relative to other civic, cultural, and governmental institutions for which researchers regularly collect such data. 

Lupia, A., et al. (2024). Trends in US public confidence in science and opportunities for progress. Proceedings of the National Academy of Sciences, 121(11). https://doi.org/10.1073/pnas.2319488121

ALEX HOGAN/STAT

Q&A: The scientific integrity sleuth taking on the widespread problem of research misconduct

February 28, 2024 STAT News Deborah Balthazar

Elisabeth Bik, a microbiologist by training, has become one of the world’s most influential science detectives. An authority on scientific image analysis who’s been  profiled  in The New Yorker for her unique ability to spot duplicated or doctored photographs, she appeared frequently in the news over the past year as one of the experts who raised research misconduct concerns that led to an  investigation  into, and the eventual departure of, former Stanford president  Marc Tessier-Lavigne .

Taylor & Francis Online

Responding to research misconduct allegations brought against top university officials

February 27, 2024 Taylor & Francis Online David B. Resnik, Mohammad Hosseini, & Lisa Rasmussen

Investigating research misconduct allegations against top officials can create significant conflicts of interest (COIs) for universities that may require changes to existing oversight frameworks. One way of addressing some of these challenges is to develop policies and procedures that specifically address investigation of allegations of misconduct involving top university officials. Steps can also be taken now regardless of whether such a body is created. Federal and university research misconduct regulations and policies may need to be revised to provide institutions with clearer guidance on how to deal with misconduct allegations against top officials. For their part, institutions may benefit from proactively creating and transparently disclosing their own processes for independent investigation of research misconduct allegations against senior officials.

David B Resnik, Mohammad Hosseini & Lisa Rasmussen (2024) Responding to research misconduct allegations brought against top university officials, Accountability in Research, DOI:  10.1080/08989621.2024.2321179

ILLUSTRATION BY THE CHRONICLE; ISTOCK IMAGES

Wanted: Scientific Errors. Cash Reward.

February 21, 2024 The Chronicle of Higher Education Stephanie M. Lee

Scientific-misconduct accusations are leading to retractions of high-profile papers, forcing reckonings within fields and ending professorships, even presidencies. But there’s no telling how widespread errors are in research: As it is, they’re largely brought to light by unpaid volunteers.

A program launching this month is hoping to shake up that incentive structure.

 Yves Moreau has long waged a campaign against genetics studies from China with questionable consent procedures.Lies Willaert

‘Ethics is not a checkbox exercise.’ Bioinformatician Yves Moreau reacts to mass retraction of papers from China

A genetics journal has pulled 18 studies over concerns that study participants did not give free consent

February 20, 2024 Science Dennis Normile

Last week, bioinformatician Yves Moreau of KU Leuven scored an important victory: The journal Molecular Genetics & Genomic Medicine retracted 18 papers from Chinese institutions because of ethical concerns. Moreau has long waged a solo campaign against studies that fail to get proper free and informed consent when collecting genetic samples, especially from vulnerable populations in China. He had raised questions about the now-retracted papers in 2021 and says this appears to be the largest set of retractions ever over human rights issues.

PHOTO: CAMERON DAVIDSON

Passion is not misconduct

February 13, 2024 Science H. Holden Thorp

University of Pennsylvania climate scientist Michael Mann  was awarded more than $1 million  in a lawsuit against bloggers who accused him of scientific misconduct in inflammatory terms, likening his treatment of data to what a noted child molester did to children. The verdict suggests that there are limits to which scientists working on politically sensitive topics can be falsely attacked. But the case also says something profound about the difference between matters of opinion and scientific interpretations that can be worked out through normal academic processes. Although Mann has expressed strong—and even intemperate—emotions and words in political discourse, the finding of the District of Columbia Superior Court boiled down to the fact that it is not an opinion that determines when scientific misconduct occurs but rather, misconduct can be established using known processes.

SARA GIRONI CARNEVALE

Vendor offering citations for purchase is latest bad actor in scholarly publishing

Unscrupulous researchers have many options for gaming citations metrics, new study highlights

February 12, 2024 Science Katie Langin

In 2023, a new Google Scholar profile appeared online featuring a researcher no one had ever heard of. Within a few months, the scientist, an expert in fake news, was listed by the scholarly database as their field’s 36th most cited researcher. They had an h-index of 19—meaning they’d published 19 academic articles that had been cited at least 19 times each. It was an impressive burst onto the academic publishing scene.

But none of it was legitimate. The researcher and their institution were fictional, created by researchers at New York University (NYU) Abu Dhabi who were probing shady publishing practices. The publications were written by ChatGPT. And the citation numbers were bogus: Some came from the author excessively citing their own “work,” while 50 others had been purchased for $300 from a vendor offering a “citations booster service.”
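The h-index mentioned above is simple to compute: sort a researcher's citation counts in descending order and find the largest h such that h papers each have at least h citations. A quick sketch with illustrative numbers:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:   # the rank-th best paper still has >= rank citations
            h = rank
        else:
            break
    return h

# 19 papers cited at least 19 times each, as in the fake profile, gives h = 19
print(h_index([19] * 19))         # 19
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Because the metric depends only on raw citation counts, purchased or self-inflicted citations inflate it just as effectively as genuine ones, which is exactly the weakness the NYU Abu Dhabi experiment exploited.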

ADOBE

A flurry of research misconduct cases has universities scrambling to protect themselves

February 12, 2024 STAT Angus Chen and Jonathan Wosen

There was a time when an allegation of data mishandling, scientific misconduct, or just a technical error felt like a crisis to Barrett Rollins, an oncologist and research integrity officer at Dana-Farber Cancer Institute. Now, it’s just another Tuesday.

The renowned cancer treatment and research center is in the midst of a  lengthy review  of possible discrepancies involving around 60 papers co-authored by four of its top researchers over a period of over 15 years, including CEO Laurie Glimcher and COO William Hahn. And it’s hardly alone. Over the past decade, the number of research misconduct allegations reported to the National Institutes of Health has more than doubled, climbing from 74 in 2013 to 169 in 2022. And  scientific sleuths  are finding plenty of other problems that don’t always qualify as outright misconduct.

Journals are making an effort to detect manipulated images of the gels used to analyse proteins and DNA. Credit: Shutterstock

How journals are fighting back against a wave of questionable images

Publishers are deploying AI-based tools to detect suspicious images, but generative AI threatens their efforts.

February 12, 2024 Nature Nicola Jones

It seems that every month brings a fresh slew of high-profile allegations against researchers whose papers — some of them years old — contain signs of possible image manipulation.

Scientist sleuths are using their own trained eyes, along with commercial software based on artificial intelligence (AI), to spot image duplication and other issues that might hint at sloppy record-keeping or worse. They are bringing these concerns to light in places like PubPeer, an online forum featuring many new posts every day flagging image concerns.

Some of these efforts have led to action. Last month, for example, the Dana-Farber Cancer Institute (DFCI) in Boston, Massachusetts, said that it would ask journals to retract or correct a slew of papers authored by its staff members. The disclosure came after an  observer raised concerns  about images in the papers. The institute says it is continuing to investigate the concerns.

Taylor & Francis Online

An accidental discovery of scientific fraud: A reconstruction

February 9, 2024 Taylor & Francis Online Marijke Schotanus-Dijkstra

Dear Professor Covan,

You have recently decided to retract the paper of Hania et al. (2022). Thank you for inviting me to explain why I was suspicious about the originality of this paper, which led to this retraction.

I am currently working on a scoping review about flourishing mental health during the menopausal transition. First, I read around 250 papers after initial screening of titles and abstracts. Second, I started with the data extraction process, in which at that point around 40 articles had been extracted into an Excel datafile. I started to input the data of the paper of Iioka and Komatsu (2015), but I discovered that in each of the columns, except for "study" and "country," Excel showed me the exact or almost exact answer with the information I wanted to extract. For some columns, like the age range and mean age (SD), this might be possible, as some articles use similar datasets. Yet the further I worked on the extraction for this paper, the more suspicious I got, because each column seemed to be identical to one particular article, that of Hania et al. (2022). Especially disturbing were the exact same key findings for the exact same outcomes.

Marijke Schotanus-Dijkstra (2024) An accidental discovery of scientific fraud: A reconstruction, Health Care for Women International, DOI:  10.1080/07399332.2024.2310709

A new method searches the scholarly literature for trends in authorship that indicate paper-mill activity. Credit: Zoonar GmbH/Alamy

Fake research papers flagged by analysing authorship trends

A new approach to detecting fraudulent paper-mill studies focuses on patterns of co-authors rather than manuscript text.

February 7, 2024 Nature Dalmeet Singh Chawla

A research-technology firm has developed a new approach to help identify journal articles that originate from  paper mills  — companies that churn out fake or poor-quality studies and sell authorships.

The technique, described in a preprint posted on arXiv last month, uses factors such as the combination of a paper's authors to flag suspicious studies. Its developers at London-based firm Digital Science say it can help to identify cases in which researchers might have bought their way onto a paper.

Accusations of plagiarism, including alleged misuse of ChatGPT, should not be made lightly. Credit: Alexandre Rotenberg/Alamy

‘Obviously ChatGPT’ — how reviewers accused me of scientific fraud

A journal reviewer accused Lizzie Wolkovich of using ChatGPT to write a manuscript. She hadn’t — but her paper was rejected anyway.

February 5, 2024 Nature E. M. Wolkovich

I have just been accused of scientific fraud. Not data fraud — no one accused me of fabricating or misleadingly manipulating data or results. This, I suppose, is a relief because my laboratory, which studies how global change reshapes ecological communities, works hard to ensure that data are transparent and sharable, and that our work is reproducible. Instead, I was accused of writing fraud: passing off ‘writing’ produced by artificial intelligence (AI) as my own. That hurts, because — like many people — I find writing a paper to be a somewhat painful process. I read books on how to write — both to be comforted by how much these books stress that writing is generally slow and difficult, and to find ways to improve. My current strategy involves willing myself to write and creating several outlines before the first draft, which is followed by writing and a lot of revising. I always suggest this approach to my students, although I know it is not easy, because I think it’s important that scientists try to communicate well.

Fake research papers could jeopardise drug development, warn academics. Photograph: Westend61/Getty Images

‘The situation has become appalling’: fake scientific papers push research credibility to crisis point

Last year, 10,000 sham papers had to be retracted by academic journals, but experts think this is just the tip of the iceberg  

February 3, 2024 The Guardian Robin McKie

Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned.  Medical research  is being compromised, drug development hindered and promising academic research jeopardised thanks to a global wave of sham science that is sweeping laboratories and universities.

Last year the annual number of papers retracted by research journals topped 10,000 for the first time. Most analysts believe the figure is only the tip of an iceberg of  scientific fraud .

Sheets of paper floating in clouds

Impact factor mania and publish-or-perish may have contributed to Dana-Farber retractions, experts say

Learning from past errors (and misconduct) in cancer research

February 2, 2024 The Cancer Letter Jacquelyn Cobb

More than a decade ago, Glenn Begley and Lee Ellis published a  paper  with astounding findings: of 53 “landmark” studies, only six, or 11%, were reproducible, even with the same reagents and the same protocols—and even, sometimes, in the same laboratory—as the original study.

Begley’s and Ellis’s classic paper, published in  Nature , gave rise to a movement that captured the attention of the uppermost crust of biomedical research. 

Then NCI Director Harold Varmus, for example, focused on the paper—and the broader problem of reproducibility—at a 2013 meeting of the National Cancer Advisory Board ( The Cancer Letter ,  Dec. 3 , 2013). In 2014, Francis Collins and Lawrence Tabak, then-director and then-deputy director of NIH,  outlined the institute’s plan  to address the issue of reproducibility in biomedical research. Journals and funding agencies took action.  Declarations ,  meetings , and  reports  suddenly materialized, and  research funders  rapidly responded.

Bad incentives in academia are leading to a surge in retractions. Conditions could create a trap even for well-meaning researchers. GETTY

Surge In Academic Retractions Should Put U.S. Scholars On Notice

February 1, 2024 Forbes James Broughel

A December  article  in Nature highlighted an alarming new record: more than 10,000 academic papers were retracted in 2023 alone, largely stemming from manipulation of the peer review and publication processes. Over 8,000 of the retractions came from journals run by the Egyptian company Hindawi, a subsidiary of Wiley, and many were in special issues, which are collections of articles often overseen by guest editors that can have laxer standards than normal.

For now, researchers from countries like Saudi Arabia, Pakistan, Russia and China face the highest retraction rates, but it is sensible to ask: what would happen if a major scandal hit a mainstream American discipline? The idea seems less far-fetched than it used to. With  disgraced  ex-Stanford President Marc Tessier-Lavigne and former Harvard President Claudine Gay's academic  records  fresh in public memory, a scandal involving elite American researchers and universities is all too plausible.

Hunter Moseley says that good reproducibility practices are essential to fully harness the potential of big data.Credit: Hunter N.B. Moseley

In the AI science boom, beware: your results are only as good as your data

Machine-learning systems are voracious data consumers — but trustworthy results require more vetting both before and after publication.

February 1, 2024 Nature Hunter Moseley

We are in the middle of a data-driven science boom. Huge, complex data sets, often with large numbers of individually measured and annotated ‘features’, are fodder for voracious artificial intelligence (AI) and machine-learning systems, with details of new applications being published almost daily.

But publication in itself is not synonymous with factuality. Just because a paper, method or data set is published does not mean that it is correct and free from mistakes. Without checking for accuracy and validity before using these resources, scientists will surely encounter errors. In fact, they already have.

Dana-Farber

Science sleuths are using technology to find fakery and plagiarism in published research

January 28, 2024 AP News Carla K. Johnson

Allegations of research fakery at a leading cancer center have turned a spotlight on scientific integrity and the amateur sleuths uncovering image manipulation in published research.

Dana-Farber Cancer Institute, a Harvard Medical School affiliate, announced Jan. 22 it’s requesting retractions and corrections of scientific papers after a British blogger flagged problems in early January.

The blogger, 32-year-old Sholto David, of Pontypridd, Wales, is a scientist-sleuth who detects cut-and-paste image manipulation in published scientific papers.

AI and the Future of Image Integrity in Scientific Publishing

January 22, 2024 ScienceEditor Dror Kolodkin-Gal

Scientific publishing serves as a vital medium for sharing research results with the global scientific community. The images within an article are often integral to conveying those results clearly. However, with researchers sometimes including hundreds of sub-images in a manuscript, manually ensuring all images accurately depict the data they are intended to represent can be a challenge. Here, cancer researcher and founder of an artificial intelligence  (AI) image-checking software tool , 1  Dr Dror Kolodkin-Gal, explores how researchers and editors can improve image integrity, and how AI can streamline the publishing process.

AI and the future of image integrity in scientific publishing https://doi.org/10.36591/SE-4701-02

Whistleblowing microbiologist wins unfair dismissal case against USGS

January 11, 2024 ChemistryWorld Rebecca Trager

A microbiologist has won her case for unfair dismissal against a US federal agency after she blew the whistle on animal welfare and biosafety failures. The US Geological Survey (USGS) hired  Evi Emmenegger  as a fisheries microbiologist in 1994, and in 2006 promoted her to manager of the highest biosafety level containment laboratory at the agency’s Western Fisheries Research Center (WFRC) in Seattle. But in 2017, she became a whistleblower when she filed a scientific integrity complaint that the agency dismissed before putting her on leave in January 2020 and then firing her for alleged lapses in her research – a termination that was later retracted.

Genuine images in 2024

January 5, 2024 Science H. Holden Thorp

In recent years, the research community has become increasingly concerned with issues involving the manipulation of images in scientific papers. Some of these alterations—involving images from experimental techniques such as microscopy, flow cytometry, and western blots—are inadvertent and may not change the conclusions of papers. But in rare cases, some are done deliberately to mislead readers. Image sleuths who can detect these alterations, like the scientific integrity consultant Elisabeth Bik, have risen to prominence, as has the website PubPeer, where many of the detected flaws are posted. High-profile incidents, such as one involving the laboratory of former Stanford University President Marc Tessier-Lavigne, have eroded public confidence in science and harmed careers of investigators who missed doctored images coming from their own laboratories. To address these problems, in 2024, the  Science  family of journals is adopting the use of Proofig, an artificial intelligence (AI)–powered image-analysis tool, to detect altered images across all six of the journals.

News Archive

DHHS Office of Research Integrity: Misconduct Case Summaries

June 2024 Shaker Mousa, Ph.D., M.B.A., FACC, FACB, Albany College of Pharmacy and Health Sciences

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, or recklessly falsifying and/or fabricating chick chorioallantoic membrane (CAM) assays used to determine angiogenesis activities of small molecules in (2) published papers.

May 2024 Darrion Nguyen, Baylor College of Medicine

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, or recklessly falsifying and/or fabricating experimental data and results that were included in one (1) RPPR, one (1) presentation, one (1) poster, six (6) research records, and two (2) figures of a prospective manuscript.

April 2024 Gian-Stefano Brigidi, Ph.D. University of California San Diego and University of Utah

ORI found that Respondent engaged in research misconduct by knowingly or intentionally falsifying and/or fabricating data and results by manipulating primary data values to falsely increase the n-value, manipulating fluorescence micrographs and their quantification graphs to augment the role of ITFs in murine hippocampal neurons, and/or manipulating confocal images that were obtained through different experimental conditions in twenty (20) figures of one (1) published paper and four (4) PHS grant applications, one (1) panel of one (1) poster, and seven (7) slides of one (1) presentation.

November 2023 Sarah Elizabeth Martin, Auburn University

ORI found that Respondent engaged in research misconduct by intentionally or knowingly falsifying and/or fabricating experimental data and results obtained under different experimental conditions that were included in one (1) grant application, one (1) published paper, one (1) submitted manuscript, and six (6) presentations.

October 2023 Lara S. Hwa, Ph.D., Baylor University and University of North Carolina at Chapel Hill

ORI found that Respondent engaged in research misconduct by knowingly or recklessly falsifying and/or fabricating data, methods, results, and conclusions in animal models of alcohol use disorders. Specifically, Respondent falsified and/or fabricated experimental timelines, group conditions, sex of animal subjects, mouse strains, and behavioral response data in two (2) published papers and two (2) PHS grant applications.

NIH Extramural Nexus

Test Your Knowledge – Interactive Video of Research Misconduct Case Studies

What are some red flags that may help you avoid research misconduct? Research Integrity Officers from the HHS Office of Research Integrity (ORI) and NIH answer this question and more during our recent Research Misconduct & Detrimental Research Practices event.

In this interactive session, experts break down several case studies and hear from the audience to explain Public Health Service (PHS) regulations on handling allegations and responsibilities of an institution receiving PHS funds. Tune in to the recording to join the conversation and check your knowledge on the ethical conduct of research!

Ex-Defender Loses Case Over Judiciary’s Misconduct Process (1)

By Jacqueline Thomsen

Jacqueline Thomsen

A federal judge has ruled against a former federal defender’s claims that judiciary officials improperly handled her sexual harassment claim against her supervisor.

Senior US District Judge William Young in Boston said in a Friday ruling that Caryn Strickland had failed to prove that her equal protection and due process rights were violated over her report of harassment and retaliation at the public defender’s office in the Western District of North Carolina.

Strickland had sued federal judiciary officials in March 2020, alleging that her claim of sexual harassment by her supervisor, J.P. Davis, wasn’t properly managed by court officials in ...

Department of Health & Human Services

Case Summary: Leong, Daniel

Daniel Leong, Ph.D., Albert Einstein College of Medicine: Based on the report of an investigation conducted by Albert Einstein College of Medicine (AECM) and additional analysis conducted by the Office of Research Integrity (ORI) in its oversight review, ORI found that Daniel Leong, Ph.D. (Respondent), formerly a Research Technician, AECM, engaged in research misconduct in research supported by U.S. Public Health Service (PHS) funds, specifically National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), National Institutes of Health (NIH), grant R01 AR050968 and National Heart, Lung, and Blood Institute (NHLBI), NIH, grant P01 HL110900.

ORI found that Respondent engaged in research misconduct by intentionally, knowingly, or recklessly falsifying and/or fabricating data included in sixteen (16) grant applications submitted for PHS funds:

  • R01 AR065563-01, “CITED2 and Chondroprotection,” submitted to NIAMS, NIH, on 02/05/2013.
  • R01 AR066009-01, “Remote Loading for Osteoarthritis,” submitted to NIAMS, NIH, on 06/04/2013.
  • R01 AR065563-01A1, “CITED2 and Chondroprotection,” submitted to NIAMS, NIH, on 11/05/2014.
  • R41 AR070695-01, “A novel product for tendinopathy treatment,” submitted to NIAMS, NIH, on 01/05/2015.
  • R01 AG069693-01, “Chondrocyte fate regulation and cartilage protection,” submitted to National Institute on Aging (NIA), NIH, on 06/05/2015.
  • R01 AG039561-06, “Human tendon stem progenitor cell aging and regeneration,” submitted to NIA, NIH, on 03/15/2016 (original grant funding from 08/15/2012-04/30/2018).
  • R43 AT009414-01, “A novel nutraceutical drug for tendinopathy treatment,” submitted to National Center for Complementary and Alternative Medicine (NCCAM), NIH, on 04/05/2016.
  • R01 AR070431-01A1, “The role of Panx1 in the pathogenesis and pain of osteoarthritis,” submitted to NIAMS, NIH, on 07/19/2016.
  • R41 AG056246-01A1, “A novel product for tendinopathy treatment,” submitted to NIA, NIH, on 09/06/2016, funded from 09/15/2017-08/31/2019.
  • R01 AG056623-01, “Chondrocyte fate regulation and osteoarthritis,” submitted to NIA, NIH, on 10/05/2016.
  • R01 AR072038-01, “MSC-derived exosomes and tendon disorders,” submitted to NIAMS, NIH, on 10/05/2016.
  • R43 AT009414-01A1, “A novel nutraceutical drug for tendinopathy treatment,” submitted to NCCAM, NIH, on 04/05/2017, funded from 08/01/2018-07/31/2020.
  • R01 AR073194-01, “Chondrocyte fate regulation and cartilage protection,” submitted to NIAMS, NIH, on 06/05/2017.
  • R01 AR074802-01, “The role of Panx1 in the pathogenesis and pain of osteoarthritis,” submitted to NIAMS, NIH, on 04/02/2018.
  • R01 AR074802-01A1, “The role of Panx1 in the pathogenesis and pain of osteoarthritis,” submitted to NIAMS, NIH, on 08/01/2018.
  • R44 AG065089-01, “Botanical drug for spontaneous osteoarthritis,” submitted to NIA, NIH, on 01/07/2019.

ORI found that Respondent intentionally, knowingly, or recklessly falsified and/or fabricated Western blot and histological image data for chronic deep tissue conditions including osteoarthritis (OA) and tendinopathy in murine models by reusing image data, with or without manipulating them to conceal their similarities, and falsely relabeling them as data representing different experiments in fifty (50) figures included in sixteen (16) PHS grant applications. In the absence of reliable image data, the figures, quantitative data in associated graphs purportedly derived from those images, statistical analyses, and related text also are false.

Specifically, ORI finds that:

  • Respondent reused and relabeled Western blot images from the same source to falsely represent different proteins and/or experimental results in:
  • “β-actin” panel for “Cartilage” and “β-actin” panel for “Liver” are the same
  • “β-actin” panel for “Bone” and “β-actin” panel for “Spleen” are the same
  • “Cited2” blot band for Cartilage in “WT” and “Sham” are the same
  • “Cited2” blot band for Bone in “WT” and “Sham” are the same
  • “Cited2” blot band for Liver in “WT” and “Sham” are the same
  • “Cited2” blot band for Spleen in “WT” and “Sham” are the same
  • Cited2 blot bands in “WT” and “Sham” within each of the three panels represent Cartilage, Bone, and Liver
  • From Figure 11C in R01 AR065563-01 and Figure 16 in R01 AR066009-01, Respondent copied blot panels representing rAAV-vector and rAAV-GFP in human cartilage explants, flipped, resized, added a lane to the left, and reused and relabeled the bands to falsely represent “Sham” and “KO” samples in conditional knock out of Cited2 gene in cartilage of adult mice in:
  • Figure 4B in R01 AR065563-01
  • Figure 4B in R01 AR065563-01A1
  • Figure 11 in R01 AG069693-01
  • Figure 2A in R01 AG056623-01
  • Figure 1A in R01 AR073194-01
  • Figure 2A in R44 AG065089-01
  • Respondent reused and relabeled the same photomicrographs of supraspinatus tendon tissue from tendinopathy rats exposed to different experimental conditions in:
  • Figure 2A in R01 AR072038-01 to falsely represent overuse tendinopathy rats treated with ex-ADSC-2D (control exosomes)
  • Figure 2A in R01 AG039561-06 to falsely represent overuse tendinopathy nude rats with placebo treatment
  • Figure 4A in R41 AR070695-01 to falsely represent overuse tendinopathy nude rats with placebo treatment
  • Figure 3A in R43 AT009414-01 and R43 AT009414-01A1 to falsely represent collagenase induced Achilles tendinopathy rats with placebo treatment
  • Respondent reused and relabeled the same photomicrographs in:
  • Figure 2A in R01 AR072038-01 to falsely represent overuse tendinopathy rats injected with ex-ADSC-3D
  • Figure 1A in R01 AG039561-06 to falsely represent collagenase-induced tendinopathy in rats injected with Cited2 reprogrammed tendon stem/progenitor cells (TSPCs)
  • Respondent reused and relabeled photomicrographs from Figure 2C in R01 AR072038-01 representing cleaved collagen-1 stained supraspinatus tendon of overuse tendinopathy rats injected with placebo + ex-ADSC-2D to falsely represent:
  • Figure 2C in R01 AR072038-01
  • Figure 5D in R41 AG056246-01A1
  • Figure 2B in R01 AG039561-06  
  • Achilles tendon tissue of collagenase-induced tendinopathy rats after placebo injection in Figure 3D in R43 AT009414-01
  • Respondent reused and relabeled photomicrographs of human cartilage explants presented in R01 AG069693-01. Specifically, Respondent reused image panels from R01 AG069693-01:
  • Figure 11A in R01 AR065563-01, Figure 8A in R01 AG069693-01, and Figure 1A in R01 AG056623-01 to falsely represent NITEGE stained non-OA sample
  • Figure 3 in R01 AR070431-01A1 to falsely represent IL-1β stained OA sample  
  • Figure 1A in R01 AG056623-01 to falsely represent p16 stained samples  
  • Figure 3 in R01 AR070431-01A1 to falsely represent NLRP3 or cleaved caspase 1
  • Figure 1A in R01 AG056623-01 to falsely represent p21 and p16  
  • MMP-13 reused and relabeled in Figure 1B in R01 AG056623-01 to falsely represent p21
  • ADAMTS5 reused and relabeled in Figure 1B in R01 AG056623-01 to falsely represent p16
  • Respondent reused and relabeled photomicrographs of non-OA or OA human cartilage explants presented in Figure 3 in R01 AR070431-01A1 representing:
  • cleaved caspase 3 to falsely represent β-gal staining in Figure 1A in R01 AG056623-01
  • NLRP3 or cleaved caspase-1 staining of non-OA human cartilage to falsely represent p21 and p16 in Figure 1A in R01 AG056623-01
  • Respondent reused and relabeled photomicrographs from the following published papers to falsely represent unrelated experimental results in NIH grant applications:
  • Green tea polyphenol treatment is chondroprotective, anti-inflammatory and palliative in a mouse post-traumatic osteoarthritis model. Arthritis Res Ther . 2014 Dec 17;16(6):508; doi: 10.1186/s13075-014-0508-y (hereafter referred to as “ Arthritis Res Ther . 2014”). Erratum in: Arthritis Res Ther . 2019, Jan 3;21(1):1; doi: 10.1186/s13075-018-1791-9.
  • Curcumin slows osteoarthritis progression and relieves osteoarthritis-associated pain symptoms in a post-traumatic osteoarthritis mouse model. Arthritis Res Ther . 2016 Jun 3; 18(1):128; doi: 10.1186/s13075-016-1025-y (hereafter referred to as “ Arthritis Res Ther . 2016”).
  • Procyanidins Mitigate Osteoarthritis Pathogenesis by, at Least in Part, Suppressing Vascular Endothelial Growth Factor Signaling. Int. J. Mol. Sci . 2016, 17:2065; doi:10.3390/ijms17122065 (hereafter referred to as “ Int. J. Mol. Sci . 2016”).

Specifically, in:

  • Figure 6A representing type II collagen cleavage epitope (Col2-3/4 M) vehicle control and relabeled to falsely represent aggrecan cleavage in DMM WT in Figure 2E in R01 AR070431-01A1  
  • Figure 6D representing ADAMTS5 staining of a vehicle control and relabeled twice in Figure 2F in R01 AR070431-01A1 to falsely represent IL-1β and cleaved caspase staining  
  • Figure 2C representing Col2-3/4 M in vehicle treated sham operated mice and relabeled twice in Figures 2E and 2F in R01 AR070431-01A1 to falsely represent cleaved caspase and IL-1β respectively in sham operated WT mice  
  • Figure 2C representing Col2-3/4 M in epigallocatechin3-gallate (EGCG) treated DMM mice in Figure 2E in R01 AR070431-01A1 and relabeled to falsely represent Col2-3/4 M in Panx1 KO DMM mice  
  • Figure 3A representing cleaved aggrecan in sham operated EGCG treated mice and relabeled in Figure 2E in R01 AR070431-01A1 to falsely represent cleaved aggrecan in sham operated untreated WT mice  
  • Figure 3C representing cleaved aggrecan in DMM WT mice treated with EGCG and relabeled in Figure 2E in R01 AR070431-01A1 to falsely represent cleaved aggrecan in DMM Panx1 KO mice  
  • Figure 4A representing MMP-13 in sham operated EGCG treated mice and relabeled in Figure 4E in R01 AR070431-01A1 to falsely represent antibody-staining control  
  • Figure 2E in R01 AR074802-01 and R01 AR074802-01A1 to falsely represent ADAMTS5 staining in Pax1 KO DMM mice  
  • Figure 2F in R01 AR070431-01A1 to falsely represent NLRP3 staining of Pax1 KO DMM mice  
  • Figure 2E in R01 AR074802-01 and R01 AR074802-01A1 to falsely represent MMP-13 in DMM WT mice  
  • Figure 2F in R01 AR070431-01A1 to falsely represent NLRP3 in DMM WT mice  
  • Figure 5C representing ADAMTS5 in Sham operated EGCG treated mouse and relabeled twice in Figure 2E in R01 AR074802-01 and R01 AR074802-01A1 to falsely represent ADAMTS5 in sham operated WT mice  
  • Figure 5C representing ADAMTS5 in vehicle treated DMM operated mouse sample and relabeled twice in Figure 2E in R01 AR074802-01 and R01 AR074802-01A1 to falsely represent ADAMTS5 in DMM WT mouse sample  
  • Figure 1A representing cartilage from “sham” wildtype C57BL/6 mice treated with oral PBS and relabeled in Figure 2B in R01 AG056623-01 to falsely represent knee cartilage from “sham” Col2a1CreERTxCited2fl/fl mice injected with corn oil without tamoxifen  
  • Figure 8 in R01 AG056623-01 to falsely represent p21 in control mice following DMM surgery  
  • Figure 3 in R01 AG056623-01 to falsely represent β-gal in Cited2 KO mice  
  • Figure 4A representing MMP-13 in EGCG -treated mice 4-weeks post DMM surgery and relabeled in Figure 8 in R01 AG056623-01 to falsely represent p21 following DMM surgery in mice overexpressing Cited2  
  • Figure 4C representing MMP-13 in vehicle-treated mice 8-weeks post sham surgery and relabeled in Figure 2C in R01 AG056623-01 to falsely represent p21 staining in Cited2 KO mice following DMM surgery  
  • Figure 4C representing MMP-13 in EGCG-treated mice 8-weeks post DMM surgery and relabeled in Figure 2C in R01 AG056623-01 to falsely represent p21 in oil-injected control mice with Cited2 conditional deletion in cartilage without Tamoxifen  
  • Figure 8 in R01 AG056623-01 to falsely represent p16 in control mice following DMM surgery  
  • Figure 2C in R01 AG056623-01 to falsely represent β-gal in Cited2 KO mice  
  • Figure 3 in R01 AG056623-01 to falsely represent β-gal in WT control mice with conditional deletion of Cited2 in cartilage  
  • Figure 8 in R01 AG056623-01 to falsely represent Cited2 in Cited-overexpressing mice, as well as β-gal in control mice following DMM surgery  
  • Figure 2C in R01 AG056623-01 to falsely represent p16 staining in Cited2 KO mice  
  • Figure 8 in R01 AG056623-01 to falsely represent p16 in control mice following DMM surgery

Respondent neither admits nor denies ORI’s findings of research misconduct. The parties entered into a Voluntary Settlement Agreement (Agreement) to conclude this matter without further expenditure of time, finances, or other resources. The settlement is not an admission of liability on the part of the Respondent.

Respondent voluntarily agreed to the following:

  • Respondent will exclude himself voluntarily for a period of four (4) years beginning on February 28, 2022 (the “Exclusion Period”) from any contracting or subcontracting with any agency of the United States Government and from eligibility for or involvement in nonprocurement or procurement transactions referred to as “covered transactions” in 2 C.F.R. Parts 180 and 376 (collectively the “Debarment Regulations”). At the conclusion of the Exclusion Period, Respondent agrees to have his research supervised for a period of four (4) years (the “Supervision Period”). During the Supervision Period, prior to the submission of an application for PHS support for a research project on which Respondent’s participation is proposed and prior to Respondent’s participation in any capacity in PHS-supported research, Respondent will submit a plan for supervision of Respondent’s duties to ORI for approval. The supervision plan must be designed to ensure the integrity of Respondent’s research. Respondent will not participate in any PHS-supported research until such a supervision plan is approved by ORI. Respondent will comply with the agreed-upon supervision plan.  
  • A committee of 2-3 senior faculty members at the institution who are familiar with Respondent’s field of research, but not including Respondent’s supervisor or collaborators, will provide oversight and guidance. The committee will review primary data from Respondent’s laboratory on a quarterly basis and submit a report to ORI at six (6) month intervals setting forth the committee meeting dates and Respondent’s compliance with appropriate research standards and confirming the integrity of Respondent’s research.  
  • The committee will conduct an advance review of each application for PHS funds, or report, manuscript, or abstract involving PHS-supported research in which Respondent is involved. The review will include a discussion with Respondent of the primary data represented in those documents and will include a certification to ORI that the data presented in the proposed application, report, manuscript, or abstract is supported by the research record.  
  • During the Supervision Period, Respondent will ensure that any institution employing him submits, in conjunction with each application for PHS funds, or report, manuscript, or abstract involving PHS-supported research in which Respondent is involved, a certification to ORI that the data provided by Respondent are based on actual experiments or are otherwise legitimately derived and that the data, procedures, and methodology are accurately reported in the application, report, manuscript, or abstract.
  • If no supervision plan is provided to ORI, Respondent will provide certification to ORI at the conclusion of the Supervision Period that his participation was not proposed on a research project for which an application for PHS support was submitted and that he has not participated in any capacity in PHS-supported research.
  • During the Exclusion and Supervision Periods, Respondent will exclude himself voluntarily from serving in any advisory or consultant capacity to PHS including, but not limited to, service on any PHS advisory committee, board, and/or peer review committee.

Related Content

  • Federal Register Notice: Volume 87, Number 57 (Thursday, March 24, 2022). (.pdf)

The Charity Commission

Public trust in charities 2024

Published 16 August 2024

Applies to England and Wales

© Crown copyright 2024

This publication is available at https://www.gov.uk/government/publications/research-into-public-trust-in-charities-and-trustees-experience-of-their-role-2024/public-trust-in-charities-2024

Executive Summary

Overall trust in charities has been stable since 2020, with new analysis showing that levels are quite high. For some, trust is implicit until proven otherwise, driven mostly by a charity’s aim to do good. For others, low trust stems from media coverage, contact with charities or disagreement with a charity’s actions.

Media coverage is particularly influential in leading to distrust in charities, but generally the public are cautious not to let the actions of one charity influence how they feel about others. However, for any charity where the media uncovers wrongdoing, there is little it can do to redeem its reputation; once sullied, the trust is lost.

The majority want information about charities to be available, and information tends to lead to greater trust. However, not all would access the information; knowing that it is there tends to be enough. Ease of access to information and signposting to it could be important. Financial transparency is an important type of information in driving trust.

Awareness of and claimed knowledge about the Charity Commission are stable with around 1 in 5 claiming they know the Commission well. Most of those who claim to know the Commission well have a broad grasp of its role. However, the intricacies of the Commission’s role are less well understood, and the public have questions about how the Commission regulates all charities.

Around half are more likely to support charities after learning about the role of the Charity Commission. Having a body to regulate charities reassures that the sector is operating to a high standard. The existence of the register also reassures that the information they need is out there and accessible. However, there is a belief that the Commission can’t forensically monitor all charities due to the resources this would require and so it is likely there is wrongdoing that is going undetected. 

Knowing a charity is registered continues to reassure the public, although most people don’t check the register; just seeing a registration number or the charity claiming to be registered tends to be enough to drive trust. Most would only check the register if they suspected any wrongdoing and if they were to look, they would like to see financial information about charities.

Overall Trust in Charities

Overall trust in charities has been relatively stable since 2020. Overall trust in charities over time (mean score):

Year   High Trust (7-10)   Mean score
2005   –    6.3
2008   –    6.6
2010   –    6.6
2012   –    6.7
2014   –    6.7
2016   –    5.7
2018   –    5.5
2020   51%  6.2
2021   54%  6.4
2022   55%  6.2
2023   55%  6.3
2024   58%  6.5

From 2018 onwards, the survey was conducted online rather than via telephone. This question, however, was also asked on a concurrent telephone survey as a comparison in 2018, giving a mean score of 5.7/10 (a difference of +0.2).

In 2024, almost 6 in 10 say they have high trust in charities while 1 in 10 have very low trust

Trust in charities.

0 (Don’t trust them at all) 2%
1 1%
2 2%
3 3%
4 5%
5 16%
6 13%
7 23%
8 21%
9 7%
10 (Trust them completely) 7%
Summary: High Trust (7 – 10) 58%
Summary: Medium Trust (4 – 6) 34%
Summary: Low Trust (0 – 3) 9%
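The reported mean and banded summaries follow directly from the distribution above; a quick check (note that summing the displayed low-trust percentages gives 8 rather than the published 9%, a rounding artifact of the rounded per-score figures):

```python
# Reproduce the reported 2024 summary figures from the 0-10 trust
# distribution above (percentage of respondents giving each score).
distribution = {0: 2, 1: 1, 2: 2, 3: 3, 4: 5, 5: 16,
                6: 13, 7: 23, 8: 21, 9: 7, 10: 7}

# Weighted mean score across all respondents.
mean = sum(score * pct for score, pct in distribution.items()) / sum(distribution.values())

high = sum(pct for score, pct in distribution.items() if score >= 7)
medium = sum(pct for score, pct in distribution.items() if 4 <= score <= 6)
low = sum(pct for score, pct in distribution.items() if score <= 3)

print(round(mean, 1))  # 6.5, matching the reported overall mean
print(high)            # 58, matching the "High Trust (7-10)" summary
print(medium)          # 34, matching the "Medium Trust (4-6)" summary
```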

Trust in charities differs between different demographics and experiences:

  • men are more likely to have low trust in charities (10% vs. 7% of women)
  • those with a degree or higher are more likely to have high trust in charities (65% vs. 57% below degree and 39% with no qualifications)
  • those that have recently seen/heard charities in the news are more likely to have high trust in charities (64% vs. 54% that haven’t)
  • those that have heard of the Charity Commission are more likely to have high trust in charities (63% vs. 52% that have not heard)

Trust in charities ranks very high compared to other organisations, with only trust in doctors ranking higher

Trust in other organisations.

  Summary: Low trust (0-3) Summary: Medium trust (4-6) Summary: High trust (7-10) Mean:
Doctors 7% 25% 68% 7.1
Charities 9% 34% 58% 6.5
Banks 17% 38% 45% 5.9
Police 20% 37% 44% 5.7
Social Services 24% 43% 32% 5.2
Ordinary man/ woman on the street 20% 49% 31% 5.2
Local Council 31% 44% 25% 4.7
Private companies 23% 52% 25% 5.0
Newspapers 42% 40% 18% 4.0
Government Ministers 53% 32% 15% 3.4
MP’s 54% 33% 14% 3.4

There has been very little change in trust since 2022, with charities being the second most trusted organisation type

Trust in other organisations over time (mean scores).

  2018 2020 2021 2022 2023 2024
Doctors 7.4 7.3 7.7 7.2 7.1 7.1
Charities 5.5 6.2 6.4 6.2 6.3 6.5
Police 6.4 6.5 6.5 5.8 5.5 5.7
Banks 4.9 5.5 5.8 5.6 5.6 5.9
Ordinary Man/ Woman on the street 5.7 5.5 5.6 5.5 5.5 5.2
Social Services 5.3 5.3 5.7 5.3 5.3 5.2
Local Council 4.8 5.0 5.4 4.9 4.9 4.7
Private Companies 5.0 5.1 5.3 5.0 5.0 5.0
Newspapers 3.9 4.0 4.3 3.9 4.0 4.0
MP’s 3.6 3.8 4.0 3.4 3.3 3.4
Government Ministers 3.7 3.8 4.1 3.2 3.2 3.4

Focus groups found that low trust in charities stemmed from the media, contact and disagreement with the charities’ actions, while high trust was more implicit until wrongdoing was uncovered

Sources of low trust.

  • those with less knowledge of the Charity Commission tended to be less trusting of charities in general
  • national charities tend to be trusted less
  • disagreement with the charity’s actions can lead to distrust, such as the RSPCA’s use of euthanasia, the RNLI picking up migrants crossing the channel and Children in Need using celebrities to ‘take money’ from the public
  • lack of financial information on charities also contributes to lower levels of trust, as does low visibility and transparency
  • some spontaneously mentioned door knocking and chugging as causes of lower trust in specific charities
  • negative news stories can lead to lower trust, but in most cases, this is limited to the specific charity the story is about

Sources of High Trust

  • those with more knowledge of the Charity Commission tended to be more trusting of charities in general
  • local and smaller charities tend to be trusted more
  • reasons for higher trust include the charitable cause aligning with individual’s views and seeing the impact of the charities work
  • charities that are seen to achieve their purpose well tend to be trusted more, as are those that are seen to maximise how much money reaches the end cause
  • transparency is also key to trust, particularly in relation to the financial aspect of charities and who is running the charity
  • quote from respondent: “Trust is based on visibly seeing what the charity does”
  • quote from respondent: “In general, I have high trust with charities and am willing to give them the benefit of doubt until they do something wrong”

Drivers of Trust in Charities

34% claim to have seen something in the news about charities recently; most coverage has been about fundraising, information or advising what charities do.

Whether seen charities in the news recently.

Seen charities in the news recently 34%
Not seen charities in the news recently 58%
Don’t know 8%

What has been heard/seen about charities in the news (free text responses)

Requesting donations (fundraising) 11%
Ads/information about charities in general 10%
Charities helping vulnerable people/people in need 8%
Struggling charities (lack of funding/donations) 5%
Charities/UN actions in Gaza/Palestine/Israel 5%
Misuse of money/wasting money 4%
Ads/information about specific charities 4%
Cancer research 4%
Positive perception of charities 4%
Captain Tom foundation (including the recent issues) 3%
Immoral activities/bad press/scandals (not money related) 3%
Macmillan 3%
Cancer charities 2%
Red cross 2%

Most recent news coverage of charities has left the public feeling more positive about charities or indifferent - just 15% said it left them feeling more negative

How what they have seen has changed opinions of charities.

It made me a lot more positive about charities 23%
It made me a little more positive about charities 25%
It made no difference to my opinion of charities 34%
It made me a little more negative about charities 9%
It made me a lot more negative about charities 6%
Don’t know 2%
Summary: Positive 48%
Summary: Negative 15%

The Untrusting and Uninvolved segment were more likely to not think any differently about charities (43%) after seeing/reading/hearing something about them, whereas Trusting Helpers were more likely to feel positively (51%).

Disengaged Donors were more likely to feel negatively about charities after seeing something about them (23%).

In the focus groups we heard how negative stories in the media about charities stick in participants’ minds

  • when reflecting on charities in the media, negative stories tended to stick out in participants’ minds. Examples of negative stories were misuse of funds and impersonating charities
  • negative stories tended to lead to some distrust in charities, but, particularly for those with lower knowledge of the Charity Commission, distrust in the media muted this impact. There was agreement that the media often sensationalises stories and only picks the negative cases, saying “good news doesn’t sell stories”
  • quote from respondent: “I don’t know what happened with Oxfam, I think Oxfam should respond to the allegations and the findings should come out”
  • local news is also seen to be important and to be beneficial to smaller charities – something also uncovered with trustees. There was also discussion around the power which the local media has in regards to keeping charities ‘in line’ and that reporting on local charities acts as an incentive for other charities to want to feature
  • when asked whether participants think that negative news stories have a lasting impact on charities, there was some agreement that while individual charities may be approached with more caution than they would have previously, it is important “not to tarnish everybody with the same brush”
  • participants generally felt that if serious wrongdoing is uncovered for a charity, the charity was unable to redeem its reputation. Participants felt the only way back would be a complete rebrand and change of personnel, but most charities would not be able to recover

Generally, all information about how a charity is run contributes to trust in the charity, but knowing where the money is spent and that the charity achieves its purpose and makes a difference are the most important factors

Importance in whether they trust a charity or not.

Very Important Fairly Important
The people that run the charity have a range of different backgrounds and skills 31% 43%
The charity does work that central or local government can’t or won’t do 36% 40%
The charity pushes for change in society 37% 40%
It is clear who runs the charity and is responsible for making decisions 44% 37%
The charity listens to feedback from their supporters/people that use their services 48% 37%
It is easy to see how much the charity has raised, and how this money has been spent 54% 31%
The charity is a voice for the people or causes it supports 54% 32%
That it keeps its staff, volunteers and people who use its services safe from harm 54% 31%
That it operates to high ethical standards 58% 27%
The charity makes a real difference to the people and communities that it serves 59% 28%
The charity does a good job in achieving its purpose 59% 28%
Most of the money raised is spent directly on the causes the charity supports 66% 21%

There are some discrepancies in how important aspects are to whether the public trust a charity and whether charities they know about are displaying these attributes

Extent charities you know about are… and the importance of each aspect.

Extent charities are.. Importance Gap between importance and extent it is happening
Spending most of the money raised directly on the causes the charity supports 57% 87% 30%
Operating to high ethical standards 61% 86% 25%
Keeping their staff, volunteers and people who use their services safe from harm 61% 85% 24%
Acting as a voice for the people or causes it supports 66% 86% 20%
Making a real difference for the people and communities they serve 69% 87% 18%
Doing a good job in achieving the purpose of the charity 71% 87% 16%
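The gap column is simply importance minus the extent charities are seen to be delivering it, in percentage points. A minimal sketch of the calculation, using the values from the table above (aspect labels shortened):

```python
# Gap between how important each aspect is and the extent respondents
# say charities are displaying it (percentage points).
# (extent, importance) pairs are taken from the table above.
aspects = {
    "Spending most of the money raised directly on the causes": (57, 87),
    "Operating to high ethical standards": (61, 86),
    "Keeping staff, volunteers and service users safe from harm": (61, 85),
    "Acting as a voice for the people or causes it supports": (66, 86),
    "Making a real difference for people and communities": (69, 87),
    "Doing a good job in achieving the purpose of the charity": (71, 87),
}

gaps = {name: importance - extent
        for name, (extent, importance) in aspects.items()}

# Print largest gaps first, matching the ordering of the table.
for name, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{gap:>3} pp  {name}")
```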

In the focus groups we found that high knowledge of the Charity Commission correlated with wanting a variety of information, whereas low knowledge correlated with just wanting financial information

Having high knowledge of the charity commission links to wanting more information.

  • participants with high knowledge were interested in most aspects of the charity, such as why it was set up, the end cause, how they achieve the end cause, who runs the charity and how money is used

Low knowledge of the Charity Commission links to only wanting financial information

  • those who had very low knowledge of the Charity Commission only wanted information from a charity about what was raised and how much was spent
  • they wanted a clear breakdown of how much money reached the end cause and how much was paid to those running the charity
  • they also wanted to understand where every pound they donated ended up
  • quote from respondent: “I want to know how much they raise and how much goes to the cause”

Across knowledge levels, there was consensus that participants wanted stories and examples of charities’ work

  • most participants agreed they wanted “a story they could connect with”
  • those with high knowledge wanted to see why a charity was set up, who it had helped and statistics on what it had achieved
  • those with lower knowledge were also interested in examples, such as “x amount has been raised which has resulted in x amount of wells being built and x amount of lives changed”
  • participants wanted to see the tangible difference charities were making, and stories were a good way to connect with the audience

In the focus groups there was agreement that too much information could be off-putting

Level of information.

  • there was agreement that there is a ‘right’ amount of information which can be provided by charities
  • some felt that too much information provided by charities can make them less inclined to donate to those charities, though a certain amount is needed in order to trust
  • “quality over quantity” was discussed, as information needs to be simple, digestible and contain what they are looking for
  • “you don’t want to be overwhelmed by information, there can be too much”

Accessing information

  • for those with high knowledge of the Charity Commission, having all types of information was beneficial and led to higher trust
  • for those with low knowledge, just knowing the information was available was enough and they would only check it if they thought there might have been wrongdoing
  • quote from respondent: “Knowing that the information is out there is enough, I would only check for more when I think there might be wrongdoing”

Information from TV adverts

  • participants felt television adverts were the form of media most likely to compel them to get involved with charities
  • there was specific mention of adverts and charities relating to malnourished children – something which Comic Relief may have primed them to think about

Leaflets from charities

  • there was some discussion around leaflets from charities, which participants did not feel were a useful source of information – they tended to agree that leaflets are discarded as soon as they arrive

Being approached by charities

  • charities using techniques such as door knocking were trusted less, and participants were less inclined to donate to them
  • charities that door knock and/or stop people on the street were seen as less trustworthy, though no specific charities were mentioned
  • quote from respondent: “’Chuggers’ ambush you in the high street…they position themselves near a cashpoint”

The majority didn’t have any suggestions for what more the Charity Commission could do to increase trust, but of those that did, transparency around finances and limits to salaries were the main suggestions

What more the charity commission could do to increase trust.

More transparency on finances/proof of where money is spent incl. admin, marketing, salaries 6%
Control salaries given to charity directors/CEO/senior executives incl. capping of salaries 3%
More advertisement/promotion incl. explain what they do 3%
Generally be more open/honest/transparent 3%
Take tougher action/be proactive in dealing with illegal/rogue charities 2%
Auditing/vetting/investigating charities 2%
Communicate more/provide more information 2%
Certify charities of good practice 1%
No, none, nothing 50%
Don’t know 26%

6% say that transparency around finances would lead to higher trust in charities, and think this is something the Charity Commission could facilitate. This is more likely to be suggested by those that are more sceptical about charities, for example they have medium trust in charities (8%) and recent news stories about charities have made them feel more negative about charities (12%).

Those that can’t think of anything that the Charity Commission can do either already have high trust in charities (53%) or are part of the No Interest segment (59%).

Participants from focus groups felt the Charity Commission could do more to increase awareness and let the public know what role they could play

1) increase awareness.

  • among those with higher knowledge, awareness of the role of the Charity Commission links to increased trust in charities as they knew they were regulated
  • however, there was an understanding that the Charity Commission does not have the resources to forensically investigate all charities, especially among the low knowledge group
  • participants wanted to know how the Charity Commission discover wrongdoing and what actions it takes
  • they also wanted to understand if it was possible for all wrongdoing to be found by the Charity Commission, as many believed several charities could be getting away with wrongdoing or mismanagement of funds
  • participants wanted more understanding of the role of the Charity Commission and how it can regulate all charities in England and Wales, to reassure them that charities cannot get away with wrongdoing
  • quote from respondent: “I want to know how the Charity Commission decide who to investigate, is it just those that are high profile and get a lot of media attention?”

2) The Public’s Role

  • participants also felt the Charity Commission should do more to let the public know what role they can play in monitoring charities
  • the general consensus was that the Charity Commission could not possibly be across all wrongdoing, so participants were unclear how wrongdoing, once uncovered, came to the Commission’s attention (many thought it only investigated media stories)
  • participants wanted the public to be able to get in touch and report any wrongdoing they believed was happening, so it was not just the Charity Commission’s investigations they were relying on to stop misconduct
  • this raises the possibility of a comms piece around how the public can report any wrongdoing they suspect, to reassure the public it is not just the Commission that has a say on which charities need investigating
  • quote from respondent: “We only hear about the high-profile cases, they can’t be across all charities… I don’t know of any way I can contact them to let them know about charities”

Contact with charities also correlates with trust - post-Covid, fewer people are donating to charity or using charity shops, while demand for charities’ services has risen

Charitable giving over time.

Donated money or goods, or raised funds for a charity Used a charity shop Volunteered for a charity Taken part in a charity campaign Worked for a charity
2020 62% 58% 17% 11% 7%
2021 54% 44% 14% 17% 7%
2022 54% 49% 12% 17% 5%
2023 52% 52% 15% 16% 7%
2024 47% 47% 16% 14% 9%

Receiving from charities over time

Attended a charity-run community facility (for example club or community centre) Used other charity services (for example advice, animal welfare, outdoor space) Received food, financial, medical or similar help
2020 9% 7% 3%
2021 5% 6% 4%
2022 6% 6% 4%
2023 7% 6% 5%
2024 9% 8% 8%

Heard of the Charity Commission

Around half of the public have heard of the charity commission, with a slow decline in awareness since 2021.

Aware Unaware
2024 47% 48%
2023 48% 47%
2022 50% 44%
2021 54% 40%
2020 53% 42%

Those that are older and live in less deprived areas tend to be more likely to have heard about the Charity Commission:

  • live in the least deprived areas (58%)
  • have a degree or higher (58%)
  • social grade AB (61%)
  • had contact with a charity in the past year (51%)

Surprisingly, those belonging to the Untrusting and Uninvolved and Disengaged Donors segments tend to have higher awareness than average, despite having low engagement/involvement.

Trusting Helpers are also more likely to be aware of the Charity Commission, but this group have high trust and high involvement with charities.

1 in 5 that have heard of the Charity Commission recall seeing them in the media recently

What they have seen recently about the charity commission.

Recently heard about them (unspecific information) 12%
It is good/ important (general positive) 11%
They do charity investigations/checks 9%
Charity regulation/Monitoring 8%
Help people in need 5%
Captain Tom charity investigation 5%
They support charities 4%
Fundraising activities/donations 3%
Register charities 2%
Report publication 1%
Don’t know 19%
Other 10%

Qualitative participants’ awareness of the Charity Commission tended to come from media cases of charities being investigated for wrongdoing, or from being involved in charities and therefore knowing the importance and role of the Commission.

Where the public have seen media coverage, it tends to make them feel more positive about the Charity Commission - just 7% said it left them feeling more negative

How what they have seen has changed opinions of the charity commission.

It made me a lot more positive about the Charity Commission 30%
It made me a little more positive about the Charity Commission 36%
It made no difference to my opinion of the Charity Commission 24%
It made me a little more negative about the Charity Commission 4%
It made me a lot more negative about the Charity Commission 3%
Don’t know 2%
Summary: Positive 67%
Summary: Negative 7%

Not many differences were seen among segments, apart from Untrusting and Uninvolved, who were less likely to feel positively about the Charity Commission after reading/seeing/hearing something (46%), but no more likely to feel negatively (9%).

Claimed depth of knowledge of the Charity Commission

Despite awareness of the Charity Commission declining, claimed depth of knowledge has remained steady. Percentage that claim to know the Charity Commission well:

Among those that have heard of the Charity Commission Among total population Heard of Charity Commission
2020 36% 19% 53%
2021 35% 19% 54%
2022 35% 18% 50%
2023 34% 17% 48%
2024 40% 19% 47%
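The total-population figure in each row is (roughly) the product of the share who have heard of the Commission and the share of those who claim to know it well. A quick sketch checking that relationship against the rounded published values, allowing a ±1 point discrepancy from rounding:

```python
# Rows: year -> (know well among those who have heard %, know well among
# total population %, heard of the Commission %), from the table above.
rows = {
    2020: (36, 19, 53),
    2021: (35, 19, 54),
    2022: (35, 18, 50),
    2023: (34, 17, 48),
    2024: (40, 19, 47),
}

for year, (among_heard, among_total, heard) in rows.items():
    # Implied % of the total population who claim to know the Commission well.
    implied_total = among_heard * heard / 100
    # Published figures are rounded, so small discrepancies are expected.
    print(year, round(implied_total, 1), "vs published", among_total)
```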

Those who claim to know the Charity Commission well are more likely to be:

  • 16 – 34 (61%)
  • live in London (57%)
  • live in the most deprived areas (47%)
  • social grade AB (50%)
  • high trust in charities (45%)
  • have had contact with a charity in the past year (43%)
  • No Interest segment (52%)

Those that are 16 – 34 and in the No Interest segment are more likely to claim they know the Charity Commission well, but when asked what the role of the Charity Commission is, they were more likely to only select incorrect answers.

The majority of those who claim to know the Charity Commission well have a good grasp of the role, although some think it fines charities or campaigns on their behalf

The role of the charity commission (among those with claimed depth of knowledge).

Regulate charities across England and Wales 66%
Look into complaints & hold charities to account when things go wrong 60%
Maintain a register of all charities across England & Wales 59%
Provide guidance to the people that run charities about how they should run their charities 58%
Register new organisations that meet all the conditions for charitable status 55%
Give charities advice or permission if they need it 49%
Make charities pay a fine if they break the rules 33%
Campaign or lobby on behalf of charities 22%
Give money to charities 14%
None of the above 1%
Summary: Incorrect and Correct 49%
Summary: Correct (Only) 45%
Summary: Incorrect (Only) 6%

Those aged 16 – 34 (11%) and the No Interest segment (17%) were the most likely to only select incorrect answers.

Most of the public believe the balance of the Commission’s focus should be more on identifying and dealing with wrongdoing

The charity commission’s focus.

Should focus on Is focussing on
Identifying and dealing with wrongdoing by charities 57% 44%
Giving charities advice and guidance to ensure they follow the law 50% 46%
Keeping a public register of charities up to date 44% 49%
Enabling charities to be more effective 36% 27%
Informing the public about charities 25% 23%
Don’t know 1% 5%

In 2023, the majority (61%) said the balance of the Charity Commission’s work ought to lie equally on identifying and dealing with wrongdoing and on supporting charities to do the right thing. The question wording has changed since 2023, so results cannot be directly compared.

The public believe charity law has the largest influence on the Commission, followed by the charity sector

Extent the charity commission is influenced by each group among those that know the charity commission well.

Summary: Isn’t influenced Middle Summary: Is influenced Don’t know Summary: Is influenced (2023)
Charity law 7% 18% 72% 2% 78%
The charity sector 11% 19% 66% 4% 61%
The public 17% 26% 54% 4% 42%
The media 22% 24% 49% 4% 41%
Politicians 22% 26% 48% 4% 40%

Half are more likely to support charities after learning about the role of the Charity Commission, but a similar proportion are not impacted at all

Likelihood to support charities after learning about the charity commission.

A lot more likely Somewhat more likely Neither Somewhat less likely A lot less likely Don’t know
13% 36% 47% 1% 1% 1%

Half of respondents say they are more likely to support charities after reading about the role of the Charity Commission and just under half (47%) say it makes no difference.

Those that are more likely to support charities after reading about the role of the Charity Commission tend to be educated to degree level or above (58%), in socioeconomic grade AB (53%) and have high trust in charities (60%).

The opposite is seen for those that are not impacted by reading about the role of the Charity Commission, for example they are educated below degree level (49%), social grade DE (49%) and have low trust in charities (64%).

The Untrusting and Uninvolved segment (65%) and the No Interest segment (55%) were more likely to not be influenced by knowledge of the Charity Commission (selected neither), suggesting information does not increase trust for these segments, while the Trusting Helpers were more likely to support charities after learning about the role of the Charity Commission (56%).

In focus groups, participants’ views were also mixed depending on how much they knew about the Charity Commission

Low knowledge of the charity commission.

  • for those with less knowledge of the Charity Commission, learning about the role of the Charity Commission did not increase trust in charities
  • participants felt that charities would be registered and set up properly with the oversight of the Charity Commission, but did not think the Charity Commission could oversee all charities in England and Wales, therefore misconduct and mismanagement would still happen across charities
  • participants needed reassurance that the Charity Commission could monitor all charities, and to understand how the Charity Commission could do this
  • quote from respondent: “They can’t easily be across all charities… they can’t have enough staff to monitor all these charities”
  • quote from respondent: “I trust that they only register charities that meet the criteria, but it is the after bit that has the question marks. Once the charity is set up I’m not sure the confidence is there that the Charity Commission can be on top of monitoring them all, they are looking into it once the damage is done”

High Knowledge of the Charity Commission

  • those with more knowledge of the Charity Commission tended to have more trust in charities from knowing about the role of the Charity Commission
  • having a body to regulate charities reassured them that the charity sector, as a whole, would be operating to a high standard
  • existence of the register also reassured them that the information they needed was out there and accessible should they need to use it

Understanding of what a trustee does is relatively high and a sizeable minority would consider being a trustee

What is a charity trustee someone who is…:.

checks that the charity is being run as it should be 43%
is responsible for directing and governing a charity 42%
is responsible for making decisions about the running of the charity 41%
checks the charity is helping the people or causes it is supposed to 35%
provides advice to the charity 24%
does the day-to-day running of the charity 20%
receives support or money from the charity 13%
Don’t know 13%

Would you consider becoming a trustee?

Yes 13%
Maybe 35%
No 44%
I already am 1%
Don’t know 7%

Unsurprisingly, those most engaged in the charity sector are more likely to consider becoming a trustee:

  • 16 - 34 (20%)
  • degree educated (22%)
  • high trust in charities (16%)
  • contact with a charity in the past year (16%)
  • Trusting Helpers (14%)
  • heard of the Charity Commission (19%)

Register of Charities

6 in 10 said they would use the charity register to check a charity is real, although checking a charity’s website is also a common way of verifying a charity. How would you check a charity was real.

Summary: Charity register 55%
Look at the charity’s website 44%
Look for a charity number 33%
Contact the Charity Commission 26%
Look for factual information on third party websites 22%
Contact the charity directly 21%
Search for information about the good cause through television, radio, newspapers and magazines 18%
Search for information about the good cause shared on social media 18%
Ask family or friends 17%
Look for a badge 10%

Close to 4 in 10 have at least some knowledge of the register of charities, but just 13% have ever accessed it

Knowledge of the register of charities.

A lot A little Heard of, but don’t know anything about it Not heard of Don’t know Summary: A lot/a little Summary: Heard of but don’t know/not heard of
6% 31% 34% 28% 1% 37% 28%

Those with high trust in charities are more likely to have knowledge of the register of charities (42%).

Accessed the register of charities

Yes 13%
No 84%
Don’t Know 3%

The majority of participants in the focus groups would not look at the charity register unless they suspected wrongdoing

The majority would not tend to look at the charity register.

  • there tends to be implicit trust that a charity is real so the public don’t feel they have to check if a charity is set up correctly, as by virtue of being a charity, it already should be
  • having a charity number on display is usually enough reassurance that they don’t need to verify if the charity number is real or not
  • understanding the end cause of the charity is more of a priority over checking if the charity is legitimate or not

Most would only use the register if they suspected wrongdoing

  • most would only consider using the register if they suspected there was wrongdoing as they feel they have no reason to check it otherwise
  • those with high knowledge of the Charity Commission are most likely to use the register as they know it exists and some see it as authoritative
  • discussing the register left a lot of participants intrigued and they were interested to see what was on it, suggesting more awareness of the register may increase interest

Participants wanted to be able to see financial information

  • they wanted to be able to see information about finances in a simple and digestible format; however, there was criticism that the register didn’t highlight misuse of money, which is what participants wanted to be able to check for themselves

Knowing that a charity is registered is now more likely to make the public think positively about the charity across a number of factors, in particular that it’s well run

Knowing a charity is registered also makes the public more confident that it is financially responsible. % that feel slightly/a lot more confident about each aspect after learning it is registered as a charity.

2020 2021 2022 2023 2024
That it operates to high ethical standards 79% 75% 74% 76% 81%
That a high proportion of the money it raises goes to those it is trying to help 79% 78% 78% 79% 81%
That it’s well-run 77% 71% 70% 74% 81%
That it’s easy to see how much the charity has raised, and how this money has been spent         80%
That it’s making an impact 78% 78% 76% 77% 79%
That it treats its employees well 69% 67% 65% 69% 75%
That it’s doing work central and local government can’t or won’t do 67% 66% 66% 68% 73%

Segmentation

Public trust in charities segmentation.

In 2024, a new segmentation was created to help the Charity Commission understand drivers of trust in Charities and how these differ by different groups in the population.

The segmentation divides respondents into groups based on their answers to the following questions:

  • C2.2. The people that run the charity have a range of different backgrounds and skills
  • C2.3. It is easy to see how much the charity has raised, and how this money has been spent
  • C2.6. The charity pushes for change in society
  • C2.8. The charity does work that central or local government can’t or won’t do
  • C2.10. It is clear who runs the charity and is responsible for making decisions
  • A1. Trust and confidence in charities overall
  • B7. Know the Charity Commission and what it does
  • E1. Level of contact with a charity in the last year

Not all statements at C2 were used as they did not provide enough differentiation between segments. (There was overall agreement on which were the most important.)
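The report does not specify which algorithm produced the segments. As an illustration only, the kind of clustering involved can be sketched with a minimal k-means (Lloyd's algorithm) on hypothetical respondent vectors, where each dimension stands in for a numeric coding of one of the questions listed above:

```python
# Illustrative only: the report does not state the segmentation method.
# Minimal k-means (Lloyd's algorithm) on toy respondent vectors; in this
# sketch each dimension is a hypothetical numeric coding of one survey
# question (e.g. trust, transparency importance, contact level).

def kmeans(points, centroids, iterations=20):
    """Cluster points starting from the given centroids (deterministic)."""
    labels = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [
            min(range(len(centroids)),
                key=lambda c: sum((p - q) ** 2
                                  for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(len(centroids)):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return labels

# Toy data: two clearly different response profiles (high- vs low-trust).
respondents = [(9, 8, 9), (8, 9, 8), (1, 2, 1), (2, 1, 2)]
labels = kmeans(respondents, centroids=[[9, 9, 9], [1, 1, 1]])
print(labels)  # → [0, 0, 1, 1]
```

In practice a library implementation (for example scikit-learn's `KMeans`, or a latent-class model, which is also common in market research segmentation) would be used rather than hand-rolled code.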

A breakdown of the demographics for each segment, and a further exploration of their differences can be found in the appendix.

% of population

Untrusting and Uninvolved Disengaged Donors No Interest Trusting Helpers Trusting Receivers
8% 8% 14% 62% 8%

Segment Profiles

Segment 1 - untrusting and uninvolved (8% of population).

This segment are less trusting of charities overall and tend not to have any involvement with them. Transparency is important to them.

  • 43% have high trust in charities
  • 22% know the Charity Commission
  • 74% Have had contact with charities
  • 51% Have donated
  • 10% Have volunteered
Drivers of Trust %
Diversity among leadership 52%
Transparency around finances 99%
Pushes for change in society 1%
Fills the gaps of the government 53%
Transparency in responsibilities and decision making 92%

Segment 2 - Disengaged Donors (8% of population)

Overall trust in charities is relatively low, as is contact, but they are more likely to have donated money or goods to a charity.

  • 55% have high trust in charities
  • 19% know the Charity Commission
  • 79% Have had contact with charities
  • 54% Have donated
  • 13% Have Volunteered
Drivers of trust %
Diversity among leadership 0%
Transparency around finances 95%
Pushes for change in society 95%
Fills the gaps of the government 76%
Transparency in responsibilities and decision making 99%

Segment 3 - No Interest (14% of population)

This segment have low trust and contact with charities, and information about the charity in any form does not tend to drive trust.

  • 39% have high trust in charities
  • 17% know the Charity Commission
  • 65% Have had contact with charities
  • 23% Have donated
  • 12% Have Volunteered
Drivers of trust %
Diversity among leadership 26%
Transparency around finances 9%
Pushes for change in society 29%
Fills the gaps of the government 25%
Transparency in responsibilities and decision making 27%

Segment 4 - Trusting Helpers (62% of population)

This segment has high trust in and involvement with charities (giving both time and money). The more information the better.

  • 64% have high trust in charities
  • 20% know the Charity Commission
  • 82% Have had contact with charities
  • 16% Have campaigned
  • 17% Have Volunteered
Drivers of Trust %
Diversity among leadership 99%
Transparency around finances 98%
Pushes for change in society 97%
Fills the gaps of the government 92%
Transparency in responsibilities and decision making 99%

Segment 5 - Trusting Receivers (8% of population)

This segment mainly have contact with charities to receive goods, money or services. They tend to trust charities and want transparency.

  • 61% have high trust in charities
  • 16% know the Charity Commission
  • 13% Attended a facility
  • 11% Received help
Drivers of Trust %
Diversity among leadership 62%
Transparency around finances 91%
Pushes for change in society 75%
Fills the gaps of the government 72%
Transparency in responsibilities and decision making 0%

Possible Actions by Segment

Untrusting and Uninvolved: Transparency around finances and who is responsible for decisions is likely to build trust. Engaging with this group will be difficult as they don’t like to be too involved with the sector. Make the information as widely available and digestible as possible to reach this segment.
Disengaged Donors: They want transparency around how the charity operates and where the money goes, and to know the charity is pushing for change. Getting this segment to engage with this information will be difficult as they have relatively low trust. Consider if there is a way to offer the information they need at the point they are making a donation, or making a decision about a donation.
No Interest: There are no easy wins for this group – they are not trusting of or engaged with the charity sector, and getting them to engage will be difficult. Don’t take action with this group as there is likely to be little gain from lots of resources.
Trusting Helpers: They want access to all types of information; make it easy for them to access and ensure it is clearly signposted. Make all the information available in an easily digestible way, where they would expect it to be. Keep it simple so they can take in a range of information.
Trusting Receivers: They want transparency from the charity around finances and to know that the charity helps the end cause. Provide transparency around finances in a simple way. Provide case studies of how money has been spent and how it has helped the end cause, and clearly signpost financial information.

Background and Methodology

Background: The Charity Commission has been collecting data on public trust in charities since 2005. BMG Research were commissioned by the Charity Commission to run 3 waves of their public tracker, starting with the 2024 wave. 2024 marks the last year of the current Charity Commission strategy so impact measures are tracked back where possible.

Research objectives: To understand public trust in charities, what affects public trust in charities, and awareness and knowledge of the Charity Commission.

Methodology: Research was split into two phases: a quantitative and a qualitative phase. In the quantitative phase, an online panel was used to achieve a nationally representative split of participants from England and Wales. A boost was conducted to achieve a higher number of Welsh completes to allow for analysis by nation. Weighting has been applied to give a representative view of England and Wales. The qualitative phase was then conducted to explore themes from the data. 3 focus groups were conducted with between 5 and 7 participants in each group. Each group had a mix of genders, ages, ethnicities and regions. The focus groups were split into those with high knowledge of the Charity Commission, medium knowledge and low knowledge.

Fieldwork dates: Quantitative fieldwork took place between the 12th of January and 2nd of February 2024 and the focus groups took place between the 17th of April and 2nd of May.

Number of completes: 4599 completes were achieved.

Weighting: The data was weighted by age by gender, region, education and ethnicity. Checks were also carried out to ensure the data collected was broadly representative by IMD and urban/rural. Targets were set to be nationally representative.
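Weighting a sample to several population targets at once is commonly done by raking (iterative proportional fitting). The report does not say which method was used, and the dimensions and targets below are hypothetical, so this is purely an illustrative sketch:

```python
# Illustrative raking (iterative proportional fitting) on two dimensions.
# The real scheme weighted on age-by-gender, region, education and
# ethnicity; the categories and targets below are hypothetical.

def rake(sample, targets, passes=50):
    """sample: list of dicts, one per respondent, with a category per dimension.
    targets: {dimension: {category: target population share}}.
    Returns one weight per respondent, summing to len(sample)."""
    n = len(sample)
    weights = [1.0] * n
    for _ in range(passes):
        for dim, shares in targets.items():
            for cat, share in shares.items():
                idx = [i for i, r in enumerate(sample) if r[dim] == cat]
                current = sum(weights[i] for i in idx)
                if current > 0:
                    # Scale this category so its weighted share hits the target.
                    factor = share * n / current
                    for i in idx:
                        weights[i] *= factor
    return weights

# Hypothetical unweighted sample, skewed female and young.
sample = [
    {"gender": "F", "age": "16-34"}, {"gender": "F", "age": "16-34"},
    {"gender": "F", "age": "35+"},   {"gender": "M", "age": "35+"},
]
targets = {"gender": {"F": 0.5, "M": 0.5},
           "age": {"16-34": 0.3, "35+": 0.7}}
weights = rake(sample, targets)
```

After raking, the weighted marginals match the targets on every dimension at once, which a single one-dimension adjustment cannot guarantee.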

Comparability Over Time

It is important to note that the survey’s contents and administration have undergone a number of changes compared to previous years. These changes were necessary to improve the relevance and robustness of the data collected, and to facilitate a new research partner.

Throughout this report comparisons are made to previous waves where there have been no substantial changes to the question wording or routing. However, these comparisons should be treated as indicative only, as there is likely to be some impact on the data from the changes detailed below. As such, statistical significance testing across waves has not been carried out.

The changes include:

A number of new questions: These have been added to reflect the current needs of the Charity Commission. As new questions have been added at various points throughout the survey there is a risk that responses to existing questions could have been impacted by the presence of the new questions. Some questions from previous waves have also been removed from the survey.

Some small changes to existing questions: These changes have been made to improve the quality of the data collected, and include changes such as adding in ‘don’t know’ options to allow respondents to answer more accurately. Direct comparisons to previous years’ data for these questions have not been made.

A change in research partner: BMG were commissioned as a new research partner in 2023. Due to this change in research partner, there has also been a change in the panel providers that have been used. Although quotas have been used to ensure the sample is as representative of the population as possible, and a mix of panels have been used, each panel introduces its own inherent bias.

Likely change in weighting criteria: Although the survey results have been weighted to population statistics in previous years it was not clear what weighting criteria were used. Therefore, it was not possible to replicate the weighting scheme used previously.

Untrusting and Uninvolved segment

Age: Total Untrusting and Uninvolved
16 - 34 30% 22%
35 - 64 48% 42%
65+ 23% 36%
Gender: Untrusting and Uninvolved
Male 62%
Female 38%
Contact with Charities: Total Untrusting and Uninvolved
Had contact 79% 74%
Donated/raised funds 47% 51%
Used a charity shop 47% 47%
Volunteered for a charity 16% 10%
Worked for a charity 9% 4%
Education: Total Untrusting and Uninvolved
Degree or above 27% 28%
Below degree 67% 63%
No qualifications 6% 9%
Ethnicity: Total Untrusting and Uninvolved
White 85% 88%
Mixed 2% 1%
Asian 7% 8%
Black 3% 1%
Other 2% 2%
Media Consumption: Total Untrusting and Uninvolved
TV/Online 69% 71%
Radio 23% 24%
Tabloids 23% 27%
Broadsheet 34% 42%
None of the above 6% 4%
Trustee Consideration: Total Untrusting and Uninvolved
Yes 13% 9%
Maybe 35% 29%
No 44% 57%
I already am 1% 1%
Don’t know 7% 5%

Disengaged Donors segment

Age: Total / Disengaged Donors
16-34: 30% / 29%
35-64: 48% / 47%
65+: 23% / 24%
Gender: Disengaged Donors
Male: 50%
Female: 49%
Contact with charities: Total / Disengaged Donors
Had contact: 79% / 79%
Donated/raised funds: 47% / 54%
Used a charity shop: 47% / 47%
Volunteered for a charity: 16% / 13%
Taken part in a charity campaign: 14% / 13%
Education: Total / Disengaged Donors
Degree or above: 27% / 31%
Below degree: 67% / 67%
No qualifications: 6% / 2%
Ethnicity: Total / Disengaged Donors
White: 85% / 86%
Mixed: 2% / 1%
Asian: 7% / 7%
Black: 3% / 1%
Other: 2% / 3%
Media consumption: Total / Disengaged Donors
TV/Online: 69% / 73%
Radio: 23% / 22%
Tabloids: 23% / 22%
Broadsheet: 34% / 34%
None of the above: 6% / 5%
Trustee consideration: Total / Disengaged Donors
Yes: 13% / 13%
Maybe: 35% / 35%
No: 44% / 45%
I already am: 1% / 2%
Don’t know: 7% / 5%

No Interest segment

Age: Total / No Interest
16-34: 30% / 42%
35-64: 48% / 47%
65+: 23% / 10%
Gender: No Interest
Male: 59%
Female: 40%
Contact with charities: Total / No Interest
Had contact: 79% / 65%
Donated/raised funds: 47% / 23%
Used a charity shop: 47% / 26%
Volunteered for a charity: 16% / 12%
Attended an academy/faith school or university: 6% / 10%
Education: Total / No Interest
Degree or above: 27% / 23%
Below degree: 67% / 66%
No qualifications: 6% / 9%
Ethnicity: Total / No Interest
White: 85% / 82%
Mixed: 2% / 2%
Asian: 7% / 10%
Black: 3% / 3%
Other: 2% / 1%
Media consumption: Total / No Interest
TV/Online: 69% / 50%
Radio: 23% / 17%
Tabloids: 23% / 20%
Broadsheet: 34% / 26%
None of the above: 6% / 13%
Trustee consideration: Total / No Interest
Yes: 13% / 11%
Maybe: 35% / 34%
No: 44% / 43%
I already am: 1% / 1%
Don’t know: 7% / 11%

Trusting Helpers segment

Age: Total / Trusting Helpers
16-34: 30% / 26%
35-64: 48% / 49%
65+: 23% / 25%
Gender: Trusting Helpers
Male: 43%
Female: 56%
Contact with charities: Total / Trusting Helpers
Had contact: 79% / 82%
Donated/raised funds: 47% / 52%
Used a charity shop: 47% / 52%
Volunteered for a charity: 16% / 17%
Taken part in a charity campaign: 14% / 16%
Education: Total / Trusting Helpers
Degree or above: 27% / 27%
Below degree: 67% / 68%
No qualifications: 6% / 5%
Ethnicity: Total / Trusting Helpers
White: 85% / 86%
Mixed: 2% / 2%
Asian: 7% / 7%
Black: 3% / 4%
Other: 2% / 2%
Media consumption: Total / Trusting Helpers
TV/Online: 69% / 74%
Radio: 23% / 25%
Tabloids: 23% / 23%
Broadsheet: 34% / 35%
None of the above: 6% / 5%
Trustee consideration: Total / Trusting Helpers
Yes: 13% / 14%
Maybe: 35% / 35%
No: 44% / 43%
I already am: 1% / 1%
Don’t know: 7% / 8%

Trusting Receivers segment

Age: Total / Trusting Receivers
16-34: 30% / 48%
35-64: 48% / 41%
65+: 23% / 10%
Gender: Trusting Receivers
Male: 50%
Female: 50%
Contact with charities: Total / Trusting Receivers
Worked for a charity: 9% / 14%
Attended a charity-run facility: 9% / 13%
Received food/financial etc. help: 8% / 11%
Used other charity services: 8% / 11%
Attended an academy/faith school or university: 6% / 12%
Education: Total / Trusting Receivers
Degree or above: 27% / 27%
Below degree: 67% / 68%
No qualifications: 6% / 5%
Ethnicity: Total / Trusting Receivers
White: 85% / 83%
Mixed: 2% / 3%
Asian: 7% / 7%
Black: 3% / 5%
Other: 2% / 2%
Media consumption: Total / Trusting Receivers
TV/Online: 69% / 67%
Radio: 23% / 23%
Tabloids: 23% / 26%
Broadsheet: 34% / 31%
None of the above: 6% / 7%
Trustee consideration: Total / Trusting Receivers
Yes: 13% / 15%
Maybe: 35% / 42%
No: 44% / 37%
I already am: 1% / 1%
Don’t know: 7% / 6%
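The segment tables above are easiest to read as deviations from the 'Total' column: a positive gap means the group is over-represented in that segment. A minimal sketch of that reading, using figures copied from the Trusting Receivers age table above (the helper function itself is hypothetical, not part of the report):

```python
def over_representation(total, segment):
    """Percentage-point gap per category: positive values mean the group
    is over-represented in the segment relative to the full sample."""
    return {k: segment[k] - total[k] for k in total}

# Age profile from the Trusting Receivers table above (percentages)
total_age = {"16-34": 30, "35-64": 48, "65+": 23}
receivers_age = {"16-34": 48, "35-64": 41, "65+": 10}

gaps = over_representation(total_age, receivers_age)
# {'16-34': 18, '35-64': -7, '65+': -13}: the segment skews young
```

The same comparison applies to any of the education, ethnicity, media consumption, or trustee consideration tables above.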

