
What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George . Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process in which your peers evaluate something you’ve written against a set of criteria or benchmarks from an instructor, then offer constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Frequently asked questions about peer review

What is the purpose of peer review?

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Types of peer review

Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review, in which the identities of the author, reviewers, and editors are all anonymized, does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

Collaborative review

In collaborative review, authors and reviewers interact with each other directly throughout the process, although the reviewer’s identity may still be withheld from the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can reduce the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Open review

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

The peer review process

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor does one of two things: reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

Providing feedback to your peers

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start with any major issues and then move on to the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Peer review example

Below is a brief research example of the kind a peer reviewer might be asked to assess.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 with Group 2, and Group 1 with Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
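To make the Group 1 versus Group 3 comparison concrete, here is a minimal sketch of how a two-sample t statistic can be computed from summary statistics alone. It assumes equal group sizes of 100 (the 300 recruited teens split evenly across three groups, which the example does not state explicitly) and uses Welch’s formula, which does not assume equal variances; the `welch_t` helper is purely illustrative, not part of the study.

```python
import math

def welch_t(m1, sd1, n1, m2, sd2, n2):
    """Welch's t statistic for two independent samples,
    computed from each group's mean, SD, and size."""
    return (m1 - m2) / math.sqrt(sd1**2 / n1 + sd2**2 / n2)

# Group 1 (no phone use) vs. Group 3 (3 hours of phone use),
# assuming 100 teens per group:
t = welch_t(7.8, 0.6, 100, 6.1, 1.5, 100)
print(round(t, 1))  # → 10.5
```

A t statistic this large is consistent with the significant difference (p < .01) the example reports for Group 1 versus Group 3.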

Advantages of peer review

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

Criticisms of peer review

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The double-blind system, which offers stronger protection against bias, is not yet very common, and this can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.


Frequently asked questions about peer review

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • Next, the editor either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.


A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.


George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/methodology/peer-review/



Understanding Peer Review in Science


Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous reviews reduce bias but limit collaboration, while open reviews are more transparent but can increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise: Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity: Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality: The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness: Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission: Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment: The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review: If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback: Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission: Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision: The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication: If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Advantages:

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Disadvantages:

  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Personal biases of reviewers can impact their evaluation of the manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and gain publication before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.


What Is Peer Review and Why Is It Important?

It’s one of the major cornerstones of the academic process and critical to maintaining rigorous quality standards for research papers. Whichever side of the peer review process you’re on, we want to help you understand the steps involved.

This post is part of a series that provides practical information and resources for authors and editors.

Peer review – the evaluation of academic research by other experts in the same field – has been used by the scientific community as a method of ensuring novelty and quality of research for more than 300 years. It is a testament to the power of peer review that a scientific hypothesis or statement presented to the world is largely ignored by the scholarly community unless it is first published in a peer-reviewed journal.

It is also safe to say that peer review is a critical element of the scholarly publication process and one of the major cornerstones of the academic process. It acts as a filter, ensuring that research is properly verified before being published. And it arguably improves the quality of the research, as the rigorous review by like-minded experts helps to refine or emphasise key points and correct inadvertent errors.

Ideally, this process encourages authors to meet the accepted standards of their discipline and in turn reduces the dissemination of irrelevant findings, unwarranted claims, unacceptable interpretations, and personal views.

If you are a researcher, you will come across peer review many times in your career. But not every part of the process might be clear to you yet. So, let’s have a look together!

Types of Peer Review

Peer review comes in many different forms. With single-blind peer review, the names of the reviewers are hidden from the authors, while with double-blind peer review, both reviewers and authors remain anonymous. Then there is open peer review, a term which offers more than one interpretation nowadays.

Open peer review can simply mean that reviewer and author identities are revealed to each other. It can also mean that a journal makes the reviewers’ reports and author replies of published papers publicly available (anonymized or not). The “open” in open peer review can even be a call for participation, where fellow researchers are invited to proactively comment on a freely accessible pre-print article. The latter two options are not yet widely used, but the Open Science movement, which strives for more transparency in scientific publishing, has been giving them a strong push in recent years.

If you are unsure about what kind of peer review a specific journal conducts, check out its instructions for authors and/or their editorial policy on the journal’s home page.

Why Should I Even Review?

To answer that question, many reviewers would probably reply that it simply is their “academic duty” – a natural part of academia, an important mechanism to monitor the quality of published research in their field. This is of course why the peer-review system was developed in the first place – by academia rather than the publishers – but there are also benefits.


Besides a general interest in the field, reviewing also helps researchers keep up-to-date with the latest developments. They get to know about new research before everyone else does. It might help with their own research and/or stimulate new ideas. On top of that, reviewing builds relationships with prestigious journals and journal editors.

Clearly, reviewing is also crucial for the development of a scientific career, especially in the early stages. Relatively new services like Publons and ORCID Reviewer Recognition can support reviewers in getting credit for their efforts and making their contributions more visible to the wider community.

The Fundamentals of Reviewing

Have you received an invitation to review? Before agreeing to do so, there are three pertinent questions you should ask yourself:

  • Does the article you are being asked to review match your expertise?
  • Do you have time to review the paper?
  • Are there any potential conflicts of interest (e.g. of financial or personal nature)?

If you feel like you cannot handle the review for whatever reason, it is okay to decline. If you can think of a colleague who would be well suited for the topic, even better – suggest them to the journal’s editorial office.

But let’s assume that you have accepted the request. Here are some general things to keep in mind:

Please be aware that reviewer reports provide advice for editors to assist them in reaching a decision on a submitted paper. The final decision concerning a manuscript does not lie with you, but ultimately with the editor. It’s your expert guidance that is being sought.

Reviewing also needs to be conducted confidentially. The article you have been asked to review, including supplementary material, must never be disclosed to a third party. In the traditional single- or double-blind peer review process, your own anonymity will also be strictly preserved. Therefore, you should not communicate directly with the authors.

When writing a review, it is important to keep the journal’s guidelines in mind and to work along the building blocks of a manuscript (typically: abstract, introduction, methods, results, discussion, conclusion, references, tables, figures).

After initial receipt of the manuscript, you will be asked to supply your feedback within a specified period (usually 2-4 weeks). If at some point you notice that you are running out of time, get in touch with the editorial office as soon as you can and ask whether an extension is possible.

Some More Advice from a Journal Editor

  • Be critical and constructive. An editor will find it easier to overturn very critical, unconstructive comments than to overturn favourable comments.
  • Justify and specify all criticisms. Make specific references to the text of the paper (use line numbers!) or to published literature. Vague criticisms are unhelpful.
  • Don’t repeat information from the paper, for example, the title and authors’ names, as this information already appears elsewhere in the review form.
  • Check the aims and scope. These can be found on the journal’s home page and will help ensure that your comments are in accordance with journal policy.
  • Give a clear recommendation. Do not put “I will leave the decision to the editor” in your reply, unless you are genuinely unsure of your recommendation.
  • Number your comments. This makes it easy for authors to refer back to them.
  • Be careful not to identify yourself. Check, for example, the file name of your report if you submit it as a Word file.

Sticking to these rules will make the author’s life and that of the editors much easier!


David Sleeman

David Sleeman worked as a Senior Journals Manager in the field of Physical Sciences at De Gruyter.



What is peer review?

Reviewers play a pivotal role in scholarly publishing. The peer review system exists to validate academic work, to help improve the quality of published research, and to increase networking possibilities within research communities. Despite criticisms, peer review is still the only widely accepted method for research validation and has continued successfully, with relatively minor changes, for some 350 years.

Elsevier relies on the peer review process to uphold the quality and validity of individual articles and the journals that publish them.

Peer review has been a formal part of scientific communication since the first scientific journals appeared more than 300 years ago. The Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process, under the editorship of Henry Oldenburg (1618–1677).

Despite many criticisms about the integrity of peer review, the majority of the research community still believes peer review is the best form of scientific evaluation. This opinion was endorsed by the outcome of a survey Elsevier and Sense About Science conducted in 2009 and has since been further confirmed by other publisher and scholarly organization surveys. Furthermore, in a 2015 survey by the Publishing Research Consortium, 82% of researchers agreed that “without peer review there is no control in scientific communication.”

To learn more about peer review, visit Elsevier’s free e-learning platform Researcher Academy and see our resources below.


Types of peer review

Peer review comes in different flavours. Each model has its own advantages and disadvantages, and often one type will be preferred by a subject community. Before submitting or reviewing a paper, check which type the journal employs so you are aware of the respective rules. If you have questions about the peer review model employed by a journal that has invited you to review, consult the journal’s homepage or contact the editorial office directly.

Single anonymized review

In this type of review, the names of the reviewers are hidden from the author. This is the traditional method of reviewing and is the most common type by far. Points to consider regarding single anonymized review include:

Reviewer anonymity allows for impartial decisions , as the reviewers will not be influenced by potential criticism from the authors.

Authors may be concerned that reviewers in their field could delay publication, giving the reviewers a chance to publish first.

Reviewers may use their anonymity as justification for being unnecessarily critical or harsh when commenting on the authors’ work.

Double anonymized review

Both the reviewer and the author are anonymous in this model. Some advantages of this model are listed below.

Author anonymity limits reviewer bias based on, for example, the author’s gender, country of origin, academic status, or previous publication history.

Articles written by prestigious or renowned authors are considered based on the content of their papers, rather than their reputation.

But bear in mind that despite the above, reviewers can often identify the author through their writing style, subject matter, or self-citation – it is exceedingly difficult to guarantee total author anonymity. More information for authors can be found in our  double-anonymized peer review guidelines .

Triple anonymized review

With triple anonymized review, reviewers are anonymous to the author, and the author's identity is unknown to both the reviewers and the editor. Articles are anonymized at the submission stage and are handled in a way to minimize any potential bias towards the authors. However, it should be noted that: 

The complexities involved with anonymizing articles/authors to this level are considerable.

As with double anonymized review, there is still a possibility for the editor and/or reviewers to correctly identify the author(s) from their writing style, subject matter, citation patterns, or other methodologies.

Open review

Open peer review is an umbrella term for many different models aiming at greater transparency during and after the peer review process. The most common definition of open review is when both the reviewer and author are known to each other during the peer review process. Other types of open peer review consist of:

Publication of reviewers’ names on the article page 

Publication of peer review reports alongside the article, either signed or anonymous 

Publication of peer review reports (signed or anonymous) with authors’ and editors’ responses alongside the article 

Publication of the paper after pre-checks and opening a discussion forum to the community who can then comment (named or anonymous) on the article 

Many believe this is the best way to prevent malicious comments, stop plagiarism, prevent reviewers from following their own agenda, and encourage open, honest reviewing. Others see open review as a less honest process, in which politeness or fear of retribution may cause a reviewer to withhold or tone down criticism. For three years, five Elsevier journals experimented with the publication of peer review reports (signed or anonymous) as articles alongside the accepted paper on ScienceDirect (example).


More transparent peer review

Transparency is the key to trust in peer review, and as such there is an increasing call for more transparency around the peer review process. In an effort to promote this, many Elsevier journals publish the name of the handling editor of the published paper on ScienceDirect. Some journals also provide details about the number of reviewers who reviewed the article before acceptance. Furthermore, to provide updates and feedback to reviewers, most Elsevier journals inform reviewers about the editor’s decision and their peers’ recommendations.

Article transfer service: sharing reviewer comments

Elsevier authors may be invited to  transfer  their article submission from one journal to another for free if their initial submission was not successful. 

As a referee, your review report (including all comments to the author and editor) will be transferred to the destination journal, along with the manuscript. The main benefit is that reviewers are not asked to review the same manuscript several times for different journals. 

Tools and resources

Interesting reads

Chapter 2 of Academic and Professional Publishing (2012), by Irene Hames

"Is Peer Review in Crisis?" Perspectives in Publishing No 2, August 2004, by Adrian Mulligan

“The history of the peer-review process”, Trends in Biotechnology, 2002, by Ray Spier

Reviewers’ Update articles

Peer review using today’s technology

Lifting the lid on publishing peer review reports: an interview with Bahar Mehmani and Flaminio Squazzoni

How face-to-face peer review can benefit authors and journals alike

Innovation in peer review: introducing “volunpeers”

Results masked review: peer review without publication bias

Elsevier Researcher Academy modules

The certified peer reviewer course

Transparency in peer review


Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review . Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review , both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

In collaborative review , authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review , all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019) . On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime . In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference ( p > .05) between the number of hours for Group 1 ( M = 7.8, SD = 0.6) and Group 2 ( M = 7.0, SD = 0.8). The second t test showed a significant difference ( p < .01) between the average difference for Group 1 ( M = 7.8, SD = 0.6) and Group 3 ( M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the reviewer provides feedback, addressing any major or minor issues, and advises what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

George, T. (2022, September 02). What Is Peer Review? | Types & Examples. Scribbr. Retrieved 12 August 2024, from https://www.scribbr.co.uk/research-methods/peer-reviews/


Tegan George



Peer review

Psychological Services reviewer guidelines

Psychological Services guidelines for reviewers.

Conversation with Nick Bowman, PhD


Nick Bowman, PhD, associate editor for Technology, Mind, and Behavior sheds light on registered reports, outlining key features, misconceptions, and benefits of this unique article type.

Publishing in a scholarly journal: Part 3, Peer review


In this part of the series, we examine the role of peer reviewers.

How to become a journal editor

The psychology field is looking for fresh voices—why not add yours?

Join a reviewer mentorship program

Explore and join reviewer mentorship programs offered by various APA journals.

Learn how to review a manuscript

Peer review is an integral part of science and a valuable contribution to our field. Browse these resources and consider joining the community of APA reviewers.

Get recognized for peer review

Publons is a service that provides instant recognition for peer review and enables APA reviewers and action editors to maintain a verified record of their contributions for promotion and funding applications.

Little-known secrets for how to get published

Advice from seasoned psychologists for those seeking to publish in a journal for the first time

How to review a manuscript

Journal editors identify 10 key steps for would-be reviewers

How to find reviewer opportunities

What if you want to review journal manuscripts but the editors aren’t beating down your door?

Webinars and training


Standards, guidelines, and regulations


Guidelines for responsible conduct regarding scientific communication


APA Style Journal Article Reporting Standards

National Science Foundation (NSF) Grant Proposal Guide

The Proposal & Award Policies & Procedures Guide (PAPPG) is the source for information about NSF's proposal and award process. Each version of the PAPPG applies to all proposals or applications submitted while that version is effective.

National Institutes of Health (NIH) Peer Review Policies and Practices

NIH resources about the regulations and processes that govern peer review, including management of conflicts of interest, applicant and reviewer responsibilities in maintaining the integrity in peer review, appeals, and more.

Office of Management and Budget (OMB) Final Information Quality Bulletin for Peer Review

Peer review at APA Journals


APA Journals Peer Review Process

Like other scientific journals, APA journals utilize a peer review process to guide manuscript selection and publication decisions.


APA reviewers get recognized through Web of Science Reviewer Recognition Service

Web of Science Reviewer Recognition Service™ enables APA reviewers and action editors to maintain a verified record of their contributions.

What is peer review?

From a publisher’s perspective, peer review functions as a filter for content, directing better quality articles to better quality journals and so creating journal brands.

Running articles through the process of peer review adds value to them. For this reason publishers need to make sure that peer review is robust.

Editor Feedback

"Pointing out the specifics about flaws in the paper’s structure is paramount. Are methods valid, is data clearly presented, and are conclusions supported by data?” (Editor feedback)

“If an editor can read your comments and understand clearly the basis for your recommendation, then you have written a helpful review.” (Editor feedback)

Principles of Peer Review

Peer Review at Its Best

What peer review does best is improve the quality of published papers by motivating authors to submit good quality work – and helping to improve that work through the peer review process. 

In fact, 90% of researchers feel that peer review improves the quality of their published paper (University of Tennessee and CIBER Research Ltd, 2013).

What the Critics Say

The peer review system is not without criticism. Studies show that even after peer review, some articles still contain inaccuracies and demonstrate that most rejected papers will go on to be published somewhere else.

However, these criticisms should be understood within the context of peer review as a human activity. The occasional errors of peer review are not reasons for abandoning the process altogether – the mistakes would be worse without it.

Improving Effectiveness

Some of the ways in which Wiley is seeking to improve the efficiency of the process include:

  • Reducing the amount of repeat reviewing by innovating around transferable peer review
  • Providing training and best practice guidance to peer reviewers
  • Improving recognition of the contribution made by reviewers

Visit our Peer Review Process and Types of Peer Review pages for additional detailed information on peer review.

Transparency in Peer Review

Wiley is committed to increasing transparency in peer review, increasing accountability for the peer review process and giving recognition to the work of peer reviewers and editors. We are also actively exploring other peer review models to give researchers the options that suit them and their communities.

Special Issues

Special Issues are subject to extensive review, during which journal Editors or Editorial Board input is solicited for each proposal. Our approval process includes an assessment of the rationale and scope of the proposed topic(s), and the expertise of Guest Editors, if any are involved. Special Issue articles must follow the same policies as described in the journal's Author Guidelines.

Editor/Editorial Board papers

Papers authored by Editors or Editorial Board members of the title are sent to Editors who are unaffiliated with the author or institution, and are monitored carefully to ensure there is no peer review bias.


Explore Information — Understanding & Recognizing Peer Review



What's so great about peer review?

Peer reviewed articles are often considered the most reliable and reputable sources in a given field of study. They have undergone review (hence “peer-reviewed”) by fellow experts in that field, as well as an editorial review process. The purpose of this is to ensure that, as much as possible, the finished product meets the standards of the field.

Peer reviewed publications are one of the main ways researchers communicate with each other. 

Most library databases have features to help you discover articles from scholarly journals. Most articles from scholarly journals have gone through the peer review process. Many scholarly journals will also publish book reviews or start off with an editorial, which are not peer reviewed - so don't be tricked!

So that means I can turn my brain off, right?

Nope! You still need to engage with what you find. Are there additional scholarly sources with research that supports the source you've found, or have you encountered an outlier in the research? Have others been able to replicate the results of the research? Is the information old and outdated? Was this study on toothpaste (for example) funded by Colgate? 

You're engaging with the research - ultimately, you decide what belongs in your project, and what doesn't. You get to decide if a source is relevant or not. It's a lot of responsibility - but it's a lot of authority, too.

Understanding Types of Sources


Popular vs. scholarly articles

When looking for articles to use in your assignment, you should realize that there is a difference between "popular" and "scholarly" articles.

Popular  sources, such as newspapers and magazines, are written by journalists or others for general readers (for example, Time, Rolling Stone, and National Geographic).

Scholarly  sources are written for the academic community, including experts and students, on topics that are typically footnoted and based on research (for example, American Literature or New England Review). Scholarly journals are sometimes referred to as "peer-reviewed," "refereed" or "academic."

How do you find scholarly or "peer-reviewed" journal articles?

The option to select scholarly or peer-reviewed articles is typically available on the search page of each database. Just check the box or select the option. You can also search Ulrich’s Periodical Directory to see if the journal is refereed/peer-reviewed.

Popular Sources (Magazines & Newspapers) Inform and entertain the general public.

  • Are often written by journalists or professional writers for a general audience
  • Use language easily understood by general readers
  • Rarely give full citations for sources
  • Written for the general public
  • Tend to be shorter than journal articles

Scholarly or Academic Sources (Journals & Scholarly Books) Disseminate research and academic discussion among professionals in a discipline. 

  • Are written by and for faculty, researchers or scholars (chemists, historians, doctors, artists, etc.)
  • Use scholarly or technical language
  • Tend to be longer articles about research
  • Include full citations for sources 
  • Are often refereed or peer reviewed (articles are reviewed by an editor and other specialists before being accepted for publication)
  • Publications may include book reviews and editorials which are not considered scholarly articles

Trade Publications Neither scholarly nor popular sources, but could be a combination of both. Allow practitioners in specific industries to share market and production information that improves their businesses.

  • Not peer reviewed. Usually written by people in the field or with subject expertise
  • Shorter articles that are practical
  • Provides information about current events and trends 

What might you find in a scholarly article?

  • Title:  what the article is about
  • Authors and affiliations: the writers of the article and their professional affiliations. Credentials may appear below the name or in a footnote.
  • Abstract: brief summary of the article. Gives you a general understanding before you read the whole thing.
  • Introduction: general overview of the research topic or problem
  • Literature Review: what others have found on the same topic
  • Methods:  information about how the authors conducted their research
  • Results: key findings of the author's research
  • Discussion/Conclusion: summary of the results or findings
  • References: Citations to publications by other authors mentioned in the article
  • Anatomy of a Scholarly Article This tutorial from the NCSU Libraries provides an interactive module for learning about the unique structure and elements of many scholarly articles.


  • Last Updated: Aug 2, 2024 4:30 PM
  • URL: https://guides.lib.uconn.edu/exploreinfo


Peer Reviewed Literature


This Guide was created by Carolyn Swidrak (retired).

Research findings are communicated in many ways.  One of the most important ways is through publication in scholarly, peer-reviewed journals.

Research published in scholarly journals is held to a high standard. It must make a credible and significant contribution to the discipline. To ensure a very high level of quality, articles submitted to scholarly journals undergo a process called peer review.

Once an article has been submitted for publication, it is reviewed by other independent, academic experts (at least two) in the same field as the authors.  These are the peers.  The peers evaluate the research and decide if it is good enough and important enough to publish.  Usually there is a back-and-forth exchange between the reviewers and the authors, including requests for revisions, before an article is published. 

Peer review is a rigorous process but the intensity varies by journal.  Some journals are very prestigious and receive many submissions for publication.  They publish only the very best, most highly regarded research. 

The terms scholarly, academic, peer-reviewed and refereed are sometimes used interchangeably, although there are slight differences.

Scholarly and academic may refer to peer-reviewed articles, but not all scholarly and academic journals are peer-reviewed (although most are). For example, the Harvard Business Review is an academic journal, but it is editorially reviewed, not peer-reviewed.

Peer-reviewed and refereed are identical terms.

From  Peer Review in 3 Minutes  [Video], by the North Carolina State University Library, 2014, YouTube (https://youtu.be/rOCQZ7QnoN0).

Peer reviewed articles can include:

  • Original research (empirical studies)
  • Review articles
  • Systematic reviews
  • Meta-analyses

There is much excellent, credible information in existence that is NOT peer-reviewed. Peer review is simply ONE MEASURE of quality.

Much of this information is referred to as "gray literature."

Government Agencies

Government websites such as that of the Centers for Disease Control and Prevention (CDC) publish high-level, trustworthy information. However, most of it is not peer-reviewed. (Some of their publications are peer-reviewed, however; the journal Emerging Infectious Diseases, published by the CDC, is one example.)

Conference Proceedings

Papers from conference proceedings are not usually peer-reviewed.  They may go on to become published articles in a peer-reviewed journal. 

Dissertations

Dissertations are written by doctoral candidates, and while they are academic, they are not peer-reviewed.

Many students like Google Scholar because it is easy to use. While the results from Google Scholar are generally academic, they are not necessarily peer-reviewed. Typically, you will find:

  • Peer reviewed journal articles (although they are not identified as peer-reviewed)
  • Unpublished scholarly articles (not peer-reviewed)
  • Masters theses, doctoral dissertations and other degree publications (not peer-reviewed)
  • Book citations and links to some books (not necessarily peer-reviewed)
  • Last Updated: Feb 12, 2024 9:39 AM
  • URL: https://libguides.regiscollege.edu/peer_review

Understanding the peer review process

What is peer review? A guide for authors

The peer review process starts once you have submitted your paper to a journal.

After submission, your paper will be sent for assessment by independent experts in your field. The reviewers are asked to judge the validity, significance, and originality of your work.

Below we expand on what peer review is, and how it works.

What is peer review? And why is it important?


Peer review is the independent assessment of your research paper by experts in your field. The purpose of peer review is to evaluate the paper’s quality and suitability for publication.

As well as peer review acting as a form of quality control for academic journals, it is a very useful source of feedback for you. The feedback can be used to improve your paper before it is published.

So at its best, peer review is a collaborative process, where authors engage in a dialogue with peers in their field, and receive constructive support to advance their work.

Use our free guide to discover how you can get the most out of the peer review process.

Why is peer review important?

Peer review is vitally important to uphold the high standards of scholarly communications, and maintain the quality of individual journals. It is also an important support for the researchers who author the papers.

Every journal depends on the hard work of reviewers who are the ones at the forefront of the peer review process. The reviewers are the ones who test and refine each article before publication. Even for very specialist journals, the editor can’t be an expert in the topic of every article submitted. So, the feedback and comments of carefully selected reviewers are an essential guide to inform the editor’s decision on a research paper.

There are also practical reasons why peer review is beneficial to you, the author. The peer review process can alert you to any errors in your work, or gaps in the literature you may have overlooked.

Researchers consistently tell us that their final published article is better than the version they submitted before peer review. 91% of respondents to a  Sense about Science peer review survey  said that their last paper was improved through peer review. A  Taylor & Francis study  supports this, finding that most researchers, across all subject areas, rated the contribution of peer review towards improving their article as 8 or above out of 10.


Choose the right journal for your research: Think. Check. Submit

We support Think. Check. Submit. , an initiative launched by a coalition of scholarly communications organizations. It provides the tools to help you choose the right journal for your work.

Think. Check. Submit. was established because there are some journals which do not provide the quality assurance and services that should be delivered by a reputable journal. In particular, many of these journals do not make sure there is a thorough peer review or editor feedback process in place.

That means, if you submit to one of these journals, you will not benefit from helpful article feedback from your peers. It may also lead to others being skeptical about the validity of your published results.

You should therefore make sure that you submit your work to a journal you can trust. By using the checklist provided on the Think. Check. Submit. website , you can make an informed choice.

Peer review integrity at Taylor & Francis


Every full research article published in a Taylor & Francis journal has been through peer review, as outlined in the journal’s aims & scope information. This means that the article’s quality, validity, and relevance have been assessed by independent peers within the research field.

We believe in the integrity of peer review with every journal we publish, ascribing to the following statement:

All published research articles in this journal have undergone rigorous peer review, based on initial editor screening, anonymous refereeing by independent expert referees, and consequent revision by article authors when required.

Different types of peer review

Peer review takes different forms and each type has pros and cons. The type of peer review model used will often vary between journals, even of the same publisher. So, check your chosen journal’s peer-review policy before you submit , to make sure you know what to expect and are comfortable with your paper being reviewed in that way.

Every Taylor & Francis journal publishes a statement describing the type of peer review used by the journal within the aims & scope section on Taylor & Francis Online.

Below we go through the most common types of peer review.


Common types of peer review

Single-anonymous peer review

This type of peer review is also called ‘single-blind review’. In this model, the reviewers know that you are the author of the article, but you don’t know the identities of the reviewers.

Single-anonymous review is most common for science and medicine journals.

Find out more about the pros and cons of  single-anonymous peer review .

Double-anonymous peer review

In this model, which is also known as ‘double-blind review’, the reviewers don’t know that you are the author of the article. And you don’t know who the reviewers are either. Double-anonymous review is particularly common in humanities and some social science journals.

Discover more about the pros and cons of  double-anonymous peer review .

If you are submitting your article for double-anonymous peer review, make sure you know  how to make your article anonymous .

Open peer review

There is no one agreed definition of open peer review. In fact,  a recent study  identified 122 different definitions of the term. Typically, it will mean that the reviewers know you are the author and also that their identity will be revealed to you at some point during the review or publication process.

Find out more about  open peer review .

Post-publication peer review

In post-publication peer review models, your paper may still go through one of the other types of peer review first. Alternatively, your paper may be published online almost immediately, after some basic checks. Either way, once it is published, there will then be an opportunity for invited reviewers (or even readers) to add their own comments or reviews.

You can learn about the pros and cons of  post-publication peer review here.

Registered Reports

The  Registered Reports  process splits peer review into two parts.

The first round of peer review takes place after you’ve designed your study, but before you’ve collected or analyzed any data. This allows you to get feedback on both the question you’re looking to answer, and the experiment you’ve designed to test it.

If your manuscript passes peer review, the journal will give you an in-principle acceptance (IPA). This indicates that your article will be published as long as you successfully complete your study according to the pre-registered methods and submit an evidence-based interpretation of the results.

Explore Registered Reports at Taylor & Francis .

F1000 Research: Open and post-publication peer review

F1000Research  is part of the Taylor & Francis Group. It operates an innovative peer review process which is fully transparent and takes place after an article has been published.

How it works

Before publication, authors are asked to  suggest at least five potential reviewers  who are experts in the field. The reviewers also need to be able to provide unbiased reports on the article.

Submitted articles are published rapidly, after passing a series of pre-publication checks that assess originality, readability, author eligibility, and compliance with F1000Research’s policies and ethical guidelines.

Once the article is published, expert reviewers are formally invited to review.

The peer review process is entirely open and transparent. Each peer review report, plus the approval status selected by the reviewer, is published with the reviewer’s name and affiliation alongside the article.

Authors are encouraged to respond openly to the peer review reports and can publish revised versions of their article if they wish. New versions are clearly linked and easily navigable, so that readers and reviewers can quickly find the latest version of an article.

The article remains published regardless of the reviewers’ reports. Articles that pass peer review are indexed in Scopus, PubMed, Google Scholar and other bibliographic databases.

How our publishing process works for articles


1. Article submission

Submitting an article is easy with our single-page submission system.

The in-house editorial team carries out a basic check on each submission to ensure that all policies are adhered to.

2. Publication and data deposition

Once the authors have finalized the manuscript, the article (with its associated source data) is published within a week, enabling immediate viewing and citation.

3. Open peer review & user commenting

Expert reviewers are selected and invited. Their reports and names are published alongside the article, together with the authors’ responses and comments from registered users.

4. Article revision

Authors are encouraged to publish revised versions of their article. All versions of an article are linked and independently citable.

Articles that pass peer review are indexed in external databases such as PubMed, Scopus and Google Scholar.

Discover more about how the F1000Research model works .

Get to know the peer review process

Peer review follows a number of steps, beginning with submitting your article to a journal.

Step 1: Editor assessment

When your manuscript arrives at the journal’s editorial office it will receive an initial desk assessment by the journal’s editor or editorial office. They will check that it’s broadly suitable for the journal.

They will ask questions such as:

Is this the right journal for this article?

Does the paper cover a suitable topic according to the journal’s  aims & scope ?

Has the author followed the journal’s guidelines in the  instructions for authors ? They will check that your paper meets the basic requirements of the journal, such as word count, language clarity, and format.

Has the author included everything that’s needed for peer review? They will check that there is an abstract, author affiliation details, any figures, and research-funder information.

Does it make a significant contribution to the existing literature?


If your article doesn’t pass these initial checks, the editor might reject it immediately. This is known as a ‘desk reject’, a decision made at the editor’s discretion based on their substantial experience and subject expertise. This initial screening enables a quick decision when a manuscript isn’t suitable for the journal, meaning you can submit your article to another journal quickly.

If your article does pass the initial assessment, it will move to the next stage, and into peer review.

“As an editor, when you first get a submission, at one level you’re simply filtering. A fairly small proportion do not get sent out by me for review. Sometimes they simply fall outside the scope of the journal.”

– Michael Reiss, Founding Editor of Sex Education

Step 2: First round of peer review

Next, the editor will find and contact other researchers who are experts in your field, and will ask them to review the paper. A minimum of two independent reviewers is normally required for every research article. The aims and scope of each journal will outline their peer review policy in detail.

The reviewers will be asked to read and comment on your article. They may also be invited to advise the editor whether your article is suitable for publication in that journal.

So, what are the reviewers looking for?

This depends on the subject area, but they will be checking that:

Your work is original or new.

The study design and methodology are appropriate and described so that others could replicate what you have done.

You’ve engaged with all the relevant current scholarship.

The results are appropriately and clearly presented.

Your conclusions are reliable, significant, and supported by the research.

The paper fits the scope of the journal.

The work is of a high enough standard to be published in the journal.

If you have not already  shared your research data publicly , peer reviewers may request to see your datasets, to support validation of the results in your article.

Once the editor has received and considered the reviewer reports, as well as making their own assessment of your work, they will let you know their decision. The reviewer reports will be shared with you, along with any additional guidance from the editor.

If you get a straight acceptance, congratulations, your article is ready to move to publication. But please note that this isn’t common. Very often, you will need to revise your article and resubmit it. Or it may be that the editor decides your paper needs to be rejected by that journal.

Please note that the final editorial decision on a paper and the choice of who to invite to review is always the editor’s decision. For further details on this, please see  our peer review appeals and complaints policy.


Step 3: Revise and resubmit

It is very common for the editor and reviewers to have suggestions about how you can improve your paper before it is ready to be published. They might have only a few straightforward recommendations (‘minor amendments’) or require more substantial changes before your paper will be accepted for publication (‘major amendments’). Authors often tell us that the reviewers’ comments can be extremely helpful, to make sure that their article is of a high quality.

During this stage of the process you will have time to amend your article based on the reviewers’ comments, resubmitting it with any or all changes made. Make sure you know how to respond to reviewer comments; we cover this in the next section.

Once you resubmit your manuscript the editor will look through the revisions. They will often send it out for a second round of peer review, asking the reviewers to assess how you’ve responded to their comments.

After this, you may be asked to make further revisions, or the paper might be rejected if the editor thinks that the changes you’ve made are not adequate. However, if your revisions have now brought the paper up to the standard required by that journal, it then moves to the next stage.

Make sure you resubmit

If you do not intend to make the revisions suggested by the journal and resubmit your paper for consideration, please make sure you formally withdraw your paper from consideration by the journal before you submit elsewhere.

Step 4: Accepted

And that’s it, you’ve made it through peer review. The next step is production.

How long does peer review take?

Editorial teams work very hard to progress papers through peer review as quickly as possible. But it is important to be aware that this part of the process can take time.

The first stage is for the editor to find suitably qualified expert reviewers who are available. Given the competing demands of research life, nobody can agree to every review request they receive. It’s therefore not uncommon for a paper to go through several cycles of requests before the editor finds reviewers who are both willing and able to accept.

Then, the reviewers who do accept the request have to find time alongside their own research, teaching, and writing to give your paper thorough consideration.

Please do keep this in mind if you don’t receive a decision on your paper as quickly as you would like. If you’ve submitted your paper via an online system, you can use it to track the progress of your paper through peer review. Otherwise, if you need an update on the status of your paper, please get in touch with the editor.

Many journals publish key dates alongside new articles, including when the paper was submitted, accepted, and published online. While you’re at the stage of choosing a journal to submit to, take a look at these dates for a range of recent articles published in the journals you’re considering. While each article will have a slightly different timeline, this may help you to get an idea of how long publication may take.

A 360° view of peer review

Peer review is a process that involves various players – the author, the reviewer and the editor to name a few. And depending on which of these hats you have on, the process can look quite different.

To help you uncover the 360° peer review view,  read these interviews  with an editor, author, and reviewer.


How to respond to reviewer comments

If the editor asks you to revise your article, you will be given time to make the required changes before resubmitting.

Vector illustration of a character wearing blue, holding a laptop in one hand, and other hand in their pocket.

When you receive the reviewers’ comments, try not to take personal offence to any criticism of your article (even though that can be hard).

Some researchers find it helpful to put the reviewer report to one side for a few days after they’ve read it for the first time. Once you have had a chance to digest the idea that your article requires further work, you can address the reviewer comments more objectively.

When you come back to the reviewer report, take time to read through the editor and reviewers’ advice carefully, deciding what changes you will make to your article in response. Taking their points on board will make sure your final article is as robust and impactful as possible.

Please make sure that you address all the reviewer and editor comments in your revisions.

It may be helpful to resubmit your article along with a two-column grid outlining how you’ve revised your manuscript. On one side of the grid, list each of the reviewers’ comments; opposite them, detail the alterations you’ve made in response. This method can help you order your thoughts, and clearly demonstrate to the editor and reviewers that you’ve considered all of their feedback.

If there are any review comments which you don’t understand or don’t know how to respond to, please get in touch with the journal’s editor and ask for their advice.

What if you don’t agree with the reviewers’ comments?

If there’s a review comment that you don’t agree with, it is important that you don’t ignore it. Instead, include an explanation of why you haven’t made that change with your resubmission. The editor can then make an assessment and include your explanation when the amended article is sent back to the reviewers.

You are entitled to defend your position but, when you do, make sure that the tone of your explanation is assertive and persuasive, rather than defensive or aggressive.

“Where possible, a little constructive advice on how to make use of the views of the referees can make all the difference, and the editor has the responsibility of deciding when and how to do this.”

– Gary McCulloch, Editor, British Journal of Educational Studies

What if my paper is rejected?

Nobody enjoys having their paper rejected by a journal, but it is a fact of academic life. It happens to almost all researchers at some point in their career. So, it is important not to let the experience knock you back. Instead, try to use it as a valuable learning opportunity.

Take time to understand why your paper has been rejected

If a journal rejects your manuscript, it may be for one of many reasons. Make sure that you understand why your paper has been rejected so that you can learn from the experience. This is especially important if you are intending to submit the same article to a different journal.

Are there fundamental changes that need to be made before the paper is ready to be published, or was this simply a case of submitting to the wrong journal? If you are unsure why your article has been rejected, then please contact the journal’s editor for advice.

Vector illustration showing a mug of hot drink with a teabag string over the side.

Some of the common reasons manuscripts are rejected

The author has submitted their paper to the wrong journal: it doesn’t fit the  aims & scope  or fails to engage with issues addressed by the journal.

The manuscript is not a true journal article; for instance, it is too journalistic or clearly a thesis chapter.

The manuscript is too long or too short.

There is poor regard for the journal’s conventions, or for academic writing in general.

Poor style, grammar, punctuation, or English throughout the manuscript. Get  English language editing  assistance.

The manuscript does not make any new contribution to the subject.

The research has not been properly contextualized.

There is a poor theoretical framework used. There are  actionable recommendations to improve your manuscript .

The manuscript is poorly presented.

The manuscript is libelous or unethical.

Carefully consider where to submit next

When you made your original submission, you will probably have had a shortlist of journals you were considering. Return to that list but, before you move to your second choice, you may wish to assess whether any feedback you’ve received during peer review has changed your opinion. Your article may also be quite different if it has been through any rounds of revision. It can be helpful at this stage to re-read the  aims & scope  statements of your original shortlisted journals.

Once you have selected which journal to submit to next, make sure that you read through its information for authors and reformat your article to fit its requirements. Again, it is important to use the feedback from the peer review process to your advantage as you rewrite and reformat the manuscript.

Is ‘transferring’ an option?

A growing number of publishers offer a  transfer or cascade service  to authors when their paper is rejected. This process is designed for papers which aren’t suitable for the journal they were originally submitted to.

Vector illustration of a blue ladder leaning to the right.

If your article falls into this category then one or more alternative journals from the same publisher will be suggested. You will have the option either to submit to one of those suggested journals for review or to withdraw your article.

If you choose to transfer your article this will usually save you time. You won’t need to enter all of the details into a new submission system. Once you’ve made any changes to your paper, bearing in mind previous editor or reviewer comments, the article will be submitted to the new journal on your behalf.

We have some more information about  article transfers, and also some FAQs about the Taylor & Francis transfer process.

Why you should become a peer reviewer

When you’re not in the middle of submitting or revising your own article, you should consider becoming a reviewer yourself.

There are many demands on a researcher’s time, so it is a legitimate question to ask why some of that precious time should be spent reviewing someone else’s work. How does being a reviewer help you in your career? Here are some of the benefits.

Keep up with the latest thinking As a reviewer you get an early view of the exciting new research being done in your field. Not only that, peer review gives you a role in helping to evaluate and improve this new work.

Improve your own writing Carefully reviewing articles written by other researchers can give you an insight into how you can make your own work better. Unlike when you are reading articles as part of your research, the process of reviewing encourages you to think critically about what makes an article good (or not so good). This could be related to writing style, presentation, or the clarity of explanations.

Boost your career While a lot of reviewing is anonymous, there are schemes to recognize the important contribution of reviewers. You can also include reviewing work on your resume. Your work as a reviewer will be of interest to appointment or promotion committees who are looking for evidence of service to the profession.

Become part of a journal’s community Many journals act as the center of a network of researchers who are in conversation about key themes and developments in the field. Becoming a reviewer is a great way to get involved with that group. This can give you the opportunity to build new connections for future collaborations. Being a regular reviewer may also be the first step to becoming a member of the journal’s editorial board.

Vector illustration of a pink light bulb and a small character in blue sat on top, with their arms in the air.

Your research community needs you

Of course, being a reviewer is not just about the benefits it can bring you. The  Taylor & Francis peer review survey  found that these are the top 3 reasons why researchers choose to review:

Being an active member of the academic community Peer review is the bedrock of academic publishing. The work of reviewers is essential in helping every piece of research to become as good as it can be. By being a reviewer, you will play a vital part in advancing the research area that you care about.

Reciprocating the benefit Researchers regularly talk about the benefits to their own work from being reviewed by others. Gratitude to the reviewers who have improved your work is a great motivation to make one’s own contribution of service to the community.

Enjoying being able to help improve papers Reviewing is often anonymous, with only the editor knowing the important contribution you’ve made. However, many reviewers attest that it is work that makes them feel good, knowing that they have been able to support a fellow researcher.

How to be an effective peer reviewer

Our popular  guide to becoming a peer reviewer  covers everything you need to know to get started, including:

How to become a peer reviewer

Writing review reports: step-by-step

Ethical guidelines for peer reviewers

Reviewer recognition

Read the  Taylor & Francis reviewer guidelines .

“Reviewers are the lifeblood of any journal”

– Mike J. Smith, Editor-in-Chief of Journal of Maps

Further reading

We hope you’ve found this short introduction to peer review helpful. For further useful advice check out the following resources.


Peer Review: the nuts and bolts. A guide to peer review written by early career researchers, for early career researchers, published by Sense about Science.

A guide to becoming a peer reviewer. An overview of what’s involved in becoming a reviewer for a Taylor & Francis journal.

Ethical guidelines for peer reviewers. Produced by COPE, the Committee on Publication Ethics, setting out the standards all peer reviewers should follow.

Using peer review effectively: quick tips. Advice available to staff and students at institutions with a Vitae membership.


Peer Review

What is peer review?


Peer-reviewed journal articles (also called scholarly or refereed articles) are written by expert researchers and reviewed by other experts in the field. Peer review refers to a process in which information sources are examined and approved by a number of experts in that subject area before being published. Scholarly articles or books will usually be read by two or three academic reviewers, who recommend rejecting or accepting the article/chapters (with no, minor, or major revisions) for publication. Peer-reviewed journal articles are often used as the main component of academic research. Using peer-reviewed material is a good way to make sure that the information you are receiving is credible and correct.


More information and useful links

  • Peer review process A handy explainer from BioMed Central outlining the different types of peer review, plus a flowchart of the process
  • What is peer review? A comprehensive overview of the peer review process from Elsevier
  • Peer-reviewed literature for health Explainer of peer-reviewed articles from the National Library of Medicine and some of the major health databases with peer-reviewed content

Where do peer-reviewed articles fit into the information timeline?

The information timeline traces the creation of scholarly information, including where peer-reviewed literature fits in. It is very important to consider when a peer-reviewed article was published in order to understand where it fits into the scholarly conversation you are studying.

  • Last Updated: Jun 30, 2023 4:32 PM
  • URL: https://libguides.adelaide.edu.au/c.php?g=913180

what is a peer review in research


Share early. Improve your manuscript. Make an impact.



From draft to impact, we offer a full range of services no matter where you are in your research.

Share early

Post your manuscript as a preprint directly to Research Square or while under consideration at a participating journal through In Review . Posting early lets you showcase your work to funders and potential collaborators and get more citations.

  • Preprint with Research Square

Improve your manuscript

Improve your manuscript with AJE’s English language editing, formatting, and figure preparation services. Research Square supports community commenting and inline annotation, allowing you to gather feedback prior to peer review.

  • Editing services

Make an impact

Communicating your research clearly and accurately has never been more important. Our Research Promotion products are custom created by expert illustrators and scientific script writers to provide a snapshot of the key findings from your latest study.

  • Research Promotion

Trusted, proven, and ready to help you succeed

Research Square is a leading author, editorial, and video services provider. We are a trusted partner to many of the leading academic publishers, institutions, and societies worldwide.


PROFESSIONAL DEVELOPMENT PERSPECTIVE

Peer Review Is Primary: Presentations, Publications, Promotions, and Practice

Kendall M. Campbell, MD | Edgar Figueroa, MD, MPH | Donna Baluchi, MLIS | José E. Rodríguez, MD

PRiMER. 2024;8:42.

Published: 8/5/2024 | DOI: 10.22454/PRiMER.2024.148162

Peer review is primarily thought of as the process used to determine whether manuscripts are published in medical or other academic journals. While a publication may be one outcome of peer review, this article shares a model of 4 Ps to remind faculty of some important additional applications of peer review. The 4 Ps are publication, presentation, promotion, and practice. The medical literature offers few reasons why faculty should get involved in peer review. In this article, we define peer review, illustrate the role of peer review in four important processes, describe how the volume of material to review has changed over time, and share how participation in these processes promotes career advancement. Understanding the peer review process and its benefits can encourage professionals to participate in peer review in any of the four Ps as they recognize the benefits to their discipline and their career.

Introduction

Achieving promotion and tenure can be an important marker of academic success for any faculty member. Early career faculty may benefit from demystifying the elements needed in a promotion packet and the relative importance of those elements as they are reviewed by institutional promotions committees. Excellence in education, clinical care, and research or scholarship are required to varying degrees in promotion criteria. 1 Peer review is the process by which promotion committees and external referees ascribe value to the various components documented on the curriculum vitae (CV). 2 While commonly thought of primarily in the context of reviewing manuscripts, peer review is, in reality, much more central than that for advancement in our academic careers. 3,4 In this professional development perspective, we define peer review; discuss its importance in our model of the 4 Ps of publication, presentation, promotion, and practice; and encourage faculty interested in academic advancement to actively participate in peer review in multiple domains. 5-8 We also illustrate the importance of recruiting colleagues more often to perform peer reviews. Serving as a peer reviewer in any capacity is a critical and vital service to the discipline and ensures that our scholarship is high quality, rigorous, and reflects a diversity of thought and expertise. In academia, no one progresses from instructor level to full professor without participating in peer review.

Publication

Table 1 describes multiple writing products that are available in family medicine journals and elsewhere. Authors can peer review for the journals in which they seek publication, such as evidence-based practice and PubMed-indexed journals. These reviews require reading and assessing the quality of the manuscript, checking references, and fact-checking manuscript claims. Faculty can choose how often they review articles, but frequent reviewing helps reviewers gain experience with publication, improve their writing skills, and enhance their CV. 5,9,10


Presentation

Peer review also can be used to determine which presentations are accepted for conferences. Some family medicine conferences use a committee of peers to review each submission and to select presenters. Peer reviewers for conference presentations spend time reviewing submissions and choosing the best ones, with the goal of ensuring high-quality offerings for conference attendees. 11-13 In this type of review, the materials assessed may include abstracts and responses to prompts required in the submission, judged for relevance to the conference themes and categories. Overall, this can be a significant volume of material to review for each presentation, and many reviews may be required in a short time frame. Reviewing presentation proposals benefits the individual faculty member performing the review in that it helps define what is important in the discipline and can serve as inspiration for projects or processes in their career.

Promotion

Academic advancement depends on the skills developed in reviewing presentations and publications, as well as on the academic medicine community employing those skills to review promotion packets. Promotion review can take a long time, and it requires a detailed, rigorous assessment, which is well-documented in the literature. 14,15 Teaching, research, and clinical statements, individuals’ publication portfolios, and national and international reputation are all evaluated in the promotion review. This is a time-consuming and volume-intensive review. Because of the low numbers of family medicine faculty in tenure lines at the professor rank, 16-18 few are qualified to review promotion packages of those seeking promotion to full professor. 19,20 In addition, those who are seeking promotion may be reviewed by those at the level they are seeking or above (eg, associate professors can evaluate those being promoted to assistant professor). This type of reviewing, too, can be added to your CV, can help others in the field advance, and can help inform individuals as to the activities that are valued in promotions at other institutions.

Practice

Peer review for practice takes on many forms. When we apply for a state license, the state medical board reviews our application. They are our peers. When we apply for privileges in a hospital system, the credentialing board also reviews our application. These boards are also made up of our peers, and frequently they will ask peers not on the board to review applications. While reviewing applications for licensure is not the same as reviewing for publications, presentations, or promotions, the skills gained by reviewing manuscripts, presentations, and promotion packets can be transferred to reviewing applications for practice and credentialing. This type of review has direct benefit for our patients by ensuring that those who treat them have met minimum qualifications, and it keeps the reviewer abreast of advances in the requirements for patient care.

Conclusions

In this article we have further characterized how peer review is at the heart of four major activities in an academic career: publication, presentation, promotion, and practice (Figure 1). In summary, the peer review process is fundamental for academic medicine faculty who are seeking promotion to the next rank. In sharing levels of review, we have defined the highest level as an individual review by a faculty peer of similar rank, training, expertise, interest, and specialty. We have further shared that those who review CVs and other promotion documents can estimate the number of peer reviews a faculty member has received and use that information for decision-making regarding promotion.

The peer review activities presented in this article can be listed on the faculty member’s CV under the heading “Service to the Profession.” Entries can include journals for which you review, service on committees that review presentations, credentialing applications, and promotions/tenure.

Peer review is so central to our discipline that it behooves all of us to participate as reviewers, in publication, presentation, promotion, and practice. Early career faculty can begin peer reviewing for journals as soon as they get their first faculty appointment. Peer review also can be the nidus of ideas that inspire the reviewer, improve the reviewer’s knowledge base, and provide a source of continuing medical education credits. 9 Reviewing the work of others for publication, presentation, promotion, or practice not only serves the discipline, but it helps the individual reviewer become a better author, presenter, and evaluator of academic and practicing health care providers. In short, peer review is a primary activity for family medicine faculty.

what is a peer review in research

Acknowledgments

The authors acknowledge the Society of Teachers of Family Medicine and the American Board of Family Medicine Foundation for their ongoing support of the Leadership Through Scholarship Fellowship through which this work has been made possible.

Financial Support: This work is partially supported by the Society of Teachers of Family Medicine and the American Board of Family Medicine through a grant to fund the Leadership Through Scholarship Fellowship.

Presentations: Some of the content from this manuscript was presented at the 51st Annual North American Primary Care Research Group’s meeting, October 31 to November 3, 2023, San Francisco, California, in a session entitled “Peer Reviewing: A Workshop With Editors of Family Medicine Journals.”

  • Milner RJ, Flotte TR, Thorndyke LE. Defining scholarship for today and tomorrow.  J Contin Educ Health Prof . 2023;43(2):133-138.  doi:10.1097/CEH.0000000000000473
  • Campbell KM, Rodríguez JE. Gearing up: accelerating your CV to promotion and tenure.  PRiMER . 2024;8(1):1.  doi:10.22454/PRiMER.2024.782013
  • Smith R. Peer review: a flawed process at the heart of science and journals.  J R Soc Med . 2006;99(4):178-182.  doi:10.1177/014107680609900414
  • Kelly J, Sadeghieh T, Adeli K. Peer review in scientific publications: benefits, critiques, and a survival guide.  EJIFCC . 2014;25(3):227-243.
  • Morley CP, Prunuske J. Conducting a manuscript peer review.  PRiMER . 2023;7:35.  doi:10.22454/PRiMER.2023.674484
  • Peh WC. Peer review: concepts, variants and controversies.  Singapore Med J . 2022;63(2):55-60.  doi:10.11622/smedj.2021139
  • Frasca D. Writing an effective peer review.  Fam Med . 2023;55(8):566.  doi:10.22454/FamMed.2023.616815
  • Watling C, Ginsburg S, Lingard L. Don’t be reviewer 2! reflections on writing effective peer review comments.  Perspect Med Educ . 2021;10(5):299-303.  doi:10.1007/S40037-021-00670-Z
  • Sempokuya T, McDonald N, Bilal M. How to be a great peer reviewer.  ACG Case Rep J . 2023;9(12):e00932.  doi:10.14309/crj.0000000000000932
  • Morley CP, Grammer S. Now more than ever: reflections on the state and importance of peer review.  PRiMER . 2021;5(36):36.  doi:10.22454/PRiMER.2021.216183
  • Deveugele M, Silverman J. Peer-review for selection of oral presentations for conferences: are we reliable?  Patient Educ Couns . 2017;100(11):2147-2150.  doi:10.1016/j.pec.2017.06.007
  • Ioannidis JPA, Berkwits M, Flanagin A, Bloom T. Peer review and scientific publication at a crossroads: call for research for the 10th International Congress on Peer Review and Scientific Publication.  JAMA . 2023;330(13):1232-1235.  doi:10.1001/jama.2023.17607
  • Culmer N, Drowos J, DeMasi M, et al. Pursuing scholarship: creating effective conference submissions.  PRiMER . 2024;8:13.  doi:10.22454/PRiMER.2024.345782
  • Weidner A, Brazelton T, Altman W. The challenges of external letters for promotion: academic family medicine’s attempts to address the issue.  Ann Fam Med . 2023;21(6):559-560.  doi:10.1370/afm.3061
  • Minor S, Stumbar SE, Drowos J, et al. Writing an external letter of review for promotion.  PRiMER . 2023;7:34.  doi:10.22454/PRiMER.2023.447836
  • Xierali IM, Nivet MA, Syed ZA, Shakil A, Schneider FD. Recent trends in faculty promotion in U.S. medical schools: implications for recruitment, retention, and diversity and inclusion.  Acad Med . 2021;96(10):1441-1448.  doi:10.1097/ACM.0000000000004188
  • Xierali IM, Nivet MA, Rayburn WF. Diversity of department chairs in family medicine at US medical schools.  J Am Board Fam Med . 2022;35(1):152-157.  doi:10.3122/jabfm.2022.01.210298
  • Fisher ZE, Rodríguez JE, Campbell KM. A review of tenure for Black, Latino, and Native American faculty in academic medicine.  South Med J . 2017;110(1):11-17.  doi:10.14423/SMJ.0000000000000593
  • Salajegheh M, Hekmat SN, Macky M. Challenges and solutions for the promotion of medical sciences faculty members in Iran: a systematic review.  BMC Med Educ . 2022;22(1):406.  doi:10.1186/s12909-022-03451-2
  • Mullangi S, Blutt MJ, Ibrahim S. Is it time to reimagine academic promotion and tenure?  JAMA Health Forum . 2020;1(2):e200164.  doi:10.1001/jamahealthforum.2020.0164

Lead Author

Kendall M. Campbell, MD

Affiliations: Department of Family Medicine, University of Texas Medical Branch, Galveston, TX

Edgar Figueroa, MD, MPH - Weill Cornell Medicine, New York, NY

Donna Baluchi, MLIS - Spencer S. Eccles Health Sciences Library, University of Utah, Salt Lake City, UT

José E. Rodríguez, MD - Department of Family and Preventive Medicine, University of Utah Health, Salt Lake City, UT

Corresponding Author

José E. Rodríguez, MD

Correspondence: Department of Family and Preventive Medicine, University of Utah Health, Salt Lake City, UT

Email: [email protected]



Campbell KM, Figueroa E, Baluchi D, Rodríguez JE. Peer Review Is Primary: Presentations, Publications, Promotions, and Practice. PRiMER. 2024;8:42. https://doi.org/10.22454/PRiMER.2024.148162



  • Systematic review
  • Open access
  • Published: 07 August 2024

Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review

  • Nicole Freitas de Mello   ORCID: orcid.org/0000-0002-5228-6691 1 , 2 ,
  • Sarah Nascimento Silva   ORCID: orcid.org/0000-0002-1087-9819 3 ,
  • Dalila Fernandes Gomes   ORCID: orcid.org/0000-0002-2864-0806 1 , 2 ,
  • Juliana da Motta Girardi   ORCID: orcid.org/0000-0002-7547-7722 4 &
  • Jorge Otávio Maia Barreto   ORCID: orcid.org/0000-0002-7648-0472 2 , 4  

Implementation Science volume  19 , Article number:  59 ( 2024 ) Cite this article


Background

The implementation of clinical practice guidelines (CPGs) is a cyclical process in which the evaluation stage can facilitate continuous improvement. Implementation science has utilized theoretical approaches, such as models and frameworks, to understand and address this process. This article aims to provide a comprehensive overview of the models and frameworks used to assess the implementation of CPGs.

Methods

A systematic review was conducted following the Cochrane methodology, with adaptations to the "selection process" due to the unique nature of this review. The findings were reported following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Electronic databases were searched from their inception until May 15, 2023. A predetermined strategy and manual searches were conducted to identify relevant documents from health institutions worldwide. Eligible studies presented models and frameworks for assessing the implementation of CPGs. Information on the characteristics of the documents, the context in which the models were used (specific objectives, level of use, type of health service, target group), and the characteristics of each model or framework (name, domain evaluated, and model limitations) were extracted. The domains of the models were analyzed according to the key constructs: strategies, context, outcomes, fidelity, adaptation, sustainability, process, and intervention. A subgroup analysis was performed grouping models and frameworks according to their levels of use (clinical, organizational, and policy) and type of health service (community, ambulatorial, hospital, institutional). The JBI’s critical appraisal tools were utilized by two independent researchers to assess the trustworthiness, relevance, and results of the included studies.

Results

Database searches yielded 14,395 studies, of which 80 full texts were reviewed. Eight studies were included in the data analysis and four methodological guidelines were additionally included from the manual search. The risk of bias in the studies was considered non-critical for the results of this systematic review. A total of ten models/frameworks for assessing the implementation of CPGs were found. The level of use was mainly policy, the most common type of health service was institutional, and the major target group was professionals directly involved in clinical practice. The evaluated domains differed between the models and there were also differences in their conceptualization. All the models addressed the domain "Context", especially at the micro level (8/12), followed by the multilevel (7/12). The domains "Outcome" (9/12), "Intervention" (8/12), "Strategies" (7/12), and "Process" (5/12) were frequently addressed, while "Sustainability" was found only in one study, and "Fidelity/Adaptation" was not observed.

Conclusions

The use of models and frameworks for assessing the implementation of CPGs is still incipient. This systematic review may help stakeholders choose or adapt the most appropriate model or framework to assess CPGs implementation based on their specific health context.

Trial registration

PROSPERO (International Prospective Register of Systematic Reviews) registration number: CRD42022335884. Registered on June 7, 2022.

Peer Review reports

Contributions to the literature

Although the number of theoretical approaches has grown in recent years, there are still important gaps to be explored in the use of models and frameworks to assess the implementation of clinical practice guidelines (CPGs). This systematic review aims to contribute knowledge to overcome these gaps.

Despite the great advances in implementation science, evaluating the implementation of CPGs remains a challenge, and models and frameworks could support improvements in this field.

This study demonstrates that the available models and frameworks do not cover all characteristics and domains necessary for a complete evaluation of CPGs implementation.

The presented findings contribute to the field of implementation science, encouraging debate on choices and adaptations of models and frameworks for implementation research and evaluation.

Background

Substantial investments have been made in clinical research and development in recent decades, increasing the medical knowledge base and the availability of health technologies [ 1 ]. The use of clinical practice guidelines (CPGs) has increased worldwide to guide best health practices and to maximize healthcare investments. A CPG can be defined as "any formal statements systematically developed to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [ 2 ] and has the potential to improve patient care by promoting interventions of proven benefit and discouraging ineffective interventions. Furthermore, they can promote efficiency in resource allocation and provide support for managers and health professionals in decision-making [ 3 , 4 ].

However, having a quality CPG does not guarantee that the expected health benefits will be obtained. In fact, putting these devices to use still presents a challenge for most health services across distinct levels of government. In addition to the development of guidelines with high methodological rigor, those recommendations need to be available to their users; these recommendations involve the diffusion and dissemination stages, and they need to be used in clinical practice (implemented), which usually requires behavioral changes and appropriate resources and infrastructure. All these stages involve an iterative and complex process called implementation, which is defined as the process of putting new practices within a setting into use [ 5 , 6 ].

Implementation is a cyclical process, and the evaluation is one of its key stages, which allows continuous improvement of CPGs development and implementation strategies. It consists of verifying whether clinical practice is being performed as recommended (process evaluation or formative evaluation) and whether the expected results and impact are being reached (summative evaluation) [ 7 , 8 , 9 ]. Although the importance of the implementation evaluation stage has been recognized, research on how these guidelines are implemented is scarce [ 10 ]. This paper focused on the process of assessing CPGs implementation.

To understand and improve this complex process, implementation science provides a systematic set of principles and methods to integrate research findings and other evidence-based practices into routine practice and improve the quality and effectiveness of health services and care [ 11 ]. The field of implementation science uses theoretical approaches that have varying degrees of specificity based on the current state of knowledge and are structured based on theories, models, and frameworks [ 5 , 12 , 13 ]. A "Model" is defined as "a simplified depiction of a more complex world with relatively precise assumptions about cause and effect", and a "framework" is defined as "a broad set of constructs that organize concepts and data descriptively without specifying causal relationships" [ 9 ]. Although these concepts are distinct, in this paper, their use will be interchangeable, as they are typically like checklists of factors relevant to various aspects of implementation.

There are a variety of theoretical approaches available in implementation science [ 5 , 14 ], which can make choosing the most appropriate challenging [ 5 ]. Some models and frameworks have been categorized as "evaluation models" by providing a structure for evaluating implementation endeavors [ 15 ], even though theoretical approaches from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 13 ]. Two frameworks that can specify implementation aspects that should be evaluated as part of intervention studies are RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [ 16 ] and PRECEDE-PROCEED (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) [ 17 ]. Although the number of theoretical approaches has grown in recent years, the use of models and frameworks to evaluate the implementation of guidelines still seems to be a challenge.

This article aims to provide a complete map of the models and frameworks applied to assess the implementation of CPGs. The aim is also to support debate and inform choices on models and frameworks for the research and evaluation of the implementation processes of CPGs, and thus to facilitate the continued development of the field of implementation science as well as to contribute to healthcare policy and practice.

A systematic review was conducted following the Cochrane methodology [ 18 ], with adaptations to the "selection process" due to the unique nature of this review (details can be found in the respective section). The review protocol was registered in PROSPERO (registration number: CRD42022335884) on June 7, 2022. This report adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 19 ] and a completed checklist is provided in Additional File 1.

Eligibility criteria

The SDMO approach (Types of Studies, Types of Data, Types of Methods, Outcomes) [ 20 ] was utilized in this systematic review, outlined as follows:

Types of studies

All types of studies were considered for inclusion, as the assessment of CPG implementation can benefit from a diverse range of study designs, including randomized clinical trials/experimental studies, scale/tool development, systematic reviews, opinion pieces, qualitative studies, peer-reviewed articles, books, reports, and unpublished theses.

Studies were categorized based on their methodological designs, which guided the synthesis, risk of bias assessment, and presentation of results.

Study protocols and conference abstracts were excluded due to insufficient information for this review.

Types of data

Studies that evaluated the implementation of CPGs either independently or as part of a multifaceted intervention.

Guidelines for evaluating CPG implementation.

Inclusion of CPGs related to any context, clinical area, intervention, and patient characteristics.

No restrictions were placed on publication date or language.

Exclusion criteria

General guidelines were excluded, as this review focused on 'models for evaluating clinical practice guidelines implementation' rather than the guidelines themselves.

Studies that focused solely on implementation determinants as barriers and enablers were excluded, as this review aimed to explore comprehensive models/frameworks.

Studies evaluating programs and policies were excluded.

Studies that only assessed implementation strategies (isolated actions) rather than the implementation process itself were excluded.

Studies that focused solely on the impact or results of implementation (summative evaluation) were excluded.

Types of methods

Not applicable.

Outcomes

All potential models or frameworks for assessing the implementation of CPG (evaluation models/frameworks), as well as their characteristics: name; specific objectives; levels of use (clinical, organizational, and policy); health system (public, private, or both); type of health service (community, ambulatorial, hospital, institutional, homecare); domains or outcomes evaluated; type of recommendation evaluated; context; limitations of the model.

Model was defined as "a deliberated simplification of a phenomenon on a specific aspect" [ 21 ].

Framework was defined as "structure, overview outline, system, or plan consisting of various descriptive categories" [ 21 ].

Models or frameworks used solely for the CPG development, dissemination, or implementation phase.

Models/frameworks used solely for assessment processes other than implementation, such as for the development or dissemination phase.

Data sources and literature search

The systematic search was conducted on July 31, 2022 (and updated on May 15, 2023) in the following electronic databases: MEDLINE/PubMed, Centre for Reviews and Dissemination (CRD), the Cochrane Library, Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Epistemonikos, Global Health, Health Systems Evidence, PDQ-Evidence, PsycINFO, Rx for Change (Canadian Agency for Drugs and Technologies in Health, CADTH), Scopus, Web of Science and Virtual Health Library (VHL). The Google Scholar database was used for the manual selection of studies (first 10 pages).

Additionally, hand searches were performed on the lists of references included in the systematic reviews and citations of the included studies, as well as on the websites of institutions working on CPGs development and implementation: Guidelines International Networks (GIN), National Institute for Health and Care Excellence (NICE; United Kingdom), World Health Organization (WHO), Centers for Disease Control and Prevention (CDC; USA), Institute of Medicine (IOM; USA), Australian Department of Health and Aged Care (ADH), Healthcare Improvement Scotland (SIGN), National Health and Medical Research Council (NHMRC; Australia), Queensland Health, The Joanna Briggs Institute (JBI), Ministry of Health and Social Policy of Spain, Ministry of Health of Brazil and Capes Theses and Dissertations Catalog.

The search strategy combined terms related to "clinical practice guidelines" (practice guidelines, practice guidelines as topic, clinical protocols), "implementation", "assessment" (assessment, evaluation), and "models, framework". The free term "monitoring" was not used because it was regularly related to clinical monitoring and not to implementation monitoring. The search strategies adapted for the electronic databases are presented in an additional file (see Additional file 2).

Study selection process

The results of the literature search from scientific databases, excluding the CRD database, were imported into Mendeley Reference Management software to remove duplicates. They were then transferred to the Rayyan platform ( https://rayyan.qcri.org ) [ 22 ] for the screening process. Initially, studies related to the "assessment of implementation of the CPG" were selected. The titles were first screened independently by two pairs of reviewers (first selection: four reviewers, NM, JB, SS, and JG; update: a pair of reviewers, NM and DG). The title screening was broad, including all potentially relevant studies on CPG and the implementation process. Following that, the abstracts were independently screened by the same group of reviewers. The abstract screening was more focused, specifically selecting studies that addressed CPG and the evaluation of the implementation process. In the next step, full-text articles were reviewed independently by a pair of reviewers (NM, DG) to identify those that explicitly presented "models" or "frameworks" for assessing the implementation of the CPG. Disagreements regarding the eligibility of studies were resolved through discussion and consensus, and by a third reviewer (JB) when necessary. One reviewer (NM) conducted manual searches, and the inclusion of documents was discussed with the other reviewers.

Risk of bias assessment of studies

The selected studies were independently classified and evaluated according to their methodological designs by two investigators (NM and JG). This review employed JBI’s critical appraisal tools to assess the trustworthiness, relevance and results of the included studies [ 23 ] and these tools are presented in additional files (see Additional file 3 and Additional file 4). Disagreements were resolved by consensus or consultation with the other reviewers. Methodological guidelines and noncomparative and before–after studies were not evaluated because JBI does not have specific tools for assessing these types of documents. Although the studies were assessed for quality, they were not excluded on this basis.

Data extraction

Data were independently extracted by two reviewers (NM, DG) using a Microsoft Excel spreadsheet. Discrepancies were discussed and resolved by consensus. The following information was extracted:

Document characteristics : author; year of publication; title; study design; instrument of evaluation; country; guideline context;

Usage context of the models : specific objectives; level of use (clinical, organizational, and policy); type of health service (community, ambulatory, hospital, institutional); target group (guideline developers; clinicians; health professionals; health-policy decision-makers; health-care organizations; service managers);

Model and framework characteristics : name, domain evaluated, and model limitations.

The set of information to be extracted, as specified in the systematic review protocol, was adjusted to improve the organization of the analysis.

The "level of use" refers to the scope of the model used. "Clinical" was considered when the evaluation focused on individual practices, "organizational" when practices were within a health service institution, and "policy" when the evaluation was more systemic and covered different health services or institutions.

The "type of health service" indicated the category of health service where the model/framework was used (or could be used) to assess the implementation of the CPG, related to the complexity of healthcare. "Community" refers to primary health care; "ambulatory" to secondary health care; "hospital" to tertiary health care; and "institutional" represents models/frameworks not specific to a particular type of health service.

The "target group" included stakeholders related to the use of the model/framework for evaluating the implementation of the CPG, such as clinicians, health professionals, guideline developers, health policy-makers, health organizations, and service managers.

The category "health system" (public, private, or both) mentioned in the systematic review protocol was not found in the literature obtained and was removed as an extraction variable. Similarly, the variables "type of recommendation evaluated" and "context" were grouped because the same information was included in the "guideline context" section of the study.

Some selected documents presented models or frameworks recognized in the scientific field, including some that were validated. However, some studies adapted the model to their context. Therefore, the domain analysis covered all model or framework domains evaluated by (or suggested for evaluation by) each document analyzed.

Data analysis and synthesis

The results were tabulated using narrative synthesis with an aggregative approach, without meta-analysis, aiming to summarize the documents descriptively for the organization, description, interpretation and explanation of the study findings [ 24 , 25 ].

The model/framework domains evaluated in each document were studied according to Nilsen et al.’s constructs: "strategies", "context", "outcomes", "fidelity", "adaptation" and "sustainability". For this study, "strategies" were described as structured and planned initiatives used to enhance the implementation of clinical practice [ 26 ].

The definition of "context" varies in the literature. Despite that, this review considered it as the set of circumstances or factors surrounding a particular implementation effort, such as organizational support, financial resources, social relations and support, leadership, and organizational culture [ 26 , 27 ]. The domain "context" was subdivided according to the level of health care into "micro" (individual perspective), "meso" (organizational perspective), "macro" (systemic perspective), and "multiple" (when there is an issue involving more than one level of health care).

The "outcomes" domain was related to the results of the implementation process (unlike clinical outcomes) and was stratified according to the following constructs: acceptability, appropriateness, feasibility, adoption, cost, and penetration. All these concepts align with the definitions of Proctor et al. (2011), although we decided to separate "fidelity" and "sustainability" as independent domains similar to Nilsen [ 26 , 28 ].

"Fidelity" and "adaptation" were considered the same domain, as they are complementary pieces of the same issue. In this study, implementation fidelity refers to how closely guidelines are followed as intended by their developers or designers. On the other hand, adaptation involves making changes to the content or delivery of a guideline to better fit the needs of a specific context. The "sustainability" domain was defined as evaluations about the continuation or permanence over time of the CPG implementation.

Additionally, the domain "process" was utilized to address issues related to the implementation process itself, rather than focusing solely on the outcomes of the implementation process, as done by Wang et al. [ 14 ]. Furthermore, the "intervention" domain was introduced to distinguish aspects related to the CPG characteristics that can impact its implementation, such as the complexity of the recommendation.

A subgroup analysis was performed with models and frameworks categorized based on their levels of use (clinical, organizational, and policy) and the type of health service (community, ambulatory, hospital, institutional) associated with the CPG. The goal was to assist stakeholders (politicians, clinicians, researchers, or others) in selecting the most suitable model for evaluating CPG implementation based on their specific health context.

Search results

Database searches yielded 26,011 studies, of which 107 full texts were reviewed. During the full-text review, 99 articles were excluded: 41 studies did not mention a model or framework for assessing the implementation of the CPG, 31 studies evaluated only implementation strategies (isolated actions) rather than the implementation process itself, and 27 articles were not related to the implementation assessment. Therefore, eight studies were included in the data analysis. The updated search did not reveal additional relevant studies. The main reason for study exclusion was that they did not use models or frameworks to assess CPG implementation. Additionally, four methodological guidelines were included from the manual search (Fig.  1 ).
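The screening arithmetic reported above can be sanity-checked with a short script. The counts are copied directly from the text; the script itself is purely illustrative and not part of the review's methods:

```python
# Counts reported in the review's PRISMA flow (Fig. 1)
full_texts_reviewed = 107

# Full-text exclusions, by reason
excluded = {
    "no model/framework for assessing CPG implementation": 41,
    "evaluated only isolated implementation strategies": 31,
    "not related to implementation assessment": 27,
}

included_from_databases = full_texts_reviewed - sum(excluded.values())
included_from_manual_search = 4  # methodological guidelines

total_documents = included_from_databases + included_from_manual_search

print(included_from_databases, total_documents)  # 8 12
```

The totals reconcile with the text: 107 full texts minus 99 exclusions leaves the eight included studies, and the four hand-searched methodological guidelines bring the document count to twelve.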

figure 1

PRISMA diagram. Acronyms: ADH—Australian Department of Health, CINAHL—Cumulative Index to Nursing and Allied Health Literature, CDC—Centers for Disease Control and Prevention, CRD—Centre for Reviews and Dissemination, GIN—Guidelines International Networks, HSE—Health Systems Evidence, IOM—Institute of Medicine, JBI—The Joanna Briggs Institute, MHB—Ministry of Health of Brazil, NICE—National Institute for Health and Care Excellence, NHMRC—National Health and Medical Research Council, MSPS—Ministerio de Sanidad y Política Social (Spain), SIGN—Scottish Intercollegiate Guidelines Network, VHL—Virtual Health Library, WHO—World Health Organization. Legend: Reason A—The study evaluated only implementation strategies (isolated actions) rather than the implementation process itself. Reason B—The study did not mention a model or framework for assessing the implementation of the intervention. Reason C—The study was not related to the implementation assessment. Adapted from Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. https://doi.org/10.1136/bmj.n71 .

According to the JBI critical appraisal tools, the overall assessment of the studies supported their inclusion in the systematic review.

The cross-sectional studies lacked clear information regarding "confounding factors" or "strategies to address confounding factors". This was understandable given the nature of the study, where such details are not typically included. However, the reviewers did not find this lack of information to be critical, allowing the studies to be included in the review. The results of this methodological quality assessment can be found in an additional file (see Additional file 5).

In the qualitative studies, there was some ambiguity regarding the questions: "Is there a statement locating the researcher culturally or theoretically?" and "Is the influence of the researcher on the research, and vice versa, addressed?". However, the reviewers decided to include the studies and deemed the methodological quality sufficient for the analysis in this article, based on the other information analyzed. The results of this methodological quality assessment can be found in an additional file (see Additional file 6).

Document characteristics (Table  1 )

The documents originated from several continents: Australia/Oceania (4/12) [ 31 , 33 , 36 , 37 ], North America (4/12) [ 30 , 32 , 38 , 39 ], Europe (2/12) [ 29 , 35 ] and Asia (2/12) [ 34 , 40 ]. The types of documents were classified as cross-sectional studies (4/12) [ 29 , 32 , 34 , 38 ], methodological guidelines (4/12) [ 33 , 35 , 36 , 37 ], mixed methods studies (3/12) [ 30 , 31 , 39 ] or noncomparative studies (1/12) [ 40 ]. In terms of the instrument of evaluation, most of the documents used a survey/questionnaire (6/12) [ 29 , 30 , 31 , 32 , 34 , 38 ], while three (3/12) used qualitative instruments (interviews, group discussions) [ 30 , 31 , 39 ], one used a checklist [ 37 ], one used an audit [ 33 ] and three (3/12) did not define a specific measurement instrument [ 35 , 36 , 40 ].

Considering the clinical areas covered, most studies evaluated the implementation of nonspecific (general) clinical areas [ 29 , 33 , 35 , 36 , 37 , 40 ]. However, some studies focused on specific clinical contexts, such as mental health [ 32 , 38 ], oncology [ 39 ], fall prevention [ 31 ], spinal cord injury [ 30 ], and sexually transmitted infections [ 34 ].

Usage context of the models (Table  1 )

Specific objectives.

All the studies highlighted the purpose of guiding the process of evaluating the implementation of CPGs, even if they evaluated CPGs from generic or different clinical areas.

Levels of use

The most common level of use of the models/frameworks identified to assess the implementation of CPGs was policy (6/12) [ 33 , 35 , 36 , 37 , 39 , 40 ]. At this level, the model is used in a systematic way to evaluate all the processes involved in CPG implementation and is primarily related to methodological guidelines. This was followed by the organizational level of use (5/12) [ 30 , 31 , 32 , 38 , 39 ], where the model is used to evaluate the implementation of CPGs in a specific institution, considering its specific environment. Finally, the clinical level of use (2/12) [ 29 , 34 ] focuses on individual practice and the factors that can influence the implementation of CPGs by professionals.

Type of health service

Institutional services were predominant (5/12) [ 33 , 35 , 36 , 37 , 40 ] and included methodological guidelines and a study of model development and validation. Hospitals were the second most common type of health service (4/12) [ 29 , 30 , 31 , 34 ], followed by ambulatory (2/12) [ 32 , 34 ] and community health services (1/12) [ 32 ]. Two studies did not specify which type of health service the assessment addressed [ 38 , 39 ].

Target group

The main target group was professionals directly involved in clinical practice (6/12) [ 29 , 31 , 32 , 34 , 38 , 40 ], namely, health professionals and clinicians. Other stakeholders were targeted less often: guideline developers (2/12) [ 39 , 40 ], health policy decision-makers (1/12) [ 39 ], and healthcare organizations (1/12) [ 39 ]. The target group was not defined in the methodological guidelines, although all the mentioned stakeholders could be related to these documents.

Model and framework characteristics

Models and frameworks for assessing the implementation of CPGs

The Consolidated Framework for Implementation Research (CFIR) [ 31 , 38 ] and the Promoting Action on Research Implementation in Health Systems (PARiHS) framework [ 29 , 30 ] were the most commonly employed frameworks within the selected documents. The other models mentioned were: Goal commitment and implementation of practice guidelines framework [ 32 ]; Guideline to identify key indicators [ 35 ]; Guideline implementation checklist [ 37 ]; Guideline implementation evaluation tool [ 40 ]; JBI Implementation Framework [ 33 ]; Reach, effectiveness, adoption, implementation and maintenance (RE-AIM) framework [ 34 ]; The Guideline Implementability Framework [ 39 ] and an unnamed model [ 36 ].

Domains evaluated

The number of domains evaluated (or suggested for evaluation) by the documents varied between three and five, with the majority focusing on three domains. All the models addressed the domain "context", with a particular emphasis on the micro level of the health care context (8/12) [ 29 , 31 , 34 , 35 , 36 , 37 , 38 , 39 ], followed by the multilevel (7/12) [ 29 , 31 , 32 , 33 , 38 , 39 , 40 ], meso level (4/12) [ 30 , 35 , 39 , 40 ] and macro level (2/12) [ 37 , 39 ]. The "Outcome" domain was evaluated in nine models. Within this domain, the most frequently evaluated subdomain was "adoption" (6/12) [ 29 , 32 , 34 , 35 , 36 , 37 ], followed by "acceptability" (4/12) [ 30 , 32 , 35 , 39 ], "appropriateness" (3/12) [ 32 , 34 , 36 ], "feasibility" (3/12) [ 29 , 32 , 36 ], "cost" (1/12) [ 35 ] and "penetration" (1/12) [ 34 ]. Regarding the other domains, "Intervention" (8/12) [ 29 , 31 , 34 , 35 , 36 , 38 , 39 , 40 ], "Strategies" (7/12) [ 29 , 30 , 33 , 35 , 36 , 37 , 40 ] and "Process" (5/12) [ 29 , 31 , 32 , 33 , 38 ] were frequently addressed in the models, while "Sustainability" (1/12) [ 34 ] was only found in one model, and "Fidelity/Adaptation" was not observed. The domains presented by the models and frameworks and evaluated in the documents are shown in Table  2 .
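To make the tallies above easier to compare, the reported domain frequencies can be expressed as proportions of the twelve included documents. The counts are copied from the text; the aggregation below is illustrative and not part of the original analysis:

```python
N_DOCS = 12  # documents included in the review

# Domain frequencies reported in the text (see Table 2)
domains = {
    "context": 12,
    "outcomes": 9,
    "intervention": 8,
    "strategies": 7,
    "process": 5,
    "sustainability": 1,
}

# Print domains from most to least frequently evaluated
for name, count in sorted(domains.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {count}/{N_DOCS} ({count / N_DOCS:.0%})")
```

Laid out this way, the skew is immediate: "context" appears in every document, while "sustainability" appears in one and "fidelity/adaptation" in none.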

Limitations of the models

Only two documents mentioned limitations in the use of the model or framework. These two studies reported limitations in the use of CFIR: "is complex and cumbersome and requires tailoring of the key variables to the specific context", and "this framework should be supplemented with other important factors and local features to achieve a sound basis for the planning and realization of an ongoing project" [ 31 , 38 ]. Limitations in the use of the other models or frameworks were not reported.

Subgroup analysis

Following the subgroup analysis (Table  3 ), five different models/frameworks were utilized at the policy level by institutional health services. These included the Guideline Implementation Evaluation Tool [ 40 ], the NHMRC tool (model name not defined) [ 36 ], the JBI Implementation Framework + GRiP [ 33 ], Guideline to identify key indicators [ 35 ], and the Guideline implementation checklist [ 37 ]. Additionally, the "Guideline Implementability Framework" [ 39 ] was implemented at the policy level without restrictions based on the type of health service. Regarding the organizational level, the models used varied depending on the type of service. The "Goal commitment and implementation of practice guidelines framework" [ 32 ] was applied in community and ambulatory health services, while "PARiHS" [ 29 , 30 ] and "CFIR" [ 31 , 38 ] were utilized in hospitals. In contexts where the type of health service was not defined, "CFIR" [ 31 , 38 ] and "The Guideline Implementability Framework" [ 39 ] were employed. Lastly, at the clinical level, "RE-AIM" [ 34 ] was utilized in ambulatory and hospital services, and PARiHS [ 29 , 30 ] was specifically used in hospital services.
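The subgroup findings above can be summarized as a simple lookup table. The level/service keys and model names are taken from the text; the `candidate_models` helper is a hypothetical illustration of how a stakeholder might query such a table, not a tool proposed by the review:

```python
# Subgroup analysis (Table 3): models/frameworks by level of use
# and type of health service, as reported in the text.
subgroup = {
    ("policy", "institutional"): [
        "Guideline Implementation Evaluation Tool",
        "NHMRC tool (model name not defined)",
        "JBI Implementation Framework + GRiP",
        "Guideline to identify key indicators",
        "Guideline implementation checklist",
    ],
    ("policy", "any"): ["Guideline Implementability Framework"],
    ("organizational", "community/ambulatory"): [
        "Goal commitment and implementation of practice guidelines framework",
    ],
    ("organizational", "hospital"): ["PARiHS", "CFIR"],
    ("organizational", "undefined"): ["CFIR", "Guideline Implementability Framework"],
    ("clinical", "ambulatory/hospital"): ["RE-AIM"],
    ("clinical", "hospital"): ["PARiHS"],
}

def candidate_models(level, service):
    """Return the models reported for a given level/service combination."""
    return subgroup.get((level, service), [])

print(candidate_models("organizational", "hospital"))  # ['PARiHS', 'CFIR']
```

A structure like this makes the practical intent of the subgroup analysis concrete: given a health context, it narrows the reported models to those that have actually been used there.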

Key findings

This systematic review identified 10 models/frameworks used to assess the implementation of CPGs in various health system contexts. These documents shared similar objectives in utilizing models and frameworks for assessment. The primary level of use was policy, the most common type of health service was institutional, and the main target group of the documents was professionals directly involved in clinical practice. The models and frameworks presented varied analytical domains, with sometimes divergent concepts used in these domains. This study is innovative in its emphasis on the evaluation stage of CPG implementation and in summarizing aspects and domains aimed at the practical application of these models.

The small number of documents contrasts with studies that present an extensive range of models and frameworks available in implementation science. The findings suggest that the use of models and frameworks to evaluate the implementation of CPGs is still in its early stages. Among the selected documents, there was a predominance of cross-sectional studies and methodological guidelines, which strongly influenced how the implementation evaluation was conducted. This was primarily done through surveys/questionnaires, qualitative methods (interviews, group discussions), and non-specific measurement instruments. Regarding the subject areas evaluated, most studies focused on a general clinical area, while others explored different clinical areas. This suggests that the evaluation of CPG implementation has been carried out in various contexts.

The models were chosen independently of the categories proposed in the literature: models originally categorized for purposes other than implementation evaluation, such as CFIR and PARiHS, were also used. This practice was described by Nilsen et al., who suggested that models and frameworks from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 14 , 15 , 42 , 43 ].

The results highlight the increased use of models and frameworks in evaluation processes at the policy level and institutional environments, followed by the organizational level in hospital settings. This finding contradicts a review that reported the policy level as an area that was not as well studied [ 44 ]. The use of different models at the institutional level is also emphasized in the subgroup analysis. This may suggest that the greater the impact (social, financial/economic, and organizational) of implementing CPGs, the greater the interest and need to establish well-defined and robust processes. In this context, the evaluation stage stands out as crucial, and the investment of resources and efforts to structure this stage becomes even more advantageous [ 10 , 45 ]. Two studies (16.7%) evaluated the implementation of CPGs at the individual level (clinical level). These studies stand out for their potential to analyze variations in clinical practice in greater depth.

In contrast to the predominantly systemic level of use and type of health service indicated in the documents, the most frequently observed target group was professionals directly involved in clinical practice. This suggests an emphasis on evaluating individual behaviors. The same emphasis is observed in the analysis of the models, in which evaluation of the micro level of the health context and the "adoption" subdomain predominates, in contrast with the under-use of domains such as "cost" and "process". Cassetti et al. observed the same phenomenon in their review, in which studies evaluating the implementation of CPGs mainly adopted a behavioral change approach to tackle those issues, without considering the influence of wider social determinants of health [ 10 ]. However, the literature widely reiterates that multiple factors impact the implementation of CPGs, and different actions are required to make them effective [ 6 , 46 , 47 ]. As a result, there is enormous potential for the development and adaptation of models and frameworks aimed at more systemic evaluation processes that consider institutional and organizational aspects.

In analyzing the model domains, most models focused on evaluating only some aspects of implementation (three domains). All models evaluated the "context", highlighting its significant influence on implementation [ 9 , 26 ]. Context is an essential effect modifier for providing research evidence to guide decisions on implementation strategies [ 48 ]. Contextualizing a guideline involves integrating research or other evidence into a specific circumstance [ 49 ]. The analysis of this domain was adjusted to include all possible contextual aspects, even if they were initially allocated to other domains. Some contextual aspects presented by the models vary in comprehensiveness, such as the assessment of the "timing and nature of stakeholder engagement" [ 39 ], which includes individual engagement by healthcare professionals and organizational involvement in CPG implementation. While the importance of context is universally recognized, its conceptualization and interpretation differ across studies and models. This divergence is also evident in other domains, consistent with existing literature [ 14 ]. Efforts to address this conceptual divergence in implementation science are ongoing, but further research and development are needed in this field [ 26 ].

The main subdomain evaluated was "adoption" within the outcome domain. This may be attributed to the ease of accessing information on the adoption of the CPG, whether through computerized system records, patient records, or self-reports from healthcare professionals or patients themselves. The "acceptability" subdomain pertains to the perception among implementation stakeholders that a particular CPG is agreeable, palatable or satisfactory. On the other hand, "appropriateness" encompasses the perceived fit, relevance or compatibility of the CPG for a specific practice setting, provider, or consumer, or its perceived fit to address a particular issue or problem [ 26 ]. Both subdomains are subjective and rely on stakeholders' interpretations and perceptions of the issue being analyzed, making them susceptible to reporting biases. Moreover, obtaining this information requires direct consultation with stakeholders, which can be challenging for some evaluation processes, particularly in institutional contexts.

The evaluation of the subdomains "feasibility" (the extent to which a CPG can be successfully used or carried out within a given agency or setting), "cost" (the cost impact of an implementation effort), and "penetration" (the extent to which an intervention or treatment is integrated within a service setting and its subsystems) [ 26 ] was rarely observed in the documents. This may be related to the greater complexity of obtaining information on these aspects, as they involve cross-cutting and multifactorial issues. In other words, it would be difficult to gather this information during evaluations with health practitioners as the target group. This highlights the need for evaluation processes of CPGs implementation involving multiple stakeholders, even if the evaluation is adjusted for each of these groups.

Although the models do not establish the "intervention" domain, we thought it pertinent in this study to delimit the issues that are intrinsic to CPGs, such as methodological quality or clarity in establishing recommendations. These issues were quite common in the models evaluated but were considered in other domains (e.g., in "context"). Studies have reported the importance of evaluating these issues intrinsic to CPGs [ 47 , 50 ] and their influence on the implementation process [ 51 ].

The models explicitly present the "strategies" domain, and its evaluation was usually included in the assessments. This is likely due to the expansion of scientific and practical studies in implementation science that involve theoretical approaches to the development and application of interventions to improve the implementation of evidence-based practices. However, these interventions themselves are not guaranteed to be effective, as reported in a previous review that showed unclear results indicating that the strategies had affected successful implementation [ 52 ]. Furthermore, model domains end up not covering all the complexity surrounding the strategies and their development and implementation process. For example, the ‘Guideline implementation evaluation tool’ evaluates whether guideline developers have designed and provided auxiliary tools to promote the implementation of guidelines [ 40 ], but this does not mean that these tools would work as expected.

The "process" domain was identified in the CFIR [ 31 , 38 ], JBI/GRiP [ 33 ], and PARiHS [ 29 ] frameworks. While it may be included in other domains of analysis, its distinct separation is crucial for defining operational issues when assessing the implementation process, such as determining if and how the use of the mentioned CPG was evaluated [ 3 ]. Despite its presence in multiple models, there is still limited detail in the evaluation guidelines, which makes it difficult to operationalize the concept. Further research is needed to better define the "process" domain and its connections and boundaries with other domains.

The domain of "sustainability" was only observed in the RE-AIM framework, which is categorized as an evaluation framework [ 34 ]. In its acronym, the letter M stands for "maintenance" and corresponds to the assessment of whether the user maintains use, typically longer than 6 months. The presence of this domain highlights the need for continuous evaluation of CPGs implementation in the short, medium, and long term. Although the RE-AIM framework includes this domain, it was not used in the questionnaire developed in the study. One probable reason is that the evaluation of CPGs implementation is still conducted on a one-off basis and not as a continuous improvement process. Considering that changes in clinical practices are inherent over time, evaluating and monitoring changes throughout the duration of the CPG could be an important strategy for ensuring its implementation. This is an emerging field that requires additional investment and research.

The "Fidelity/Adaptation" domain was not observed in the models. These emerging concepts involve the extent to which a CPG is being conducted exactly as planned or whether it is undergoing adjustments and adaptations. Whether or not there is fidelity or adaptation in the implementation of CPGs does not presuppose greater or lesser effectiveness; after all, some adaptations may be necessary to implement general CPGs in specific contexts. The absence of this domain in all the models and frameworks may suggest that they are not relevant aspects for evaluating implementation or that there is a lack of knowledge of these complex concepts. This may suggest difficulty in expressing concepts in specific evaluative questions. However, further studies are warranted to determine the comprehensiveness of these concepts.

It is important to note the customization of the domains of analysis: some domains presented in the models were not evaluated in the studies, while others were added as complements. This can be seen in Jeong et al. [ 34 ], where an "intervention" domain was added to the evaluation with the RE-AIM framework, reinforcing that theoretical approaches aim to guide the process rather than prescribe norms. Despite this, few limitations were reported for the models, suggesting that these studies applied the models to defined contexts without a deep critical analysis of their domains.

Limitations

This review has several limitations. First, only a few studies and methodological guidelines that explicitly present models and frameworks for assessing the implementation of CPGs were found, which means that few alternative models could be analyzed and presented in this review. Second, this review adopted multiple analytical categories (e.g., level of use, health service, target group, and domains evaluated), whose terminology varied enormously across the studies and documents selected, especially for the "domains evaluated" category. This difficulty in harmonizing the taxonomy used in the area has already been reported [ 26 ] and creates significant potential for confusion. For this reason, studies and initiatives are needed to align understandings of these concepts and, as far as possible, standardize them. Third, in some studies/documents, the information extracted was not clearly tied to an analytical category. This required an in-depth interpretative reading of the studies, which was conducted in pairs to avoid inappropriate interpretations.

Implications

This study contributes to the literature and clinical practice management by describing models and frameworks specifically used to assess the implementation of CPGs based on their level of use, type of health service, target group related to the CPG, and the evaluated domains. While there are existing reviews on the theories, frameworks, and models used in implementation science, this review addresses aspects not previously covered in the literature. This valuable information can assist stakeholders (such as politicians, clinicians, researchers, etc.) in selecting or adapting the most appropriate model to assess CPG implementation based on their health context. Furthermore, this study is expected to guide future research on developing or adapting models to assess the implementation of CPGs in various contexts.

The use of models and frameworks to evaluate CPG implementation remains a challenge. Studies should clearly state the level at which a model is used, the type of health service evaluated, and the target group. The domains evaluated in these models may need adaptation to specific contexts. Nevertheless, using models to assess CPG implementation is crucial, as they can guide a more thorough and systematic evaluation process, aiding the continuous improvement of CPG implementation. The findings of this systematic review offer valuable insights for stakeholders in selecting or adjusting models and frameworks for CPG evaluation, supporting future theoretical advancements and research.

Availability of data and materials

Abbreviations

Australian Department of Health and Aged Care

Canadian Agency for Drugs and Technologies in Health

Centers for Disease Control and Prevention

Consolidated Framework for Implementation Research

Cumulative Index to Nursing and Allied Health Literature

Clinical practice guideline

Centre for Reviews and Dissemination

Guidelines International Networks

Getting Research into Practice

Health Systems Evidence

Institute of Medicine

The Joanna Briggs Institute

Ministry of Health of Brazil

Ministerio de Sanidad y Política Social

National Health and Medical Research Council

National Institute for Health and Care Excellence

Promoting action on research implementation in health systems framework

Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development

Preferred Reporting Items for Systematic Reviews and Meta-Analyses

International Prospective Register of Systematic Reviews

Reach, effectiveness, adoption, implementation, and maintenance framework

Healthcare Improvement Scotland

United States of America

Virtual Health Library

World Health Organization

Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. 2001. Available from: http://www.nap.edu/catalog/10027 . Cited 2022 Sep 29.

Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Washington DC: National Academy Press. 1990. Available from: https://www.nap.edu/read/1626/chapter/8 Cited 2020 Sep 2.

Dawson A, Henriksen B, Cortvriend P. Guideline Implementation in Standardized Office Workflows and Exam Types. J Prim Care Community Heal. 2019;10. Available from: https://pubmed.ncbi.nlm.nih.gov/30900500/ . Cited 2020 Jul 15.

Unverzagt S, Oemler M, Braun K, Klement A. Strategies for guideline implementation in primary care focusing on patients with cardiovascular disease: a systematic review. Fam Pract. 2014;31(3):247–66. Available from: https://academic.oup.com/fampra/article/31/3/247/608680 . Cited 2020 Nov 5.


Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-015-0242-0 . Cited 2022 May 1.


Mangana F, Massaquoi LD, Moudachirou R, Harrison R, Kaluangila T, Mucinya G, et al. Impact of the implementation of new guidelines on the management of patients with HIV infection at an advanced HIV clinic in Kinshasa, Democratic Republic of Congo (DRC). BMC Infect Dis. 2020;20(1). Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=146325052&amp .

Browman GP, Levine MN, Mohide EA, Hayward RSA, Pritchard KI, Gafni A, et al. The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995;13(2):502–12. https://doi.org/10.1200/JCO.1995.13.2.502 .

Killeen SL, Donnellan N, O’Reilly SL, Hanson MA, Rosser ML, Medina VP, et al. Using FIGO Nutrition Checklist counselling in pregnancy: A review to support healthcare professionals. Int J Gynecol Obstet. 2023;160(S1):10–21. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146194829&doi=10.1002%2Fijgo.14539&partnerID=40&md5=d0f14e1f6d77d53e719986e6f434498f .

Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):1–12. Available from: https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-015-0089-9 . Cited 2020 Nov 5.

Cassetti V, M VLR, Pola-Garcia M, AM G, J JPC, L APDT, et al. An integrative review of the implementation of public health guidelines. Prev Med reports. 2022;29:101867. Available from: http://www.epistemonikos.org/documents/7ad499d8f0eecb964fc1e2c86b11450cbe792a39 .

Eccles MP, Mittman BS. Welcome to implementation science. Implementation Science BioMed Central. 2006. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-1-1 .

Damschroder LJ. Clarity out of chaos: Use of theory in implementation research. Psychiatry Res. 2020;1(283):112461.

Handley MA, Gorukanti A, Cattamanchi A. Strategies for implementing implementation science: a methodological overview. Emerg Med J. 2016;33(9):660–4. Available from: https://pubmed.ncbi.nlm.nih.gov/26893401/ . Cited 2022 Mar 7.

Wang Y, Wong ELY, Nilsen P, Chung VCH, Tian Y, Yeoh EK. A scoping review of implementation science theories, models, and frameworks — an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci. 2023;18(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-023-01296-x . Cited 2024 Jan 22.

Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1(1):1–12. Available from: https://implementationsciencecomms.biomedcentral.com/articles/10.1186/s43058-020-00023-7 . Cited 2022 May 20.

Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1508772/ . Cited 2022 May 22.


Asada Y, Lin S, Siegel L, Kong A. Facilitators and Barriers to Implementation and Sustainability of Nutrition and Physical Activity Interventions in Early Childcare Settings: a Systematic Review. Prev Sci. 2023;24(1):64–83. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85139519721&doi=10.1007%2Fs11121-022-01436-7&partnerID=40&md5=b3c395fdd2b8235182eee518542ebf2b .

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. version 6. Cochrane; 2022. Available from: https://training.cochrane.org/handbook. Cited 2022 May 23.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71 . Cited 2021 Nov 18.

Clarke M, Oxman AD, Paulsen E, Higgins JP, Green S. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5; 2011.

Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):1–8. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-019-0957-4 . Cited 2024 Jan 22.

Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-016-0384-4 . Cited 2022 May 20.

JBI. JBI’s Tools Assess Trust, Relevance & Results of Published Papers: Enhancing Evidence Synthesis. Available from: https://jbi.global/critical-appraisal-tools . Cited 2023 Jun 13.

Drisko JW. Qualitative research synthesis: An appreciative and critical introduction. Qual Soc Work. 2020;19(4):736–53.

Pope C, Mays N, Popay J. Synthesising qualitative and quantitative health evidence: A guide to methods. 2007. Available from: https://books.google.com.br/books?hl=pt-PT&lr=&id=L3fbE6oio8kC&oi=fnd&pg=PR6&dq=synthesizing+qualitative+and+quantitative+health+evidence&ots=sfELNUoZGq&sig=bQt5wt7sPKkf7hwKUvxq2Ek-p2Q#v=onepage&q=synthesizing=qualitative=and=quantitative=health=evidence& . Cited 2022 May 22.

Nilsen P, Birken SA, editors. Handbook on implementation science. Edward Elgar Publishing. 542 p. Available from: https://www.e-elgar.com/shop/gbp/handbook-on-implementation-science-9781788975988.html . Cited 2023 Apr 15.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-4-50 . Cited 2023 Jun 13.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Available from: https://pubmed.ncbi.nlm.nih.gov/20957426/ . Cited 2023 Jun 11.

Bahtsevani C, Willman A, Khalaf A, Östman M, Ostman M. Developing an instrument for evaluating implementation of clinical practice guidelines: a test-retest study. J Eval Clin Pract. 2008;14(5):839–46. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=105569473&amp . Cited 2023 Jan 18.

Balbale SN, Hill JN, Guihan M, Hogan TP, Cameron KA, Goldstein B, et al. Evaluating implementation of methicillin-resistant Staphylococcus aureus (MRSA) prevention guidelines in spinal cord injury centers using the PARIHS framework: a mixed methods study. Implement Sci. 2015;10(1):130. Available from: https://pubmed.ncbi.nlm.nih.gov/26353798/ . Cited 2023 Apr 3.


Breimaier HE, Heckemann B, Halfens RJGG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14(1):43. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=109221169&amp . Cited 2023 Apr 3.

Chou AF, Vaughn TE, McCoy KD, Doebbeling BN. Implementation of evidence-based practices: Applying a goal commitment framework. Health Care Manage Rev. 2011;36(1):4–17. Available from: https://pubmed.ncbi.nlm.nih.gov/21157225/ . Cited 2023 Apr 30.

Porritt K, McArthur A, Lockwood C, Munn Z. JBI Manual for Evidence Implementation. JBI Handbook for Evidence Implementation. JBI; 2020. Available from: https://jbi-global-wiki.refined.site/space/JHEI . Cited 2023 Apr 3.

Jeong HJ, Jo HS, Oh MK, Oh HW. Applying the RE-AIM Framework to Evaluate the Dissemination and Implementation of Clinical Practice Guidelines for Sexually Transmitted Infections. J Korean Med Sci. 2015;30(7):847–52. Available from: https://pubmed.ncbi.nlm.nih.gov/26130944/ . Cited 2023 Apr 3.

Grupo de trabajo sobre implementación de GPC. Implementación de Guías de Práctica Clínica en el Sistema Nacional de Salud. Manual Metodológico. 2009. Available from: https://portal.guiasalud.es/wp-content/uploads/2019/01/manual_implementacion.pdf . Cited 2023 Apr 3.

Commonwealth of Australia. A guide to the development, implementation and evaluation of clinical practice guidelines. National Health and Medical Research Council; 1998. Available from: https://www.health.qld.gov.au/__data/assets/pdf_file/0029/143696/nhmrc_clinprgde.pdf .

Queensland Health. Guideline implementation checklist: translating evidence into best clinical practice. 2022.


Quittner AL, Abbott J, Hussain S, Ong T, Uluer A, Hempstead S, et al. Integration of mental health screening and treatment into cystic fibrosis clinics: Evaluation of initial implementation in 84 programs across the United States. Pediatr Pulmonol. 2020;55(11):2995–3004. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2005630887&from=export . Cited 2023 Apr 3.

Urquhart R, Woodside H, Kendell C, Porter GA. Examining the implementation of clinical practice guidelines for the management of adult cancers: A mixed methods study. J Eval Clin Pract. 2019;25(4):656–63. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=137375535&amp . Cited 2023 Apr 3.

Yinghui J, Zhihui Z, Canran H, Flute Y, Yunyun W, Siyu Y, et al. Development and validation of an evaluation tool for guideline implementation. Chinese J Evidence-Based Med. 2022;22(1):111–9. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2016924877&from=export .

Breimaier HE, Halfens RJG, Lohrmann C. Effectiveness of multifaceted and tailored strategies to implement a fall-prevention guideline into acute care nursing practice: a before-and-after, mixed-method study using a participatory action research approach. BMC Nurs. 2015;14(1):18. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=103220991&amp .

Lai J, Maher L, Li C, Zhou C, Alelayan H, Fu J, et al. Translation and cross-cultural adaptation of the National Health Service Sustainability Model to the Chinese healthcare context. BMC Nurs. 2023;22(1). Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85153237164&doi=10.1186%2Fs12912-023-01293-x&partnerID=40&md5=0857c3163d25ce85e01363fc3a668654 .

Zhao J, Li X, Yan L, Yu Y, Hu J, Li SA, et al. The use of theories, frameworks, or models in knowledge translation studies in healthcare settings in China: a scoping review protocol. Syst Rev. 2021;10(1):13. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7792291 .

Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. Available from: https://pubmed.ncbi.nlm.nih.gov/22898128/ . Cited 2023 Apr 4.

Phulkerd S, Lawrence M, Vandevijvere S, Sacks G, Worsley A, Tangcharoensathien V. A review of methods and tools to assess the implementation of government policies to create healthy food environments for preventing obesity and diet-related non-communicable diseases. Implement Sci. 2016;11(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-016-0379-5 . Cited 2022 May 1.

Buss PM, Pellegrini FA. A Saúde e seus Determinantes Sociais. PHYSIS Rev Saúde Coletiva. 2007;17(1):77–93.

Pereira VC, Silva SN, Carvalho VKSS, Zanghelini F, Barreto JOMM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Heal Res Policy Syst. 2022;20(1):13. Available from: https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-022-00815-4 . Cited 2022 Feb 21.

Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24 Suppl 1:S31-7. Available from: https://pubmed.ncbi.nlm.nih.gov/15712775/ . Cited 2021 Nov 9.

Lotfi T, Stevens A, Akl EA, Falavigna M, Kredo T, Mathew JL, et al. Getting trustworthy guidelines into the hands of decision-makers and supporting their consideration of contextual factors for implementation globally: recommendation mapping of COVID-19 guidelines. J Clin Epidemiol. 2021;135:182–6. Available from: https://pubmed.ncbi.nlm.nih.gov/33836255/ . Cited 2024 Jan 25.

Lenzer J. Why we can’t trust clinical guidelines. BMJ. 2013;346(7913). Available from: https://pubmed.ncbi.nlm.nih.gov/23771225/ . Cited 2024 Jan 25.

Molino C de GRC, Ribeiro E, Romano-Lieber NS, Stein AT, de Melo DO. Methodological quality and transparency of clinical practice guidelines for the pharmacological treatment of non-communicable diseases using the AGREE II instrument: A systematic review protocol. Syst Rev. 2017;6(1):1–6. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-017-0621-5 . Cited 2024 Jan 25.

Albers B, Mildon R, Lyon AR, Shlonsky A. Implementation frameworks in child, youth and family services – Results from a scoping review. Child Youth Serv Rev. 2017;1(81):101–16.


Acknowledgements

Not applicable

Funding

This study is supported by the Fundação de Apoio à Pesquisa do Distrito Federal (FAPDF). FAPDF Award Term (TOA) nº 44/2024—FAPDF/SUCTI/COOBE (SEI/GDF – Process 00193–00000404/2024–22). The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the FAPDF.

Author information

Authors and Affiliations

Department of Management and Incorporation of Health Technologies, Ministry of Health of Brazil, Brasília, Federal District, 70058-900, Brazil

Nicole Freitas de Mello & Dalila Fernandes Gomes

Postgraduate Program in Public Health, FS, University of Brasília (UnB), Brasília, Federal District, 70910-900, Brazil

Nicole Freitas de Mello, Dalila Fernandes Gomes & Jorge Otávio Maia Barreto

René Rachou Institute, Oswaldo Cruz Foundation, Belo Horizonte, Minas Gerais, 30190-002, Brazil

Sarah Nascimento Silva

Oswaldo Cruz Foundation - Brasília, Brasília, Federal District, 70904-130, Brazil

Juliana da Motta Girardi & Jorge Otávio Maia Barreto


Contributions

NFM and JOMB conceived the idea and the protocol for this study. NFM conducted the literature search. NFM, SNS, JMG and JOMB conducted the data collection with advice and consensus gathering from JOMB. NFM and JMG assessed the quality of the studies. NFM and DFG conducted the data extraction. NFM performed the analysis and synthesis of the results with advice and consensus gathering from JOMB. NFM drafted the manuscript. JOMB critically revised the first version of the manuscript. All authors revised and approved the submitted version.

Corresponding author

Correspondence to Nicole Freitas de Mello .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information


Additional file 1: PRISMA checklist. Description of data: Completed PRISMA checklist used for reporting the results of this systematic review.

Additional file 2: Literature search. Description of data: The search strategies adapted for the electronic databases.


Additional file 3: JBI’s critical appraisal tools for cross-sectional studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for cross-sectional studies.


Additional file 4: JBI’s critical appraisal tools for qualitative studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for qualitative studies.


Additional file 5: Methodological quality assessment results for cross-sectional studies. Description of data: Methodological quality assessment results for cross-sectional studies using JBI’s critical appraisal tools.


Additional file 6: Methodological quality assessment results for the qualitative studies. Description of data: Methodological quality assessment results for qualitative studies using JBI’s critical appraisal tools.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article

Freitas de Mello, N., Nascimento Silva, S., Gomes, D.F. et al. Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review. Implementation Sci 19, 59 (2024). https://doi.org/10.1186/s13012-024-01389-1


Received: 06 February 2024

Accepted: 01 August 2024

Published: 07 August 2024

DOI: https://doi.org/10.1186/s13012-024-01389-1


Keywords

  • Implementation
  • Practice guideline
  • Evidence-based practice
  • Implementation science



  • Open access
  • Published: 07 August 2024

Management training programs in healthcare: effectiveness factors, challenges and outcomes

  • Lucia Giovanelli 1 ,
  • Federico Rotondo 2 &
  • Nicoletta Fadda 1  

BMC Health Services Research volume 24, Article number: 904 (2024)


Different professionals working in healthcare organizations (e.g., physicians, veterinarians, pharmacists, biologists, engineers) must be able to properly manage scarce resources to meet increasingly complex needs and demands. Because curricular university education, particularly in medicine, lacks specific courses in this area, management training programs have become essential in preparing health professionals to cope with global challenges. This study aims to examine the factors influencing the effectiveness of management training programs and their outcomes in healthcare settings at the middle-management level, both overall and by different groups of participants: physicians and non-physicians, and participants with or without management positions.

A survey was used to gather information from a purposive sample of professionals in the healthcare field attending management training programs in Italy. Factor analysis, a set of ordinal logistic regressions and an unpaired two-sample t-test were used for data analysis.
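To make the group-comparison step concrete, here is a minimal pure-Python sketch of the unpaired two-sample t-test (in Welch's unequal-variance form). The data, group labels, and function name are invented for illustration; the study's factor analysis and ordinal logistic regressions are not reproduced here.

```python
import math

def welch_t(a, b):
    """Unpaired (Welch) two-sample t-test: returns (t statistic, degrees of freedom).

    A p-value would normally be obtained from the t-distribution with df
    degrees of freedom, e.g. via scipy.stats.ttest_ind(a, b, equal_var=False).
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance, group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance, group b
    se2a, se2b = va / na, vb / nb                  # squared standard errors
    t = (ma - mb) / math.sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

# Hypothetical outcome scores for participants with vs. without management positions
managers = [4.1, 3.8, 4.5, 4.2, 3.9]
non_managers = [3.2, 3.6, 3.1, 3.4, 3.0]
t, df = welch_t(managers, non_managers)
```

In practice a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` would be used to obtain the p-value directly.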

The findings show that diversity of pedagogical approaches and tools, debate, and class homogeneity are important effectiveness factors. Lower competencies held before the training programs, and problems of dialogue and discussion during the course, are conducive to the introduction of innovative practices. Interpersonal and career outcomes are greater for those holding management positions.

Conclusions

The study reveals four profiles of participants with different gaps and needs. Training programs should be tailored to participants’ profiles in terms of pedagogical approaches and tools, and should preserve class homogeneity in terms of professional backgrounds and management levels to facilitate constructive dialogue and a solution-finding approach.


Several healthcare systems worldwide have identified management training as a precondition for developing appropriate strategies to address global challenges: on the one hand, poor health service outcomes despite increased health expenditure (particularly for pharmaceuticals), personnel shortages and low productivity; on the other, uneven quality of and unequal access to healthcare across the population [ 1 ]. The sustainability of health systems itself seems to depend on leaders, at all levels of health organizations, who can correctly manage scarce resources to meet increasingly complex health needs and demands, while motivating health personnel under an increasing amount of stress and steering their behaviors towards the system’s goals, in order to drive the transition towards more decentralized, interorganizational and patient-centered care models [ 2 ].

Recently, the European Commission identified professional training, understood as activity aimed at learning new capabilities (reskilling) and improving existing ones (upskilling) throughout individuals’ lifetimes (lifelong learning), as one of the seven flagship programs to be developed in the National Recovery and Resilience Plans (NRRP) to support the achievement of the European Union’s goals, such as the green and digital transitions, innovation, economic and social inclusion, and employment [ 3 ]. As a consequence, many member states have implemented training programs to address current and future challenges in health, which often represent a core mission in their NRRPs.

The increased importance of management training programs is also related to the rigidity and narrow focus of university degree courses in medicine, which do not provide physicians with the basic tools for fulfilling managerial roles [ 4 ]. Furthermore, taking on these roles does not automatically fill existing gaps in management capabilities and skills [ 5 ]. Several studies have demonstrated that, in the health setting, management competencies are influenced by positions and management levels as well as by organizational and system features [ 6 , 7 ]. Hence, training programs aimed at increasing management competencies cannot be developed without considering these differences.

To date, few studies have focused on management training programs in healthcare [ 8 ]. In particular, more investigation is needed into the methods, contents, processes and challenges that determine the effectiveness of training programs addressed to health managers, taking into account different environments, positions and management levels [ 1 ]. A gap also exists in the assessment of management training programs’ outcomes [ 9 ]. This study aims to examine the factors influencing the effectiveness and outcomes of management training in healthcare at the middle-management level. It intends to answer the following research questions: Which factors influence the management training process? Which relationships exist between management competencies held before the program, factors of effectiveness, critical issues encountered, and results achieved or prefigured at the end of the program? Are there differences, in terms of effectiveness factors, challenges and outcomes, between the following groups of participants: physicians and non-physicians, and participants with or without management positions?

Management training in healthcare

Currently, there is a wide debate about the added value of management to health organizations [ 10 ] and thus about the importance of spreading management competencies within health organizations to improve their performance. Through a systematic review, Lega et al. [ 11 ] highlighted four approaches to examine the impact of management on healthcare performance, focusing on management practices, managers’ characteristics, engagement of professionals in performance management and organizational features and management styles.

Although findings have not always been univocal, several studies suggest a positive relationship between management competencies and practices and outcomes in healthcare organizations, both from a clinical and financial point of view [ 12 ]. Among others, Vainieri et al. [ 13 ] found, in the Italian setting, a positive association between top management’s competencies and organizational performance, assessed through a multidimensional perspective. This study also reveals the mediating effect of information sharing, in terms of strategy, results and organization structure, in the relationship between managerial competencies and performance.

The key role of management competencies clearly emerges for health executives, who have to turn system policies into a vision, and then articulate it into effective strategies and actions within their organizations to steer and engage professionals [ 14 , 15 , 16 , 17 , 18 , 19 ]. However, health systems are increasingly complex and continually changing across contexts and health service levels. This means the role of health executives is evolving as well and identifying the capacities they need to address current and emerging issues becomes more difficult. For instance, a literature review conducted by Figueroa et al. [ 20 ] sheds light on priorities and challenges for health leadership at three structural levels: macro context (international and national), meso context (organizations) and micro context (individual healthcare managers).

Doctor-managers are requested to carry out both clinical tasks and tasks related to budgeting, goal setting and performance evaluation. As a consequence, a growing stream of research has examined whether managers with a clinical background actually affect healthcare performance outcomes, but studies have produced inconclusive findings. On this topic, Sarto and Veronesi [ 21 ] carried out a literature review showing a generally positive impact of clinical leadership on different types of outcome measures, with only a few studies reporting negative impacts on financial and social performance. Morandi et al. [ 22 ] focused on doctor-managers who have become middle managers and investigated the potential bias in performance appraisal due to the mismatch between self-reported and official performance data. At the individual level, the role played by managerial behavior, training, engagement, and perceived organizational support was analyzed. Among other indications, they suggested that training programs should be revised to reduce bias in performance appraisal. Tasi et al. [ 23 ] conducted a cross-sectional analysis of the 115 largest U.S. hospitals, divided into physician-led and non-physician-led, which revealed that physician-led hospital systems have higher quality ratings across all specialties and more inpatient days per hospital bed than non-physician-led hospitals. No differences between the groups were found in total revenue and profit margins. The main implication of their study is that hospital systems may benefit from physician leadership to improve the quality and efficiency of care delivered to patients, as long as education and training adequately prepare physicians for these roles. The main issue, as also observed by others [ 4 , 24 ], is that university education in medicine still includes little focus on aspects such as collaborative management, communication and coordination, and leadership skills.
Such a circumstance motivates the call for further training. Regarding the implementation of training programs, Liang et al. [ 1 ] have recently shown that it is hindered, among other things, by a lack of sufficient knowledge about needed competencies and existing gaps. Their analysis, which focuses on senior managers from three categories of Chinese hospitals, shows that before commencing the programs senior managers had not acquired adequate management competencies through either formal or informal training. It is worth noticing that significant differences exist between hospital categories and management levels. For this reason, they recommend using a systemic approach to design training programs, one that considers different hospital types, management levels and positions. Yarbrough et al. [ 6 ] examined how competence training worked in healthcare organizations and which competencies leaders need at different points in their careers and at various organizational levels. They carried out a cross-sectional survey of 492 US hospital executives, whose most significant result was that competence training is effective in healthcare organizations.

Walston and Khaliq [ 25 ], from a survey of 2,001 hospital CEOs across the US, concluded that the greatest contribution of continuing education is to keep CEOs updated on technological and market changes that impact their current job responsibilities. Conversely, it does not seem to be valued for career or succession planning. Regarding the methods of continuing education, they found increasing use of internet-based tools. Walston et al. [ 26 ] identified the factors affecting continuing education, finding, among other things, that CEOs from for-profit and larger hospitals tend to take less continuing education, whereas senior managers' commitment to continuing education is influenced by region, gender, the CEO's personal continuing education hours and the focus on change.

Furthermore, the principles that inspire modern healthcare models, such as dehospitalization, horizontal coordination and patient-centeredness, imply the increased importance of middle managers, within single structures but also along clinical pathways and projects, in creating and sustaining high performance [ 27 , 28 , 29 ].

Whaley and Gillis [ 8 ] investigated the development of training programs aimed at increasing the managerial competencies and leadership of middle managers, from both clinical and nonclinical backgrounds, in the US context. By adopting the top managers’ perspective, they found widespread difficulty in aligning training needs and program contents. A 360° assessment of the competencies of Australian middle-level health service managers from two public hospitals was then conducted by Liang et al. [ 7 ] to identify managerial competence levels and training and development needs. The assessment found competence gaps and confirmed that managerial strengths and weaknesses varied across management groups from different organizations. In general, several studies have shown that leading at various organizational levels in healthcare does not necessarily require the same levels and types of competencies.

Liang et al. [ 30 ] explored the core competencies required of middle to senior-level managers in Victorian public hospitals. By adopting mixed methods, they confirmed six core competencies and provided guidance for the development of a competence-based educational approach for training the current and future management workforce. Liang et al. [ 31 ] then focused on the poorly investigated area of community health services, which are one of the main solutions for reducing the increasing demand for hospital care in general, and in the reforms of the Australian health system in particular. Their study advanced the understanding of the key competencies required by senior and mid-level managers for effective and efficient community health service delivery. A subsequent cross-sectional study by AbuDagga et al. [ 32 ] highlighted that some community health services, such as home healthcare and hospice agencies, also need specific cultural competence training to be effective in reducing health disparities.

Using both qualitative and quantitative methods, Liang et al. [ 33 ] developed a management competence framework. Such a framework was then validated on a sample of 117 senior and middle managers working in two public hospitals and five community services in Victoria, Australia [ 34 ]. Fanelli et al. [ 35 ] used mixed methods to identify the following specific managerial competencies, which healthcare professionals perceive as crucial to improve their performance: quality evaluation based on outcomes, enhancement of professional competencies, programming based on process management, project cost assessment, informal communication style and participatory leadership.

Loh [ 5 ], through a qualitative analysis conducted in Australian hospitals, examined the motivation behind the choice of medically trained managers to undertake postgraduate management training. Interesting results stemming from the analysis include the fact that doctors often move into management positions without first undertaking training, and that clinical experience alone does not provide the required management competencies. It is also worth noting that effective postgraduate management training for doctors requires a combination of theory and practice, and that doctors choose to undertake training mostly to gain credibility.

Ravaghi et al. [ 36 ] conducted a literature review to assess the evidence on the effectiveness of different types of training and educational programs delivered to hospital managers. The analysis identifies a set of aspects that are impacted by training programs. Training programs focus on technical, interpersonal and conceptual skills, and positive effects are mainly reported for technical skills. Numerous challenges are involved in designing and delivering training programs, including lack of time; difficulty in employing competencies in the workplace, partly due to position instability; continuous changes in the health system environment; and lack of support by policymakers. One of the more common flaws is that managers are mainly trained as individuals, even though they work in teams. The implications of the study are that increased investments and large-scale planning are required to develop the knowledge and competencies of hospital managers. Another shortcoming concerns the measurement of training program outcomes, a usually neglected issue in the literature [ 9 ]. It also emerges that the best-performing training programs are specific, structured and comprehensive.

Kakemam and Liang [ 2 ] conducted a literature review to shed light on the methods used to assess management competencies, and thus professional development needs, in healthcare. Their analysis confirms that most studies focus on middle and senior managers and demonstrates great variability in assessment methods and processes. As a consequence, they develop a framework to guide the design and implementation of management competence studies in different contexts and countries.

Ultimately, the literature has long pointed out that developing and strengthening the competencies and skills of health managers is a core goal for increasing the efficiency and effectiveness of health systems, and that management training is crucial for achieving it [ 37 ]. The reasons can be summarized as follows: university education has scarcely been able to provide physicians and, more generally, health operators with adequate, or at least basic, managerial competencies and skills; over time, professionals have become involved in increasingly complex and rapidly changing working environments, requiring increased management responsibilities as well as new competencies and skills; and in many settings, for instance in Italy, delays in enforcing the law that requires attendance of specific management training courses to take up a leadership position have hindered the acquisition of new competencies, and the improvement of existing ones, by those already managing health organizations, structures and services.

For the purposes of this study, management competencies refer to the possession and ability to use skills and tools for service organization and service planning, control and evaluation, evidence-informed decision-making and human resource management in the healthcare field.

Management training in the Italian National Health System

The reform of the Italian National Health System (INHS), implemented by Legislative Decree No. 502/1992 and inspired by neo-managerial theories, introduced the role of the general manager and assigned new responsibilities to managers.

However, the inadequate performance achieved in the first years of the reform's application highlighted the cultural gap that made the normative adoption of managerial approaches and tools unproductive at the operational level. Legislation evolved accordingly, and management training became mandatory for holding management positions. Decree-Law No. 583/1996 (converted into Law No. 4/1997) provided that the requirements and criteria for access to the top management level were to be determined. Presidential Decree No. 484/1997 therefore determined these requirements, together with the requirements and criteria for access to the middle-management level of the INHS’ healthcare authorities. This regulation also imposed the acquisition of a specific management training certificate, dictated rules concerning the duration, contents and teaching methods of the management training courses issuing this certificate, and indicated the requirements for attendance. Immediately afterwards, Legislative Decree No. 229/1999 amended the discipline of medical management and the health professions and promoted continuous training in healthcare. It also regulated management training, which became an essential requirement for appointment as health director or director of a complex structure in the healthcare authorities for the categories of physicians, dentists, veterinarians, pharmacists, biologists, chemists, physicists and psychologists.

The second pillar of the INHS reform was the regionalization of the INHS. Therefore, the Regions had to organize the courses for achieving management training certificates on the basis of specific agreements with the State, which regulated the contents, the methodology, the duration and the procedures for obtaining certification. The State-Regions Conference approved the first interregional agreement on management training in July 2003, whereas the State-Regions Agreement of 16 May 2019 regulated the training courses. The mandatory contents of the management training outlined the skills and behaviors expected from general managers and other top management key players (Health Director, Administrative Director and Social and Health Director), as well as from all middle managers.

A survey was used to gather information from a purposive sample of professionals in the healthcare field taking part in management training programs. In particular, a structured questionnaire was submitted to 140 participants enrolled in two management programs organized by an Italian university: a second-level specializing master course and a training program carried out in collaboration with the Region. The programs awarded participants the title needed for appointment as director of a ward or administrative unit in a public healthcare organization, and shared the same scientific committee, teaching staff, administrative staff and venue. The respondents’ profile is shown in Table  1 .

It is worth pointing out that the teaching staff is characterized by diversity: teachers have different educational and professional backgrounds, are practitioners or academics, and come from different Italian regions.

The questionnaire was administered and completed in person and online between November 2022 and February 2023. All participants took part in the analysis voluntarily and gave their consent, being granted total anonymity.

The questionnaire, which was developed for this study and based on the literature, consisted of 64 questions divided into the following five sections: participant profile (10 items), management competencies held by participants before the training program (4 items), effectiveness factors of the training program (23 items), challenges to effectiveness (10 items), and outcomes of the training program (17 items) (an English-language version of the questionnaire is attached to this paper as a supplementary file). In particular, the second section aimed to shed light on the participants’ situation regarding the management competencies held before the start of the training program and how they were acquired; the third section aimed to collect participants’ opinions on how the program was conducted and the factors influencing its effectiveness; the fourth section aimed to collect participants’ opinions on the main obstacles encountered during the program; and the fifth section aimed to reveal the main outcomes of the program in terms of knowledge, skills, practices and career.

Except for those of the first section, which collected personal information, all the items of the next four categories – management competencies, effectiveness factors, challenges and outcomes – were measured on a 5-point Likert scale. To ensure that the content of the questionnaire was appropriate, clear and relevant, a pre-test was conducted in October 2022 by asking four academics and four practitioners, both physicians and non-physicians, with and without management positions, to fill it out. The aim was to understand whether the questionnaire really addressed the information needs behind the study and was easily and correctly understood by respondents. The individuals involved in the pre-test were therefore asked to fill it out simultaneously but independently, and at the end of the compilation a focus group including them and the three authors was used to collect their opinions and suggestions. After this phase, the following changes were made: in the ‘Participant profile’ section, ‘Veterinary medicine’ was added to the fields accounting for ‘Educational background’ (item 3); in Sect. 2, the explanation given to ‘basic management competencies’ was modified and aligned to what is required by Presidential Decree No. 484/1997; in Sect. 3, item 25 was added to capture a missing aspect that respondents considered important, and brackets were added to the description of items 15, 16 and 29 to clarify the concepts of mixed and homogeneous classes and of pedagogical approaches and tools; in Sect. 4, in the description of item 40, the words ‘find the energy required’ were added to avoid confusion with items 38 and 39, whereas brackets were added to items 41 and 45 to provide more explanation; in Sect. 5, brackets were added to the description of item 51 to increase clarity, and the last item was divided into two (now items 63 and 64) to distinguish the training program’s impact on career at different times.

With reference to the methods, first, a factor analysis based on the principal component method was conducted within each section of the questionnaire (again except for the first), in order to reduce the number of variables and shed light on the factors influencing the management training process. Bartlett's sphericity test and the Kaiser–Meyer–Olkin (KMO) value were used to assess sampling adequacy, whereas factors were extracted following the Kaiser criterion, i.e., eigenvalues greater than unity, and total variance explained. The rotation method was Varimax with Kaiser normalization, except for the second section (i.e., management competencies held by participants before the training program), which did not require rotation since a single factor emerged from the analysis. Bartlett's sphericity test was statistically significant ( p  < 0.001) in all sections, KMO values were all greater than 0.65 (average value 0.765), and the total variances explained were all greater than 65% (average value of approximately 70.89%), which are acceptable values for such an analysis.
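The adequacy checks and extraction criterion described above can be sketched with NumPy and SciPy. The snippet below is a minimal illustration on synthetic Likert-style data (a hypothetical example, not the study's dataset): it computes Bartlett's sphericity statistic, the KMO value, and the number of factors retained under the Kaiser criterion.

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(X):
    """Chi-square test that the correlation matrix is an identity matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, df)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(X, rowvar=False)
    P = np.linalg.inv(R)
    # partial correlations derived from the precision matrix
    A = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
    np.fill_diagonal(A, 0.0)
    R_off = R - np.eye(R.shape[0])  # off-diagonal correlations
    return (R_off ** 2).sum() / ((R_off ** 2).sum() + (A ** 2).sum())

def kaiser_n_factors(X):
    """Number of principal components with eigenvalue greater than one."""
    eigenvalues = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int((eigenvalues > 1.0).sum())

# Hypothetical data: 120 respondents, 6 Likert-style items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(120, 1))
X = latent + 0.6 * rng.normal(size=(120, 6))

chi2, p_value = bartlett_sphericity(X)   # significant -> data are factorable
kmo_value = kmo(X)                       # above 0.65 -> adequate sampling
n_factors = kaiser_n_factors(X)          # a single factor emerges here
```

In a full analysis, the retained loadings would then be Varimax-rotated; the point of the sketch is only to make the adequacy criteria reported in the text concrete.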

Second, a set of ordinal logistic regressions was performed to assess the relationships between the management competencies held before the start of the course, effectiveness factors, challenges, and outcomes of the training program.

The factors that emerged from the factor analysis were used as independent variables, whereas some significant outcome items accounting for different performance aspects were selected as dependent variables: improved management competencies, innovation practices, professional relationships, and career prospects. Ordered logit regressions were used because the dependent variables (outcomes) were measured on ordinal scales. Some control variables for the respondent profiles were included in the regression models: age, gender, educational background, management position, and working in the healthcare field.
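As a sketch of the estimation approach, the proportional-odds (ordered logit) model can be fitted by maximizing its log-likelihood directly. The example below uses synthetic data (one standardized factor score driving a 5-point ordinal outcome; all names and values are hypothetical). In practice a library routine such as statsmodels' OrderedModel would be used instead of hand-rolled optimization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def ordered_logit_nll(params, X, y, n_cat):
    """Negative log-likelihood of the proportional-odds (ordered logit) model."""
    n, p = X.shape
    beta = params[:p]
    # strictly increasing cutpoints: alpha_1, alpha_1 + exp(.), ...
    raw = params[p:p + n_cat - 1]
    alphas = np.cumsum(np.concatenate(([raw[0]], np.exp(raw[1:]))))
    eta = X @ beta
    # cumulative probabilities P(Y <= k | x), padded with 0 and 1
    cum = expit(alphas[None, :] - eta[:, None])
    cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    cell = cum[np.arange(n), y + 1] - cum[np.arange(n), y]  # P(Y = y | x)
    return -np.sum(np.log(np.clip(cell, 1e-12, None)))

# Hypothetical data: one factor score driving a 5-point ordinal outcome
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 1))
latent = 1.5 * X[:, 0] + rng.logistic(size=300)
y = np.digitize(latent, [-2.0, -0.7, 0.7, 2.0])  # ordinal categories 0..4

n_cat = 5
x0 = np.zeros(X.shape[1] + n_cat - 1)            # one slope + four cutpoints
res = minimize(ordered_logit_nll, x0, args=(X, y, n_cat), method="BFGS")
beta_hat = res.x[0]                              # should recover a positive slope
```

The ordinal dependent variable is modeled through cumulative probabilities with a single slope vector, which is exactly the proportional-odds assumption the paper later checks with the LR test.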

With the aim of understanding which explanatory variables could exert an influence, a backward elimination method was used, retaining variables with significance values below 0.20 ( p  < 0.20). Table 4 shows the results of the regressions with the independent variables obtained following this criterion. In all four models the null hypothesis could not be rejected ( p  > 0.05), meaning that the proportional odds assumption behind the ordered logit regressions holds. Third and finally, an unpaired two-sample t-test was used to examine the differences between groups of participants in the management training programs, selected on the basis of two criteria: physicians versus non-physicians, and participants with versus without management positions.
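The group comparison step can be reproduced with SciPy's unpaired two-sample t-test; the group labels and factor scores below are synthetic stand-ins for the study's standardized scores, not real data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical standardized factor scores for two groups of respondents
physicians = rng.normal(loc=-0.4, scale=1.0, size=100)
non_physicians = rng.normal(loc=0.4, scale=1.0, size=100)

# Unpaired (independent) two-sample t-test; equal_var=True is the classic pooled form
t_stat, p_value = stats.ttest_ind(physicians, non_physicians, equal_var=True)
significant = p_value < 0.05  # only significant differences are reported, as in Table 5
```

With real survey data, `equal_var=False` (Welch's form) is often the safer default when group variances may differ.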

First, descriptive statistics are useful for understanding which aspects participants considered the most and least important by category. This can be done by focusing on the items of the four sections of the questionnaire (excluding the first, which depicts participant profiles) that were given the highest and lowest scores at the sample level and by different groups of participants (physicians and non-physicians, participants with or without management positions). Table 2 summarizes the mean values and standard deviations of these higher and lower scores by group. Focusing on management competencies, all groups reported having acquired them mainly through professional experience, except for non-physicians, who attributed greater significance to postgraduate training programs, with a mean value of 3.05 out of 5. All groups agreed on the limited role of university education in providing management competencies, with mean values for the sample and all four groups below 2.5. It is worth noting that this item exhibits the lowest value for physicians (1.67) and the highest for non-physicians (2.37). In addition, physicians are the group attributing the lowest values to postgraduate education and professional experience for acquiring management competencies. With reference to effectiveness factors, all groups also agree on the necessity of mixing theoretical and practical lessons during the training program, with mean values well above 4.5, whereas exclusive use of self-assessment is generally viewed as the most ineffective practice, except by non-physicians, who attribute the lowest value to remote lessons (mean 1.82). Among the challenges, the whole sample, physicians and participants without management positions see the lack of financial support from their organization as the main problem (mean 4.10), while non-physicians and participants with management positions identify lack of time, with mean values of 3.75 and 4, respectively.
All agree that dialogue and discussion during the course were the least relevant of the problems, with mean values below 1.5. Outcomes show generally high values: even the lowest-rated items have mean values around 3.5. It is worth noting that an increased understanding of healthcare systems was the main benefit gained from the program, with mean values equal to or higher than 4.50. The lowest positive impact is attributed by all attendees to improved relationships with superiors and top management, with mean values between 3.44 and 3.74, with the exception of participants without management positions, who mention improved career prospects.

To shed light on the factors influencing the management training process, the findings of the factor analyses conducted by category are reported. Starting from the management competencies held before the training program, a single factor was extracted from the four items, named and interpreted as follows:

Basic management competencies, which measures the level of management competencies acquired by participants through higher education, post-graduate training and professional experience.

The effectiveness factors are then grouped into six factors, named and explained as follows:

Diversity and debate, which aggregates five items assessing the importance of diversity in participants’ and teachers’ educational and professional backgrounds and pedagogical approaches and tools, as well as level of participant engagement and discussion during lessons and in carrying out the project work required to complete the program.

Specialization, which includes three items accounting for a robust knowledge of healthcare systems by focusing on teachers’ profiles and lessons’ theoretical approaches.

Lessons in presence, which groups three items explaining that in-presence lessons increase learning outcomes and discussion among participants.

Final self-assessment, made up of three items asserting that learning outcomes should be assessed by participants themselves at the end of the course.

Written intermediate assessment, composed of two items indicating that mid-term assessments should only be written.

Homogeneous class, which is made up of a single component accounting for participants’ similarity in terms of professional backgrounds and management levels, tasks and responsibilities.

The challenges are aggregated into the following four factors:

Lack of time, which includes three items reporting scarce time and energy for lessons and study.

Problems of dialogue and discussion, which groups three items focusing on difficulties in relating to and debating with other participants and teachers.

Low support from organization, which is made up of two items reporting poor financial support and low value given to the initiative from participants’ own organizations.

Organizational issues, which aggregates two items reflecting scarce flexibility and collaboration by superiors and colleagues in participants’ own organizations and unfamiliarity with study.

Table 3 shows the component matrix with saturation coefficients and factors obtained for the management competencies held before the training program (unrotated), effectiveness factors (rotated), and challenges (rotated).

A set of ordinal logistic regressions was performed to examine the relationships between management competencies held before the start of the course, effectiveness factors, challenges and outcomes of the training program. The results, shown in Table  4 , are articulated into four models, one for each selected outcome. In relation to model 1, the factors ‘diversity and debate’ ( p  < 0.001), ‘written intermediate assessment’ ( p  < 0.05) and ‘homogeneous class’ ( p  < 0.001) have a significant positive impact on the improvement of management competencies, which is also increased by low values attributed to ‘problems of dialogue and discussion’ ( p  < 0.01). In model 2, the change of professional practices in light of lessons learned during the program, selected as an innovation outcome, is positively affected by ‘diversity and debate’ ( p  < 0.001), ‘homogeneous class’ ( p  < 0.05) and ‘organizational issues’ ( p  < 0.01), while it is negatively influenced by a high value of ‘basic management competencies’ held before the course ( p  < 0.05). Regarding model 3, ‘diversity and debate’ ( p  < 0.001) and ‘homogeneous class’ ( p  < 0.01) also have a significant positive effect on the improvement of professional relationships, which is instead negatively affected by ‘lessons in presence’ ( p  < 0.05). Finally, concerning model 4, the career prospects outcome benefits from ‘diversity and debate’ ( p  < 0.05) and ‘homogeneous class’ ( p  < 0.01), since both factors exert a positive effect, whereas ‘low support from organization’ negatively influences career prospects ( p  < 0.001). Table 4 also shows that the null hypothesis of the LR test of proportionality of odds across the response categories cannot be rejected (all four p  > 0.05).

Finally, it is worth noting that none of the control variables reflecting the respondent profiles (age, gender, management position, working in the healthcare field, and educational background) was found to be statistically significant. These variables are not reported in Table  4 because regression models were obtained following a backward elimination method, as explained in the method section.

Finally, the t-test reveals significant differences between physicians and non-physicians, as well as between participants with and without management positions. Table 5 shows only the statistically significant t-test results regarding competencies held before attending the course, effectiveness factors, challenges of the training program, and outcomes achieved. In the first comparison, non-physicians show higher management competencies at the start of the program, with a mean value of 0.31, while physicians perceive less support from their own organization, with a mean value of 0.13 compared to -0.18 for non-physicians. Concerning the second comparison, participants with management positions have higher management competencies at the start of the program (0.19 versus -0.13) and suffer more from lack of time, with mean values of 0.23 versus -0.16 for participants without management positions. As for the factors related to the effectiveness of the training program, participants with management positions exhibit a lower mean value for written mid-term assessments, -0.24 versus the 0.17 reported by participants without management positions. Conversely, the final self-assessment at the end of the program is rated higher by participants with management positions, 0.24 compared to -0.17 for participants without management positions. The latter category feels more strongly the problem of low support from their organizations, with a mean value of 0.16 compared to -0.23, and is slightly less motivated by possible career improvement, with a mean value of 3.31 compared to the 3.73 reported by participants with management positions.

The results stemming from the different analyses are now considered and interpreted in light of the extant literature. Personal characteristics such as gender and age, unlike what Walston et al. [ 26 ] found for executives' continuing education, and professional characteristics such as seniority and working in the public or private sector, do not seem to affect participation in management training programs.

The findings clearly show the outstanding importance of ‘diversity and debate’ and ‘class homogeneity’ as factors of effectiveness, since they positively impact all outcomes: competencies, innovation, professional relationships and career. These factors capture two key aspects complementing each other: on the one hand, participants and teachers’ different backgrounds provide the class with a wider pool of resources and expertise, whereas the use of pedagogical tools fostering discussion enriches the educational experience and stimulates creativity. On the other hand, due to the high level of professionalism in the setting, sharing common management levels means similar tasks and responsibilities, as well as facing similar problems. Consequently, speaking the same language leads to deeper knowledge and effective technical solutions.

In relation to the improvement of management competencies, the critical role of a good class atmosphere, that is, the absence of problems of dialogue and discussion, also emerges. ‘Diversity and debate’ and ‘class homogeneity’, as explained before, seem to contribute to this, since they enhance freedom of expression and fair confrontation, leading to improved learning outcomes. It is interesting to note that problems of dialogue and discussion turned out to be the least relevant challenge across the sample.

Two interesting points come from the factors affecting innovation. First, it seems that lower competencies before the training program lead to the development of more innovative practices. The reason is that holding fewer basic competencies means greater scope for action once new capabilities are learned: the spirit of openness is conducive to breaking down routines, and innovative practices previously hindered by a lack of knowledge and tools can thus be introduced. This extends the findings of previous studies, since the employment of competencies in the workplace is influenced by professionals' initial competence endowment [ 36 ], and those showing gaps have more room to recover, also in terms of motivation to change, that is, understanding the importance of meeting current and future challenges [ 26 ]. Second, more innovative practices are introduced by participants perceiving more organizational issues. This may reveal, on the one hand, a stronger individual motivation towards professional growth among participants who suffer from a lack of flexibility and collaboration from their superiors and colleagues. In this regard, poor tolerance, flexibility and permissions in the workplace act as a stimulus to innovation, which can be viewed as a way of challenging the status quo. On the other hand, in line with the above, this confirms that unfamiliarity with study increases the innovative potential of participants.
Since this study reveals that physicians are neither adequately educated from a management point of view nor incentivized to attend postgraduate training programs, it points out how important it is to extend continuing education to all health professional categories [ 25 , 26 ].

The topic of the competencies held by different categories needs more attention. The study reveals that physicians and participants without management positions start the program with lower basic competencies. At the sample level, higher education is viewed as the most ineffective tool for providing such competencies, whereas professional experience is seen as the best way to acquire them. Notably, non-physicians give the highest value to postgraduate education, which suggests they are the group most interested in, or incentivized to pursue, continuing education. Although holding a managerial position does not automatically mean having higher competencies [ 5 ], it is evident that such professional experience contributes to filling existing gaps. Physicians stand out as the category for which university education, postgraduate education and professional experience exert the lowest impact on management competence improvement. Considering the relationship between competencies held before the course and innovation, as described above, engaging physicians in training programs, even more so if they do not have management responsibilities, has a major impact on health organizations’ development prospects. The findings also point out that effective management training requires a combination of theory and practice for all categories of professionals, not just for physicians, as observed by Loh [ 5 ].

The main outcome, in general and for all participant categories, is an increased understanding of how healthcare systems work, which precedes increased competencies. This confirms the importance of knowledge of the healthcare environment [ 31 ], and clarifies the order of the aspects impacted by training programs as reported by Ravaghi et al. [ 36 ]: first conceptual, then technical, and finally interpersonal. However, interpersonal outcomes are by far greater for those holding management positions, which extends the findings of Liang et al. [ 31 ]. In particular, participants already managing units report the greatest impacts in terms of ability to understand colleagues’ problems, improvement of professional relationships and collaboration with colleagues from other units. Unsurprisingly, participants with management positions feel, more than others, the lack of collaborative and communication skills, which represents one of the main flaws of university education in the field of medicine [ 4 ] and is also often neglected in management training [ 36 ]. This also confirms that different management levels show specific competence requirements and education needs [ 6 , 7 ]. 

It is then important to discuss the negative effect of in-person lessons on the improvement of professional relationships. At first glance this may seem odd, but its real meaning emerges from a comprehensive interpretation of all the findings. First, it does not mean that remote lessons are more effective: as a factor of effectiveness, remote lessons are attributed very low values and, for all categories of participants, lower values than in-person and hybrid lessons. Non-physicians, in particular, attribute them the lowest value of all. At most, remote lessons are viewed as convenient rather than effective. The negative influence of in-person lessons can instead be explained by the fact that a specific category, i.e., participants with management positions, rates this aspect as much more important than other participants do and, as reported above, derives much greater benefits from management training in terms of improved relationships. Participants with management positions, due to their tasks and responsibilities, suffer more than others from a lack of time to devote to course participation. For them, as for non-physicians, lack of time represents the main challenge to attending the course effectively. This problem is well documented in the literature, where lack of time is also viewed as a challenge to applying the skills learned during the course [ 36 ]. Considering that class discussion and homogeneity contribute to fostering relationships, a comprehensive reading of the findings reveals that, because of their workload, participants with management positions find remote lessons particularly convenient and still effective. Furthermore, if the class is formed by participants sharing similar professional backgrounds and management levels, debate is not precluded and interpersonal relationships improve as a consequence.
From the observation of single items, it can be concluded that participants with management positions, and in general those with higher basic management competencies at the start of the program, prefer more flexible and leaner training programs: intermediate assessment through conversation, self-assessment at the end of the course, more concentrated lesson schedules and greater use of remote lessons.

Differently from what was found by Walston and Khaliq [ 25 ], the findings highlight that participants with management positions place a positive value on the impact of management training on career prospects. These participants are also the ones most supported by their own organizations. Conversely, the lack of support, especially in terms of inadequate funds devoted to these initiatives, strongly affects physicians and participants without management positions, which clarifies what this challenge is about and who is mainly affected by it [ 36 ]. Low incentives mean having attended fewer training programs in the past, which, together with less management experience, explains why these categories have developed fewer competencies. Among the outcomes of the training program, the little attention paid by organizations is also evidenced by the lowest values attributed by all categories, except participants without management positions, to the improvement of relationships with superiors and top management.

In general, the study contributes to a better understanding of the outcomes of management training programs in healthcare and their determinants [ 9 ]. In particular, it sheds light on gaps and education needs [ 1 ] by category of health professionals [ 2 ]. The research findings have major implications for practice, which can be drawn after identifying the four profiles of participants revealed by the study. All profiles share common characteristics, such as the value given to debate, to the diversity of pedagogical approaches and tools and to class homogeneity, as well as the need for a deeper comprehension of healthcare systems. However, each presents characteristics that determine specific issues and education gaps, which are summarized as follows:

Physicians without management positions: low competencies at the start of the program and scarce incentives for attending the course from their own organization;

Physicians with management positions: they partially compensate for competence gaps through professional experience, suffer from lack of time, and are motivated by the chance to improve their career prospects;

Non-physicians without management positions: they partially fill competence gaps through postgraduate education, suffer from lack of time, and have scarce incentives for attending the course from their own organization;

Non-physicians with management positions: they partially bridge competence gaps through postgraduate education and professional experience, are the most affected by a lack of time, and are motivated by the chance to improve their career prospects.

Recommendations are outlined for different levels of action:

For policymakers, it is suggested to strengthen the ability of higher education courses in medicine and related fields to advance the understanding of healthcare systems’ structure and operation, as well as their current and future challenges. Such a new approach to curricula design should have, as a main goal, the provision of adequate management competencies.

For healthcare organizations, it is suggested to incentivize the acquisition of management competencies by all categories of professionals through postgraduate education and training programs. This means supporting them from both a financial and an organizational point of view, for instance through more flexible working conditions. Special attention should be paid to physicians who, even without executive roles, manage resources, directly affect the organization's effectiveness and efficiency through their day-to-day activity, and hold the greatest innovative potential within the organization. Concerning executives, especially in the current changing context of healthcare systems, much more attention should be paid to fostering interpersonal skills, in terms of communication and cooperation.

For those designing training programs, it is suggested to tailor courses to participants’ profiles, using different pedagogical approaches and tools, for instance in terms of teacher composition, lesson delivery methods and learning assessment methods, while preserving class homogeneity in terms of professional backgrounds and management levels to facilitate constructive dialogue and solution-finding approaches. Designing ad hoc training programs would also make it possible to meet participants’ needs from an organizational point of view, for instance in terms of program length and lesson concentration.

Limitations

This study has some limitations, which pave the way for future research. First, it is context-specific by country, since it was carried out within the INHS, which requires health professionals to attend management training programs in order to hold certain positions. It is also context-specific by training program, since it focuses on programs that qualify participants for appointment as director of a ward or administrative unit in a public healthcare organization. This determines the kind of management competencies included in the study, namely those mandatorily required for this middle-management category. Therefore, there is a need to extend the research and test these findings on different types of management training programs, participants and countries. Second, this study is based on a survey of participants’ perceptions, which entails two unavoidable issues: although based on the literature and pre-tested, the questionnaire may not measure what it intends to, or may fail to capture detailed and nuanced insights from respondents; and responses may be affected by biases due to reactive effects. Third, a backward elimination method was adopted to select variables in model building. While it balances simplicity and model fit, this variable selection technique is not without consequences. Despite advantages such as starting the process with all variables included, removing the least important early and leaving the most important in, it also has disadvantages. The main one is that once a variable is deleted from the model, it is never reconsidered, although it may become significant later [ 38 ]. For these reasons, we intend to reinforce the research with new data sources, such as teachers’ perspectives and official assessments, and with different variable selection strategies.
A combination of qualitative and quantitative methods for data elaboration could then be used to deepen the analysis of the relationships between motivations, effectiveness factors and outcomes. Furthermore, since the investigation of competence development, acquisition of new competencies and the transfer of acquired competencies was beyond the purpose of this study, a longitudinal approach will be used to collect data from participants attending future training programs to track changes and identify patterns.
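The backward elimination procedure discussed above, together with its main drawback (a variable dropped from the model is never reconsidered), can be illustrated with a minimal sketch. The scoring function, variable names and tolerance below are hypothetical illustrations, not taken from the study's actual models:

```python
def backward_eliminate(variables, score, tol=0.01):
    """Backward elimination sketch: start with all variables, repeatedly
    drop the one whose removal hurts the fit the least, and stop once every
    remaining variable's removal would worsen the score by more than `tol`.
    `score(vars)` returns a goodness-of-fit measure (higher is better).
    Note: a removed variable is never reconsidered, the drawback cited in [38]."""
    selected = list(variables)
    while len(selected) > 1:
        current = score(selected)
        best_drop, best_score = None, None
        for v in selected:
            s = score([u for u in selected if u != v])
            if best_score is None or s > best_score:
                best_drop, best_score = v, s
        if current - best_score > tol:
            break  # every remaining variable matters; stop
        selected.remove(best_drop)
    return selected

# Toy example: a hypothetical R^2-like additive score where only x1 and x3
# contribute meaningfully to model fit.
contrib = {"x1": 0.40, "x2": 0.002, "x3": 0.25, "x4": 0.005}

def toy_score(vars_):
    return sum(contrib[v] for v in vars_)

print(backward_eliminate(list(contrib), toy_score))  # -> ['x1', 'x3']
```

In practice the score would come from fitting a regression at each step (e.g., dropping the predictor with the highest p-value); the control flow, and the irreversibility of each deletion, are the same.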

Availability of data and materials

An English-language version of the questionnaire used in this study is attached to this paper as a supplementary file. The raw data collected via the questionnaire are not publicly available due to privacy and other restrictions. However, datasets generated and analyzed during the current study may be available from the corresponding author upon reasonable request.

Abbreviations

INHS: Italian National Health System

KMO: Kaiser–Meyer–Olkin

NRRP: National Recovery and Resilience Plan

Liang Z, Howard PF, Wang J, Xu M, Zhao M. Developing senior hospital managers: does ‘one size fit all’? – evidence from the evolving Chinese health system. BMC Health Serv Res. 2020;20(281):1–14. https://doi.org/10.1186/s12913-020-05116-6 .


Kakemam E, Liang Z. Guidance for management competency identification and development in the health context: a systematic scoping review. BMC Health Serv Res. 2023;23(421):1–13. https://doi.org/10.1186/s12913-023-09404-9 .

European Commission. Annual Sustainable Growth Strategy. 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0575&from=en

Blakeney EAR, Ali HN, Summerside N. Sustaining improvements in relational coordination following team training and practice change: a longitudinal analysis. Health Care Manag Rev. 2021;46(4):349–57. https://doi.org/10.1097/HMR.0000000000000288 .

Loh E. How and why medically-trained managers undertake postgraduate management training - a qualitative study from Victoria. J Health Organ Manag. 2015;29(4):438–54. https://doi.org/10.1108/jhom-10-2013-0233 .


Yarbrough LA, Stowe M, Haefner J. Competency assessment and development among health-care leaders: results of a cross-sectional survey. Health Serv Manag Res. 2012;25(2):78–86. https://doi.org/10.1258/hsmr.2012.012012 .

Liang Z, Blackstock FC, Howard PF, Briggs DS, Leggat SG, Wollersheim D, Edvardsson D, Rahman A. An evidence-based approach to understanding the competency development needs of the health service management workforce in Australia. BMC Health Serv Res. 2018;18(976):1–12. https://doi.org/10.1186/s12913-018-3760-z .

Whaley A, Gillis WE. Leadership development programs for health care middle managers: an exploration of the top management team member perspective. Health Care Manag Rev. 2018;43(1):79–89. https://doi.org/10.1097/HMR.0000000000000131 .

Campbell C, Lomperis A, Gillespie K, Arrington B. Competency-based healthcare management education: the Saint Louis University experience. J Health Adm Educ. 2006;23:135–68.


Issel ML. Value Added of Management to Health Care Organizations. Health Care Manag Rev. 2020;45(2):95. https://doi.org/10.1097/HMR.0000000000000280 .

Lega F, Prenestini A, Spurgeon P. Is management essential to improving the performance and sustainability of health care systems and organizations? a systematic review and a roadmap for future studies. Value Health. 2013;16(1 Suppl.):S46–51. https://doi.org/10.1016/j.jval.2012.10.004 .

Bloom N, Propper C, Seiler S, Van Reenen J. Management practices in hospitals. Health, Econometrics and Data Group (HEDG) working papers 09/23, HEDG, c/o department of economics, University of York. 2009.

Vainieri M, Ferrè F, Giacomelli G, Nuti S. Explaining performance in health care: how and when top management competencies make the difference. Health Care Manag Rev. 2019;44(4):306–17. https://doi.org/10.1097/HMR.0000000000000164 .

Del Vecchio M, Carbone C. Stabilità dei Direttori Generali nelle aziende sanitarie. In: Anessi Pessina E, Cantù E, editors. Rapporto OASI 2002 L’aziendalizzazione della sanità in Italia. Milano, Italy: Egea; 2002. p. 268–301.


McAlearney AS. Leadership development in healthcare: a qualitative study. J Organ Behav. 2006;27:967–82.

McAlearney AS. Using leadership development pro- grams to improve quality and efficiency in healthcare. J Healthcare Manag. 2008;53:319–31.

McAlearney AS. Executive leadership development in U.S. health systems. J Healthcare Manag. 2010;55:207–22.

McAlearney AS, Fisher D, Heiser K, Robbins D, Kelleher K. Developing effective physician leaders: changing cultures and transforming organizations. Hosp Top. 2005;83(2):11–8.

Thompson JM, Kim TH. A profile of hospitals with leadership development programs. Health Care Manag. 2013;32(2):179–88. https://doi.org/10.1097/HCM.0b013e31828ef677 .

Figueroa C, Harrison R, Chauhan A, Meyer L. Priorities and challenges for health leadership and workforce management globally: a rapid review. BMC Health Serv Res. 2019;19(239):1–11. https://doi.org/10.1186/s12913-019-4080-7 .

Sarto F, Veronesi G. Clinical leadership and hospital performance: assessing the evidence base. BMC Health Serv Res. 2016;16(169):85–109. https://doi.org/10.1186/s12913-016-1395-5 .

Morandi F, Angelozzi D, Di Vincenzo F. Individual and job-related determinants of bias in performance appraisal: the case of middle management in health care organizations. Health Care Manag Rev. 2021;46(4):299–307. https://doi.org/10.1097/HMR.0000000000000268 .

Tasi MC, Keswani A, Bozic KJ. Does physician leadership affect hospital quality, operational efficiency, and financial performance? Health Care Manag Rev. 2019;44(3):256–62. https://doi.org/10.1097/hmr.0000000000000173 .

Hopkins J, Fassiotto M, Ku MC. Designing a physician leadership development program based on effective models of physician education. Health Care Manag Rev. 2018;43(4):293–302. https://doi.org/10.1097/HMR.0000000000000146 .

Walston SL, Khaliq AA. The importance and use of continuing education: findings of a national survey of hospital executives. J Health Admin Educ. 2010;27(2):113–25.

Walston SL, Chou AF, Khaliq AA. Factors affecting the continuing education of hospital CEOs and their senior managers. J Healthcare Manag. 2010;55(6):413–27. https://doi.org/10.1097/00115514-201011000-00008 .

Garman AN, McAlearney AS, Harrison MI, Song PH, McHugh M. High-performance work systems in health- care management, part 1: development of an evidence-informed model. Health Care Manag Rev. 2011;36(3):201–13. https://doi.org/10.1097/HMR.0b013e318201d1bf .

MacDavitt K, Chou S, Stone P. Organizational climate and healthcare outcomes. Joint Comm J Qual Patient Saf. 2007;33(S11):45–56. https://doi.org/10.1016/s1553-7250(07)33112-7 .

Singer SJ, Hayes J, Cooper JB, Vogt JW, Sales M, Aristidou A, Gray GC, Kiang MV, Meyer GS. A case for safety leadership training of hospital manager. Health Care Manag Rev. 2011;36(2):188–200. https://doi.org/10.1097/HMR.0b013e318208cd1d .

Liang Z, Leggat SG, Howard PF, Lee K. What makes a hospital manager competent at the middle and senior levels? Aust Health Rev. 2013;37(5):566–73. https://doi.org/10.1071/AH12004 .

Liang Z, Howard PF, Koh L, Leggat SG. Competency requirements for middle and senior managers in community health services. Aust J Prim Health. 2013;19(3):256–63. https://doi.org/10.1071/PY12041 .

AbuDagga A, Weech-Maldonado R, Tian F. Organizational characteristics associated with the provision of cultural competency training in home and hospice care agencies. Health Care Manag Rev. 2018;43(4):328–37. https://doi.org/10.1097/HMR.0000000000000144 .

Liang Z, Howard PF, Leggat SG, Bartram T. Development and validation of health service management competencies. J Health Organ Manag. 2018;32(2):157–75. https://doi.org/10.1108/JHOM-06-2017-0120 . (Epub 2018 Feb 8).

Howard PF, Liang Z, Leggat SG, Karimi L. Validation of a management competency assessment tool for health service managers. J Health Organ Manag. 2018;32(1):113–34. https://doi.org/10.1108/JHOM-08-2017-0223 .

Fanelli S, Lanza G, Enna C, Zangrandi A. Managerial competences in public organisations: the healthcare professionals’ perspective. BMC Health Serv Res. 2020;20(303):1–9. https://doi.org/10.1186/s12913-020-05179-5 .

Ravaghi H, Beyranvand T, Mannion R, Alijanzadeh M, Aryankhesal A, Belorgeot VD. Effectiveness of training and educational programs for hospital managers: a systematic review. Health Serv Manag Res. 2020;34(2):1–14. https://doi.org/10.1177/0951484820971460 .

Woltring C, Constantine W, Schwarte L. Does leadership training make a difference? J Public Health Manag Prac. 2003;9(2):103–22.

Chowdhury MZI, Turin TC. Variable selection strategies and its importance in clinical prediction modelling. Fam Med Comm Health. 2020;8(1):1–7. https://doi.org/10.1136/fmch-2019-000262 .


Acknowledgements

Not applicable.

Funding

DM 737/2021 risorse 2022–2023. Funded by the European Union - NextGenerationEU.

Author information

Authors and affiliations

Department of Economics and Business, University of Sassari (Italy), Via Muroni 25, Sassari, 07100, Italy

Lucia Giovanelli & Nicoletta Fadda

Department of Humanities and Social Sciences, University of Sassari (Italy), Via Roma 151, 07100, Sassari, Italy

Federico Rotondo


Contributions

All the authors made substantial contributions to the design and drafting of the manuscript: LG and FR conceptualized the study; FR and NF conducted the analysis and investigation and wrote the original draft; LG, FR and NF reviewed and edited the original draft; and LG supervised the whole process. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Federico Rotondo .

Ethics declarations

Ethics approval and consent to participate

The research involved human participants. All authors certify that participants decided to take part in the analysis voluntarily and provided informed consent to participate. Participants were granted total anonymity and were adequately informed of the aims, methods, institutional affiliations of the researchers and any other relevant aspects of the study. In line with the Helsinki Declaration and the Italian legislation (acknowledgement of EU Regulation no. 536/2014 on January 31st, 2022 and Ministerial Decree of November 30th, 2021), ethical approval by a committee was not required since the study was non-medical and non-interventional.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Giovanelli, L., Rotondo, F. & Fadda, N. Management training programs in healthcare: effectiveness factors, challenges and outcomes. BMC Health Serv Res 24 , 904 (2024). https://doi.org/10.1186/s12913-024-11229-z


Received : 15 January 2024

Accepted : 20 June 2024

Published : 07 August 2024

DOI : https://doi.org/10.1186/s12913-024-11229-z


Keywords

  • Management training programs
  • Healthcare professionals
  • Factors of effectiveness

BMC Health Services Research

ISSN: 1472-6963


Energy & Environmental Science

Performance deviation analysis and reliability improvement during experimental development of lab-scale solid oxide single cells

Solid oxide electrochemical cells (SOCs), elevated-temperature energy storage and conversion devices, are anticipated to play a pivotal role in future sustainable energy systems. Considerable research efforts have been devoted to this dynamic field, encompassing both lab-scale investigations and industrial exploration. However, a serious knowledge gap persists between laboratory development and industrial-scale application. One possible explanation is the large deviation that exists in the reported cell performances among different research groups. The errors introduced by the complex experimental process make it a significant challenge to distinguish the truly promising cell materials, fabrication techniques, and device systems. Thus, improving the reliability of cell performance obtained from laboratory-scale investigations is imperative. In this work, we begin by summarizing various parameters that can influence cell performance, including cell configuration, sealing materials and methods, current collection materials and techniques, current–voltage polarization test methods, and operating conditions. Their potential impacts on button cell performance are then comprehensively reviewed and discussed. Finally, strategies to enhance the reliability of cell performance are suggested. We believe this article will offer valuable guidance for the future development of SOCs at lab scale, while also furnishing meaningful information to promote industrial applications.

Transparent peer review

To support increased transparency, we offer authors the option to publish the peer review history alongside their article.

View this article’s peer review history

Article information


Z. (. Luo, Z. Wang, T. Zhu, Y. Song, Z. Lin, S. P. Jiang, Z. Zhu and Z. Shao, Energy Environ. Sci. , 2024, Accepted Manuscript , DOI: 10.1039/D4EE02581D

To request permission to reproduce material from this article, please go to the Copyright Clearance Center request page .

If you are an author contributing to an RSC publication, you do not need to request permission provided correct acknowledgement is given.

If you are the author of this article, you do not need to request permission to reproduce figures and diagrams provided correct acknowledgement is given. If you want to reproduce the whole article in a third-party publication (excluding your thesis/dissertation for which permission is not required) please go to the Copyright Clearance Center request page .

Read more about how to correctly acknowledge RSC content .




Episiotomy: Are we ready to let women make choices in pregnancy and childbirth?

Linked research

Lateral episiotomy or no episiotomy in vacuum assisted delivery in nulliparous women

  • 1 Department of Clinical Sciences, Danderyd Hospital, Karolinska Institutet, Stockholm, Sweden
  • Correspondence to: S Bergendahl sandra.bergendahl{at}ki.se

It is important to work with care providers’ knowledge and perceptions about research on pregnant women to enable patients to make informed choices

Research on therapeutic measures and interventions in pregnant women is sparse, which risks hindering important medical advances for this group. Ethical concerns about imposing uncertain risk on the fetus have led to the exclusion of pregnant women from many trials. 1 Pregnant women are considered an especially vulnerable population. Healthcare staff often feel the need to shield these women from risk, stress, and strain during pregnancy and childbirth, in order to protect them and their baby. This presumed vulnerability means that pregnant women are often treated as if they need to be protected from difficult choices, such as consenting to certain interventions or participating in research. In any other healthcare situation, such attempts to shield patients from informed choice and research participation would be considered paternalistic and perhaps even a threat to patient autonomy. Women need to be given adequate information on interventions and complications in childbirth and must be allowed to make informed choices.

Episiotomy, one of the most common surgical interventions in childbirth, is an incision in the tissue between the vaginal and anal openings to facilitate childbirth. Routine episiotomy in ordinary births has been abandoned in most countries in favour of restrictive use, which is in line with scientific evidence. 2 Persistent routine episiotomy has become a symbol of the lack of patient choice and even of obstetric violence. 3 Despite this, what restrictive use means in terms of rates and indications is unclear, as reflected in the highly variable rates of episiotomy between countries. 4 The lowest episiotomy rates are observed in Scandinavia, 4 likely as a result of the women's independence movement and a less interventional view of childbirth. However, Scandinavia, notably Sweden, also has among the highest rates of obstetric anal sphincter injury, 4 a serious complication of childbirth that can result in anal incontinence and impaired quality of life.

Compelling observational data suggest that episiotomy can reduce obstetric anal sphincter injury in instrumental births (a potential, so-called restrictive indication). 5 Still, a randomised controlled trial has been called for to establish its protective effect. But one large obstacle exists: numerous women would have to be asked for consent during pregnancy or in an urgent medical situation during childbirth, making such a trial seem unfeasible. 6 This obstacle was underscored by a pilot trial from the UK, in which the recruitment of women and refraining from episiotomy in the control group proved difficult. 7

As obstetricians in Sweden, we hypothesised that Sweden would be the perfect setting for such a trial. In addition to the high rates of obstetric anal sphincter injury, Sweden almost exclusively uses vacuum for instrumental birth, there is a restrictive view on episiotomy but no consensus (rates vary between 10% and 80% in different hospitals), and almost all pregnant women attend antenatal care where they can receive trial information beforehand. Accordingly, we initiated the EVA trial, a randomised controlled trial on lateral episiotomy or no episiotomy in vacuum assisted delivery in first time mothers, the group with the highest risk of obstetric anal sphincter injury in childbirth (doi: 10.1136/bmj-2023-079014 ). 8

However, the research question of using episiotomy or not seemed too uncomfortable and controversial for many care providers in Sweden, and many objections to the trial were made. The most common concern was that women should not be bothered with potentially intimidating trial information: “In case your delivery has to end with vacuum extraction, would you consent to participate in a trial assessing episiotomy or not, with the possible outcome of sphincter injury?” It was assumed that women would be scared by this question and should be spared such information, revealing a questionable view on women’s autonomy. We decided to explore these views and obstacles further and found that when care providers spoke to women about possible childbirth complications, they were interested and wanted this information in a timely manner. 9 In the EVA trial, over 6000 first time mothers consented to participate if their childbirth required vacuum assistance. The results showed that obstetric anal sphincter injury was reduced by more than half—from 13% to 6%—with a lateral episiotomy compared with no episiotomy among the 717 first time mothers who required vacuum delivery. 8

Our experience from the EVA trial supports the view that it is necessary and feasible to work with care providers’ knowledge and perceptions about research on pregnant women to allow the possibility for the patient to make an informed choice. An overprotective attitude does not promote patient choice. Pregnant women should be regarded as autonomous individuals, capable of coping with uncertainty ahead, balancing advantages and disadvantages, and making informed choices. Even in active labour, women should be given the opportunity to receive adequate situational information and their choices should be respected—not only in research participation, but also in providing information and acting on the results of research. Women should be given adequate information on the protective effect of lateral episiotomy in vacuum extraction and given the chance to make an informed choice.

Acknowledgments

Maria Jonsson, Susanne Hesselman, Victoria Ankarcrona, Åsa Leijonhufvud, Anna-Carin Wihlbäck, Tove Wallström, Emmie Rydström, Hanna Friberg, and Helena Kopp Kallner provided input and feedback on this article.

Competing interests: None.

Provenance and peer review: Commissioned; not externally peer reviewed.

  • ↵ WHO. Antiretrovirals in pregnancy research toolkit, Ethical considerations. Geneva, Switzerland: Global HIV, Hepatitis and Sexually Transmitted Infections Programmes, World Health Organization. https://www.who.int/tools/antiretrovirals-in-pregnancy-research-toolkit/ethical-considerations#phases .
  • Carroli G ,
  • Malvasi A ,
  • Marinelli E
  • Blondel B ,
  • Alexander S ,
  • Bjarnadóttir RI ,
  • Euro-Peristat Scientific Committee
  • Okeahialam NA ,
  • Sultan AH ,
  • de Leeuw JW
  • Murphy DJ ,
  • Macleod M ,
  • Howarth L ,
  • Bergendahl S ,
  • Jonsson M ,
  • Hesselman S ,
  • Ericson J ,
  • Anagrius C ,
  • Rygaard A ,
  • Guntram L ,
  • Wendel SB ,
  • Hesselman S


