experience
With respect to their academic background, most participants (n = 9) had a PhD, three (3) had a post-doctorate, two (2) had a master’s degree, and two (2) had a bachelor’s degree. Participants came from a variety of disciplines: nine (9) had a specialty in the humanities or social sciences, four (4) in the health sciences and three (3) in the natural sciences. In terms of their knowledge of ethics, five (5) participants reported having taken one university course entirely dedicated to ethics, four (4) reported having taken several university courses entirely dedicated to ethics, three (3) had a university degree dedicated to ethics, while two (2) only had a few hours or days of training in ethics and two (2) reported having no knowledge of ethics.
As Fig. 1 illustrates, ten units of meaning emerged from the data analysis, namely: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. To illustrate the results, excerpts from verbatim interviews are presented in the following sub-sections. Most of the excerpts have been translated into English, as the majority of interviews were conducted with French-speaking participants.
Ethical issues in research according to the participants
The research environment is highly competitive and performance-based. Several participants, in particular researchers and research ethics experts, felt that this environment can lead both researchers and research teams to engage in unethical behaviour that reflects a lack of research integrity. For example, as some participants indicated, competition for grants and scientific publications is sometimes so intense that researchers falsify research results or plagiarize from colleagues to achieve their goals.
Some people will lie or exaggerate their research findings in order to get funding. Then, you see it afterwards, you realize: “ah well, it didn’t work, but they exaggerated what they found and what they did” (participant 14).

Another problem in research is the identification of authors when there is a publication. Very often, there are authors who don’t even know what the publication is about and that their name is on it. (…) The time it surprised me the most was just a few months ago, when I saw someone I knew who applied for a teaching position. He got it; I was super happy for him. Then I looked at his publications and… there was one that caught my attention much more than the others, because I was in it and I didn’t know what that publication was. I was the second author of a publication that I had never read (participant 14).

I saw a colleague who had plagiarized another colleague. [When the colleague] found out about it, he complained. So, plagiarism is a serious [ethical breach]. I would also say that there is a certain amount of competition in the university faculties, especially for grants (…). There are people who want to win at all costs or get as much as possible. They are not necessarily going to consider their colleagues. They don’t have much of a collegial spirit (participant 10).
These examples of research misbehaviour or misconduct are sometimes due to or associated with situations of conflicts of interest, which may be poorly managed by certain researchers or research teams, as noted by many participants.
The actors and institutions involved in research have diverse interests, like all humans and institutions. As noted in Chap. 7 of the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS2, 2018),
“researchers and research students hold trust relationships, either directly or indirectly, with participants, research sponsors, institutions, their professional bodies and society. These trust relationships can be put at risk by conflicts of interest that may compromise independence, objectivity or ethical duties of loyalty. Although the potential for such conflicts has always existed, pressures on researchers (i.e., to delay or withhold dissemination of research outcomes or to use inappropriate recruitment strategies) heighten concerns that conflicts of interest may affect ethical behaviour” (p. 92).
The sources of these conflicts are varied and can include interpersonal conflicts, financial partnerships, third-party pressures, academic or economic interests, a researcher holding multiple roles within an institution, or any other incentive that may compromise a researcher’s independence, integrity, and neutrality (TCPS2, 2018). While it is not possible to eliminate all conflicts of interest, it is important to manage them properly and to avoid temptations to behave unethically.
Ethical temptations correspond to situations in which people are tempted to prioritize their own interests to the detriment of the ethical goods that should, in their own context, govern their actions (Swisher et al., 2005). In the case of researchers, this refers to situations that undermine independence, integrity, neutrality, or even the set of principles that govern research ethics (TCPS2, 2018) or the responsible conduct of research. According to study participants, these types of ethical issues frequently occur in research. Many participants, especially researchers and REB members, reported that conflicts of interest can arise when members of an organization make decisions to obtain large financial rewards or to increase their academic profile, often at the expense of the interests of members of their research team, research participants, or even the populations affected by their research.
A company that puts money into making its drug work wants its drug to work. So, homeopathy is a good example, because there are not really any consequences of homeopathy, there are not very many side effects, because there are no effects at all. So, it’s not dangerous, but it’s not a good treatment either. But some people will want to make it work. And that’s a big issue when you’re sitting at a table and there are eight researchers, and there are two or three who are like that, and then there are four others who are neutral, and I say to myself, this is not science. I think that this is a very big ethical issue (participant 14).

There are also times in some research where there will be more links with pharmaceutical companies. Obviously, there are then large amounts of money that will be very interesting for the health-care institutions because they still receive money for clinical trials. They’re still getting some compensation because it’s time consuming for the people involved and all that. The pharmaceutical companies have money, so they will compensate, and that is sometimes interesting for the institutions, and since we are a bit caught up in this, in the sense that we have no choice but to accept it. (…) It may not be the best research in the world, there may be a lot of side effects due to the drugs, but it’s good to accept it, we’re going to be part of the clinical trial (participant 3).

Integrity is what we believe should be done or said. Integrity is often in tension with the pressures of the environment, so it takes resistance, it takes courage in research. (…) There were all the debates there about the problems of research that was funded and then the companies kept control over what was written. That was really troubling for a lot of researchers (participant 5).
Further, these situations sometimes have negative consequences for research participants as reported by some participants.
Many research projects, whether they are psychosocial or biomedical in nature, involve human participants. Relationships between the members of research teams and their research participants raise ethical issues that can be complex. Research projects must always be designed to respect the rights and interests of research participants, and not just those of researchers. However, participants in our study – i.e., REB members, researchers, and research ethics experts – noted that some research teams seem to put their own interests ahead of those of research participants. They also emphasized the importance of ensuring the respect, well-being, and safety of research participants. The ethical issues related to this unit of meaning are: respect for free, informed and ongoing consent of research participants; respect for and the well-being of participants; data protection and confidentiality; over-solicitation of participants; ownership of the data collected on participants; the sometimes high cost of scientific innovations and their accessibility; balance between the social benefits of research and the risks to participants (particularly in terms of safety); balance between collective well-being (development of knowledge) and the individual rights of participants; exploitation of participants; paternalism when working with populations in vulnerable situations; and the social acceptability of certain types of research. The following excerpts present some of these issues.
Where it disturbs me ethically is in the medical field – because it’s more in the medical field that we’re going to see this – when consent forms are presented to patients to solicit them as participants, and then [these forms] have an average of 40 pages. That annoys me. When they say that it has to be easy to understand and all that, adapted to the language, and then the hyper-technical language plus there are 40 pages to read, I don’t understand how you’re going to get informed consent after reading 40 pages. (…) For me, it doesn’t work. I read them to evaluate them and I have a certain level of education and experience in ethics, and there are times when I don’t understand anything (participant 2).

There is a lot of pressure from researchers who want to recruit research participants (…). The idea that when you enter a health care institution, you become a potential research participant, when you say “yes to a research, you check yes to all research”, then everyone can ask you. I think that researchers really have this fantasy of saying to themselves: “as soon as people walk through the door of our institution, they become potential participants with whom we can communicate and get them involved in all projects”. There’s a kind of idea that, yes, it can be done, but it has to be somewhat supervised to avoid over-solicitation (…). Researchers are very interested in facilitating recruitment and making it more fluid, but perhaps to the detriment of confidentiality, privacy, and respect; sometimes that’s what it is, to think about what type of data you’re going to have in your bank of potential participants? Is it just name and phone number or are you getting into more sensitive information? (participant 9).
In addition, one participant reported that their university does not provide the resources required to respect the confidentiality of research participants.
The issue is as follows: researchers, of course, commit to protecting data with passwords and all that, but we realize that in practice, it is more difficult. It is not always as protected as one might think, because professor-researchers will run out of space. Will the universities make rooms available to researchers, places where they can store these things, especially when they have paper documentation, and is there indeed a guarantee of confidentiality? Some researchers have told me: “Listen; there are even filing cabinets in the corridors”. So, that certainly poses a concrete challenge. How do we go about challenging the administrative authorities? Tell them it’s all very well to have an ethics committee, but you have to help us, you also have to make sure that the necessary infrastructures are in place so that what we are proposing is really put into practice (participant 4).
If the relationships with research participants are likely to raise ethical issues, so too are the relationships with students, notably research assistants. On this topic, several participants discussed the lack of supervision or recognition offered to research assistants by researchers as well as the power imbalances between members of the research team.
Many research teams are composed not only of researchers, but also of students who work as research assistants. The relationship between research assistants and other members of research teams can sometimes be problematic and raise ethical issues, particularly because of the inevitable power asymmetries. In the context of this study, several participants – including a research assistant, REB members, and researchers – discussed the lack of supervision or recognition of the work carried out by students, psychological pressure, and the more or less well-founded promises that are sometimes made to students. Participants also mentioned the exploitation of students by certain research teams, which manifests when students are inadequately paid: their pay does not reflect the number of hours actually worked, does not amount to a fair wage, or is withheld altogether.
[As a research assistant], it was more of a feeling of distress that I felt then because I didn’t know what to do. (…) I was supposed to get coaching or be supported, but I didn’t get anything in the end. It was like, “fix it by yourself”. (…) All research assistants were supposed to be supervised, but in practice they were not (participant 1).

Very often, we have a master’s or doctoral student that we put on a subject and we consider that the project will be well done, while the student is learning. So, it happens that the student will do a lot of work and then we realize that the work is poorly done, and it is not necessarily the student’s fault. He wasn’t necessarily well supervised. There are directors who have 25 students, and they just don’t supervise them (participant 14).

I think it’s really the power relationship. I thought to myself, how I saw my doctorate, the beginning of my research career, I really wanted to be in that laboratory, but they are the ones who are going to accept me or not, so what do I do to be accepted? I finally accept their conditions [which was to work for free]. If these are the conditions that are required to enter this lab, I want to go there. So, what do I do, well I accepted. It doesn’t make sense, but I tell myself that I’m still privileged, because I don’t have so many financial worries, one more reason to work for free, even though it doesn’t make sense (participant 1).

In research, we have research assistants. (…) The fact of using people… so that’s it, you have to take into account where they are, respect them, but at the same time they have to show that they are there for the research. In English, we say “carry” or take care of people. With research assistants, this is often a problem that I have observed: for grant machines, the person is the last to be found there. Researchers who will take and use student data without giving them the recognition for it (participant 5).
The problem at our university is that they reserve funding for Canadian students. The doctoral clientele in my field is mostly foreign students. So, our students are poorly funded. I saw one student end up in the shelter, in a situation of poverty. It ended very badly for him because he lacked financial resources. Once you get into that dynamic, it’s very hard to get out. I was made aware of it because the director at the time had taken him under her wing and wanted to try to find a way to get him out of it. So, most of my students didn’t get funded (participant 16).

There I wrote “manipulation”, but it’s kind of all promises all the time. I, for example, was promised a lot of advancement, like when I got into the lab as a graduate student, it was said that I had an interest in [this particular area of research]. I think there are a lot of graduate students who must have gone through that, but it is like, “Well, your CV has to be really good, if you want to do a lot of things and big things. If you do this, if you do this research contract, the next year you could be the coordinator of this part of the lab and supervise this person, get more contracts, be paid more. Let’s say: you’ll be invited to go to this conference, this big event”. They were always dangling something, but you have to do that first to get there. But now, when you’ve done that, you have to do this business. It’s like a bit of manipulation, I think. It was very hard to know who was telling the truth and who was not (participant 1).
These ethical issues have significant negative consequences for students. Indeed, they sometimes find themselves at the mercy of researchers, for whom they work, struggling to be recognized and included as authors of an article, for example, or to receive the salary that they are due. For their part, researchers also sometimes find themselves trapped in research structures that can negatively affect their well-being. As many participants reported, researchers work in organizations that set very high productivity standards and in highly competitive contexts, all within a general culture characterized by individualism.
Participants, especially researchers, discussed the culture of individualism and performance that characterizes the academic environment. In glorifying excellence, some universities value performance and productivity, often at the expense of psychological well-being and work-life balance (i.e., work overload and burnout). Participants noted that there are ethical silences in their organizations on this issue, and that the culture of individualism and performance is not challenged for fear of retribution or simply to survive, i.e., to perform as expected. Participants felt that this culture can have a significant negative impact on the quality of the research conducted, as research teams try to maximize the quantity of their work (instead of quality) in a highly competitive context, which is then exacerbated by a lack of resources and support, and where everything must be done too quickly.
The work-life balance with the professional ethics related to work in a context where you have too much and you have to do a lot, it is difficult to balance all that and there is a lot of pressure to perform. If you don’t produce enough, that’s it; after that, you can’t get any more funds, so that puts pressure on you to do more and more and more (participant 3).

There is a culture, I don’t know where it comes from, and that is extremely bureaucratic. If you dare to raise something, you’re going to have many, many problems. They’re going to make you understand it. So, I don’t talk. It is better: your life will be easier. I think there are times when you have to talk (…) because there are going to be irreparable consequences. (…) I’m not talking about a climate of terror, because that’s exaggerated, it’s not true, people are not afraid. But people close their office door and say nothing because it’s going to make their work impossible and they’re not going to lose their job, they’re not going to lose money, but researchers need time to be focused, so they close their office door and say nothing (participant 16).
Researchers must produce more and more, yet they receive little guidance on how to do so ethically or on how much exactly they are expected to produce. As this participant reports, the expectation is an unspoken rule: more is always better.
It’s sometimes the lack of a clear line on what the expectations are as a researcher, like, “ah, we don’t have any specific expectations, but produce, produce, produce, produce.” So, in that context, it’s hard to be able to put the line precisely: “have I done enough for my work?” (participant 3).
While the productivity expectation is not clear, some participants – including researchers, research ethics experts, and REB members – also felt that the ethical expectations of some REBs were unclear. The issue of the inadequate ethical guidance of research includes the administrative mechanisms to ensure that research projects respect the principles of research ethics. According to those participants, the forms required for both researchers and REB members are increasingly long and numerous, and one participant noted that the standards to be met are sometimes outdated and disconnected from the reality of the field. Multicentre ethics review (by several REBs) was also critiqued by a participant as an inefficient method that encumbers the processes for reviewing research projects. Bureaucratization imposes an ever-increasing number of forms and ethics guidelines that actually hinder researchers’ ethical reflection on the issues at stake, leading the ethics review process to be perceived as purely bureaucratic in nature.
The ethical dimension and the ethical review of projects have become increasingly bureaucratized. (…) When I first started working (…) it was less bureaucratic, less strict then. I would say [there are now] tons of forms to fill out. Of course, we can’t do without it, it’s one of the ways of marking out ethics and ensuring that there are ethical considerations in research, but I wonder if it hasn’t become too bureaucratized, so that it’s become a kind of technical reflex to fill out these forms, and I don’t know if people really do ethical reflection as such anymore (participant 10).

The fundamental structural issue, I would say, is the mismatch between the normative requirements and the real risks posed by the research, i.e., we have many, many requirements to meet; we have very long forms to fill out but the research projects we evaluate often pose few risks (participant 8).

People [in vulnerable situations] were previously unable to participate because of overly strict research ethics rules that were to protect them, but in the end [these rules] did not protect them. There was a perverse effect, because in the end there was very little research done with these people and that’s why we have very few results, very little evidence [to support practices with these populations] so it didn’t improve the quality of services. (…) We all understand that we have to be careful with that, but when the research is not too risky, we say to ourselves that it would be good because for once a researcher is interested in that population, because it is not a very popular population, it would be interesting to have results, but often we are blocked by the norms, and then we can’t accept [the project] (participant 2).
Moreover, as one participant noted, accessing ethics training can be a challenge.
There is no course on research ethics. […] Then, I find that it’s boring because you go through university and you come to do your research and you know how to do quantitative and qualitative research, but all the research ethics, where do you get this? I don’t really know (participant 13).
Yet, such training could provide relevant tools to resolve, to some extent, the ethical issues that commonly arise in research. That said, and as noted by many participants, many ethical issues in research are related to social injustices over which research actors have little influence.
For many participants, notably researchers, the issues that concern social injustices are those related to power asymmetries, stigma, or issues of equity, diversity, and inclusion, i.e., social injustices related to people’s identities (Blais & Drolet, 2022). Participants reported experiencing or witnessing discrimination from peers, administration, or lab managers. Such oppression is sometimes intersectional, related to a person’s age, cultural background, gender, or social status.
I have my African colleague who was quite successful when he arrived but had a backlash from colleagues in the department. I think it’s unconscious, nobody is overtly racist. But I have a young person right now who is the same, who has the same success, who got exactly the same early career award and I don’t see the same backlash. He’s just as happy with what he’s doing. It’s normal, they’re young and they have a lot of success starting out. So, I think there is discrimination. Is it because he is African? Is it because he is black? I think it’s on a subconscious level (participant 16).
Social injustices were experienced or reported by many participants, including difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even where there is official bilingualism), or being considered credible and fundable in research when the researcher is a woman.
If you do international research, there are things you can’t talk about (…). It is really a barrier to research to not be able to (…) address this question [i.e. the question of inequalities between men and women]. Women’s inequality is going to be addressed [but not within the country where the research takes place as if this inequality exists elsewhere but not here]. There are a lot of women working on inequality issues, doing work and it’s funny because I was talking to a young woman who works at Cairo University and she said to me: “Listen, I saw what you had written, you’re right. I’m willing to work on this but guarantee me a position at your university with a ticket to go”. So yes, there are still many barriers [for women in research] (participant 16).
Because of the varied contextual factors involved in their occurrence, these social injustices are also related to distributive injustices, as discussed by many participants.
Although there are several views of distributive justice, a classical definition such as Aristotle’s (2012) describes distributive justice as consisting in distributing honours, wealth, and other social resources or benefits among the members of a community in proportion to their alleged merit. Justice, then, is about determining an equitable distribution of common goods. Contemporary theories of distributive justice are numerous and varied. Indeed, many authors (e.g., Fraser, 2011; Mills, 2017; Sen, 2011; Young, 2011) have, since Rawls (1971), proposed different visions of how social burdens and benefits should be shared within a community to ensure equal respect, fairness, and distribution. In our study, what emerges from participants’ narratives is a definite concern for this type of justice. Women researchers, francophone researchers, early career researchers, and researchers belonging to racialized groups all discussed inequities in the distribution of research grants and awards, and the extra work they need to do to somehow prove their worth. These inequities are related to how granting agencies determine which projects will be funded.
These situations make me work 2–3 times harder to prove myself and to show people in power that I have a place as a woman in research (participant 12).

Number one: it’s conservative thinking. The older ones control what comes in. So, the younger people have to adapt or they don’t get funded (participant 14).
Whether through discrimination against stigmatized or marginalized populations or a preference for certain hot topics, granting agencies judge research projects according to criteria that are sometimes questionable, according to these participants. Faced with difficulties in obtaining funding for their projects, researchers use several strategies – some of which are unethical – to cope with these situations.
Sometimes there are subjects that everyone goes to, such as nanotechnology (…), artificial intelligence or (…) the therapeutic use of cannabis, which are very fashionable, and this is sometimes to the detriment of other research that is just as relevant, but which is (…) less sexy, less in the spirit of the time. (…) Sometimes this can lead to inequities in the funding of certain research sectors (participant 9).

When we use our funds, we get them given to us, we pretty much say what we think we’re going to do with them, but things change… So, when these things change, sometimes it’s an ethical decision, but by force of circumstances I’m obliged to change the project a little bit (…). Is it ethical to make these changes or should I just let the money go because I couldn’t use it the way I said I would? (participant 3).
Moreover, these distributive injustices are linked not only to social injustices but also to epistemic injustices. Indeed, the way in which research honours and grants are distributed within the academic community depends on the epistemic authority of researchers, which seems to vary notably according to the language they use, their age, or their gender, but also according to the research design they adopt (inductive versus deductive), their decision to use (or not use) animals in research, or their decision to conduct activist research.
The philosopher Fricker (2007) conceptualized the notions of epistemic justice and injustice. Epistemic injustice refers to a form of social inequality that manifests itself in the access to, recognition of, and production of knowledge, as well as the various forms of ignorance that arise (Godrie & Dos Santos, 2017). Addressing epistemic injustice necessitates acknowledging the iniquitous wrongs suffered by certain groups of socially stigmatized individuals who have been excluded from knowledge, thus limiting their abilities to interpret, understand, be heard, and account for their experiences. In this study, epistemic injustices were experienced or reported by some participants, notably those related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even where there is official bilingualism), or being considered credible and fundable in research when a researcher is a woman or an early career researcher.
I have never sent a grant application to the federal government in English. I have always done it in French, even though I know that when you receive the review, you can see that reviewers didn’t understand anything because they are English-speaking. I didn’t want to get in the boat. It’s not my job to translate, because let’s be honest, I’m not as good in English as I am in French. So, I do them in my first language, which is the language I’m most used to. Then, technically at the administrative level, they are supposed to be able to do it, but they are not good in French. (…) Then, it’s a very big Canadian ethical issue, because basically there are technically two official languages, but Canada is not a bilingual country, it’s a country with two languages, either one or the other. (…) So I was not funded (participant 14).
Researchers who use inductive (or qualitative) methods observed that their projects are sometimes less well reviewed or understood, while research that adopts a hypothetical-deductive (or quantitative) or mixed-methods design is better perceived, considered more credible, and therefore more easily funded. Of course, regardless of whether a research project adopts an inductive, deductive, or mixed-methods scientific design, or whether it deals with qualitative or quantitative data, it must respect a set of scientific criteria. A research project should achieve its objectives by using proven methods that, in the case of inductive research, are credible, reliable, and transferable or, in the case of deductive research, generalizable, objective, representative, and valid (Drolet & Ruest, accepted). Participants discussing these issues noted that researchers who adopt a qualitative design, who question the relevance of animal experimentation, or who do not engage in activist research have sometimes been unfairly devalued in their epistemic authority.
There is a mini war between quantitative versus qualitative methods, which I think is silly because science is a method. If you apply the method well, it doesn’t matter what the field is, it’s done well and it’s perfect (participant 14).

There is also the issue of the place of animals in our lives, because for me, ethics is human ethics, but also animal ethics. Then, there is a great evolution in society on the role of the animal… with the new law that came out in Quebec on the fact that animals are sensitive beings. Then, with the rise of the vegan movement, [we must ask ourselves]: “Do animals still have a place in research?” That’s a big question and it also means that there are practices that need to evolve, but sometimes there’s a disconnection between what’s expected by research ethics boards versus what’s expected in the field (participant 15).

In research today, we have more and more research that is militant from an ideological point of view. And so, we have researchers, because they defend values that seem important to them, we’ll talk for example about the fight for equality and social justice. They have pressure to defend a form of moral truth and have the impression that everyone thinks like them or should do so, because they are defending a moral truth. This is something that we see more and more, namely the lack of distance between ideology and science (participant 8).
The combination or intersectionality of these inequities, which seems to be characterized by a lack of ethical support and guidance, is experienced in the highly competitive and individualistic context of research; it therefore provides the perfect recipe for researchers to experience ethical distress.
The concept of “ethical distress” refers to situations in which people know what they should do to act ethically, but encounter barriers, generally of an organizational or systemic nature, limiting their power to act according to their moral or ethical values (Drolet & Ruest, 2021; Jameton, 1984; Swisher et al., 2005). People then run the risk of not acting as their ethical conscience dictates, which in the long term can lead to exhaustion and distress. The examples reported by participants in this study point to the fact that researchers in particular may be experiencing significant ethical distress. This distress takes place in a context of extreme competition and constant injunctions to perform, in which administrative demands are increasingly numerous and complex to complete while, paradoxically, researchers lack the time to accomplish all their tasks and responsibilities. Added to these demands are a lack of resources (human, ethical, and financial), a lack of support and recognition, and interpersonal conflicts.
We are in an environment, an elite one, you are part of it, you know what it is: “publish or perish” is the motto. Grants, there is a high level of performance required, to do a lot, to publish, to supervise students, to supervise them well, so yes, it is clear that we are in an environment that is conducive to distress. (…). Overwork, definitely, can lead to distress and eventually to exhaustion. When you know that you should take the time to read the projects before sharing them, but you don’t have the time to do that because you have eight that came in the same day, and then you have others waiting… Then someone rings a bell and says: “ah but there, the protocol is a bit incomplete”. Oh yes, look at that, you’re right. You make up for it, but at the same time it’s a bit because we’re in a hurry, we don’t necessarily have the resources or are able to take the time to do things well from the start, we have to make up for it later. So yes, it can cause distress (participant 9). My organization wanted me to apply in English, and I said no, and everyone in the administration wanted me to apply in English, and I always said no. Some people said: “Listen, I give you the choice”, then some people said: “Listen, I agree with you, but if you’re not [submitting] in English, you won’t be funded”. Then the fact that I am young too, because very often they will look at the CV, they will not look at the project: “ah, his CV is not impressive, we will not finance him”. This is complete nonsense. The person is capable of doing the project, the project is fabulous: we fund the project. So, that happened, organizational barriers: that happened a lot. I was not eligible for Quebec research funds (…). I had big organizational barriers unfortunately (participant 14). At the time of my promotion, some colleagues were not happy with the type of research I was conducting. 
I learned – you learn this over time when you become friends with people after you enter the university – that someone was against me. He had another candidate in mind, and he was angry about the selection. I was under pressure for the first three years until my contract was renewed. I almost quit at one point, but another colleague told me, “No, stay, nothing will happen”. Nothing happened, but these issues kept me awake at night (participant 16).
This difficult context for many researchers affects not only the conduct of their own research, but also their participation in research. We faced this problem in our study, despite the use of multiple recruitment methods, including more than 200 emails – of which 191 were individual solicitations – sent to potential participants by the two research assistants. REB members and organizations overseeing or supporting research (n = 17) were also approached to see if some of their employees would consider participating. While it was relatively easy to recruit REB members and research ethics experts, our team received a high number of non-responses to emails (n = 175) and some refusals (n = 5), especially by researchers. The reasons given by those who replied were threefold: (a) fear of being easily identified should they take part in the research, (b) being overloaded and lacking time, and (c) the intrusive aspect of certain questions (e.g., “Have you experienced a burnout episode? If so, have you been followed up medically or psychologically?”). In light of these difficulties and concerns, some questions in the socio-demographic questionnaire were removed or modified. Talking about burnout in research remains a taboo for many researchers, which paradoxically can only contribute to the unresolved problem of unhealthy research environments.
The question that prompted this research was: What are the ethical issues in research? The purpose of the study was to describe these issues from the perspective of researchers (from different disciplines), research ethics board (REB) members, and research ethics experts. The previous section provided a detailed portrait of the ethical issues experienced by different research stakeholders: these issues are numerous, diverse and were recounted by a range of stakeholders.
The results of the study are generally consistent with the literature. For example, as in our study, the literature discusses the lack of research integrity on the part of some researchers (Al-Hidabi et al., 2018; Swazey et al., 1993), the numerous conflicts of interest experienced in research (Williams-Jones et al., 2013), the issues of recruiting and obtaining the free and informed consent of research participants (Provencher et al., 2014; Keogh & Daly, 2009), the sometimes difficult relations between researchers and REBs (Drolet & Girard, 2020), the epistemological issues experienced in research (Drolet & Ruest, accepted; Sieber, 2004), as well as the harmful academic context in which researchers evolve, insofar as it is linked to a culture of performance and an overload of work in a context of accountability (Berg & Seeber, 2016; FQPPU, 2019) that is conducive to ethical distress and even burnout.
If the results of the study are generally in line with those of previous publications on the subject, our findings also bring new elements to the discussion while complementing those already documented. In particular, our results highlight the role of systemic injustices – be they social, distributive or epistemic – within the environments in which research is carried out, at least in Canada. To summarize, the results of our study point to the fact that the relationships between researchers and research participants are likely still to raise worrying ethical issues, despite widely accepted research ethics norms and institutionalized review processes. Further, the context in which research is carried out is not only conducive to breaches of ethical norms and instances of misbehaviour or misconduct, but also likely to be significantly detrimental to the health and well-being of researchers, as well as research assistants. Another element that our research also highlighted is the instrumentalization and even exploitation of students and research assistants, which is another important and worrying social injustice given the inevitable power imbalances between students and researchers.
Moreover, in a context in which ethical issues are often discussed from a micro perspective, our study helps shed light on both the micro- and macro-level ethical dimensions of research (Bronfenbrenner, 1979; Glaser, 1994). However, given that ethical issues in research are not only diverse, but also and above all complex, a broader perspective that encompasses the interplay between the micro and macro dimensions can enable a better understanding of these issues and thereby support the identification of the multiple factors that may be at their origin. Triangulating the perspectives of researchers with those of REB members and research ethics experts enabled us to bring these elements to light, and thus to step back from and critique the way that research is currently conducted. To this end, attention to socio-political elements such as the performance culture in academia or how research funds are distributed, and according to what explicit and implicit criteria, can contribute to identifying the sources of the ethical issues described above.
The German sociologist and philosopher Rosa (2010) argues that late modernity – that is, the period between the 1980s and today – is characterized by a phenomenon of social acceleration that causes various forms of alienation in our relationship to time, space, actions, things, others and ourselves. Rosa distinguishes three types of acceleration: technical acceleration, the acceleration of social changes, and the acceleration of the rhythm of life. According to Rosa, social acceleration is the main problem of late modernity, in that the invisible social norm of doing more and faster to supposedly save time operates unchallenged at all levels of individual and collective life, as well as organizational and social life. Although we all, researchers and non-researchers alike, perceive this unspoken pressure to be ever more productive, the process of social acceleration as a new invisible social norm is our blind spot, a kind of tyrant over which we have little control. This conceptualization of contemporary culture can help us to understand the context in which research is conducted (like other professional practices). To this end, Berg & Seeber (2016) invite faculty researchers to slow down in order to better reflect and, in the process, take care of their health and their relationships with their colleagues and students. Many women professors encourage their fellow researchers, especially young women researchers, to learn to “say No” in order to protect their mental and physical health and to remain in their academic careers (Allaire & Descheneux, 2022). These authors also remind us of the relevance of Kahneman’s (2012) work, which demonstrates that it takes time to think analytically, thoroughly, and logically. Conversely, thinking quickly exposes humans to cognitive and implicit biases that then lead to errors in thinking (e.g., in the analysis of one’s own research data or in the evaluation of grant applications or student curriculum vitae).
The phenomenon of social acceleration, which pushes researchers to think ever faster, is likely to lead to bad, even unethical, science that can potentially harm humankind. In sum, Rosa’s invitation to contemporary critical theorists to take seriously the problem of social acceleration is particularly insightful for better understanding the ethical issues of research. It provides a lens through which to view the toxic context in which research is conducted today, one that was shared by the participants in our study.
As Clark & Sousa (2022) note, it is important that criteria other than the volume of researchers’ contributions be valued in research, notably quality. Ultimately, it is the value of the knowledge produced and its influence on the concrete lives of humans and other living beings that matters, not the quantity of publications. An interesting articulation of this view in research governance is a change in practice by Australia’s national health research funder: researchers may now list on their curriculum vitae only their top ten publications from the past ten years (rather than all of their publications), so that the quality of contributions is evaluated rather than their quantity. To create environments conducive to quality research, it is important to challenge the phenomenon of social acceleration, which insidiously imposes a quantitative normativity that is both alienating and detrimental to the quality and ethical conduct of research. Based on our experience, we observe that the social norm of acceleration actively disfavours the conduct of empirical research on ethics in research: researchers are so busy that it is almost impossible for them to find time to participate in such studies. Further, operating in highly competitive environments while trying to respect the values and ethical principles of research creates ethical paradoxes for members of the research community. According to Malherbe (1999), an ethical paradox is a situation in which an individual is confronted with contradictory injunctions (e.g., do more, faster, and better). Eventually, ethical paradoxes lead individuals to distress and burnout, or even to ethical failures (i.e., misbehaviour or misconduct), in the face of the impossibility of responding to contradictory injunctions.
The triangulation of the perceptions and experiences of the different actors involved in research is a strength of our study. While there are many studies on the experiences of researchers, REB members and research ethics experts are rarely given the space to discuss their views of what constitutes an ethical issue. Giving each of these stakeholders a voice and comparing their different points of view shed a different and complementary light on the ethical issues that occur in research. That said, it would have been helpful to also give more space to the issues experienced by students and research assistants, as the relationships between researchers and research assistants are at times very worrying, as noted by one participant, and much work still needs to be done to eliminate the exploitative situations that seem to prevail in certain research settings. In addition, no Indigenous or gender-diverse researchers participated in the study. Given the ethical issues and systemic injustices that many people from these groups face in Canada (Drolet & Goulet, 2018; Nicole & Drolet, in press), research that gives voice to these researchers would be relevant and would contribute to knowledge development, and hopefully also to change in research culture.
Further, although most of the ethical issues discussed in this article may be transferable to the realities experienced by researchers in other countries, the epistemic injustice reported by Francophone researchers who persist in doing research in French in Canada – which is an officially bilingual country but in practice is predominantly English – is likely specific to the Canadian reality. In addition, and as mentioned above, recruitment proved exceedingly difficult, particularly amongst researchers. Despite this difficulty, we obtained data saturation for all but two themes – i.e., exploitation of students and ethical issues of research that uses animals. It follows that further empirical research is needed to improve our understanding of these specific issues, as they may diverge to some extent from those documented here and will likely vary across countries and academic research contexts.
This study, which gave voice to researchers, REB members, and research ethics experts, reveals that the ethical issues in research are related to several problematic elements, such as power imbalances and authority relations. Researchers and research assistants are subject to external pressures that give rise to integrity issues, among other ethical issues. Moreover, the current context of social acceleration shapes the performance indicators valued in academic institutions and has led their members to face several ethical issues, including social, distributive, and epistemic injustices, at different stages of the research process. In this study, ten categories of ethical issues were identified, described, and illustrated: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. The triangulation of the perspectives of the different actors involved in the research process (i.e., researchers from different disciplines, REB members, research ethics experts, and one research assistant) made it possible to lift the veil on some of these ethical issues. Further, it enabled the identification of additional ethical issues, especially the systemic injustices experienced in research. To our knowledge, this is the first time that these injustices (social, distributive, and epistemic) have been clearly identified.
Finally, this study brought to the fore several problematic elements that are important to address if the research community is to develop and implement the solutions needed to resolve the diverse and transversal ethical issues that arise in research institutions. A good starting point is the rejection of the corollary norms of “publish or perish” and “do more, faster, and better” and their replacement with “publish quality instead of quantity”, which necessarily entails “do less, slower, and better”. It is also important to pay more attention to the systemic injustices within which researchers work, because these have the potential to significantly harm the academic careers of many researchers, including women researchers, early career researchers, and those belonging to racialized groups as well as the health, well-being, and respect of students and research participants.
The team warmly thanks the participants who took part in the research and who made this study possible. Marie-Josée Drolet thanks the five research assistants who participated in the data collection and analysis: Julie-Claude Leblanc, Élie Beauchemin, Pénéloppe Bernier, Louis-Pierre Côté, and Eugénie Rose-Derouin, all students at the Université du Québec à Trois-Rivières (UQTR), two of whom were active in the writing of this article. MJ Drolet and Bryn Williams-Jones also acknowledge the financial contribution of the Social Sciences and Humanities Research Council of Canada (SSHRC), which supported this research through a grant. We would also like to thank the reviewers of this article who helped us improve it, especially by clarifying and refining our ideas.
As noted in the Acknowledgements, this research was supported financially by the Social Sciences and Humanities Research Council of Canada (SSHRC).
Youssef A , Nichol AA , Martinez-Martin N, et al. Ethical Considerations in the Design and Conduct of Clinical Trials of Artificial Intelligence. JAMA Netw Open. 2024;7(9):e2432482. doi:10.1001/jamanetworkopen.2024.32482
Question How generalizable are current National Institutes of Health (NIH) ethical principles for conduct of clinical trials to clinical trials of artificial intelligence (AI), and what unique ethical considerations arise in trials of AI?
Findings In this qualitative study, interviews with 11 investigators involved in clinical trials of AI for diabetic retinopathy screening confirmed the applicability of current ethical principles but also identified unique challenges, including assessing social value, ensuring scientific validity, fair participant selection, evaluation of risk-to-benefit ratio in underrepresented groups, and navigating complex consent processes.
Meaning These results suggest ethical challenges unique to clinical trials of AI, which may provide important guidance for empirical and normative ethical efforts to enhance the conduct of AI clinical trials.
Importance Safe integration of artificial intelligence (AI) into clinical settings often requires randomized clinical trials (RCT) to compare AI efficacy with conventional care. Diabetic retinopathy (DR) screening is at the forefront of clinical AI applications, marked by the first US Food and Drug Administration (FDA) De Novo authorization for an autonomous AI for such use.
Objective To determine the generalizability of the 7 ethical research principles for clinical trials endorsed by the National Institutes of Health (NIH), and to identify ethical concerns unique to clinical trials of AI.
Design, Setting, and Participants This qualitative study included semistructured interviews conducted with 11 investigators engaged in the design and implementation of clinical trials of AI for DR screening from November 11, 2022, to February 20, 2023. The study was a collaboration with the ACCESS (AI for Children’s Diabetic Eye Exams) trial, the first clinical trial of autonomous AI in pediatrics. Participant recruitment initially utilized purposeful sampling, and later expanded with snowball sampling. Study methodology for analysis combined a deductive approach to explore investigators’ perspectives of the 7 ethical principles for clinical research endorsed by the NIH and an inductive approach to uncover the broader ethical considerations implementing clinical trials of AI within care delivery.
Results A total of 11 participants (mean [SD] age, 47.5 [12.0] years; 7 male [64%], 4 female [36%]; 3 Asian [27%], 8 White [73%]) were included, with diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, and AI development. Key themes revealed several ethical challenges unique to clinical trials of AI. These themes included difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across various patient subgroups, and addressing the complexities inherent in the data use terms of informed consent.
Conclusions and Relevance This qualitative study identified practical ethical challenges that investigators need to consider and negotiate when conducting AI clinical trials, exemplified by the DR screening use case. These considerations call for further guidance on where to focus empirical and normative ethical efforts to best support the conduct of clinical trials of AI and minimize unintended harm to trial participants.
The integration of artificial intelligence (AI) into health care promises to address long-standing challenges, offering innovative solutions to improve patient outcomes, health equity, clinician productivity, and system efficiency. 1 , 2 As the deployment of AI interventions expands, clinical evidence becomes increasingly crucial in validating their efficacy and safety. 3 - 7 However, there exists a notable gap between the extensive theoretical research on ethical concerns in AI applications in health care and the practical challenges encountered by clinical investigators in clinical settings. 8 This empirical study aims to bridge this gap by examining the practical ethical considerations in the design and implementation of clinical trials involving AI.
Early detection of diabetic retinopathy (DR) is a vanguard area in clinical AI; the first US Food and Drug Administration (FDA) De Novo–authorized autonomous AI was for diabetic eye examinations. 9 We collaborated with investigators from the first National Institutes of Health (NIH)-funded randomized clinical trial (RCT) of autonomous AI, the AI for Children’s Diabetic Eye Exams Study (ACCESS), which was designed to determine the efficacy of autonomous AI screening for DR in a diverse population of youth with diabetes. 10
Ethical frameworks for clinical research, shaped by landmark documents such as the Nuremberg Code, Declaration of Helsinki, Belmont Report, CIOMS guidelines, and the US Common Rule, form the bedrock of research ethics. 11 - 13 Emanuel et al 12 have further delineated 7 core principles for clinical trial ethics, endorsed by the NIH: social and clinical value, scientific validity, fair participant selection, favorable risk-benefit ratio, independent review, informed consent, and respect for human participants. 14 However, the complexities inherent to AI, such as clinical efficacy, algorithmic fairness, and reproducibility of results, pose unique challenges. 15 - 17 Systematic reviews have highlighted significant limitations in AI clinical trials, such as the absence of clinically relevant endpoints and a high risk of bias, raising questions about the suitability of traditional ethical frameworks in AI contexts. 4 , 18 - 21 While there is a consensus on the necessity for increased transparency of randomized clinical trials of AI (AI-RCTs), current guidelines primarily focus on standardized reporting and fall short from addressing ethical considerations in the design of these clinical trials. 22 , 23
This qualitative study aimed to address 2 primary research questions: (1) To what extent are the 7 NIH ethical principles 14 created by Emanuel and Grady 12 generalizable to clinical trials of AI? and (2) What are the ethical considerations that may be unique to clinical trials of AI?
This qualitative study was approved by the Johns Hopkins Medicine institutional review board. All study participants were informed about the waiver of written consent and provided verbal consent to participate voluntarily, without financial compensation. We followed the Consolidated Criteria for Reporting Qualitative Research ( COREQ ) reporting guidelines.
We employed both a deductive and an inductive approach to data collection. We used a deductive approach to test the applicability of the NIH’s 7 core ethical principles—clinical and social value, scientific validity, fair participant selection, favorable risk-benefit ratio, independent review, informed consent, and respect for human participants. 24 We also utilized a modified grounded theory approach for the discovery of novel themes. 25
Participants in this study included clinical investigators, institutional review ethicists, and clinical trialists involved in autonomous AI trials for diabetic retinopathy screening. The selection criteria were aligned with the study’s aim of examining ethical challenges in the design and conduct of AI-based clinical trials. Initially, purposive, nonprobabilistic sampling was used to recruit 6 participants from the ACCESS study ( NCT05131451 ). 10 Two authors (R.W. and D.C.) invited investigators from the ACCESS study to participate in this study. To enhance the generalizability of our findings, we employed a snowball sampling method to identify participants involved in concurrent RCTs of AI for diabetic retinopathy screening in low-income countries, resulting in the inclusion of 3 participants from a nonprofit organization and 2 from the private sector.
Data collection occurred from November 2022 to February 2023. Interviews were conducted in English via video call (Zoom) by a qualitative research scientist (A.Y.) with over 7 years of experience in qualitative research. Interviews ranged from 30 to 60 minutes and were guided by a set of questions addressing demographic questions, the 7 ethical principles endorsed by the NIH—clinical and social value, scientific validity, fair participant selection, favorable risk-benefit ratio, independent review, informed consent, and respect for human participants—and open-ended questions to explore additional ethical considerations (eAppendix in Supplement 1 ). All interviews were transcribed by a professional service, and MAXQDA version 2022.2 software (VERBI GmbH) was used for managing and analyzing the data.
Following initial transcript analysis, a study coauthor (A.Y.) developed a codebook for systematic independent coding. Two authors (A.Y. and A.N.) independently coded all interviews, achieving a Cohen κ score greater than 0.8 for interrater reliability. Discrepancies were resolved through consensus coding with 3 coauthors (R.W., N.M., and D.C.). Theoretical saturation was achieved after the tenth interview, as no new insights emerged from the eleventh interview. To maintain reflexivity, we kept a detailed audit trail (A.Y.). This trail included reflections on both the interviewees’ and interviewer’s perceptions. These reflections were critically examined in weekly research meetings with all authors, challenging emerging hypotheses to reduce confirmation bias and ensure theme credibility. 26
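The interrater reliability check described above (a Cohen κ greater than 0.8 between the two independent coders) can be made concrete with a small worked example. The sketch below is illustrative only: the code labels and segment assignments are invented for the example and are not taken from the study’s actual codebook.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters who each assign one categorical code per segment."""
    n = len(coder_a)
    # Observed agreement: proportion of segments the two coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: computed from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    # Kappa corrects observed agreement for the agreement expected by chance.
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten transcript segments.
coder_1 = ["value", "consent", "value", "risk", "risk",
           "consent", "value", "risk", "consent", "value"]
coder_2 = ["value", "consent", "value", "risk", "consent",
           "consent", "value", "risk", "consent", "value"]
print(round(cohens_kappa(coder_1, coder_2), 2))
```

For these invented data the coders agree on 9 of 10 segments, giving κ ≈ 0.85, above the 0.8 threshold the study reports; remaining disagreements would then go to consensus coding, as described above.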
We conducted interviews with 11 investigators with experience conducting AI-RCT for DR screening, with a mean (SD) age of 47.5 (12.0) years (7 male [64%], 4 female [36%]; 3 Asian [27%], 8 White [73%]) ( Table 1 ). Participants came from academia, the nonprofit sector, and industry, bringing diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, AI development and deployment, and policy.
While recognizing the importance of the 7 ethical principles in AI clinical trials, participants identified unique ethical challenges specific to AI trials. These challenges demand a nuanced understanding of how to appropriately apply these principles in the context of AI clinical trials. Table 2 outlines participants’ perspectives on the 7 principles within the context of AI clinical trials. Table 3 presents novel ethical considerations that emerged from the inductive analysis, highlighting specific challenges faced during the implementation of these trials.
When applied to clinical trials of AI in clinical settings, participants identified several unique applications of the 7 ethical principles. Common themes across principles included the added difficulty in accounting for equitable access to care and the need for transparency with patients.
Participants recognized AI’s potential to improve clinical outcomes, comparable with the potential outcomes of non-AI based RCTs (ie, drugs or medical devices). Specifically, in this RCT focused on DR, they perceived AI’s potential to reduce health disparities as a clear metric for social value. However, they expressed uncertainty in defining and quantifying the social value of an AI intervention compared with its clinical benefits.
Participants recognized RCTs as a criterion standard for demonstrating clinical efficacy of the AI intervention. However, they identified unique challenges specific to AI RCTs. One participant questioned the appropriateness of prioritizing individualized outcome parameters—such as patient outcome—rather than outcomes for groups or populations. They noted the difficulty of comparing AI interventions with the variable criterion standard of usual care, which can differ significantly across clinical settings.
Fair participant selection in AI clinical trials emerged as a significant topic, particularly regarding the accurate representation of the patient population of focus. Study participants highlighted the challenges in evaluating the efficacy of the AI intervention across patient subgroups, who are often affected by limited access to care and can be underrepresented in clinical trials. One participant pointed out the complexities of ensuring equitable access when studying the impact of AI screening on patient groups that may access less regular diabetes screening and care.
Participants recognized the complexity of balancing the risks and benefits of AI interventions across diverse patient groups. They noted the difficulty in estimating the harm-to-benefit ratio of AI interventions relative to the known risks of standard care, a challenge exacerbated by limited representation of patient groups facing health inequities in retrospective standalone studies of algorithm performance.
Participants identified key ethical concerns presented in AI clinical trials, emphasizing the need for transparent communication about the risks and benefits of an AI intervention tailored to patients with varying levels of health literacy. They questioned whether patients fully understand the extent to which their data might be used beyond the trial itself. Additionally, concerns were raised about the adequacy of current informed consent processes and institutional ethical review readiness to assess the risks and benefits of AI interventions.
Participants highlighted additional ethical challenges beyond the established 7 ethical principles ( Table 3 ). These included: (1) Whose values prevail in AI systems design and implementation? (2) Can AI integration enhance clinical workflows without compromising patient safety? (3) How should economic incentives be balanced with the ethical obligation to adopt effective AI interventions that can improve patient outcomes? (4) What are the ethical implications of expanding DR screening without enhancing treatment access?
Participants critiqued the broadly defined term “value,” noting its varied interpretations across different stakeholders including health systems, clinicians, and patients. They questioned whose values are prioritized during the design and implementation of AI systems. Additionally, concerns were raised about AI’s adaptability to individual patient needs. For instance, while clinicians can adjust treatments to ensure affordability and effectiveness, AI systems may lack this flexibility due to their predefined operational parameters and potential downstream effects.
Said a participant who specializes in informatics, “So patient value can take all what people really, really, really care about, which includes spiritual and religious issues which do not go into health services research considerations, don’t go into societal considerations…so the very word value I would bet is not well-defined.” A participant with a research focus in ophthalmology pointed out that cost of care varied by community, “Somebody who had very limited resources couldn’t afford to pay $60 a month for drops.…So, the decision that we make about how much quality we can afford needs to be made with respect to local economic scales.”
From the participants’ perspective, integrating AI tools into clinical workflows introduces a significant tension. There is a drive for AI to enhance clinical workflows, but this drive carries the inherent risk of complicating an already complex workflow, especially when the definitive clinical benefits of AI are still unclear. This tension is further amplified when contemplating potential risks to research participants during clinical trials. “I think part of it, to not be able to do an RCT with an AI tool, would come down to just clinical workflow,” said a participant working in optometry. “If you’re trying to interject something into a workflow that’s already overloaded and pretty strapped.…I think that’s a limitation when it comes to implementing these because you have to set up, you know, both arms. You have to then set up multiple workflows and you’re already…you’re already trying to interject a new workflow where it hadn’t been before, which can be complicated.”
“I think the clinical value is that there’s quite a bit of hype about AI and we know that for sure AI can do certain things better than humans in many different contexts, but just because AI is better than humans that do certain things doesn’t mean if we incorporate AI will it necessarily improve the outcomes or the metrics that we’re interested in,” said a respondent specializing in ophthalmology and machine learning. “So I think it’s very important to, in clinical trial[s] involving AI, to show that it improves outcomes. That’s a completely open question by now that we don’t know for the most part whether incorporating AI in clinical workflow improves outcome, so I think this is a very important question to answer. And you could only answer that using [a] randomized clinical trial.”
Participants highlighted a critical tension in developing and validating AI tools in health care, balancing economic pressures with ethical imperatives. The ethical mandate to make these tools universally accessible clashes with the high costs associated with conducting RCTs, deemed the criterion standard for validation, particularly in the US and Europe. This economic challenge has prompted AI developers to shift RCTs to developing countries, raising concerns about potentially deepening health care inequities.
One participant who is both an AI developer and clinician-scientist expressed the dilemma facing AI developers and health systems: “If you don’t get creators and people like me and investors excited about the potential return, it will stop. That’s just the way it is. I was struggling as a developer, what is the balance between making…so if you see that more access and better outcomes is good and you expect people to pay for that, how do you put a charge so that you don’t make a charge too high?” The same participant also added insight on payment models for AI, suggesting a focus on health equity. “How should we be paying for AI?” he said. “If as a taxpayer or society you’re paying for something, then health equity should be the main guiding star.”
One participant in ophthalmology criticized the inherently high costs associated with RCTs: “Randomized control trials…there’s sort of an inherent assumption that they have to cost 40 million dollars. I just think that’s unethical. We have to come up with a way of delivering the kind of evidence, the high-quality evidence that can provide in a way that’s affordable for lower middle-income countries.”
Finally, the practical implications of AI were discussed. “If you’re improving the screening, then that in and of itself is sufficient to understand this is something that benefits patients,” said a participant working in AI and machine learning in health care. “Now whether a system is willing to pay for such an [AI solution], that’s a different question.”
A stated goal of AI is to broaden the reach of ophthalmology screening to populations currently underserved, thereby reducing their risk of blindness from DR. However, participants highlighted a complex array of clinical and ethical questions associated with this proposed use of AI. One concern is whether merely expanding access to DR screening without simultaneously improving access to treatment might create ethical dilemmas downstream.
“If someone’s not accessing services for diabetes in general regularly and not coming in for their well visits,…they’re not controlling their condition in the first place, they’re probably coming in less, they’re less likely to get the AI screening even if it’s available to them in their clinic and they’re more at risk for diabetic retinopathy because they’re not [accessing care],” said a researcher specializing in biostatistics and clinical trials. “If we’re trying to target and reduce those disparities, I think it would be important then to see where they’re falling off and why they’re not getting screened, or are they getting screened and not going for follow-ups.”
One participant emphasized the potential benefits of AI for the “bottom billion,” a uniquely underserved population, highlighting the slower arrival of such technologies to these groups. Another noted the generalization issues that can arise if AI models are trained on data from a narrow demographic. “I’ve been pleasantly surprised at how well the generalization has shown so far,” he said. “But I also think that if you only train for people from one small part of the world, then you would have a generalization problem.”
Finally, the cost-effectiveness of AI diagnostics was questioned, particularly in contexts where the financial burden might outweigh the clinical benefits. “If the AI is efficacious, it’s an accurate diagnostic, but is this cost-effective if it costs a million dollars to run?” asked one participant. “Whether a system is willing to pay for such a product is another question.”
This study is, to our knowledge, the first to explore the practical ethical considerations involved in designing and executing AI clinical trials. We draw on the experiences of investigators conducting the first NIH-funded RCT of an autonomous AI for DR screening, along with related trials. While we found consensus among stakeholders regarding generalizability of the NIH’s 7 ethical principles to clinical trials of AI, we identified important areas of uncertainty regarding social value, scientific validity, fair participant selection, favorable benefit-risk ratio, and informed consent ( Table 2 ). Thematic analysis of participants’ experiences in DR screening trials across various settings also highlighted novel ethical considerations specific to AI clinical trials, independent of the 7 principles.
When discussing the 7 ethical principles, defining and measuring the social value of AI in clinical trials proved to be complex. Perspectives ranged from prioritizing patient views to measurable reduction in health care inequities. Moreover, participants struggled to generalize social value across trials due to its context-dependent nature and the lack of defined metrics to measure the social impact of AI interventions. While the clinical value of AI in improving patient outcomes is similar to that of traditional drug or device trials, evaluating AI introduces additional complexities as it functions as both a cognitive tool and a workflow intervention. Establishing a control, typically defined as the standard of care, is complicated by the fact that usual care can vary significantly across clinical environments. This variability may limit the generalizability of AI interventions, as an AI system validated in one setting might not perform effectively in another setting.
In addition, the goal of using AI to expand access for populations with limited access presents an ethical tension between the desirable social value of reducing access inequity and the limitations of an AI trained on biased data (at least until better access for all groups is delivered) while outcomes are still being evaluated. For future trials, it will be crucial for researchers to clearly define the desired social value and establish specific, measurable outcomes that demonstrate the AI intervention's efficacy in achieving this value.
Participants also expressed nuanced concerns about ensuring a favorable benefit-to-risk ratio and obtaining true informed consent for AI interventions. While uncertainty is inherent in clinical research, balancing the unknown risks of AI screening against the known risks of untreated DR presented unique challenges. Interviewees noted that different population subgroups likely had different risk-benefit ratios when AI risks are compared with current screening methods or the absence of screening. Moreover, challenges in ensuring fair patient selection were noted, particularly in scenarios where patients facing health disparities are less likely to access care. Ensuring informed consent emerged as a significant challenge, particularly in communicating the benefits, risks, and data use terms for AI interventions to participants with varying levels of health literacy.
The exploration of novel ethical considerations in the context of AI clinical trials revealed several critical issues ( Table 3 ). Participants expressed concerns about whose values are prioritized in the design and implementation of AI systems, emphasizing that current definitions fail to capture the diverse needs and priorities of all stakeholders, including patients, clinicians, and health systems. An ethical challenge is emerging in trials of AI around value capture; if a desired outcome of an AI tool is “value” (cost, labor, or access savings), who decides what outcome value is prioritized and how value savings are redistributed within a health care system or community is unknown. 27 This uncertainty presents a crucial ethical knowledge gap in ensuring that AI trials are responsive to clinical contexts.
The integration of AI into clinical workflows also generated tension. RCT investigators aimed to evaluate AI effectiveness without compromising patient safety or disrupting established workflows with proven efficacy. Clinicians expressed concern that modifying care workflows to accommodate AI could unintentionally affect patient care or increase staff workload. Therefore, it is crucial to carefully assess the impact of AI on clinical workflows before implementation.
Participants also raised ethical concerns about using AI to improve access and equity without a clear financial incentive. They questioned what would incentivize a health system to invest in such technologies. Participants expressed concern around using AI to improve screening access in resource-constrained settings. For example, in the context of an AI clinical trial for DR screening, the AI system may be tested in a well-resourced clinical setting where patients can immediately receive follow-up treatment from an ophthalmologist if diagnosed with DR. However, if this AI system is later deployed in an underresourced setting, even if it accurately identifies patients needing treatment, the lack of access to follow-up care might prevent these patients from receiving the necessary interventions. This raises an ethical question: Is it appropriate to evaluate the AI's efficacy in a controlled environment with follow-up care, knowing that in its clinical application, such care might be inaccessible?
Findings from this study suggest that the concept of equipoise—the ethical balance necessary in clinical trials—is more complex in AI interventions. AI, as a systems intervention, not only affects individual patient care but also integrates with and transforms health care workflows and system operations, complicating the evaluation of its effectiveness in clinical trials.
This study had several limitations. The participant pool was relatively small, comprising 11 individuals from 4 US academic institutions, and was restricted to investigators involved in AI clinical trials. While the deductive approach allowed us to systematically apply the 7 ethical principles to our data, it may inherently have limited the scope of conclusions by focusing on predefined frameworks. However, the inductive components of our study enabled us to explore novel insights and themes that emerged directly from the data, thereby enriching our understanding and identification of ethical issues beyond the initial framework. Although the theoretical concepts uncovered may be applicable to AI clinical trials in other areas, the scope of this study was limited to clinical trials of AI in diabetic retinopathy screening. Thus, the generalizability of these findings requires further validation in future studies. Furthermore, it is important to note that the 7 principles centered in this article have been criticized as being parochial, or at least Western-centric. 28 , 29 The potential vulnerability of the 7 principles raised by this criticism is magnified as AI development for health care is occurring globally.
This study addresses an important gap in practical understanding of how clinical investigators actually navigate ethical considerations arising with the design and conduct of AI clinical trials. It reveals a general consensus on the utility of NIH’s 7 ethical principles for clinical trials of AI but also important areas of uncertainty in social value, scientific validity, fair participant selection, favorable risk-benefit ratio, and informed consent. These findings highlight important considerations that should be addressed in future iterations of ethical guidance for AI trials. As Emanuel and Grady 12 aptly noted, “Like a constitution, these requirements can be reinterpreted, refined, and revised.…Yet these requirements must all be considered and met to ensure that clinical research, wherever practiced, is ethical.”
Accepted for Publication: July 15, 2024.
Published: September 6, 2024. doi:10.1001/jamanetworkopen.2024.32482
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2024 Youssef A et al. JAMA Network Open.
Corresponding Author: Alaa Youssef, PhD, Department of Radiology, Stanford University School of Medicine, Stanford Center for Artificial Intelligence and Medical Imaging, 1701 Page Mill Rd, Mail code 5467, Stanford, CA 94304 ([email protected]).
Author Contributions: Drs Youssef and Char had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Youssef, Abramoff, Wolf, Char.
Acquisition, analysis, or interpretation of data: Youssef, Nichol, Martinez-Martin, Larson, Wolf, Char.
Drafting of the manuscript: Youssef, Nichol, Abramoff, Wolf, Char.
Critical review of the manuscript for important intellectual content: Youssef, Martinez-Martin, Larson, Wolf, Char.
Statistical analysis: Youssef, Nichol, Abramoff.
Obtained funding: Wolf, Char.
Administrative, technical, or material support: Nichol, Wolf, Char.
Supervision: Larson, Abramoff, Wolf, Char.
Conflict of Interest Disclosures: Dr Larson reported holding shares in Bunker Hill Health outside the submitted work; he reported receiving research support from Siemens Healthineers and the Gordon and Betty Moore Foundation outside the submitted work. Dr Abramoff reported work as director and consultant with Digital Diagnostics Inc outside the submitted work, where he also holds equity and has a patent application assigned; he reported chairing the Healthcare AI Coalition Foundational Principles of AI Collaborative Community for Ophthalmic Imaging and serving as a committee member of the American Academy of Ophthalmology AI Committee, the AI Workgroup Digital Medicine Payment Advisory Group, and the Collaborative Community for Ophthalmic Imaging. Dr Wolf reported grants from Novo Nordisk as primary investigator for a clinical research site outside the submitted work. No other disclosures were reported.
Funding/Support: This study was funded by the National Eye Institute (R01EY033233-01).
Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
Data Sharing Statement: See Supplement 2.
Additional Contributions: We extend our sincere gratitude to the ACCESS study team and all participating individuals. Their invaluable contributions, dedicated time, and willingness to share their insights have been fundamental to the success of this research. We are deeply grateful for their openness and the rich perspectives they have provided.