
As the use of Artificial Intelligence (AI) has exploded over the past few years, publishers have been tasked with the development and oversight of completely new policies aimed at preserving research integrity and helping authors and editors to ethically navigate an exciting yet turbulent point in history. At IGI Global Scientific Publishing, we have watched the emergence and evolution of AI policies, standards, and recommendations across the publishing industry. We have adopted and adapted these standards to fit our publishing models and uphold the best ethical standards.
In particular, we watched the controversy over using AI in the peer review process unfold. While some organizations, such as the NIH, prohibited the use of AI technologies in peer review almost immediately, others countered that AI could support tasks such as improving the readability of peer reviews and performing initial submission checks.
Concerns over privacy and confidentiality arose quickly, which meant that manuscripts could not be uploaded to generative AI tools. Conversations around peer reviewers using AI tools acknowledged the need for transparency, where reviewers would have to disclose to the author the use of an AI tool and how it was used. Discussions then turned to accountability: whose job was it to hold reviewers responsible and police their use of AI?
The result seemed to be that many publishers chose to reject the use of AI by peer reviewers outright. With resources already stretched thin against increasing ethical threats (paper mills, peer review rings, citation manipulation, and plagiarism, to name a few), it seemed wisest to eliminate any new ethical problems before they could even emerge. As an independent publishing company, IGI Global Scientific Publishing came to this conclusion, updating our policy to state:
Use in Peer Review

Manuscripts under peer review may contain sensitive or confidential information that should not be shared outside the peer review process. Uploading a manuscript to any generative Artificial Intelligence (AI) tool or service is a breach of confidentiality and privacy. IGI Global Scientific Publishing does not permit editors and peer reviewers to upload an unpublished manuscript or any information pertaining to the manuscript (files, images, data, etc.) into Generative AI tools.
It is the peer reviewer’s responsibility to ensure the accuracy and integrity of the research and to formulate their own opinions and recommendations. Practices such as allowing AI to assist with decision-making or using it to vet accuracy and integrity are not permitted.
The use of AI in any aspect of the peer review process, including evaluation, decision-making, and the generation of summaries or comments, is strictly prohibited due to concerns regarding confidentiality and potential biases. IGI Global Scientific Publishing will continue to monitor advancements in AI technology and will update this policy as necessary.
During this same time period, IGI Global Scientific Publishing surveyed its authors and editors, a list of over 100,000 researchers, on a few controversial topics. These included the use of AI in peer review.

When asked to rate the statement "AI can be a peer reviewer of academic research" as part of a series of statements regarding AI and its role in research innovation, 46% of respondents somewhat or strongly agreed, while 42% somewhat or strongly disagreed.
It appeared that researchers were just as split on the use of AI in peer review, with some concerned about the "laziness" that AI would promote within the academic community and the threat of allowing a machine to make decisions that should only be made by humans. However, those who were not entirely against using AI in the peer review process resoundingly promoted its use to help eliminate bias in peer review, an interesting concept when many organizations see AI as possibly adding bias to the process.
It is widely known that AI models are biased; however, according to researchers, bias is a problem already prevalent in the peer review process.
When asked to rate the statement on publishing practices "Manuscripts rejected based on bias such as gender, race, etc. is practiced routinely", 44% of respondents somewhat or strongly agreed, while 21% somewhat or strongly disagreed. The remaining 35% of respondents preferred not to provide an opinion.

Still, almost half of the researchers believed there were biased reasons for rejecting manuscripts, and comments revealed they believed that bias stemmed from peer reviewers. Respondents offered many reasons for this bias: humans are inherently flawed; reviewer selection is not transparent and can lead to the assignment of persons who are not experts in the field; everyone has differing opinions on methodologies or theoretical stances; positive or novel results tend to be favored over negative or inconclusive results; native English speakers are preferred; and specific regions, races, and other groups are purposely excluded.
Those in favor of utilizing AI during the peer review process saw it as a neutral party, serving as one reviewer in addition to human peer reviewers. They acknowledged that, though AI has its biases, it is still less biased than humans. Others suggested that editors should run peer reviewers' feedback through AI first to mitigate inappropriate feedback that could be bullying or hateful.
Many also commented on the lack of time they had to perform peer reviews and the lack of recognition from their institutions for the time and effort that needed to be set aside to perform such reviews. AI, they claimed, could offset this burden and alleviate reviewer fatigue.
It is unlikely that any publisher will allow AI to become a reviewer itself with current AI technologies: there are real concerns about uploading manuscripts to AI tools and the resulting breaches of privacy and confidentiality, and humans should always be the final decision-makers in the peer review process. Still, perhaps there is reason to believe that AI should not be completely banned from peer review.
Certainly, it can, as it already does for many publishers, assist with research integrity checks, including but not limited to plagiarism checks, reference verification, and authorship verification. Reducing the burden of ethics checks for peer reviewers allows them to focus on the content itself and reduces the time it takes to perform a quality review. Perhaps it is also time to explore AI's use in tailoring reviewer feedback to be readable and professional in tone, and to seek ways in which it can help mitigate bias. Of course, this presents challenges to publishers as well, particularly in how to enforce its responsible use.
AI technologies are continuously evolving and becoming more advanced. As they shift, so too must we adapt publisher policies to not only safeguard the research, but also do all we can to ease current challenges for researchers. For the immediate future, this will likely have to extend to AI in peer review.
Interested in becoming an IGI Global Scientific Publishing reviewer? Please fill out the form here. For more on IGI Global Scientific Publishing’s editorial policies, please see the Book Editorial Policy and Journal Editorial Policy.