Insights Xchange: Conversations Shaping Academic Research

The Ethical Use of AI Tools in Publishing with Dr Marie Soulière

March 21, 2022 ScienceTalks Season 1 Episode 4

Artificial intelligence (AI) is everywhere these days. The publishing industry is no exception. 

Clever use of AI could be the modern-day solution to skyrocketing submission volumes. Join Nikesh Gosalia and Dr Marie Soulière as they discuss the ethical use of AI tools to maintain publication quality in journals. While AI can promote research integrity by identifying plagiarism and image manipulation, it needs to be strategically integrated into the journal workflow to improve efficiency without worsening editors' workloads. Drawing on specific strategies used by Frontiers' AI Review Assistant (AIRA), Nikesh and Dr Soulière cover key points pertinent to AI tools in the world of publishing, such as fostering trust among users, incorporating feedback to improve accuracy, and keeping accountability with human editors. Dr Soulière also touches upon an increasingly relevant concern – sophisticated fraud enabled by excessive information sharing – and shares tips on keeping up with new AI developments in publishing.

Dr Marie Soulière is the Head of Publishing Projects at Frontiers. She has a PhD in Chemical and Computational Biology, 12 publications, and several presentation awards. With 10 years as a manager in the publishing industry and a position as a COPE Council member, she is a well-established figure in open access publishing. She can be reached on LinkedIn: https://www.linkedin.com/in/marie-souliere/

Insights Xchange is a fortnightly podcast brought to you by Cactus Communications (CACTUS).

Nikesh Gosalia

Speaking of operations, we know that in the last few years, the number of journal submissions has skyrocketed.  This is especially true in the biomedical sciences.  What advice would you offer a journal that's getting way more submissions than it would have expected?  What kind of short-term and long-term solutions are available?

 

Marie Soulière

My personal view on this is that the way to maintain quality at scale is to involve technology – AI, machine learning, all these tools that can really make quick assessments on a large number of submissions without compromising on all the quality checks that should be done to validate the papers.

The problem is, as you say, the sudden influx of more submissions.  When a journal is getting so many more submissions than it expected, it tends to do one of two things.  Either it will start rejecting a lot more, because it doesn't have the capacity or time to handle the papers – which is unfortunate, because that means some good research will be delayed in getting published, and it obviously wastes the time and effort of the authors.  Or the journal might minimize the checks and any more in-depth verification of those submissions to be able to cope with the volume.

Neither of these short-term approaches is ideal.  One will be used, but they are not ideal.  I think the best option, if the journal can afford it, is really to hire additional help in the short term to maintain the validation, and to recruit more editors to distribute the volume.  If this helps in the short term, they can then work on adapting to what I believe is the best long-term solution: finding the tools, the technology, and the right platform that can really help them handle the volumes with less manual work.

 

Nikesh Gosalia

You mentioned AI, and that fits in beautifully with what I want to talk about next.  I know that you have been involveded in developing an AI solution – in fact, in your own personal capacity.  I remember, Marie, when we spoke for the first time, probably four or five years back, you were one of the first people I heard talking so passionately about AI and how it can help us simplify things, improve efficiencies, focus on the high-end tasks, and get rid of the manual tasks.  I also know that Frontiers has been developing an AI solution for peer review.

In your opinion, do you think AI can help promote research integrity?  If yes, then how so?  In terms of the actual implementation, how would it fit into an actual journal's workflow?

 

Marie Soulière

Yes, absolutely.  I think AI can help.  The classic example I like to give to illustrate this, for people to really visualize what AI can do, is the ethical issue of plagiarism – although this one is not strictly AI at the moment; it's more technological automation.  It used to be very difficult to detect plagiarism with just an editor's human brain, with citations as the only reference for what might have been published elsewhere.

But with plagiarism software, we can cross-check texts against millions of other texts from various sources all at the same time.  It goes beyond what one, or even a few, humans could achieve.  Now, if you add AI to this automation, you can have software that not only detects exact reuse of text but can also detect similarity in a broader way: reuse of a very similar idea, without copy-pasting, without giving credit or citing the original research, or trying to publish the same thing twice.  AI can really help on that research integrity issue in particular.

I also mentioned the issues with images earlier, and tools to help detect manipulated images.  This is also an area where AI helps a lot in catching misconduct, with algorithms that you can train to detect odd edits or moved pixels in a figure – something that even a trained research integrity specialist would have a hard time finding in an image.  At a point where the tools for committing fraud are becoming more and more sophisticated – even using AI to create fake manuscripts by taking pieces of text from different manuscripts and putting them together – we also have to fight back from our side with AI tools of our own.

Regarding where that fits in the journal workflow, what's good about AI and automation is that they can be integrated virtually anywhere in the workflow.  You just have to be clever about how you do it, to maintain efficiency, as we discussed earlier, and not create additional bottlenecks or more work for the journal and the editors.  Otherwise, they are not going to want to implement this.  For example, plagiarism is ideally checked directly at submission, so there is no need to waste the time of an editor or reviewers.  If a paper is a copy of something already published, it should just be rejected.

But then, some other ethical checks can, or actually should, be done at other moments in the workflow.  Typically, if we want to look at conflicts of interest – potential conflicts between authors, editors, and reviewers, by looking at their affiliations and their past co-publications – this can obviously only be done once you have the reviewers.  This is a verification that comes later in the lifecycle of the manuscript, during peer review.  We found that the use of AI can really help the teams focus on specific ethical issues when they are detected, at the right time during the review process, and overall it then saves time for all the participants in the peer review.  It allows us to maintain a really high level of ethical standards.

 

Nikesh Gosalia

You know that the idea of using AI for something like peer review is bound to raise a few eyebrows, yet you and your colleagues at Frontiers managed to develop and implement an AI Review Assistant.  How did you convince different journals at Frontiers to adopt it?

 

Marie Soulière

That's very true.  There is often, by default, a level of suspicion about the use of AI, and rightly so, as there are limits to what AI can do and how it's trained, and limits to what should be entrusted to an AI – the AI is not accountable, but the humans are.

I actually had a great panel discussion last year with Nishchay Shah, the CTO of Cactus Communications, and Ibo van de Poel, Professor of Ethics, Philosophy, and Technology at TU Delft, on AI and decision making in publishing – the opportunities of AI, its limitations, and the ethical considerations of using it, as well as the distinction between simple automation and the different types of actual AI.  For AI in publishing, as with the adoption of AI in any business, one of the key elements is trust.  We have to trust the results.

I think there is a need for limitations on the types of decisions that the AI is allowed to make.  At Frontiers specifically, the adoption of our AI Review Assistant, called AIRA, was a very gradual process.  We built each AI quality check individually, with internal teams testing them, using them in a status that we refer to, literally, as untrusted, and providing feedback for the algorithm to learn and improve.  When we saw that the results were good, and the teams started trusting the results of a check, we felt convinced of the benefits and became comfortable using it with editors and with more journals.

We built the AI so that we can continue to give it feedback, and it can continue to learn over time based on the feedback provided by the users.  That is also a way to promote trust.

Then, as I mentioned, the whole aspect of decision making and transparency is very important there.  In the case of AIRA, the AI analyzes data and information.  Then it either provides recommendations for a human specialist, such as an editor, to look at, or it performs a number of actions, typically sending an email.  But these actions are limited to things that will not take a final decision on the publication of a manuscript.  AIRA will not accept or reject.

We have seen that it's important to be clear on where the AI is used – that's the transparency aspect – as well as which processes it has run and which recommendations it's providing.  The European Commission had a high-level expert group on artificial intelligence that published very valuable ethics guidelines on trustworthy AI.  We have been influenced by that, and we recently published with COPE, the Committee on Publication Ethics, a document with our recommendations on the ethical use of AI for decision making in publishing.  Those of you listeners who would be interested to read more about these fascinating subjects can look at these resources.

 

Nikesh Gosalia

Thank you.  Thank you so much, Marie.

Just talking about adoption, having been involved in this process where AI can enable and support and help us, over the last four or five years, have you seen increased awareness, increased adoption, in general, by industry, by senior decision makers?  Or do you think there is marginal improvement, but there is a lot more to be done?

 

Marie Soulière

I would say we are somewhere in the middle.  I think we are past the first stage, where people needed to be convinced of the advantages and the potential benefits.  I think now people do realize there is a need for it, and there are strong benefits to it, because we have some clear use cases – again, plagiarism detection, the way it was used, succeeded really well.  That went a long way toward convincing people – journals, publishers, even authors – that this was a good thing.

The difficulty right now is to make sure that new big quality checks like this get validated.  You have to have publishers or journals that develop new ones and try them.  Once they are working, we try to share them for other people to use.  We have seen different companies trying to come up with these tools so that they can share them with publishers, including CACTUS and UNSILO, and there are other tools out there.  Even with AIRA, we are trying to see whether these checks could be used externally with other publishers, because there is a need to work together as publishers to try to solve some of these big research integrity cases out there.

 

Nikesh Gosalia

I agree.  I agree with you, Marie.  I mean, this has been very fascinating.  Thank you for sharing so many insights.  Just top of mind, the way I would kind of like to summarize this.

Retractions, clearly, like you mentioned, perhaps don't carry such a big stigma anymore.  People need to talk about them, accept them, and probably not complain about them.  At the same time, there are two sides to it: the publisher's or society's perspective, and the author's perspective as well.  I guess the other point is just having an open mind about using tools.  Especially as far as operational efficiency is concerned, it is a tricky balance, for sure.  But if you have an open mind in terms of using tools, then that definitely helps.

Talking about tools and some of the trends we have seen, AI is more openly being spoken about and is being adopted.  AIRA is a brilliant example.  Some of the tools that CACTUS has developed are great examples too.  It's very important that we have the right mindset in accepting that it enables us, that it helps us do things and focus on the high-quality tasks, not that it eliminates tasks or takes away jobs.  Clearly, we are seeing that.

One final question from my side, Marie, because there is so much information available out there.  How do you keep yourself updated with everything that's happening within the industry or the field of research?  I am sure you are very curious about the field in which you did your PhD.  How do you keep yourself updated?

 

Marie Soulière

Interesting!  It's a bit of a mix of things.  There are a few conferences that have gone online now that have been interesting to attend in the last year, where you get a lot of people from all different sides coming together to talk about these issues.  The latest was the APE Online Conference, where you have researchers and publishers and journals and editors – a bit of a mix – and everybody shares these ideas of what's going on.  STM also has a lot of nice online courses and conferences that try to keep you up to date with this.

I am lucky because, as a member of COPE, I get to hear about a lot of issues firsthand, because they are raised there in such a broad way, and we have a lot of discussions.  For me, a lot of my knowledge of the industry comes from sitting on the Council of COPE and attending some of these conferences.  Otherwise, one of the good places I have found to learn more about new AI tools that are coming out is arXiv.  A lot of the people working on AI tools actually post their code or their research openly on arxiv.org.  It's a place where I like to look at the papers on AI that are coming out, pointing out the tricky things and potential tools to try to address them.

 

Nikesh Gosalia

Thank you.  Thank you, Marie.  That was very useful.

One more thing, and I promise this is the final question, Marie.  I don't know if disruptive is the right word, but the last two, three years have been very different in multiple ways.  I mean, the push towards open access, the importance of measuring impact and then, of course, the pandemic.  What are one or two trends that you think would be interesting to watch out for as far as our industry is concerned?

 

Marie Soulière

I mean, I don't want to say AI again, because that's what we have talked about the whole time.  I think the sophistication of fraud is the first thing that comes to mind.  As a person who works on research ethics and AI, it's this whole situation where, as I said, AI can give you the tools to find fraud and misconduct, but it also helps the other side – it helps anybody who wants to commit fraud to do it better.  I guess one of the big trends within that is that the sharing of information can backfire.  We have tried to collaborate as publishers to deal with some of these big issues of fraud in publishing.

But then, what we found is that the more you share publicly about what you are doing, the more the people committing fraud learn from what you say you are doing and update their side as well.

We have found ourselves in this very tricky position of deciding who to share what with, and how much, to make sure that we can address these issues in a concerted way without helping fraud become more sophisticated.  I think right now this is one of the big things being discussed between publishers, and it's worth keeping an eye on what happens, although the conclusions might not be made public, for this exact reason.  It might not be a trend that people can follow openly, but it will certainly be one that plays out in the background in the coming years.

 

Nikesh Gosalia

Very interesting!  Thank you.  Thank you, Marie, for being our guest on All Things SciComm.  As always, it has been a great pleasure to talk to you.

 

Marie Soulière

Thank you, Nikesh.  It was really fun.  Thank you for inviting me.

 

Nikesh Gosalia

Thank you everyone for joining us.  You can subscribe to this podcast on all major podcast platforms.  Stay tuned for our next episode.