Features of ESE

  • Our editors are native English speakers
  • Our editors are expert scientists with PhDs
  • Our editors have >10 years of postdoctoral experience
  • No freelance editors
  • No artificial intelligence (AI)
  • Proficient at editing and fact-checking output from AI software
  • Manuscripts published in >400 journals
  • Clients in >40 countries
  • Client retention rate of >95%
  • Transparent pricing policy
  • Volume discounts of up to 10%
  • Guaranteed editing quality
  • Comprehensive feedback about editing
  • Assured privacy, data security, and reliability
  • Professional, friendly communications
  • Rapid replies to emails
  • Company operating since 2000
  • No marketeers or other noneditors employed
  • Website designed and coded by us (hence the 1990s look!), so no cookies, tracking, or malware

Introduction

Algorithmic tools based on artificial intelligence (AI) and large language models (LLMs) are increasingly being used both to write and to edit scientific manuscripts because they can produce excellent-sounding English text. We believe that our expert human editors at ESE are well placed to augment the use of AI, especially by detecting and removing the hallucinations that AI often introduces into scientific documents.

All hallucinations must be removed

LLMs can now be considered highly literate, since they can be used to produce very good descriptions of scientific information. While most of this information will be accurate, it is reported that currently around 5% of the text generated using LLMs contains so-called hallucinations: false yet confidently asserted information. It is crucial that manuscripts submitted for publication in the scientific literature contain no such hallucinations. Considerable efforts are being devoted to detecting these hallucinations, but since LLMs are fundamentally generative probabilistic models, it might be theoretically impossible to stop them from producing hallucinations altogether. While this problem can be ameliorated by expert scientists carefully checking the texts produced using LLMs, detecting such hallucinations can be extremely difficult (especially for nonnative English speakers) because the vast majority of LLM-generated prose sounds so convincing.

These hallucinations can be considered analogous to problems encountered with the use of AI in self-driving vehicles. Comprehensive assessments of Tesla "full self-driving" software (https://amcitesting.com/tesla-fsd/) led to the conclusion that "The confidence (and often, competence) with which it undertakes complex driving tasks lulls users into believing that it is a thinking machine—with its decisions and performance based on a sophisticated assessment of risk (and the user's wellbeing)". This is also the impression that one can get when using LLMs. However, those assessments of Tesla software also revealed that "When errors occur, they are occasionally sudden, dramatic, and dangerous; in those circumstances, it is unlikely that a driver without their hands on the wheel will be able to intervene in time to prevent an accident—or possibly a fatality". Analogously, ChatGPT, the exemplar LLM-based software, usually seems very knowledgeable, but it also produces bogus text that it confidently asserts is factual. This means that it is currently mandatory for suitably skilled humans to check the accuracy of all output from LLM-based software; how long it will take before this becomes unnecessary is an open question.

Journal policies

The policies of many journal publishers regarding the use of LLMs have also changed since ChatGPT became popular in 2023: some still ban their use, while others have changed their minds. In June 2023 we quoted on this webpage the policy of the American Association for the Advancement of Science (AAAS) that text generated by algorithmic tools such as AI could not be included in articles published in Science journals, a policy that went so far as to state that the inclusion of such text would represent scientific misconduct. By the end of 2024, the AAAS had changed that policy to permit the submission of manuscripts containing text generated using LLMs as long as their use was described in detail. By that time, other top journals such as Nature and Cell similarly allowed the use of LLMs as long as this was clearly acknowledged.

However, enforcing such policies is likely to become more problematic as detecting LLM-generated text becomes more difficult; this might even eventually become impossible, rendering these policies meaningless. It might therefore make more sense for publishers to simply require submitted manuscripts to state that AI was used in their generation, without going into much detail, much as people other than the manuscript authors are very briefly acknowledged for contributing to the reported work.

The way forward

We at ESE believe that, for the foreseeable future, the revolutionary contributions of AI to improving the dissemination of scientific information will continue to rely on human input and judgment. The fundamental requirement is trust in the writers of any scientific document, both human and algorithmic. Recent analyses have revealed that LLMs are currently used markedly less often in the production of manuscripts submitted to journals with higher impact factors, and also markedly less often by authors who are native English speakers. This latter aspect is especially relevant to ESE, since one of the main aims of our service has always been to ensure that the quality of English in the documents we edit is indistinguishable from that expected of an expert (human) native English speaker.

Put another way, our service focuses on ensuring the dissemination of accurate scientific information, and this applies equally to reducing the likelihood of AI hallucinations being present in submitted manuscripts. We believe that specific characteristics of our expert human editors (e.g., items 1 and 5 at scienceediting.com/advantage.html; note that ESE does not utilize any form of AI) make ESE services particularly suitable for helping nonnative English speakers get their scientific work published, including those who also choose to use algorithmic tools when writing or editing manuscripts.