Automated vs. human proofreaders and editors
Between planning and carrying out experiments, seeking funding, giving presentations, producing publications, and a variety of other responsibilities, researchers have little time for proofreading. One quick solution is an automated proofreader, typically an online service or downloadable plug-in that checks spelling, punctuation, and style. These proofreaders have numerous advantages: they are accessible around the clock, return results quickly, and are generally quite affordable. Such services may also offer educational explanations for the problems they flag, along with suggested fixes. However, some evaluations (such as this one) have shown that these systems overlook certain errors and inconsistencies and propose unnecessary changes, which can be especially problematic for authors writing in a foreign language. An automated proofreader will also miss most logical gaps, since it lacks real understanding of the material. That said, one could argue that human editing is also prone to errors and misunderstanding.
Case study: the automated tool Grammarly vs. human editors
Nonetheless, there is one area where a human editor may outperform an automated one, and it is especially relevant to academic research: evaluating the use of field-specific terminology. To test this idea, we created a set of correct and incorrect sentences based on several commonly confused and overused terms listed in our clinical language resource. For good measure, we included one correctly used term that may look wrong in nonclinical contexts: “past history.” We then submitted this content to Grammarly, a prominent proofreading service that checks contextual spelling (including commonly confused words), grammar, punctuation, style, and word usage (including vocabulary use).
Humans still have the upper hand in semantic and logical understanding
The findings were clear: while the automated proofreader did not classify the correct sentences as wrong (with one exception, discussed below), it did produce false negatives; that is, it failed to identify the mistakes in the erroneous sentences. In other words, the proofreading algorithm judged the great majority of sentences, both correct and incorrect, to be accurate. Under the “General” setting, the system specifically questioned the use of pronouns and passive voice, both of which are generally acceptable in clinical case reports, and flagged “past history” as redundant, even though this phrase is commonly used in medical writing (with over 500,000 Google Scholar results). The “Academic” setting yielded the same results. Finally, while not sensitive to pronoun and passive voice usage, the “Technical” setting still flagged the phrase “past history.” Microsoft Word’s grammar and syntax check performed similarly, flagging the passive voice and the apparent wordiness of “past history,” but none of the mistakes in the erroneous sentences.
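The scoring logic behind this kind of evaluation can be sketched in a few lines. The snippet below is purely illustrative: `toy_checker` is a hypothetical stand-in for a real service such as Grammarly, the surface patterns it flags are assumptions, and the labeled sentences are invented examples rather than the study's actual test set. It shows how a pattern-only checker produces false negatives (real errors it misses) and false positives (correct usage, like "past history," that it needlessly flags).

```python
import re

# Hypothetical stand-in for an automated proofreader: it only matches a few
# surface patterns, so it has no semantic understanding of the sentence.
def toy_checker(sentence: str) -> bool:
    """Return True if the checker flags the sentence as containing an error."""
    surface_patterns = [r"\bseeking for\b", r"\bpast history\b"]  # assumed rules
    return any(re.search(p, sentence, re.IGNORECASE) for p in surface_patterns)

# (sentence, truly_erroneous) pairs; contents are illustrative only.
labeled = [
    ("The patient was seeking for treatment.", True),    # real error, flagged
    ("The dose was took twice daily.", True),            # real error, missed
    ("Past history of hypertension was noted.", False),  # correct, but flagged
    ("The patient reported chest pain.", False),         # correct, not flagged
]

# A miss on a truly erroneous sentence is a false negative;
# a flag on a correct sentence is a false positive.
false_negatives = sum(1 for s, bad in labeled if bad and not toy_checker(s))
false_positives = sum(1 for s, bad in labeled if not bad and toy_checker(s))
print(false_negatives, false_positives)  # → 1 1
```

With this toy data, the checker misses the grammatical error it has no pattern for and wrongly flags the legitimate clinical phrase, mirroring the two failure modes observed in the study.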
Grammarly is suitable only for basic, surface-level proofreading
Notably, while the Grammarly website states that the program can be used to “Grammar verify medical reports,” the company’s CEO has claimed that Grammarly “is not meant for expert writers to attain the very next level of communication mastery.” Based on the findings of this short study, automated editing tools such as Grammarly may be adequate for basic editing but not for more specialized subject matter; this is where live subject-matter experts can help.