Using generative AI to help you improve clarity in your writing
Jeffrey Robens
Head of Community Engagement, Nature Portfolio | Publishing consultant passionate about supporting researcher development
Generative AI has taken the world by storm since ChatGPT was released in November 2022. Its use has spread broadly in the academic community, where almost one-third of researchers report using AI when writing their articles.
At our Nature Masterclasses workshops, this is also one of the most popular topics in the Q&A sessions. Given its popularity, I thought it would be timely to include a post about AI in my newsletter as well.
This article is divided into four brief sections to make it easier for you to navigate.
1. What is generative AI?
First, let me distinguish generative AI (which I will refer to as GenAI in this article) from discriminative AI. Discriminative AI models (most of the tools available online, such as Grammarly) are trained to discriminate between categories (e.g., grammatically correct or incorrect). GenAI models, on the other hand, are trained to generate new content. Because of this more advanced capability, GenAI is useful not only for checking grammar but also for helping with structure and content. I mention this because I often hear researchers assume that ChatGPT and Grammarly are similar. They are not; they are very different tools.
What types of GenAI are available for you to use? Well, there are now many models to choose from. The most popular are ChatGPT from OpenAI, Gemini from Google, and Claude from Anthropic. Some key points to keep in mind:
ChatGPT: the free version is ChatGPT 3.5, which is trained only on information from before 2022. The more advanced version, which is up to date and trained on much more information, is ChatGPT 4.0, but it is not free. Copilot from Microsoft uses a version of ChatGPT 4.0 and, while free, has numerous restrictions compared with the original from OpenAI.
Gemini: this is freely available from Google and is up to date. It is not trained on as much information as ChatGPT 4.0, but it is considerably more advanced than ChatGPT 3.5. Further, Gemini is a bit more research-focused, so I have personally found it to work better for academic writing than ChatGPT 3.5.
Claude: many have told me that this (Claude 3) is their favorite among the three, but as it is not free, I have not used it yet.
Often, I use both Gemini and ChatGPT (either 3.5 or Copilot) and see which gives me the better output. Sometimes, I use a combination of the two.
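For those comfortable with a little scripting, a side-by-side comparison like this can also be run through the providers' APIs rather than the chat interfaces. The sketch below is an illustration only; the packages, model names, and environment variables are assumptions about one possible setup, not a recommendation.

```python
# Minimal sketch (illustration only): send the same prompt to two models and
# compare the outputs side by side. Assumes the `openai` and
# `google-generativeai` packages are installed and that the OPENAI_API_KEY and
# GOOGLE_API_KEY environment variables are set. Model names are illustrative.
import os

import google.generativeai as genai
from openai import OpenAI

prompt = (
    "Suggest a clearer way to phrase this sentence for a research article: "
    "'The results was obtained by using the method described previously.'"
)

# ChatGPT via the OpenAI API
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
chatgpt_reply = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Gemini via the Google Generative AI API
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_reply = genai.GenerativeModel("gemini-pro").generate_content(prompt).text

print("--- ChatGPT ---")
print(chatgpt_reply)
print("--- Gemini ---")
print(gemini_reply)
```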
2. Editorial policies regarding using generative AI
Do publishers let you use GenAI when writing academic articles? For most publishers, yes. For example, at Springer Nature, it is okay to use GenAI when writing your paper, but you need to disclose its use, along with which model was used, in the Methods section of your article (just as you would for any other software). Some publishers, such as Science, also require that you include the prompt that was used in addition to naming the GenAI model. In other words, always be sure to read the editorial policies of your target journal!
Recently, I was asked why we require disclosure for GenAI but not for Grammarly. As I mentioned above, Grammarly is discriminative AI and therefore cannot create content. It is the creative ability of GenAI that makes publishers want its use disclosed, as it may modify or change the content of the paper.
I have two recommendations about using GenAI effectively in your paper:
That said, many publishers do not allow AI-generated images in submitted manuscripts because of concerns about copyright infringement. And most publishers (and funders) do not allow the use of GenAI when reviewing manuscripts or grant proposals (something I recently discussed in a LinkedIn post).
3. Importance of prompt engineering
A tool is only as good as the training of its user, and this could not be truer than for GenAI. If you want good output, you need to use carefully crafted prompts to guide the model. Remember, GenAI is trained on vast amounts of data (billions or trillions of words!) that together form a vast cloud of information. If you give it a generic prompt, it will give you a generic answer. But if you really want to harness the specific information it holds, you have to guide the model to where that information sits in that vast cloud.
For typical prompts, there are three key things you need to give the model:
There are other ways to prompt GenAI models as well (such as an interviewing technique, chain-of-thought, tree-of-thought, zero-shot, etc.), but those are beyond the scope of this article. You can find a lot of useful guidance on prompt engineering online, such as this course on LinkedIn Learning or this one at Coursera.
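To make the generic-versus-specific point concrete, here is a small sketch of the difference between a vague prompt and a more carefully guided one. The elements shown (a role, some context, a clear task, and a requested output format) are common ingredients of a well-crafted prompt and are given purely as an illustration; adapt them to your own manuscript.

```python
# Illustration of prompt crafting (Python is used here only to assemble text).
# The elements below are commonly recommended ingredients of a good prompt and
# are illustrative choices, not a definitive recipe.

generic_prompt = "Improve my abstract."

structured_prompt = "\n".join([
    # A role, so the model adopts a useful perspective
    "You are an experienced editor of biomedical research articles.",
    # Context, so the model knows what it is working with
    "Below is the draft abstract of a manuscript describing a mouse model of sepsis.",
    # A clear, specific task
    "Rewrite the abstract to improve clarity and flow without changing the scientific meaning.",
    # The desired output format
    "Return the revised abstract, followed by a bulleted list of the main changes you made.",
    "",
    "Draft abstract:",
    "<paste your draft abstract here>",
])

print(structured_prompt)  # paste the result into the GenAI tool of your choice
```

The structured version tells the model exactly which part of that vast cloud of information to draw on, which is why it tends to return far more useful output than the one-line request.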
4. Ethics and responsibilities
As with any powerful tool, it is essential that you use GenAI responsibly. While there are other issues as well, I just want to touch on three: hallucinations, bias, and data privacy.
My last piece of advice: if you use GenAI in your paper, please ensure that you revise the text properly afterward! There have been a few interesting cases where researchers simply copied and pasted the output into their manuscripts, which were surprisingly published anyway! Don't let this happen to you.
Company Director, Inspiring STEM Consulting (10 months ago):
If 33% of Masterclass students admit to using AI tools, how many do you think are reluctant to admit it openly? I suspect the actual usage is significantly greater.