
AI for Research

This guide provides information and recommended resources on the applications of Generative Artificial Intelligence (GenAI) in research.


Image generated by DALL-E 3

 


Best Practices for Using AI in Research 

  • Always cite and reference material included in your research papers that is not your own work. Be transparent about which tools you use and how you have used them in your research. 
  • Never present AI-generated work as your own in a research paper. Your publisher will give you more specific instructions, but in general you will be expected to include information about how you have used AI in your paper. 
  • Meticulously fact-check all of the information produced by generative AI and verify the source of all citations the AI uses to support its claims. 
  • Critically evaluate all AI output for any possible biases that can skew the presented information. 
  • When available, consult the AI developers' notes to determine if the tool's information is up-to-date.
  • Do not ask AI to generate experimental data and then present it as the data you have collected, whether in its raw form or after analysis. This is data fabrication and is considered serious misconduct. 
  • Avoid asking general-purpose AI tools like ChatGPT to produce a list of sources on a specific topic, as such prompts may result in the tools fabricating false references. 
  • When you are writing a research paper, it is a good idea to keep the notes you make along the way and to save different versions as your work develops. Should you be suspected of violating the research integrity policy, you can use such notes and archived files to show how your work progressed. 
  • Remember that general-purpose AI tools like ChatGPT are not search engines, encyclopedias, oracles or experts on any topic. They use large amounts of training data to generate responses constructed to "make sense" according to the patterns in that data, but they have neither a fact-checking mechanism nor human-like reasoning abilities.

What is AI? 

Artificial Intelligence, or AI, is a very broad term often used these days to describe any advanced machine learning system. 

Generative AI refers to deep learning models that can generate text, images, and other media in response to user prompts. Those creations are not spontaneous: behind the scenes, there is an algorithm that has been trained with massive amounts of data. As a result of this training, a GenAI model can associate user input (prompts) with the knowledge learned from the training data and provide an output according to the instructions given. 

What can AI do? 

It depends on the type of AI system you are using.  

Popular AI chatbots and virtual assistants such as ChatGPT, Claude 3, Google Gemini or HuggingChat are based on Large Language Models (LLMs), which are trained to generate realistic text in a natural language, or Vision Language Models (VLMs), which can analyse and generate both images and text. They are therefore particularly good at text-centric tasks, for example: 

  • generating text by description; 

  • spelling and grammar correction; 

  • suggesting synonyms; 

  • rewording or reformulating text; 

  • rewriting text in a particular style; 

  • applying formatting, e.g. LaTeX, to text; 

  • tasks that require generating new text based on a template; 

  • keyword extraction; 

  • image captioning (VLMs only).
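
The tasks listed above can also be scripted rather than typed into a chat window. The short sketch below is only an illustration of the general approach, assuming the OpenAI Python SDK is installed and an API key is configured; the model name is a placeholder, and other providers offer similar APIs.

from openai import OpenAI

# Minimal sketch: asking an LLM to extract keywords from a short text.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set
# in the environment; the model name below is illustrative only.
client = OpenAI()

abstract = (
    "Generative AI models can assist researchers with literature review, "
    "text editing and keyword extraction, but their output must always be "
    "fact-checked against primary sources."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Extract five keywords from the user's text and "
                    "return them as a comma-separated list."},
        {"role": "user", "content": abstract},
    ],
)

print(response.choices[0].message.content)

As with the chat interface, the output of such a script is only a draft and needs the same fact-checking and critical evaluation described in the best practices above.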

Apart from general-purpose AI chatbots and virtual assistants, there are AI-based tools optimised for particular tasks and connected to relevant knowledge bases, e.g. Elicit or Consensus for research literature review. See more examples of AI-based tools in the AI Tools for Research section of this guide. 

Is AI reliable? 

Never assume the information generated by AI is accurate or true! While AI can produce logical and confident-sounding responses to most prompts, it lacks any real ability to analyse information or to produce original thought. It is limited by the data it was trained on and is prone to hallucinations: generating false or misleading statements presented as if they were facts. See the section on AI Concerns for more information. 

Should I cite the use of AI? 

In most circumstances, yes, the use of generative AI tools should be cited. Your publisher may have specific guidance on how to attribute the AI tools you’ve used. Please see the section on Citing AI for more information. 

For routine tasks such as drafting emails or correcting grammar, there is no need to credit an AI unless you are simply copying and pasting its response to an inquiry. Remember that you are still responsible for all of your communication, even if it was generated by AI. 

Can the use of AI be detected in publications?  

While some platforms claim to be able to detect AI usage, none has proved consistently reliable. However, AI-written content can stray significantly from an author's own writing style in ways that are often noticeable to the human eye. Additionally, AI-written content often contains factual errors and non-existent references, which arise from AI hallucinations and a limited knowledge base. An expert on the subject will spot these kinds of errors and understand their nature immediately.  

Further Reading 
