AI for Research

This guide provides information and recommended resources on the applications of Generative Artificial Intelligence (GenAI) in research.


Image generated by DALL-E 3

Best Practices for Using AI in Research 

  • Check the University’s Research Integrity Policy (QA514) (updated in December 2021) and visit the University’s website section on Research Integrity.
  • Never present AI-generated work as your own. Be transparent about which tools you use and how you have used them in your research.
  • Always cite and reference material included in your research outputs that is not your own work, including AI-generated content.
  • Meticulously fact-check all of the information produced by generative AI and verify the source of all citations the AI uses to support its claims.
  • Critically evaluate all AI output for any possible biases that can skew the presented information.
  • Do not ask AI to generate experimental data and then present it as the data you have collected, whether in its raw form or after analysis. This is data fabrication and is considered serious misconduct.
  • Avoid asking general-purpose AI virtual assistants like ChatGPT to produce a list of sources on a specific topic: such prompts may result in the tools fabricating false references. Use specialised tools for literature search and mapping instead.
  • Try to keep different versions of your work as it develops, along with the notes you make on the way. Should you be suspected of violating the research integrity policy, such notes and archived files can demonstrate how your work progressed.
  • When available, consult developers' notes to find out if the tool's information is up-to-date, and if it has access to a knowledge graph or a database for fact-checking.
  • Keep in mind that GenAI virtual assistants like ChatGPT are not designed to function as calculators, search engines, encyclopedias, oracles or subject experts. They use large amounts of training data to generate responses constructed to "make sense" according to common cognitive paradigms, but they lack human-like reasoning abilities and built-in mechanisms for verifying information.
  • Keep sensitive data away from GenAI to avoid leaking personal and experimental data. Once your data is fed to a proprietary GenAI tool, there is no guarantee that it stays private and no way to verify how it is used.

What is AI? 

Artificial Intelligence, or AI, is a very broad term often used these days to describe any advanced machine learning system. 

Generative AI refers to deep learning models that can generate text, images, and other media in response to user prompts. Those creations are not spontaneous: behind the scenes, there is an algorithm that has been trained with massive amounts of data. As a result of this training, a GenAI model can associate user input (prompts) with the knowledge learned from the training data and provide an output according to the instructions given. 

What can AI do? 

It depends on the type of AI system you are using.  

Popular AI chatbots and virtual assistants such as ChatGPT, Claude 3, Google Gemini or HuggingChat are based on Large Language Models (LLMs) that are trained to generate realistic text in a natural language, or Vision Language Models (VLMs) that can analyse and generate both images and text. Therefore, they are particularly good at text-centric tasks, for example: 

  • generating text by description;
  • spelling and grammar correction;
  • suggesting synonyms;
  • rewording or reformulating text;
  • rewriting text in a particular style;
  • applying formatting, e.g. LaTeX, to text;
  • tasks that require generating new text based on a template;
  • keyword extraction;
  • image captioning (VLMs only).

Apart from general-purpose AI chatbots and virtual assistants, there are AI-based tools optimised for particular tasks and connected to relevant knowledge bases, e.g. Elicit or Consensus for research literature review. See more examples of AI-based tools in the AI Tools for Research section of this guide. 

Is AI reliable? 

Never assume the information generated by AI is accurate or true! While AI can produce logical, confident-sounding responses in most scenarios, it lacks any real ability to analyse information or produce original thought. It is limited by the data it was trained on and prone to hallucinations: generating false or misleading statements presented as if they were facts. See the section on AI Concerns for more information. 

Should I cite the use of AI? 

In most circumstances, yes, the use of generative AI tools should be cited. Your publisher may have specific guidance on how to attribute the AI tools you’ve used. Please see the section on Citing AI for more information. 

For generating emails and editing grammar, there is no need to credit an AI unless you are simply copying and pasting its response to an inquiry verbatim. Remember that you remain responsible for all your communication, even if it was generated by AI. 

Can the use of AI be detected in publications?  

While some platforms claim to detect AI usage, none has proved able to do so consistently. However, AI-written content often strays noticeably from an author's own writing style, which is easy for a human reader to spot. It also frequently contains factual errors and non-existent references that arise from AI hallucinations and a limited knowledge base; an expert on the subject will detect these kinds of errors and understand their nature immediately. 

Further Reading 
