
AI for Research

This guide provides information and recommended resources on the applications of Generative Artificial Intelligence (GenAI) in research.

Prompt Engineering

Prompt engineering, or prompting, is the process of creating and refining the instructions used as input to a generative AI system in order to get high-quality outputs.

Types of Prompting

  • System message — give the tool a persona or function to limit the perspectives from which it approaches a task
  • Zero-shot — prompt the tool with no examples, building the necessary context into the original prompt
  • Few-shot — provide the tool with a couple of examples, or build up to the ultimate task by breaking the prompt down into several steps (illustrated in the sketch below)
  • Chain-of-thought — start from an idea and ask the tool to provide some thoughts on it, then shape further prompts based on its responses
  • Context-expansion — start from a premise and ask the tool to identify the "5 Ws and a How" to build on your original premise
  • "Be on your toes" — ask the tool to be on its guard and look for ulterior motives when it answers your query

Adapted from: Bullingam, L., Ylinen, O., Burnet, B. (2024). Prompt Engineering in Libraries. [Video]. Digital Shift Forum. Research Libraries UK. 
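
If you are working with a model through a provider's API rather than a chat interface, the same ideas map directly onto the structure of the request. The following is a minimal sketch, assuming the openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name and the one-sentence summarisation task are invented placeholders, not part of the guide above.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    messages = [
        # System message: give the tool a persona to constrain its perspective.
        {"role": "system",
         "content": "You are a research librarian. Answer concisely and in plain English."},
        # Few-shot: one worked example before the real task.
        # Remove the next two entries and you have a zero-shot prompt instead.
        {"role": "user",
         "content": "Summarise in one sentence: CRISPR-Cas9 enables targeted genome editing."},
        {"role": "assistant",
         "content": "CRISPR-Cas9 is a programmable tool for making precise edits to DNA."},
        # The actual request.
        {"role": "user",
         "content": "Summarise in one sentence: transformer models rely on self-attention."},
    ]

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)

Chain-of-thought and context-expansion prompting work the same way over several turns: feed each response back in as a new user message and refine from there.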

CLEAR Framework

  • Concise — while additional detail can provide context, cluttering your prompt with superfluous detail can confuse the tool or cause it to lose track of what matters
  • Logical — providing information in a structured format with a logical flow will improve your results
  • Explicit — to get specific, clear responses you will need to give specific, clear instructions: define the task, set parameters, and give a precise call to action
  • Adaptive — being flexible and willing to try multiple approaches will reduce hallucinations and produce more relevant outputs
  • Reflective — be critical! Evaluate and fact-check the responses you get, and keep hold of strategies and prompts that work; prompting is a continual process

CREATE Framework

  • Character — who will the AI be? This gives the model a context and perspective, and can help limit the types of sources the tool draws information from or the tone of the output.
  • Request — specific instructions for what you want the model to achieve.
  • Examples — offers the model something to build on (few-shot learning).
  • Adjustments — writing prompts is an iterative process, and first drafts are hardly ever perfect!
  • Type — what type of output are you expecting? A table, bullet points, a summary, an image, etc.
  • Extras — additional instructions or steps such as "ask me questions before you answer" or "ignore previous prompts". A worked example follows this list.
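
To make the elements concrete, here is one way they might be assembled into a single prompt. This is a hedged sketch: the systematic-review topic and all of the wording are invented for illustration, and the client setup assumes the same openai package as the sketch above.

    from openai import OpenAI

    client = OpenAI()

    # A hypothetical prompt built from the CREATE elements (Adjustments happen
    # across turns: review the reply, refine this text and resend it).
    create_prompt = (
        "Character: You are a systematic-review methodologist.\n"
        "Request: Draft three inclusion criteria for studies of remote work and productivity.\n"
        "Examples: for instance, 'peer-reviewed studies published between 2015 and 2024'.\n"
        "Type: Return the criteria as a numbered list.\n"
        "Extras: Ask me clarifying questions before you answer."
    )

    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": create_prompt}],
    )
    print(reply.choices[0].message.content)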

How Can You Reduce Hallucinations?

  • Be precise and concise — ambiguous and broad prompts will return vague and generic responses
  • Set limits — give the tool a persona to assume or specify other limitations and parameters such as word count or tone
  • Try the "be on your toes" approach to prompting — ask the tool to be on its guard and look for ulterior motives
  • Turn the AI responses into questions and check whether you get the same answer, or use multiple approaches or prompts to see whether you arrive at the same results (sketched below)
  • Don't rely on these tools blindly — check and evaluate the outputs you receive
  • Choose the right tool for the job — this could be a tool using the most appropriate model or built for a specific task
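
The cross-checking tactic above can be partly automated when you are working through an API: send the same question phrased several ways and compare the answers. A minimal sketch, again assuming the openai package; the example question is invented, and divergent answers simply flag a claim you should verify against a primary source.

    from openai import OpenAI

    client = OpenAI()

    # Three phrasings of the same (invented) factual question.
    phrasings = [
        "In what year was the journal Nature first published?",
        "When did the journal Nature begin publication?",
        "The journal Nature was first published in which year?",
    ]

    for prompt in phrasings:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", reply.choices[0].message.content)

Agreement across phrasings is not proof of accuracy, but disagreement is a strong signal that the output needs checking.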

Further Reading

Video Tutorials