10th December 2018
CCI Facilities and Students

Using AI at university

How to use these new technologies responsibly and transparently

At the University of Portsmouth, our goal is to help you develop strong academic practices and a critical understanding of generative AI tools. This guide will support you in the following:

Using generative AI tools creatively, responsibly and ethically:

  • Understand the capabilities and limitations of AI tools;
  • Use AI to enhance your learning, not replace your efforts;
  • Be mindful of potential biases and inaccuracies in AI-generated content.

Maintaining academic integrity and avoiding academic misconduct:

  • Know when the use of generative AI is and is not permitted;
  • Clearly distinguish between your ideas and those generated by AI;
  • Follow university guidelines and policies on the use of AI in assignments;
  • Seek guidance from your lecturers if you are unsure about the appropriate use of AI.

What are AI, Generative AI and Large Language Models?

Artificial Intelligence (AI) is a broad term for computer systems designed to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. These systems use algorithms and statistical models to process and learn from large amounts of data, enabling them to improve their performance over time.

Generative AI is a subset of AI that focuses on creating new, original content rather than simply analysing or making predictions based on existing data. Generative AI systems use deep learning techniques, such as neural networks, to produce various types of content, including:

  • Text: AI models like ChatGPT, Copilot, Claude and Gemini can generate human-like text based on input prompts. These models can assist with writing tasks, answering questions, and engaging in conversations;
  • Images: Tools like DALL-E (accessible from ChatGPT and Copilot), Midjourney, Davinci, and Stable Diffusion can create original images and artwork based on textual descriptions provided by users. These systems can generate illustrations, photographs, and even abstract art pieces;
  • Audio and Video: Generative AI can produce audio and video content, such as music, speech, and short video clips. Examples include Google's AudioLM and OpenAI's Sora;
  • Code: AI models like ChatGPT, Copilot, Claude and Gemini can generate code snippets and assist with software development tasks based on natural language inputs.

Large Language Models (LLMs) are an essential part of generative AI for text-based applications. They are trained on vast amounts of text data, learning patterns, grammar, and language semantics. By processing this data, LLMs can generate coherent and relevant text, forming the foundation of AI models like ChatGPT, Claude, and Gemini. The term "large" refers to the neural networks' extensive size, often billions to trillions of parameters, and the enormous datasets used to train them.

The terms Generative AI (GAI) and Large Language Model (LLM) are often used interchangeably, but strictly speaking, an LLM is the specific type of AI model that processes and generates human-like text based on patterns learned from vast amounts of text data. LLMs are a key component of many Generative AI applications, particularly those involving text generation, but Generative AI encompasses a wider range of systems that can create various types of content, including images, audio, and video.

These tools can help with idea generation, inspiration, and the creation of various types of media. However, it is crucial to recognise generative AI's limitations and use these tools responsibly. While generative AI can assist with learning and creative tasks, students must develop their skills, critical thinking abilities, and original ideas rather than relying solely on AI-generated content.

Can I use AI for my studies?

Absolutely, you can! It is important that you use these tools to support your learning and to develop the AI skills employers will expect from graduates. Our approach is laid out in our six principles for the use of AI:

  1. Foster AI Fluency: We will support you in becoming literate in AI technologies and developing critical skills to use generative AI tools effectively, responsibly, and ethically in your learning.
  2. Equip and Support Staff: We will ensure our staff are equipped to guide you in using AI tools appropriately and to enhance your learning experience through AI integration.
  3. Adapt Teaching and Assessment for Ethical AI Use: We will continuously adapt our teaching and assessment strategies to incorporate the ethical use of AI tools, promoting equal access and enhancing educational outcomes.
  4. Uphold Academic Integrity and Rigour: We are committed to transparency in guiding the appropriate and ethical use of generative AI, encouraging proper acknowledgement of AI tool usage, and fostering open discussions about AI without fear of penalty. Currently, we will not rely on AI detection tools to assess your work's integrity.
  5. Promote Collaboration and Innovation: We will foster a culture of innovation, collaboration, and openness in using AI in education, encouraging the sharing of best practices and research findings.
  6. Maintain a Dynamic Approach to AI Evolution: We recognise the rapidly evolving nature of AI technologies and maintain a flexible stance towards their integration, committing to ongoing review and adaptation of policies and strategies.

When using AI tools, it's crucial to critically engage with the generated content, take responsibility for the final output, and ensure your use aligns with our academic regulations. The work you submit should authentically showcase your own knowledge, skills, and efforts while adhering to the specific guidelines of each assessment. AI should support your learning journey, not replace your critical thinking, creativity, and originality. By upholding academic integrity, you demonstrate your commitment to your education and our University's values.

Risks and issues to consider when using Generative AI tools

  • GenAI output may seem accurate but often contains factual errors (commonly called hallucinations), so you must fact-check any information produced by GenAI tools;
  • Despite appearances, GenAI lacks understanding of the content it generates, potentially leading to misplaced trust;
  • GenAI output imitates or summarises existing content, often without the permission of the intellectual property owners, but it can give the appearance of creativity;
  • GenAI can generate content that may pose moral and ethical concerns, raising important issues regarding its use;
  • Some AI tool developers have outsourced reinforcement learning from human feedback (RLHF) to low-wage workers;
  • GenAI can be misused to generate fake news and deep fakes;
  • Reliance on vast data and computing power contributes to the digital divide, favouring large tech companies and certain economies. This means that most people, especially those in the Global South, are unable to create and control GenAI;
  • GenAI outputs may reinforce dominant values and marginalise diverse perspectives;
  • The complexity of GenAI models makes it difficult to understand the reasons behind specific outputs;
  • Significant computational resources are required for training and running GenAI models, raising serious questions about their environmental impact;
  • The increasing presence of GenAI-generated content online may recursively influence future GenAI models, perpetuating biases and errors.
     

Generative AI (GAI) might appear to think and understand complex topics like a super-intelligent person, but this is an illusion. In reality, GAI systems are highly sophisticated language models that predict the most likely next words based on patterns in their training data. While their outputs can be impressively coherent and human-like, GAI does not truly comprehend the meaning behind the words it generates in the same way a human does.
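The "predict the most likely next word" idea described above can be illustrated with a deliberately simplified sketch. This toy bigram counter is not how a real LLM is built (real models use neural networks over sub-word tokens and billions of parameters), and the corpus below is invented for illustration, but the core principle of learning which word tends to follow another is the same:

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus" (illustration only).
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "sat" is always followed by "on" in this corpus
```

Notice that the model has no idea what a cat or a rug is; it only reproduces statistical patterns from its training text, which is exactly why fluent-sounding output can still be wrong.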

Using Generative AI in assessments

Generative AI can enhance and support your learning, and we encourage you to experiment with the available tools (you’ll find some ideas in this section). However, when it comes to assessments, not everything is permitted, and it's crucial that you understand what is allowed and what isn’t.
 

The Golden Rule

We require that most work submitted for assessment is your own original content, demonstrating your knowledge, skills, and critical thinking abilities. To that end, AI-generated content must not be included in your assessment submissions unless specified in the assessment brief and/or approved by your module lecturer.

Think of it this way: would it be acceptable to have someone else do your assignment for you, entirely or in part?

Of course, the answer is no! Doing so would breach the University's Academic Regulations, outlined in the Student Conduct Policy. Therefore, make sure you apply the following guidance when working on your assessments:
 

Ensure your submitted work is genuinely yours, not just copied or edited from AI-generated content (including writing, images, video, audio and code). Simply editing AI-generated content is not enough for it to be considered your own work! Engage with the material, think critically, and express ideas in your own style and voice to maximise learning and maintain academic integrity.

Detection Tools

We will not rely on AI detection tools to assess the integrity of student work. These tools often exhibit limitations in their accuracy and fail to provide adequate explanations regarding their scoring methodology and the interpretation of the scores they generate. Importantly, academics can usually spot AI-generated content without the need for specialised detection tools. The writing style, lack of original insights, and inconsistencies in the work may be tell-tale signs of AI involvement.

Categories of AI in assessments

At the University of Portsmouth, the use of GAI in assessments falls into two categories:

Category 1 - AI tools can be used.
The use of AI tools in the production of these assessments is permitted. Students can use AI tools as directed by the module coordinator/lecturer(s) and necessitated by the assessment. For example, AI could assist you with analysing data, identifying patterns, or producing novel insights when such tasks are specifically assigned and align with the learning objectives. Most of your assessments will fall into this category. 

IMPORTANT: Content directly produced by AI must not be included in your assessment submissions unless specified in the assessment brief and/or approved by your module lecturer (See “The Golden Rule” above).

Your module coordinator/lecturer(s) will provide information on what is and what is not an appropriate use of GAI tools for each assessment. If in doubt, ask for guidance. 

Category 2 - AI tools cannot be used.
Using AI tools to produce these assessments is unsuitable because of their specific learning objectives and structure. Typically, these assessments aim to develop essential knowledge, skills, and abilities that students need to excel in their studies and future careers. Examples of assessments where AI may not normally be used include:

  • In-person examinations
  • Class tests
  • Some online tests
  • Vivas
  • Some laboratories and practicals
  • Discussion-based assessments
     

Should I use AI in assessments?

The following questions will help inform your approach to using AI for your assessments:
 

  • Are you following the module guidance on using generative AI tools for assessments?
    • Have you cited and acknowledged any use of generative AI according to the University's guidelines?
    • Are you aware that using AI in a way inconsistent with policies and guidelines can result in academic misconduct?

You should be given clear guidance on what use of generative AI is appropriate in any assessment.
 

  • Is your final work your own, not simply copy-pasted from a generative AI tool?
    • Is your style and voice evident in the work?
    • Have you applied your critical/creative thinking and logical reasoning to write the assignment rather than simply rephrasing AI-generated content?
       
  • Are you tracking how you have used generative AI throughout your assignment?
    • Are you saving copies of each step to create a record that can be shared with lecturers to facilitate conversations about your work?
       
  • Are you correctly acknowledging and referencing source materials used in your work?
    • Have you used sources besides AI, such as the library, to find relevant and high-quality reference materials?
       
  • Are you exercising critical thinking and disciplinary expertise when considering AI-generated information?
    • Are you fact-checking the information you receive from AI?
    • Are you aware of generative AI's limitations, such as potentially outdated or inaccurate content and its limited ability to provide reliable sources?
       
  • Are you aware of potential biases in the generative AI you are using?
    • Are you aware that AI tools may align with commercial objectives or reinforce societal prejudices?
    • Are you constantly applying critical thinking, analysing and contextualising AI's outputs, and cross-verifying any information it provides?
    • Are you forming your own perspective rather than relying solely on AI-generated content?
       

If you choose to use GAI to aid in creating an assessment, always consult with your module coordinator/lecturer(s) before doing so.


Examples of how GAI can be used in assessments

This is not an exhaustive list.

You may use GAI or other AI-assistive tools (e.g., Grammarly) to provide suggestions for corrections or improvements to the spelling, grammar or structure of your written work.

You must make all such changes yourself.

Do not copy and paste edited text from a language model into a submitted work.

You must acknowledge this use of AI (see the guide on how to do this).

You may use GAI or AI-assisted search tools to identify relevant literature or to assess whether sources are valuable to read for your research.

You may use GAI to help you understand concepts or ideas as part of your study processes, but be aware that there is a risk of “hallucination” or misinformation.

Do not upload course materials (including assessment guidance, slides, readings, videos or transcripts) to any GAI tool. Doing so violates copyright law.

Do not copy AI-generated summaries or overviews of sources into your work.

While it's important to be aware of the innovative tools available, we emphasise prioritising traditional research methods, such as the University Library databases, to mitigate the risk of inaccuracies or "hallucinations" that can arise from AI-generated misinformation.

You must acknowledge this use of AI.

You may use bibliographic software such as EndNote but must not rely upon GAI assistive search tools to create your reference lists.

Also, never rely on these tools to generate accurately formatted references. Learning how to cite and reference is a vital part of your academic studies. You can find more information on the Library website.

You may use text-to-speech or speech-to-text software to create your assessments. However, be careful to verify the accuracy of any outputs from these tools.

You are not required to acknowledge this use.

You may use automated translation software to translate texts between languages. Be sure to verify the accuracy of any translation before incorporating any translated materials into your submitted work.

Remember the Golden Rule - Ensure your submitted work is genuinely yours, not just copied or edited from AI-generated content. Simply editing AI-generated content is not enough for it to be considered your own work!

Do not upload course materials to machine translation software for translation into other languages – doing so may violate copyright laws.

You must acknowledge this use of AI, making clear which components of a submitted piece have been translated (see the guide on how to do this).

Do not use AI-generated material (whether generated by yourself or others) as a source, whether referenced or not, in any submitted work (except when commenting directly upon the outputs or functionality of the AI system itself or in other situations when it is a requirement of the assessment).

You must acknowledge this use of AI (see the guide on how to do this).

You may use an AI tool to generate or modify supporting images for your work or to generate or modify charts, graphs, diagrams and other visuals, audio clips, music or video. 

You must acknowledge this use of AI and clearly state that the material was generated using a named tool at the point where the material appears.

You may use AI tools to analyse and summarise data. For example, you may use ChatGPT to produce a graph, chart, or table from data and include it in your submitted work.

Some assessments will require you to analyse data manually or to use other tools such as Excel or SPSS. When AI is prohibited, the assessment briefs will contain all the necessary information.

You must acknowledge this use of AI (see the guide on how to do this).

You may use GAI tools to suggest code completions and to assist with debugging, streamlining your programming workflow.

You may also use GAI tools to explain complex coding concepts and provide step-by-step guidance, facilitating your learning process.

You must acknowledge this use of AI (see the guide on how to do this).
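As an illustration only, an acknowledgement of AI assistance in submitted code might take the form of a note in the code itself. The wording and placement below are invented for this example; follow the University Library's referencing guidance for the required format:

```python
def mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers.

    Acknowledgement: an initial draft of this function was suggested by
    an AI assistant; I reviewed, tested and edited the final version.
    (Illustrative wording only; follow the University Library's
    referencing guidance for the required format.)
    """
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

print(mean([2, 4, 6]))  # prints 4.0
```

Recording where AI assisted you, directly alongside the affected code, makes your process transparent to markers and supports the conversations about your work described above.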

How to acknowledge AI sources in your work

The University has issued a statement on the use of Generative AI in assignments. We advise all students to check this guidance regularly, particularly before submitting work, so that you are aware of the most up-to-date advice before using AI in your initial thinking or in an assignment.

If lecturers have made clear that you may use AI sources in your assessed work, then you should acknowledge, describe and reference its use.

The University Library offers guidance on referencing your use of AI tools.

Privacy and Data Protection

Artificial Intelligence (AI) tools present significant benefits but also come with inherent privacy risks. Even when these tools are not directly trained on user inputs, caution is necessary. Staff and students should be vigilant about the data they enter into AI systems.

Best Practices for Data Protection

The first step when considering the use of AI tools should be a necessity assessment. Before inputting personal data (e.g., names, ID numbers, contact details or email addresses), users should carefully consider whether it is necessary to include this information and whether the desired outcome can be achieved without personal details. This critical evaluation helps minimise potential privacy risks from the outset.

If personal data must be used, the principle of data minimisation should be applied rigorously. Users should include only the minimum amount of personal information required to achieve their objectives. It's crucial to avoid unnecessary details such as full names, ID numbers, contact information, and email addresses unless they are essential for the task at hand.

Implementing a privacy-by-design approach is also vital. This means carefully structuring inputs to exclude non-essential personal information and, where possible, using anonymisation or pseudonymisation techniques. By incorporating these practices, data protection becomes an integral part of the process rather than an afterthought.
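As a minimal sketch of the pseudonymisation idea, known names can be swapped for placeholders before text is sent to an AI tool, then restored in the tool's output afterwards. The function names and placeholder format below are invented for illustration, and real personal data is far harder to detect reliably, so treat this as a starting point rather than a complete solution:

```python
import re

def pseudonymise(text, names):
    """Replace each known name with a stable placeholder like [PERSON_1]."""
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"[PERSON_{i}]"
        mapping[placeholder] = name
        text = re.sub(re.escape(name), placeholder, text)
    return text, mapping

def restore(text, mapping):
    """Re-insert the original names into the AI tool's output."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

safe, mapping = pseudonymise("Email Alice Smith about Bob's marks.",
                             ["Alice Smith", "Bob"])
print(safe)  # Email [PERSON_1] about [PERSON_2]'s marks.
```

Keeping the mapping local means the AI tool never sees the real names, while you can still make the output usable afterwards.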

Configuring AI Tools for Privacy

Most generative AI platforms offer privacy-enhancing options that users should take advantage of. For example, in ChatGPT, users can navigate to Settings > Data Controls and find the option "Improve the model for everyone." Disabling this option is generally recommended, especially when working with personal data. This simple step can prevent unintended data uploads, reduce the risk of data breaches, and ensure that personal information isn't used for model training without consent.

Transparency with Students

In an educational context, transparency regarding AI use is paramount. Staff should inform students about the use of AI tools in academic processes, including setting, checking, and marking assessments. They should explain potential impacts on coursework or evaluations and clearly communicate how student data might be used with AI tools. The rationale behind this transparency is rooted in fundamental data protection principles. Students should never be surprised by how their personal information is utilised, especially in an educational environment that increasingly incorporates AI technologies.

University Library

Our Library is home to a vast array of publications you'll need for your studies, including rare archives and special book collections.

Explore our Library

Academic Skills Support

Develop your understanding, thinking, writing and organisational skills at the Academic Skills Unit.

Read more

Digital skills

It's important to think about the digital skills you already have and the new skills you'd like to learn.

Read more