To address the challenges posed by the increasing use of generative artificial intelligence in education, UK universities have developed a set of guiding principles aimed at ensuring AI literacy among students and staff. The principles, endorsed by vice-chancellors of the 24 research-intensive Russell Group universities, are intended to help universities seize the opportunities of AI while upholding academic rigour and integrity in higher education.
Previously, discussions revolved around banning software such as ChatGPT from educational use to prevent cheating. The new guidance instead emphasizes teaching students to use AI responsibly in their academic work, while raising awareness of risks associated with generative AI, such as plagiarism, bias, and inaccuracy.
In response to the rise of generative AI, all 24 Russell Group universities have revised their academic conduct policies and guidance. The updated guidelines aim to provide clarity to students and staff regarding situations where the use of generative AI is deemed inappropriate. They also empower individuals to make informed decisions about utilizing these tools appropriately and to acknowledge their use when necessary.
These principles, developed collaboratively with AI and education experts, mark an initial milestone in a potentially complex phase of transformation in higher education, as the world continues to be shaped by the advancements of AI.
According to Prof Andrew Brass, head of the School of Health Sciences at the University of Manchester, students are already using generative AI, so the focus for educators is on how best to prepare them and identify the essential skills they need to engage responsibly with the technology.