
Academia & Education Community

Are universities and artificial intelligence tools worthy adversaries in the quest for academic integrity?

Author: Susan Smith PFHEA NTF, Professor (Education), UCL School of Management

Published: 03 Jan 2023


How universities can cope with the rise of tools such as ChatGPT.

Recent headlines have highlighted the capability of artificial intelligence (AI) to produce essays from a variety of prompts, raising questions about the cornerstones of assessment practice across the academic and professional world¹. There is no doubt that OpenAI's ChatGPT and similar tools bring powerful new capabilities to a mass audience. Because each response is unique, their use largely goes undetected by the plagiarism software widely used by higher education institutions, such as Turnitin², leaving existing university misconduct policies struggling to adapt.

Should we be worried?

Over time, assessment practices have been shifting away from traditional essays and unseen exams to embrace a greater variety of assessment types. This shift responds to two major considerations: technological advances offering a wider range of potential assessment forms, and the need to develop employability skills (employers don't ask for essays).

This transition accelerated during the pandemic as universities were largely forced to move away from unseen exams and embrace a more diverse range of digital assessments (Hancock et al., 2022). Whilst the pandemic saw a continued improvement in undergraduate degree outcomes, changes to assessment practices were also credited with narrowing awarding gaps, as some groups of students responded more positively to assessment outside the stress of an unseen exam³ (UUK, 2021). Academic integrity remains a key preoccupation for the sector, and research findings indicate that no single measure, including proctoring, is effective in isolation (Henderson et al., 2022).

Much depends on how the sector views the role of AI in assessment. Here, I see two divergent approaches that could be taken: the preventative approach and the pragmatic approach.

The preventative approach

If the sector views the use of AI in assessment as misconduct resulting from the submission of work that is not the student's own, it will seek to prohibit the use of such technology, much as it has sought to prohibit 'essay mills' (services where students contract others to write their assignments, also known as contract cheating). We know that students knowingly engage in misconduct for a variety of reasons, with studies indicating that fear of failure, time pressure and financial pressure are often considerations (Henderson et al., 2022; Brimble, 2016). Of course, the lower barriers to cheating created by the prevalence of AI tools, combined with a low likelihood of detection, could stimulate increased usage.

Overall, the preventative approach looks like an arms race, with the sector forever chasing technological advances and developing ever more complex policies for students to follow and staff to enforce. Prior studies of exam cheating show that institutions need a clear and definitive approach to communicating misconduct policies to students for those policies to act as an effective deterrent (Henderson et al., 2022).

An alternative would be a return to high-stakes final assessments taken in person under timed conditions. Whilst this form of assessment has its place, I cannot help but think it would be a retrograde step, which is why I suggest a pragmatic approach.

The pragmatic approach

Rather than working against the inevitable spread of AI tools, there is an opportunity to work with them and understand their limitations. For example, we already know that AI can reproduce bias in a way that students would not be expected to, and students should be attuned to these shortcomings. How do we educate students to use AI responsibly? One classroom approach could be to ask students to instruct a tool to respond to a topic and then critique its response.

These developments offer an opportunity to review our pedagogies afresh, challenging norms that have persisted over time. AI is here to stay, and its capabilities will only increase. It is incumbent on us as educators to teach students how to harness its strengths and to recognise its shortcomings, so as to prepare them for the contemporary workplace.

References

Brimble, M. (2016). Why students cheat: An exploration of the motivators of student academic dishonesty in higher education. In Handbook of Academic Integrity (p. 365).

Hancock, P., Birt, J., De Lange, P., Fowler, C., Kavanagh, M., Mitrione, L., Rankin, M., Slaughter, G., & Williams, A. (2022). Integrity of assessments in challenging times, Accounting Education, DOI: 10.1080/09639284.2022.2137818

Henderson, M., Chung, J., Awdry, R., Mundy, M., Bryant, M., Ashford, C., & Ryan, K. (2022). Factors associated with online examination cheating, Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2022.2144802

Universities UK (UUK) (2021). Lessons from the pandemic: making the most of technologies in teaching. Universities UK. https://www.universitiesuk.ac.uk/what-we-do/policy-and-research/publications/lessons-pandemic-making-most

¹ https://www.theguardian.com/technology/2022/dec/31/ai-assisted-plagiarism-chatgpt-bot-says-it-has-an-answer-for-that

² https://www.vox.com/recode/2022/12/7/23498694/ai-artificial-intelligence-chat-gpt-openai

³ https://www.officeforstudents.org.uk/media/a712215d-94e8-4e98-8614-baee4d5aa3dc/insight-brief-14.pdf

*The views expressed are the author’s and not ICAEW’s.