How universities can cope with the rise of tools such as ChatGPT.
Recent headlines have highlighted the capability of artificial intelligence (AI) to produce essays from a wide variety of prompts, calling into question the cornerstones of assessment practice across the academic and professional world1. There is no doubt that OpenAI's ChatGPT and similar tools bring powerful new capabilities to a mass audience. Because each response is unique, their use largely escapes the plagiarism-detection software widely used by higher education institutions, such as Turnitin2, making these tools difficult to identify and leaving existing university misconduct policies struggling to adapt.
ChatGPT is what is known as a large language model (LLM). It was developed by OpenAI and the current version was launched to the public in late 2022. It is by no means the only such tool, and we should expect more to emerge.
Should we be worried?
Over time, assessment practices have been shifting away from traditional essays and unseen exams to embrace a greater variety of assessment types. This has been in response to two major considerations: technological advances offering a wider range of potential assessment forms, and the need to develop employability skills (employers don't ask for essays).
This transition accelerated during the pandemic, as universities were largely forced to move away from unseen exams and embrace a more diverse range of digital assessments (Hancock et al., 2022). Whilst the pandemic saw undergraduate degree outcomes continue to improve, changes to assessment practices were credited with narrowing awarding gaps, as some groups of students responded more positively to assessment without the stress of an unseen exam3 (UUK, 2021). Academic integrity remains a key preoccupation for the sector, and research findings indicate that no single measure, including proctoring, is effective in isolation (Henderson et al., 2022).
Much depends on how the sector views the role of AI in assessment. Here, I see two divergent approaches that could be taken: the preventative approach and the pragmatic approach.
The preventative approach
If the sector views the use of AI in assessment as misconduct resulting from the submission of work that is not the student's own, it will seek to prohibit the technology, much as it has sought to prohibit 'essay mills' (services where students contract others to write their assignments, also known as contract cheating). We know that students knowingly engage in misconduct for a variety of reasons, with studies indicating that fear of failure and time and financial pressures are often factors (Henderson et al., 2022; Brimble, 2016).
Overall, the preventative approach looks like a losing battle, with the sector forever chasing technological advances and developing ever more complex policies for students to follow and staff to enforce. Prior studies of exam cheating show that misconduct policies act as an effective deterrent only when institutions communicate them to students clearly and consistently (Henderson et al., 2022).
ChatGPT is due to be incorporated into Microsoft's products, and the Bing search engine has already integrated the tool. Users will therefore assimilate AI enhancements into their work in the same way they already use the spell- and grammar-checking tools built into many products.
Some call for a return to high-stakes final assessments taken in person under timed conditions, and for wider adoption of viva voce assessments. Whilst these forms of assessment have their place, both have significant documented drawbacks, and I cannot help but think this would be a retrograde step, which is why I am suggesting a pragmatic approach.
The pragmatic approach
Rather than working against the inevitable spread of AI tools, we have an opportunity to work with them so that we understand both their capabilities and their limitations; they will be ubiquitous in the workplaces our students graduate into. For example, we already know that AI can reproduce bias in ways that students would not be expected to, and students should be attuned to these shortcomings.
These developments offer an opportunity to review our pedagogies afresh, challenging norms that have persisted over time. AI is here to stay, and its capabilities will only grow. It is incumbent on us as educators to teach students how to harness its strengths and to recognise and critique its shortcomings, preparing them for the contemporary workplace. Educators must also work with employers, professional bodies, students and other key stakeholders to evolve approaches to assessment in professional qualifications, so that early career professionals are equipped with the skills and capabilities to thrive in a changing business environment.
How can the ICAEW Academia & Education Community support its members?
We ask you to answer several questions on this Padlet and to share any practices that you have developed. Depending on the responses, we may run a webinar or publish further articles on this topic or aspects related to it.
We have shared a Padlet that curates the news and commentary on AI tools from across the sector.
Brimble, M. (2016). Why students cheat: An exploration of the motivators of student academic dishonesty in higher education. Handbook of Academic Integrity, 365.
Hancock, P., Birt, J., De Lange, P., Fowler, C., Kavanagh, M., Mitrione, L., Rankin, M., Slaughter, G., & Williams, A. (2022). Integrity of assessments in challenging times, Accounting Education, DOI: 10.1080/09639284.2022.2137818
Henderson, M., Chung, J., Awdry, R., Mundy, M., Bryant, M., Ashford, C., & Ryan, K. (2022). Factors associated with online examination cheating, Assessment & Evaluation in Higher Education, DOI: 10.1080/02602938.2022.2144802
Universities UK (UUK) (2021). Lessons from the pandemic: making the most of technologies in teaching. Universities UK. https://www.universitiesuk.ac.uk/what-we-do/policy-and-research/publications/lessons-pandemic-making-most