Almost one year ago, the world watched in awe as ChatGPT was released to the public. Artificial intelligence (AI) has been around since the 1950s, but it wasn’t until the public release of generative AI in 2022 that everyone started talking about it. As we become more reliant on generative-AI-powered tools, Monica Odysseos, Head of AI and Data Lab at Grant Thornton Cyprus, considers the ethical implications of using AI and how we can ensure it empowers us without eroding our cognitive capacities.

I remember when I first started using ChatGPT: I fell in love with it. I used to refer to it as my best friend and claimed that it reduced my workload substantially. Long gone were the days when I would open a blank page and get stuck thinking about how to begin writing an article like “AI: What is it really?”. I could now simply ask ChatGPT to write the whole thing for me. The risks and negative consequences of this seemed non-existent.

It wasn’t until I watched a TED Talk by the brilliant Margaret Heffernan discussing the dangers of AI that I actually started thinking about them. She emphasised that “the more we let machines think for us, the less we can think by ourselves”. At first, I was in disbelief and wanted to prove her wrong. Nonetheless, when I next opened a blank page and caught myself getting stuck on how to begin, I realised how dependent I had become on ChatGPT and how helpless I would feel without it. Margaret Heffernan came to mind, and I began thinking about the dangers that come with AI.

I realised that by becoming overly reliant on AI tools, we are at risk of losing our ability to generate original ideas and engage in independent thought. As we hand over more and more of our mental tasks to AI, we risk losing the very cognitive skills that make us human. This raises crucial questions about the ethical implications of AI. If AI is designed to automate tasks that require creativity and independent thought, are we not essentially outsourcing a core aspect of our humanity? And how can we ensure that AI empowers us without eroding our own cognitive capacities?

What is ethics in AI?

The very definition of “ethics” may be relative to each person’s values and experiences. What one person may consider ethical, another may not. 

One of my favourite ethical dilemmas is the traditional trolley problem, with a twist involving self-driving cars. Imagine a self-driving car travelling down a lane when its brakes suddenly fail. At a pedestrian crossing a few metres away, a female doctor, a pregnant woman, a female athlete, a young boy, and a male athlete are crossing the street illegally (against a red pedestrian signal). The self-driving car can swerve to avoid killing these pedestrians; however, if it does, it will hit a female doctor who is crossing the street legally (with a green pedestrian signal). What should the self-driving car do?

There are varying opinions on this dilemma, depending on individuals’ ethical values. Many prefer that the car swerve towards the female doctor, as she is only one person compared with five. Others prefer that the car continue on its course towards the five pedestrians, because they are crossing the street illegally whereas the female doctor is crossing legally.

This subjectivity makes it difficult to have a clear-cut conversation about ethics, especially when it comes to something as complex as AI.

In his book “21 Lessons for the 21st Century”, Yuval Noah Harari raises a related philosophical and ethical scenario. Imagine two models of a self-driving car being released: the Egoist and the Altruist.

The Egoist

This self-driving car is designed with your safety and comfort as the top priority. It is programmed to make decisions that protect you at all costs, even if it means prioritising your well-being over others on the road.

The Altruist

This self-driving car prioritises the greater good and the safety of all road users. It is designed to make decisions that minimise harm to pedestrians and other vehicles, even if it means that you might experience a slight inconvenience or delay.

Which one would you buy? 

Some people would choose the Egoist model, prioritising their own safety and comfort. Others would choose the Altruist model, believing that the greater good should come first. Leaving such a decision to the discretion of individuals and their personal ethical principles is neither an equitable nor a sustainable approach.

The debate between the Egoist car and the Altruist car represents a broader philosophical discussion about the ethics of AI and the role of technology in society. It raises questions about how we should balance individual autonomy and societal well-being, and how we should ensure that AI tools are developed and used in a way that aligns with our values and principles. 

AI Ethics & Regulations

A number of issues are under discussion as part of an ongoing effort to reach a consensus on how to regulate AI and resolve the ethical challenges that come with AI tools. Some of these challenges include:

  1. Black Box Problem: When AI models make autonomous decisions without human oversight, understanding their reasoning becomes crucial. The lack of explainability (the “black box” issue) can have significant consequences for individuals, particularly in sensitive domains like:
    • Financial Services: Autonomous loan approvals or algorithmic trading could disadvantage individuals based on opaque criteria, impacting their financial well-being.

    • Healthcare: AI-driven diagnoses or treatment recommendations without human review could lead to misdiagnoses or inappropriate care, with potentially life-altering consequences.
  2. Perpetuating Biases: AI tools inherit and amplify biases present in the data they are trained on. This can lead to discriminatory outcomes, disadvantaging specific groups in areas like:
    • Employment: Algorithmic hiring tools might unconsciously favour certain demographics, perpetuating inequalities.

      For example, in 2018 it was revealed that an AI recruitment tool developed by Amazon was biased against women. The tool had been trained on historical hiring data, including the resumes of past applicants. Because the company had historically hired more men than women, the tool learned to associate certain words and phrases in resumes with male candidates and others with female candidates, making it more likely to recommend men for open positions. (A simplified sketch of this mechanism follows the list below.)

    • Criminal Justice: Risk assessment tools biased against certain communities can impact sentencing and parole decisions.
  3. Competence and Care in Tool Use: Even well-designed AI tools require human competence and ethical responsibility in their deployment. Improper use or interpretation of results can exacerbate existing biases and lead to harmful outcomes.

Addressing these challenges requires the following to be considered:

  • Transparency and Explainability: Developing inherently transparent AI models and explainability frameworks to understand their decision-making processes.

  • Data Governance and Privacy: Implementing robust data protection regulations and ensuring responsible data collection, usage, and sharing practices. This includes:
    • Data Integrity: Guaranteeing the accuracy, completeness, and security of data used to train and operate AI models. This involves robust data cleaning practices, data quality checks, and measures to prevent data poisoning or manipulation.
    • Model Integrity: Building security into AI models from the outset to prevent vulnerabilities and adversarial attacks. This involves continuous monitoring, testing, and collaboration between researchers, developers, and security experts.

  • Accountability Frameworks: Establishing clear lines of accountability for AI decisions, including developers, users, and deploying organisations. This extends to ensuring accountability for upholding data and model integrity.

  • Algorithmic Fairness & Bias Detection: Continuously auditing AI models for biases and implementing mitigation strategies to ensure fairness and non-discrimination. This includes incorporating data and model integrity as key aspects of bias detection and mitigation efforts.

  • Human Oversight and Training: Emphasising human oversight and responsible use of AI tools, alongside training users on potential biases, ethical considerations, and the importance of data and model integrity.
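
To make the bias mechanism in the Amazon example concrete, here is a minimal sketch in Python. It uses entirely synthetic data and hypothetical feature names, not any real recruitment system. A toy hiring classifier is trained on historically skewed data in which gender is never an explicit input, yet the model recovers it through a correlated proxy feature. Inspecting the learned weights gives a minimal form of transparency, and comparing predicted hire rates across groups is a simple bias-detection check of the kind described above.

```python
# Toy illustration only: synthetic data and a hand-rolled logistic
# regression, not any real recruitment system.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical protected attribute: 0 = men, 1 = women.
group = rng.integers(0, 2, n)

# Feature 0: years of experience. Feature 1: a proxy that leaks group
# membership (e.g. wording patterns in a CV, as in the Amazon case).
experience = rng.normal(5.0, 2.0, n)
proxy = group + rng.normal(0.0, 0.3, n)

# Historical labels are skewed: at equal experience, men were hired more.
hired = (experience + 1.5 * (group == 0) + rng.normal(0.0, 1.0, n) > 5.5)
hired = hired.astype(float)

# Standardise the features; note that gender itself is never an input.
X = np.column_stack([experience, proxy])
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Fit a logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - hired)) / n
    b -= 0.5 * np.mean(p - hired)

# Transparency check: a strongly negative weight on the proxy feature
# shows the model is, in effect, penalising group membership.
print("weights (experience, proxy):", np.round(w, 2))

# Bias-detection check (demographic parity): compare hire rates by group.
pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("predicted hire rate, men:  ", round(float(pred[group == 0].mean()), 2))
print("predicted hire rate, women:", round(float(pred[group == 1].mean()), 2))
```

The gap in predicted hire rates persists even though gender never appears in the feature set, mirroring how the Amazon tool reconstructed gender from resume wording. Audits of this kind, run continuously rather than once, are what the fairness and bias-detection consideration above calls for.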

Evolving standards and regulations

As AI continues to evolve and affect various aspects of our lives, the need for regulations to govern its development and use has become increasingly important. Numerous governments and organisations around the world have recognised this need and have already enacted various AI regulations.

In November 2021, UNESCO published an international standard on AI ethics that aims to promote a set of principles for responsible AI development and use. These principles are intended to guide governments, businesses, and individuals as they develop and use AI, although following them is not obligatory. UNESCO’s Recommendation outlines ten key principles that should be followed when developing and using AI.

In December 2023, the European Union reached political agreement on the EU AI Act, one of the most comprehensive AI regulations to date. The EU AI Act takes a risk-based approach, classifying AI applications into four categories based on their potential risks: unacceptable risk, high risk, limited risk, and minimal risk. It prohibits AI applications that fall into the unacceptable-risk category and imposes stricter requirements on high-risk AI applications, such as those used in critical infrastructure or healthcare.

In contrast with the EU, the UK has decided not to proceed with legislation regulating AI. Instead, it has proposed a pro-innovation approach under which existing regulators use existing laws to regulate AI in a manner appropriate to the way AI is used in the relevant sector. The intention is to develop a context-based, proportionate, and adaptable approach to regulation that supports the responsible application of AI. The UK’s approach to AI regulation is covered in more detail in the ICAEW Generative AI Guide.

In the United States, the National AI Initiative Act of 2020 is more principles-based, outlining a set of principles for responsible AI development and use. China’s Ethical Guidelines for the Development and Application of Artificial Intelligence (2021) focus on human control, emphasising that AI should not be used to manipulate or exploit human autonomy. Singapore’s AI Standards and Guidelines for Good Governance (2022) focus on diversity and inclusion, promoting the development and use of AI in a way that is fair and equitable.

Despite these various regulations, there is currently no single, universally accepted AI regulation, perhaps because there is no single, universally accepted definition of AI. However, there is growing consensus that a uniform, internationally accepted AI regulation is needed. Such a regulation would address the issues created by today’s patchwork of rules, ensure that AI is developed and used in a responsible and ethical manner all over the world, and help to level the playing field for businesses and organisations that develop and use AI.

AI Ethics: Way forward

As we enter an era of technological transformation, AI has taken a leading role in revolutionising our world, improving our quality of life, and addressing global challenges. But with this great power comes a great responsibility to use AI wisely and ethically.

As AI becomes intertwined with our daily lives, from providing medical diagnoses to powering our homes, we must ensure it is aligned with our values of fairness, transparency, and accountability. To ensure AI empowers, not erodes, our humanity, we must leverage it as a tool, not a replacement, fostering critical thinking, ethical judgement, and the very traits that define us. By carefully navigating these challenges, we can unlock AI's transformative potential without compromising our human values.

Furthermore, establishing clear lines of accountability is crucial, not just for developers and deploying organisations, but also for users. ICAEW’s Code of Ethics emphasises professional competence and due care, which applies to how members leverage AI tools in their work. This fosters responsible decision-making and mitigates potential risks.

The ICAEW Generative AI Guide also offers a valuable resource for users seeking to deepen their understanding of ethical considerations specific to generative AI tools. Its guidance on responsible use aligns with the principles outlined above and empowers professionals to leverage AI ethically and effectively.

The future of AI depends on our choices. Let's steer AI towards a future that benefits all, empowering, connecting, and uplifting humanity, not dividing or diminishing us. With open and cautious minds, let's harness AI responsibly to create a brighter future where technology serves as a partner, not a master.