The journey started in the summer of 2023 with a phone call between friends, one in finance and the other in technology. The subject: causal AI. Over the following months the two grew into a consortium of 15 individuals with expertise in modern slavery, industry supply chains, finance, economics, data science, technology, behavioural science, investigations, sustainability and ethics.
Modern slavery is a term that often provokes strong reactions. However, those reactions pale in comparison to the experience of any one of the 50m people the UN estimates are living it right now. To put this issue in context, 50m is more than the combined populations of Canada, Switzerland and New Zealand.
It is a difficult, but not impossible, problem to solve: its requirements are concealed, incomplete, conflicting and constantly moving, and often hard to identify or recognise.
That ignited a purpose-driven desire to help, which gave birth to a public interest project, iEARTHS (Innovative Ethical AI for Remediation and Termination of Human Slavery). iEARTHS explores how technology, ethics, domain expertise, survivor voices and supply chain findings could be leveraged to tackle this problem.
Every person and organisation within the consortium is volunteering their time and expertise with a single view: to have a positive impact on eradicating modern slavery. Solving this problem was not going to be easy, but that fact was hardly a deterrent.
Before brainstorming any solutions, the consortium reflected on the non-negotiables that formed the foundation for any solution design. We started with data. Three aspects were particularly important: firstly, ensuring that none of the investigative data used to train the AI is ever traceable to a person or organisation; secondly, that all data used in training is high quality; and lastly, that no stakeholder data files are stored.
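To make the first of those non-negotiables concrete, the sketch below shows one common way to make identifiers untraceable: replacing them with salted one-way hashes and keeping only coarse, non-identifying context fields. It is purely illustrative; the field names and the pseudonymisation scheme are assumptions for the sake of example, not the consortium's actual pipeline.

```python
import hashlib
import secrets

# A one-off random salt held only in memory for the session; once it is
# discarded, the pseudonyms below cannot be traced back to the originals.
SESSION_SALT = secrets.token_bytes(32)

def pseudonymise(identifier: str) -> str:
    """Replace a person/organisation identifier with an untraceable token."""
    return hashlib.sha256(SESSION_SALT + identifier.encode("utf-8")).hexdigest()[:16]

def prepare_record(raw: dict) -> dict:
    """Keep only the fields needed for training; never persist the raw file."""
    return {
        "case_id": pseudonymise(raw["case_id"]),
        "region": raw["region"],          # coarse context, not an address
        "sector": raw["sector"],
        "indicators": raw["indicators"],  # behavioural signals, no free text
    }

# Hypothetical example record
record = prepare_record({
    "case_id": "investigation-4417",
    "region": "South Asia",
    "sector": "textiles",
    "indicators": ["withheld wages", "confiscated documents"],
})
print(record)
```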
We then focused on the technology. As with any AI approach, we carefully considered the steps necessary to prevent hallucinations and biases, as well as the potential for the AI solution to be used by bad actors.
High-quality data and learning grew in importance partly because of the success of generative AI (GenAI), such as ChatGPT. It is precisely because of GenAI that learning approaches and guardrails are essential.
The example that best highlights the issue is the Manhattan legal case, in which a lawyer over-relied on GenAI for a motion full of made-up case law. It demonstrates the problem of AI hallucinations, where the technology fabricates information that appears credible to the user.
We manage bias risk by building an inclusive community of experts spanning different disciplines, socioeconomic backgrounds and cultures, ensuring the AI learns from a diverse range of perspectives.
Imagine trying to identify the possible causes (motivators) for a person perpetrating forced labour, then devising remedial or preventative steps, when you lack a contextual understanding of, or the ability to recognise, the environmental conditions in which it occurs.
Advisory committees were formed to draw expertise from various communities: for example, survivor voices, labour and human rights organisations, and employee-owned consultancies specialising in ethical trade and human rights. Including this human perspective within AI learning is critical to eliminating biases, assumptions that arise from a privileged life, and the application of values and norms that lack universality.
The ethical considerations, while seemingly obvious, can ironically be trickier than they appear. They are often unnatural thoughts within typical mindsets, even within a profession built upon ethics. For instance, technology specialists are challenged to consider how AI recommendations may create greater harm than is already being done in a particular case, or how bad actors may use the AI to devise new methods of concealing their forced labour tactics.
Causal AI is a technology that can ‘reason’ and evaluate choices in a way akin to a human approach. It is ideally suited to highly complex problems such as identifying, solving and eradicating forced labour (aligned with the CCLA initiative). Effectively, causal AI is designed to understand and illustrate the cause and effect behind an outcome, and the resulting insights support human-led remediation and prevention actions.
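To illustrate the idea (not the consortium's actual model), the toy sketch below builds a small structural causal model in Python. A hypothetical ‘cost pressure’ variable drives both weaker labour audits and forced-labour risk, so simply observing the data exaggerates the effect of audits, whereas simulating an intervention, do(audit), recovers the true cause-and-effect relationship: the kind of distinction causal AI is designed to draw.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_audit=None):
    """Toy structural causal model with hypothetical variables.

    cost_pressure (a confounder) both weakens auditing AND raises
    forced-labour risk; audits themselves reduce risk by 0.9.
    """
    cost_pressure = rng.normal(size=n)                    # exogenous confounder
    audit = rng.normal(size=n) - 0.8 * cost_pressure > 0  # pressure weakens auditing
    if do_audit is not None:
        audit = np.full(n, do_audit)                      # do(audit=x): sever the causal link
    risk = 0.7 * cost_pressure - 0.9 * audit + rng.normal(size=n)
    return audit, risk

# Observational comparison: audited vs unaudited suppliers (confounded)
audit, risk = simulate()
observed_gap = risk[audit].mean() - risk[~audit].mean()

# Interventional comparison: force audits on or off for everyone
_, risk_on = simulate(do_audit=True)
_, risk_off = simulate(do_audit=False)
causal_effect = risk_on.mean() - risk_off.mean()

print(f"observed gap:  {observed_gap:+.2f}")   # roughly -1.6: exaggerated by confounding
print(f"causal effect: {causal_effect:+.2f}")  # roughly -0.9: the true audit effect
```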
Thinking beyond the AI itself, the consortium reflected on how the technology could become ‘decision useful’ for industry in identifying, fixing and preventing modern slavery within international supply chains. Tougher legislation, including the recent Corporate Sustainability Due Diligence Directive (CS3D) and the UK Modern Slavery Act, is putting more onus on corporate responsibility in this area.
A better and fairer future lies ahead; one where you too can contribute your finance expertise towards a worthy public interest cause. The consortium used its collective expertise to do good – what might you do with yours?
These challenging AI, ethics and data issues will be explored in greater detail in the next three articles: a journey premised on providing positive and practical outcomes to a complex real-world problem. The next instalment will dive more deeply into the artificial intelligence approach and the lessons learned.