
New technologies and their implications for the code of ethics

On 25 May 2018, ICAEW hosted a roundtable to discuss the implications of technological change for the ethical principles, whether new technologies present a threat to the code of ethics, and whether the code requires updating. The discussion also covered our interaction with machines and how firms can ensure their systems adhere to ethical principles.

The event included participants from firms of a range of sizes, including the Big Four, mid-tier firms and smaller practices, as well as academia. Participants were told that the discussion would inform ICAEW’s ongoing work on Artificial Intelligence, Blockchain, Cyber Security and Data.

The discussion was prompted by the following questions:

  • What are the implications of technological change for the ethical principles, in particular around big data and artificial intelligence?
  • Does new technology create any new threats to complying with the code of ethics?
  • Is there a need to update current approaches to ethics and education around ethics?
  • How do we develop the way we interact with machines? Where is human involvement still essential? What are the risks of greater (unchallenged) reliance on computers?
  • How do firms get comfort that their systems (AI systems in particular) are acting according to ethical principles?

The principles of ethics

While the ethical principles do not necessarily need to change, compliance is likely to become more difficult. For example, what should be done with all the data that a firm holds? An even greater challenge is the limited understanding of the contractual terms governing the use of data, and of the factors on which data integrity depends.

The future of confidentiality as a fundamental principle may be in question. The list of contexts in which confidentiality can be breached is ever-growing, which makes it unique amongst the fundamental principles. Confidentiality used to be a duty rather than a fundamental principle, and we may see a reversion.

Rules-based auditor independence requirements may erode the need for professional judgment. If the audit partner is a machine, rotating them every five years to counter a familiarity threat makes little sense.

The impact of technology on ethics may well depend on what we mean by the term. Are we talking about motive-driven, deontological ethics or a regulatory compliance-based system, such as the auditor independence regime?

If a human overrules a system then the human will need to explain why. Arguably, it’s human interventions that make systems unreliable. If a machine makes a mistake, how far do we push responsibility, eg, to the machine’s programmer? Are we putting appropriate systems in place, or simply doing what regulators ask of us?

US security services, for instance, have said that any system they buy must be able to produce a clear, easily understandable explanation of why it has reached a particular decision.

In summary, the fundamental principles are still fit for purpose (with the possibility that confidentiality may be further eroded). AI may well remove certain threats, eg, intimidation, but may also complicate demonstrating compliance with the principles.

Education and training of AI

How does one train artificial intelligence (AI)? What behavioural characteristics make a good person? If you base training solely on past behaviour and outcomes, then you incorporate the bias of the humans who previously completed the assigned task. Consequently, you risk reinforcing undesirable outcomes.
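As a minimal illustration (the history data below is invented), the Python sketch that follows “trains” on past human decisions by copying the majority outcome per group, and so automates exactly the historical bias it was shown:

```python
# Hedged sketch: a "model" trained purely on past human decisions learns
# the majority historical outcome per group, reproducing any past bias.
# The history below is invented for illustration.
from collections import Counter

history = ([("A", "approve")] * 80 + [("A", "reject")] * 20
           + [("B", "approve")] * 40 + [("B", "reject")] * 60)

decisions_by_group = {}
for group, decision in history:
    decisions_by_group.setdefault(group, []).append(decision)

# "Training": copy the most common past decision for each group.
model = {group: Counter(decisions).most_common(1)[0][0]
         for group, decisions in decisions_by_group.items()}

print(model)  # {'A': 'approve', 'B': 'reject'} - the old bias, now automated
```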

As interpretations change over time, there is also a risk that the AI machine gets stuck in the past. There are difficulties in building a “reasonable and informed third party” into the systems, particularly as subtle changes to fact patterns can entirely alter ethical decision-making.

The training of deep learning and heuristic systems depends on the data available. Can one codify human values such as fairness? In considering professional and virtue-based ethics, can a machine learn virtue?

If a particular judgment is being made repeatedly, or shifting slightly over time, then perhaps that can be codified. 

If the systems are poorly designed and trained then they will continue to suffer from “garbage in, garbage out”, eg, banks not taking into account industry-specific ratios in lending decisions.
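One basic defence, sketched below with invented field names rather than any real lending system, is to validate inputs and refuse to score applications that lack the industry-specific ratios the model needs:

```python
# Hedged sketch: refuse to score a lending application that lacks the
# industry-specific ratios the model needs, instead of silently producing
# a decision from incomplete data. Field names are invented.
REQUIRED_BY_INDUSTRY = {
    "retail":   ["inventory_turnover", "current_ratio"],
    "software": ["recurring_revenue_ratio", "cash_burn_months"],
}

def missing_fields(application: dict) -> list[str]:
    """Return the required fields absent from the application."""
    industry = application.get("industry", "")
    required = ["industry", "revenue"] + REQUIRED_BY_INDUSTRY.get(industry, [])
    return [field for field in required if field not in application]

print(missing_fields({"industry": "retail", "revenue": 1_000_000}))
# ['inventory_turnover', 'current_ratio'] - garbage in, flagged before scoring
```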

It is accepted that we should do due diligence on systems, but should we put AI through an ethics test? AI learns, and as it learns it’s able to make different decisions, much as a person does. It may also have in-built safeguards, eg, if three of the five systems in an aircraft do not agree then it reverts to manual control.
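The aircraft example amounts to a simple quorum check; here is a minimal sketch, assuming five redundant systems and a quorum of three:

```python
# Hedged sketch of the redundancy safeguard described above: act only when
# a clear majority of independent systems agree; otherwise revert to
# manual control. Five systems and a quorum of three are assumptions.
from collections import Counter

def vote(outputs: list[str], quorum: int = 3) -> str:
    """Return the majority decision, or 'MANUAL' if no quorum is reached."""
    decision, count = Counter(outputs).most_common(1)[0]
    return decision if count >= quorum else "MANUAL"

print(vote(["climb", "climb", "climb", "hold", "descend"]))  # climb
print(vote(["climb", "climb", "hold", "hold", "descend"]))   # MANUAL
```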

We need to train people to understand and train systems. If we view AI as a child, does it need a “moral guardian”? There is only a small population of experts who could do this, and this is even more of an issue for smaller firms. While many smaller firms are pulling out of the audit market, the ethical issues attached to confidentiality and data apply across all service lines, and the pace of change is incredible.

How do you train someone who is going to train AI when there is not enough centrally available information on ethical matters?

There is a need to understand the consequences of including certain information. 

We need to learn, not necessarily to use a framework to make a decision, but how to review a decision made by someone/something else using that framework. Professional scepticism will remain of the utmost importance, as will having the confidence to challenge a machine rather than blindly filing whatever the system produces.

In future it will be more important that staff members are empowered to take responsibility for systems and to hold them to account.

Ethical use of data

There is a conflict between GDPR and the need to use as much outside data as possible to train systems. Do firms have sufficient data to train systems? Firms are working more closely together than they ever have before, but this is generally a verbal sharing of data and experience, and it is a big step from voluntary sharing to making sharing mandatory. Is that in the public interest? Are there competition and markets implications? There are also questions around how anonymous “anonymised” information really is if, for example, the subject can easily be identified from publicly available information.
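The re-identification risk can be sketched in a few lines; the records below are invented, and the quasi-identifiers (postcode and birth year) are illustrative:

```python
# Hedged sketch: an "anonymised" dataset is linked back to named
# individuals via quasi-identifiers found in a public source.
# All records here are invented for illustration.
anonymised = [
    {"postcode": "EC2R", "birth_year": 1978, "salary": 85_000},
    {"postcode": "SW1A", "birth_year": 1990, "salary": 42_000},
]
public_register = [
    {"name": "J. Smith", "postcode": "EC2R", "birth_year": 1978},
    {"name": "A. Jones", "postcode": "SW1A", "birth_year": 1990},
]

for record in anonymised:
    matches = [p for p in public_register
               if (p["postcode"], p["birth_year"]) ==
                  (record["postcode"], record["birth_year"])]
    if len(matches) == 1:  # a unique match re-identifies the "anonymous" row
        print(matches[0]["name"], "earns", record["salary"])
```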

If AI is sharing data then one node could corrupt the network. Cyber activity is increasingly becoming an ethical issue, one that could easily be mitigated by reducing connectivity, but society is resistant to that.

The boundaries of how you can use data are not yet understood. For example, should a system that measures personal effectiveness be used solely for self-improvement or also for disciplinary action? If the data on effectiveness is based on what someone puts in their calendar, then it could be manipulated leading to poor data integrity. 

The purpose of sharing data is important, eg, following the Nepal earthquake, mobile phone data was used to find survivors. However, once data is shared it cannot be unshared. Another issue arises when people think they are sharing data with only one organisation, but they are actually sharing it with others too.

Auditors need to understand the context around data sharing, and possibly go beyond the compliance framework to ethical principles. 

Assuming I have properly acquired data, what are the challenges in querying it? What if I interrogate data but then revise my query? Should the data set relating to the original queries be kept?

Data is so easily available that it is very difficult for people to do the right thing, and difficult to pin down societal attitudes. There is a conflicting desire for absolute transparency in absolutely everything. People are generous with their data, then wonder why it comes back to bite them. “If you are not paying for the product, you are the product”.

Threats from new technologies

The structure of the profession is changing. It may be that the bottom of the traditional staffing pyramid will shrink further as tech becomes more prominent. One issue is that nobody likes overriding decisions made by machines. 

The headline-grabbing crashes are the ones that break firms. Who’s responsible when a system doesn’t perform properly? Should informed accounting firms know better? One cannot query a system unless one truly understands it. A company should be able to trust its FD to bring in the right system.

What if AI is used to resolve going concern questions but there is a flaw in the algorithm? There is also an assumption that AI will produce a black-or-white answer, but this is not necessarily the case. AI should therefore complement audit work, with professional human judgment retained.

Will the human desire to blame someone extend to AI?

Interaction with AI

It is important that we understand the interface and the potential implications of the interface.

Do we have a responsibility to tell people that they are dealing with AI? If yes, should we also tell people that staff are reading from a script? Robots cannot easily replace people, but they can easily replace robots; many organisations have, in effect, turned their staff into organic chatbots.

There will always be a need for human involvement, especially in the early stages of a process. Again the importance of scepticism should be reinforced. 

Will an AI change its response based on the nature of the interaction, eg, collaborative, dominant or adversarial, and therefore be influenced by the behaviour of the human it is interacting with? If the AI can learn, then it is possible it would adapt its response, but at least certain biases will still be removed; for example, a human’s ability to act ethically can be impaired by factors such as lack of sleep or low blood sugar.

The importance of explaining decisions is paramount, especially in training and teaching.

Assurance

Can we get systems to clearly explain what they are doing, rather than us just monitoring outputs? A student will learn that if they produce a high-quality output they will get a good mark. In practice, does that change their moral make-up? Similarly, if you test an AI system, it might give you the answer it knows you want, rather than what it would do in practice.

Systems will always stumble over the scenarios that they were not programmed to deal with. 

There is a question around the prominence of fairness in the Code of Ethics. It only appears once but takes on a greater role in discussions around the concept of “public interest”. How does one decide what is fair? It is contextual. People do not like unexplained unfairness, but are more accepting of explained unfairness. What is fair might be too subjective a perception to build into a system.

Fair dealing in the code of ethics is more to do with client relationships eg, dealing with vulnerable people. Perceptions of fair dealing are very subjective.

People are getting better at fooling systems (and being fooled by them).

The issue with linking ethics and AI is that it’s an attempt to quantify through a system, whereas many of these questions are contextual.

How much are we willing to pay for the right answer? There is a power relationship between the purchaser and supplier of a system and possibly prohibitive entry costs.

The role for accountants will be developing an assurance framework to determine whether firms/systems are operating in accordance with ethical principles. Implementation will be key.

Summary

AI only gives a probable answer, so additional governance is required. Lack of understanding of new technologies is a big issue, and those reliant on junior members of staff will require education. Where does accountability lie?
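One such governance control, sketched below with an illustrative (not prescribed) confidence threshold, is to route low-confidence machine outputs to a human reviewer so that accountability stays with a person:

```python
# Hedged sketch: because the system only gives a probable answer, act on
# it automatically only above a confidence threshold; otherwise escalate
# to a human reviewer. The 0.9 threshold is illustrative, not prescribed.
def route(prediction: str, confidence: float, threshold: float = 0.9):
    """Return (channel, prediction): 'auto' or 'human_review'."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)  # accountability stays with a person

print(route("no exceptions noted", 0.97))  # ('auto', 'no exceptions noted')
print(route("no exceptions noted", 0.62))  # ('human_review', 'no exceptions noted')
```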

How do we manage the lifecycle of an AI system, including transfer to different software systems and the training that would be required? Should there be a fundamental change or incremental evolution?

Black-box systems are likely to make a lot of mistakes. We are in the spotlight every time a system is to blame. There needs to be a big jump in understanding, not only of our own data but of that of our clients. The ethical principles of the profession will remain important. Professional scepticism is key.

As technology becomes more complex we need to ensure that a human reviewing the output of the machine understands how the machine has reached its conclusion. It will be essential that the machine can explain the basis for its decision, so that a human can assess the reasonableness of the decision. 
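As a hedged illustration of the kind of explanation a machine could give, the sketch below uses a simple linear scoring model (the weights and feature names are invented); because the per-feature contributions sum exactly to the score, a reviewer can see what drove the conclusion:

```python
# Hedged sketch: a linear scoring model whose per-feature contributions
# sum exactly to the score, so a reviewer can assess the basis for the
# conclusion. Weights and feature names are invented for illustration.
WEIGHTS = {"liquidity_ratio": 2.0, "debt_to_equity": -1.5, "profit_margin": 3.0}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    """Return (score, per-feature contributions)."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"liquidity_ratio": 1.25, "debt_to_equity": 0.5, "profit_margin": 0.25})
print(score, why)
# 2.5 {'liquidity_ratio': 2.5, 'debt_to_equity': -0.75, 'profit_margin': 0.75}
```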

Further detail on XAI (Explainable AI)