Role of assurance in risk management

Best-practice internal design and controls are critical tools for mitigating the risks of deploying cognitive technologies. However, in particularly important cases, or where the residual risk is unacceptably high, organisations should consider bringing in additional expertise to validate their design principles, their controls or the model itself.

This expertise could come from an internal audit department, or from an assurance engagement commissioned from an external provider. Such an engagement could be performed under ISAE 3000, the prevailing standard for assurance engagements other than audits or reviews of historical financial information. It could cover the design and controls surrounding the technology, or the appropriateness of the model's outputs.

For traditional financial audit and assurance providers, these engagements can bring their own challenges, as traditional working methods might not be well adapted to the tech-heavy environment.

Any such assurance engagement would likely rely heavily on practitioners' experts: technology specialists who can bring a deeper understanding of the methodology and operation of the cognitive technology to the assurance team. Any accountant undertaking this kind of engagement must have a sufficient understanding of the client's situation, the experts' knowledge and the work those experts will carry out, so that they can plan appropriate procedures and reach an informed opinion.

In this section, we will discuss some of the issues and approaches that an assurance provider might take.

What is being assured

When taking on an assurance engagement relating to a cognitive technology, it is important to consider what exactly is being assured. Is it the design processes that led to the technology's creation? Is it the design and effectiveness of the control environment that surrounds its operation? Or is it the accuracy and completeness of the training dataset, the appropriateness of the decisions the model makes, or even just an opinion on a specific decision?

Having a clear conversation with the client, and agreeing what procedures will be carried out and what the scope of the review will be, is critical to ensuring that both parties are happy with the result. It is also important to be clear whether the engagement will constitute an assurance engagement on non-financial information under ISAE 3000.

Having agreed the scope, the assurance provider can then plan an engagement based on their understanding of the key risks and uncertainties, considering the issues discussed earlier. The concept of explainability is particularly important.

If the engagement covers a system created through machine learning, it may be impossible to determine through tests of detail alone whether the system is working correctly. In these cases, the assurance provider should focus on designing tests of control, as they would in any engagement where volume, complexity or another factor makes substantive testing alone impractical.

Skills

Depending on the details of the engagement that has been planned, a degree of technical expertise may be required. This might include data science and statistics skills to review the training datasets and consider biases, or coding knowledge to review the robotic process automation system's code, the machine learning program and/or the model it produced. Depending on the size of the risks involved and the significance of these elements to the planned engagement, the assurance provider might use regular assurance staff with knowledge in these areas, or one or more practitioners' experts, either from a dedicated team in their organisation or hired externally.
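
To illustrate the kind of data science work involved, a reviewer might begin a bias review of a training dataset by comparing outcome rates across a protected characteristic. The sketch below is a hypothetical Python example: the file name, the `gender` and `approved` columns and the four-fifths threshold are illustrative assumptions, not a prescribed procedure.

```python
# Hypothetical sketch: compare approval rates across a protected
# characteristic in a credit model's training data.
import pandas as pd

# Assumed file and column names - purely illustrative.
df = pd.read_csv("training_data.csv")

# Approval rate per group.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Simple disparate impact check: ratio of the lowest group's approval
# rate to the highest. A ratio well below 1.0 suggests further testing.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common 'four-fifths' rule of thumb
    print("Potential bias indicator - investigate further.")
```

A check of this kind is only a starting point; bias can also enter through proxy variables that correlate with the protected characteristic.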

In any case, as ISAE 3000 makes clear, the assurance provider should carefully consider any technical experts’ knowledge and skills, the complexity of the work they are being asked to perform and how they will assess the reliability of the experts’ conclusions.

If engagements of this type form a frequent part of an assurance provider's work, they may wish to consider hiring and/or training staff with an appropriate mix of technical knowledge and familiarity with the assurance process and its requirements. They might look for trained accountants with an interest in and understanding of technology, technology experts who could be given a grounding in the principles of audit and assurance, qualified IT auditors, or a combination of these.

Assurance approach

Audit concepts such as tests of detail and tests of control very much apply to engagements on cognitive technology. For example, if the concern is gender bias in a credit assessment model, a test of detail might include checking whether resubmitting an application with only the applicant's gender reversed leads to a different decision. A test of control might include a review of the documentation describing how the training dataset was stripped of applicant gender data and of proxies for it, such as first name or a record of a change of last name.
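
A test of detail along these lines can often be automated across a whole sample of decisions. The sketch below is a minimal Python illustration, assuming the model exposes a scikit-learn-style `predict` method and that gender is encoded as 'M'/'F'; both are hypothetical stand-ins for whatever interface and encoding the client's system actually uses.

```python
# Hypothetical sketch of a counterfactual test of detail: resubmit
# each application with only the gender field swapped and flag any
# decision that changes.
import pandas as pd

def counterfactual_gender_test(model, applications: pd.DataFrame) -> pd.DataFrame:
    """Return the applications whose decision flips when gender is reversed."""
    swapped = applications.copy()
    # Assumes a simple 'M'/'F' encoding - adapt to the client's data dictionary.
    swapped["gender"] = swapped["gender"].map({"M": "F", "F": "M"})

    original_decisions = model.predict(applications)
    swapped_decisions = model.predict(swapped)

    # Any non-empty result is potential evidence that the model is
    # sensitive to gender, directly or via correlated features.
    return applications[original_decisions != swapped_decisions]
```

Any flipped decisions would then be investigated individually, since sensitivity can also arise indirectly through features that correlate with gender.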

When reviewing a tool such as robotic process automation, the assurance provider could review a sample of the tasks carried out by the system, and might also consider how the organisation itself reviews the system's outputs.
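
In practice, reviewing a sample of an RPA system's output often amounts to reperformance: independently recomputing what the robot should have produced and comparing the two. Below is a minimal sketch, assuming the robot calculates invoice totals and that its log is available as a CSV export; the file and column names are illustrative assumptions.

```python
# Hypothetical sketch: reperform a sample of RPA-processed invoices
# and compare against the robot's recorded output.
import pandas as pd

log = pd.read_csv("rpa_output_log.csv")     # assumed log export
sample = log.sample(n=25, random_state=42)  # fixed seed so the test is repeatable

# Independently recompute the expected result for each sampled item.
expected = sample["net_amount"] * (1 + sample["vat_rate"])
exceptions = sample[(expected - sample["gross_amount"]).abs() > 0.01]

print(f"{len(exceptions)} exception(s) in a sample of {len(sample)}")
```

Sample sizes and tolerances would be set in line with the provider's usual sampling methodology.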

Many cognitive solutions are not fixed in time, but instead adapt and update as they run. This is particularly true of approaches driven by machine learning, which often continue to retrain on live data just as they were trained on their initial training data. In theory this allows them to improve their operation and accuracy. However, it can also mean that the system drifts from its original state and picks up false associations, biases or influences. A one-off assurance approach might therefore not be appropriate; a programme of periodic reviews might be better suited.
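
Where periodic reviews are adopted, drift between reviews can be quantified. One common technique, used here purely as an illustrative assumption rather than a prescribed method, is the population stability index (PSI), which measures how far the distribution of a model input or output has moved between a baseline period and the current one. A minimal Python sketch:

```python
# Hypothetical sketch: population stability index (PSI) comparing a
# baseline sample (e.g., the original training data) with live data.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Higher PSI means the current distribution has drifted further from baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against empty buckets before taking logarithms.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative rule-of-thumb thresholds: below 0.1 stable, 0.1-0.25
# moderate drift, above 0.25 significant drift worth investigating.
```

A review programme might compute such a measure for key inputs and outputs at each visit and escalate when thresholds are breached.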

Where automation has been created from explicit instructions, as with robotic process automation or many rule-based algorithms, the assurance provider could review how the instruction list or process was documented and transformed into code. Many business processes rely on unwritten steps or rules and on the common sense of the humans carrying out those processes; these might not be captured in the creation process.

Risk profile and context

Ultimately, what procedures are most appropriate will depend on the level of risk that the cognitive technology carries and the context in which it is used.

Indications of a high-risk situation include:

  • high decision volume;
  • concerns around the reliability of the underlying data;
  • likelihood of bias;
  • low model accuracy;
  • poor model explainability;
  • operation in a heavily regulated sector;
  • application to areas where mistakes would affect the operator’s reputation;
  • high probability of harm to subjects if mistakes were made; and
  • complex model outputs (such as a classification rather than a binary decision).

The next section provides a summary of our approach, which can be applied to other emerging technologies, as well as additional resources for further reading.