Managing risks with robust controls
Robust controls must be put in place during the creation of cognitive technologies and maintained throughout their deployment, both to make sure they are working effectively and as intended and to detect and flag any unexpected behaviour.
Good controls require research into, and understanding of, the issues and difficulties involved in implementing cognitive technologies. These controls can range from direct, hard interventions (for example, placing a minimum-to-maximum collar on the allowed outputs of a pricing algorithm) to business process design (such as including a human in the process for any significant decision).
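In code, a hard collar of this kind can be as simple as clamping the model's output into an approved band. The function name and price bounds below are hypothetical, a sketch rather than any particular system's implementation:

```python
# Illustrative sketch of a hard output collar: whatever price the model
# proposes, the published price never leaves the approved band.
# The model output, floor and ceiling here are hypothetical examples.

def collared_price(model_price: float, floor: float, ceiling: float) -> float:
    """Clamp a model-suggested price into the approved [floor, ceiling] band."""
    return max(floor, min(ceiling, model_price))

# A runaway suggestion of 999.0 is held at the 120.0 ceiling.
print(collared_price(999.0, floor=50.0, ceiling=120.0))  # → 120.0
```

The collar does not fix the underlying model error, but it caps the damage any single bad output can do.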
No single control can prevent or detect all possible errors, but layering several good-quality controls can help to reduce the overall possibility of error. These may range from entity-wide business processes considered at board level to inbuilt controls embedded into specific cognitive solutions by the implementation teams coding them.
Controls exist as part of any organisation’s ordinary operations, but they will need to be amended and added to as cognitive automation introduces new risks.
Human in the loop
Automated systems built using techniques such as machine learning have no concept of common sense. They learn their target problem without any prior idea of what a good solution looks like. This is one of the strengths of cognitive technology, as it can solve problems that are hard or impossible to define in absolute terms. However, it also means the system can produce unexpected results at the fringes of what it has learned. Even simpler tools such as robotic process automation can fall prey to this if the input data differs from what the designers expected.
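One way to catch input data that differs from what the designers expected is a simple validation step in front of the automation, flagging records for review rather than processing them blindly. The field names and ranges below are hypothetical assumptions for illustration:

```python
# Sketch of an input guard for an automated process: records that are
# missing fields or fall outside expected ranges are flagged for review.
# Field names and ranges are hypothetical examples.

def validate_input(record: dict, expected_fields: list, ranges: dict) -> list:
    """Return a list of issues; an empty list means the record looks as expected."""
    issues = []
    for field in expected_fields:
        if field not in record:
            issues.append(f"missing field: {field}")
        elif field in ranges:
            lo, hi = ranges[field]
            if not lo <= record[field] <= hi:
                issues.append(f"{field} outside expected range")
    return issues

# An invoice amount far beyond anything seen in design gets flagged.
print(validate_input({"amount": 5_000_000}, ["amount", "date"],
                     {"amount": (0, 100_000)}))
```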
These kinds of errors are why many users of cognitive technology are opting to keep a human in the loop, for example, by having a human expert conduct spot reviews or using AI only to make recommendations for a human operator to choose from. Humans have wider contextual understanding and common sense, which complements the raw efficiency but limited understanding of cognitive systems.
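A human-in-the-loop arrangement like this can be sketched as a routing rule: high-impact or low-confidence decisions always go to a person, and a random sample of the rest is drawn for spot review. The thresholds and names below are illustrative assumptions, not a prescribed policy:

```python
import random

# Sketch of a human-in-the-loop routing rule; all thresholds are hypothetical.
def needs_human_review(decision_value: float, model_confidence: float,
                       value_threshold: float = 10_000.0,
                       confidence_floor: float = 0.9,
                       spot_rate: float = 0.05) -> bool:
    """Decide whether an automated decision should be routed to a human."""
    if decision_value >= value_threshold:      # significant decisions
        return True
    if model_confidence < confidence_floor:    # the model is unsure
        return True
    return random.random() < spot_rate         # random spot review of the rest

# A £25,000 decision is always reviewed, however confident the model is.
print(needs_human_review(25_000, 0.99))  # → True
```

The spot-review branch is what gives the human expert a continuing view of routine decisions, not just the exceptional ones.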
The context in which the automated system is used, and the impact its decisions will have on their subjects, will determine to what extent human oversight is needed. Overseers will need to be trained to understand how the system works and be confident enough to challenge it where appropriate.
Collars and kill switches
Simpler controls with similar aims to keeping human reviewers involved are to build a collar on the system's autonomy (a maximum and minimum that it cannot override on its own), or to install kill switches (manual or automatic triggers that can rapidly suspend the cognitive technology's operation).
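As a sketch, a kill switch can be implemented as a shared flag that every cycle of the automation checks before acting; the class and function names here are hypothetical:

```python
# Minimal kill-switch sketch: a shared flag checked on every cycle.
# Names and the "order processing" scenario are hypothetical examples.

class KillSwitch:
    """A manual or automatic trigger that suspends automated operation."""
    def __init__(self) -> None:
        self.tripped = False
        self.reason = None

    def trip(self, reason: str) -> None:
        self.tripped = True
        self.reason = reason

def process_order(switch: KillSwitch, order_id: int) -> str:
    """Each processing cycle checks the switch before acting."""
    if switch.tripped:
        return f"order {order_id} suspended: {switch.reason}"
    return f"order {order_id} processed"

switch = KillSwitch()
print(process_order(switch, 1))            # → order 1 processed
switch.trip("anomalous output detected")   # manual press or automatic trigger
print(process_order(switch, 2))            # → order 2 suspended: anomalous output detected
```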
These prevent runaway errors and feedback loops from creating outsized results, the kind suspected to be behind the so-called “flash crash” of 2010, when high-frequency trading algorithms accelerated a market dip into a 9% crash within a few minutes, before mostly rebounding immediately afterwards. Such kill switches, called circuit breakers, are now in place on many exchanges to suspend trading automatically if unusually sharp swings in prices are detected.
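A minimal automatic trigger of this kind simply watches for unusually sharp moves. The 7% threshold below is an illustrative figure only; real exchange circuit breakers apply their own tiered rules:

```python
# Sketch of an automatic circuit-breaker check; the 7% threshold is
# an illustrative assumption, not any exchange's actual rule.

def breaker_tripped(reference_price: float, latest_price: float,
                    max_move: float = 0.07) -> bool:
    """Trip when the move from the reference price exceeds max_move (here 7%)."""
    move = abs(latest_price - reference_price) / reference_price
    return move > max_move

# A 9% drop, of the scale seen in the 2010 flash crash, would trip this breaker.
print(breaker_tripped(100.0, 91.0))  # → True
```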
Whether a machine learning system is trained only once or continues to evolve while in live operation, it is important to keep up a regular review of its fitness for purpose. If the data the cognitive technology works on begins to change, a static algorithm can be left behind, while a dynamic one could learn maladaptive behaviours.
As well as keeping a human in the loop for specific decisions, it may be appropriate to implement a regular larger-scale review of the model, its influences and its decision-making. For example, this could consist of a regular review of trends in the model’s high-level outputs, plus a spot review of a sample of particular decisions.
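Such a periodic review can be partly automated: compare recent outputs against a baseline to spot drift in high-level trends, and draw a random sample of individual decisions for expert spot review. The thresholds and helper names below are illustrative assumptions:

```python
import random
from statistics import mean, stdev

# Sketch of a periodic model review: a deliberately simple drift check
# plus a random spot sample. Thresholds and names are hypothetical.

def drift_alert(baseline: list, recent: list, z_threshold: float = 3.0) -> bool:
    """Flag when the mean of recent outputs drifts beyond z_threshold
    baseline standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) / sigma > z_threshold

def spot_sample(decisions: list, k: int = 10, seed=None) -> list:
    """Draw a random sample of individual decisions for human spot review."""
    rng = random.Random(seed)
    return rng.sample(decisions, min(k, len(decisions)))

baseline = [9.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0]
print(drift_alert(baseline, [19.0, 20.0, 21.0]))  # → True: outputs have shifted
```

Neither check replaces expert judgement; they decide when and where that judgement is applied.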