Analysis: humans make about 35,000 decisions every day, so could AI cope with a similar volume of decisions and the uncertainty that accompanies them?
Artificial intelligence (AI) that can think for itself may still seem like something from a science-fiction film. In the recent TV series Westworld, Robert Ford, played by Anthony Hopkins, gave a thought-provoking speech: "we can't define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world and yet, we live in loops as tight and as closed as the [robots] do, seldom questioning our choices – content, for the most part, to be told what to do next."
Mimicking realistic human-like cognition in AI has recently become more plausible. This is especially the case in computational neuroscience, a rapidly expanding research area that uses computational modelling of the brain to provide quantitative theories of brain function.
The field of AI began in the 1950s and was originally inspired by the goal of creating machines as intelligent as humans. However, disillusioned by slow progress, many AI researchers moved away from the brain sciences and focused instead on so-called "expert systems", in which AI was targeted at performing very specific tasks.
Recently, AI systems have brought together advanced machine learning techniques and powerful computational resources to create predictive models through the processing of complex and large datasets. These advanced AI algorithms often make use of artificial neural networks inspired by the biological networks that link brain cells (or neurons). In particular, deep learning and its variants have surpassed human-level performance on specific tasks - for example, DeepMind's AlphaGo program beat the European champion at the board game Go.
However, these artificial neural network models mimic only a very small portion of the actual complex biological brain. Critically, they lack a key ingredient that is essential to human intelligence, namely consciousness or self-awareness. Implementing such advanced (meta)cognition into algorithms could potentially lead to more human-like intelligent machines.
Of course, one might question whether a machine really needs to be aware of its own choices and actions. To address this, let us first imagine a scenario in which a self-driving car is cruising along a motorway and encounters unforeseen circumstances, such as extreme fog. The car could continue to drive at high speed, and this could be catastrophic. Now, suppose the car has some form of awareness of the situation. If it were aware that its level of decision confidence was sufficiently low, it might slow down or pull over to the roadside, thereby reducing the danger.
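The idea in this scenario can be sketched in a few lines of code. This is a minimal, hypothetical illustration of confidence-gated control, not a real driving system: the function name and the thresholds are assumptions chosen purely for the example.

```python
def choose_action(decision_confidence: float) -> str:
    """Pick a driving action based on how confident the system is.

    The 0.7 and 0.4 cut-offs are illustrative assumptions: above 0.7 the
    car trusts its perception, in the middle it becomes cautious, and
    below 0.4 it falls back to the safest possible action.
    """
    if decision_confidence >= 0.7:
        return "continue at current speed"
    elif decision_confidence >= 0.4:
        return "slow down"
    else:
        return "pull over and stop"

print(choose_action(0.9))  # clear conditions: high confidence
print(choose_action(0.2))  # extreme fog: confidence collapses
```

The key point is that the action depends not only on what the system decides, but on how sure it is of that decision.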
But how can one implement a computational model of decision confidence in artificial intelligence? The answer lies in recent studies in the brain and cognitive sciences of decision making.
It has been estimated that the average adult human makes about 35,000 decisions each day. Some decisions are automatic and unconscious (for example, whilst driving a car), and others are more deliberate and complex, like voting in an election. Importantly, every decision is accompanied by a level of confidence (or lack of confidence). And high decision uncertainty can lead to change-of-mind, or even a complete lack of action. However, it is unclear how the brain can perform such calculations.
The work illuminates the mechanisms of how we make decisions
In a recent computational neuroscience study, researchers from the Intelligent Systems Research Centre at Ulster University developed the first biologically motivated neural network model that computes decision uncertainty. Intriguingly, the same computer model can not only mimic brain activity observed in humans and some animals, but also replicate change-of-mind and error-correction behaviour, which require on-the-fly metacognitive processing. These phenomena can be explained with a "feedback" control mechanism involving a decision uncertainty-monitoring system in the brain - much as a thermostat monitors room temperature.
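The thermostat analogy can be made concrete with a toy simulation. The sketch below is loosely in the spirit of such uncertainty-monitoring models and is not the authors' actual code: the variable names, the uncertainty read-out and the feedback rule are all simplified assumptions. Two accumulators race towards a decision threshold; a monitor reads out how close the race is and feeds back to damp integration, so near-ties take longer to resolve and the system's leaning can flip before it commits.

```python
def run_decision(evidence, threshold=3.0, feedback=0.8):
    """Accumulate evidence for options A and B with uncertainty feedback.

    Positive samples favour A, negative samples favour B. Uncertainty is
    high when the two accumulators are close, and the feedback term slows
    integration in exactly those moments - a crude stand-in for the
    uncertainty-monitoring loop described in the article.
    """
    a = b = 0.0
    leanings = []
    for e in evidence:
        uncertainty = 1.0 / (1.0 + abs(a - b))  # near 1 when the race is close
        gain = 1.0 - feedback * uncertainty     # high uncertainty damps integration
        if e > 0:
            a += gain * e
        else:
            b += gain * (-e)
        leanings.append("A" if a >= b else "B")  # current provisional choice
        if max(a, b) >= threshold:               # commit once threshold is reached
            break
    changed_mind = len(set(leanings)) > 1
    return leanings[-1], changed_mind

# Early evidence favours B, later evidence favours A: a change of mind.
choice, changed = run_decision([-1.0, -0.5, 1.5, 1.5, 1.5, 1.5])
print(choice, changed)
```

Running it on a stream whose early samples point one way and later samples point the other produces exactly the change-of-mind behaviour the model was built to capture: the system first leans towards B, then corrects itself to A before committing.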
This exciting research, led by the author with PhD researcher Nadim Atiya, was recently published in the Nature Communications journal. The work not only illuminates the mechanisms of how we make decisions, but has some exciting potential uses for people who may have a distorted evaluation of their own confidence, such as those affected by OCD or problem gambling.
The work also re-poses the philosophical question "how does consciousness arise?" and opens up a new research area for machine consciousness or awareness, in which artificial neural networks can reflect on, evaluate and self-correct their own thoughts and behaviour – making films like Terminator and RoboCop more of a reality than we previously thought possible.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ