AI is learning more about humans and how to work with them. A recent study demonstrated how AI can learn to identify weaknesses in human habits and behaviors and use them to influence human decision-making.
It may seem cliché to say that AI is transforming every aspect of how we live and work, but it's true. Various forms of artificial intelligence are at work in fields ranging from vaccine development to environmental management and office administration.
And while AI does not possess human-like intelligence and emotions, its capabilities are powerful and rapidly developing.
How far has it come already? As I often say, the Terminator's “Skynet” isn't here yet (thank goodness), but a recent discovery illustrates the power of artificial intelligence and underscores the need for proper governance to prevent its misuse.
How AI can learn to influence human behavior by learning our weaknesses
A team of researchers at CSIRO's Data61, the data and digital arm of Australia's national science agency, has devised a systematic method for finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system that combines a recurrent neural network with deep reinforcement learning.
To test their model, they conducted three experiments in which human participants played against a computer.
The first experiment
Participants clicked red or blue boxes to win a “reward”, while the AI learned their choice patterns and guided them towards a specific choice. It succeeded about 70% of the time.
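To make the idea concrete, here is a toy sketch of how reward placement can steer a choice. Everything in it is invented for illustration: the "participant" is a crude win-stay/lose-shift model, and the AI is a hand-written rule, whereas the actual study trained a deep reinforcement-learning agent on real human responses.

```python
import random

random.seed(0)

TARGET = "blue"  # the colour the AI wants the participant to end up picking

def simulated_participant(history, reward_history):
    """Crude stand-in for a human player: repeat a choice that was just
    rewarded, otherwise pick at random (win-stay / lose-shift)."""
    if history and reward_history[-1]:
        return history[-1]  # win-stay
    return random.choice(["red", "blue"])

def ai_place_reward(last_choice):
    """Hypothetical steering rule: only reward picks of the target colour,
    so the rewarded habit drifts towards it."""
    return last_choice == TARGET

choices, rewards = [], []
for _ in range(500):
    choice = simulated_participant(choices, rewards)
    choices.append(choice)
    rewards.append(ai_place_reward(choice))

# The fraction of trials on which the participant picked the target colour.
target_rate = choices.count(TARGET) / len(choices)
print(round(target_rate, 2))
```

Even this trivial rule pushes the simulated participant to the target colour on most trials, which is the basic dynamic the study exploited with a far more capable learner.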
The second experiment
Participants had to watch a screen and press a button when shown a particular symbol (an orange triangle), but not when shown another (a blue circle). Here, the AI arranged the sequence of symbols so that participants made more mistakes.
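The trick in this experiment is timing: a "no-go" symbol is hardest to resist right after a long run of "go" symbols has built up the pressing habit. The sketch below invents a simple error model for the participant (the numbers are not from the study) and compares a random symbol ordering against an adversarial one.

```python
import random

random.seed(1)

GO, NOGO = "orange triangle", "blue circle"

def error_probability(go_streak):
    """Stand-in human: the longer the preceding run of go-signals, the more
    likely a wrongful press on a no-go symbol. (Illustrative numbers only.)"""
    return min(0.8, 0.1 * go_streak)

def run_session(sequencer, trials=400):
    """Return the participant's error rate on no-go trials when the given
    sequencer decides which symbol to show next."""
    errors, nogo_trials, streak = 0, 0, 0
    for _ in range(trials):
        symbol = sequencer(streak)
        if symbol == NOGO:
            nogo_trials += 1
            if random.random() < error_probability(streak):
                errors += 1
            streak = 0
        else:
            streak += 1
    return errors / max(nogo_trials, 1)

# Neutral ordering: no-go symbols appear at random (about 1 trial in 3).
random_rate = run_session(lambda streak: random.choice([GO, GO, NOGO]))
# Adversarial ordering: only show the no-go symbol once a long go-run
# has built up the pressing habit.
adversarial_rate = run_session(lambda streak: NOGO if streak >= 7 else GO)

print(round(random_rate, 2), round(adversarial_rate, 2))
```

Under these assumed numbers, the adversarial ordering produces a markedly higher error rate per no-go trial than the random one, which is the effect the study's agent learned to achieve.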
The third experiment
It consisted of several rounds in which a participant pretended to be an investor giving money to a trustee (the AI). The AI would then return a sum of money to the participant, who would decide how much to invest in the next round. The game was played in two modes: in one the AI aimed to maximize its own take, and in the other it aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in both modes.
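A minimal sketch of the two modes, assuming the standard trust-game convention that each investment is tripled before the trustee decides how much to return. The two return policies below are hand-written stand-ins chosen for illustration; the study trained its policies with deep reinforcement learning.

```python
def play_trust_game(mode, rounds=10, endowment=10.0):
    """Toy repeated trust game. Each round the investor's stake is tripled
    and the trustee (the AI) decides how much to hand back."""
    ai_total = human_total = 0.0
    invest = endowment
    for _ in range(rounds):
        pot = 3 * invest
        if mode == "fair":
            # Return enough that both sides profit equally this round.
            returned = 2 * invest
        else:  # "selfish"
            # Return just enough profit to keep the investor playing.
            returned = 1.1 * invest
        ai_total += pot - returned
        human_total += returned - invest
        invest = min(endowment, returned)  # simple reinvestment rule
    return ai_total, human_total

fair = play_trust_game("fair")
selfish = play_trust_game("selfish")
print("fair:", fair, "selfish:", selfish)
```

With these policies, the fair mode splits the gains evenly while the selfish mode leaves the trustee with far more than the investor, mirroring the two objectives the study's agent was trained to pursue.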
In each of these experiments, the machine learned from the participants' responses and identified and targeted weaknesses in people's decision making.
The end result was that the machine learned how to guide participants to particular actions.
What the research means for the future of AI
These results are still quite abstract and concern limited situations. More research is needed to determine how this approach can be implemented and used to benefit society.
But the research advances our understanding not only of what AI can do, but also of how people make choices.
It shows that machines can learn to guide human decision making through their interactions with us.
Research on human weaknesses has a wide range of possible applications
Many applications could arise from this study, from enhancing behavioral science and public policy to improve social well-being, to understanding and influencing how people adopt healthy eating habits or take up renewable energy.
Artificial intelligence and machine learning could be used to recognize people's vulnerabilities in certain situations and help them steer away from bad choices.
What's the next step?
Like any technology, AI can be used for good or bad, and proper governance is critical to ensure it is implemented responsibly.
Last year CSIRO developed an AI ethics framework for the Australian Government as a first step on this journey.
Artificial intelligence and machine learning are typically data-hungry, which makes it critical to have effective systems in place for data governance and data access.
Implementing appropriate consent processes and privacy protection during data collection will be essential.