A woman mistakenly identified as a thief by the facial recognition system of a New Zealand supermarket: I'm not surprised. When the chain Foodstuffs North Island announced its intention to trial this technology to fight crime in its stores, technology and privacy experts raised many concerns. In particular, they highlighted the risk of discrimination against Maori women and women of color. Well-founded fears, and a necessary reflection.
The supermarket context of use: beyond the algorithms
Automated facial recognition is often discussed in abstract terms, as pure algorithmic pattern matching, with the emphasis on correctness and accuracy: I still remember the hype around "grab and go" shops with no checkouts at all. These are rightly important priorities for systems that handle biometric and security data. But with so much focus on the outcomes of automated decisions, it's easy to overlook concerns about how those decisions are applied.
Designers use the term "context of use" to describe the everyday working conditions, activities and goals of a product. With facial recognition technology in supermarkets, the context of use goes well beyond traditional design concerns such as ergonomics or usability. It requires considering how automated alerts trigger responses in the store, the protocols for managing those responses, and what happens when things go wrong. These are more than technology or data problems. They are human, social problems.
Balancing accuracy against the impact of errors
Investing in better prediction accuracy seems like an obvious priority for facial recognition systems. But accuracy has to be weighed within the broader context of use, where the harm caused by a small number of incorrect predictions can outweigh marginal performance improvements elsewhere.
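To make that trade-off concrete, here is a minimal sketch of a cost-weighted comparison between two hypothetical systems. Every figure and the `expected_weekly_harm` function are invented purely for illustration; none of it comes from the Foodstuffs trial or any real deployment.

```python
# Hypothetical illustration: weighing headline accuracy against the cost of errors.
# All numbers are invented for the sake of the example, not drawn from the
# Foodstuffs trial or any real deployment.

def expected_weekly_harm(shoppers_per_week, false_positive_rate,
                         harm_per_false_accusation, missed_theft_rate,
                         cost_per_missed_theft):
    """Crude expected-cost model for a face-matching alert system."""
    false_accusations = shoppers_per_week * false_positive_rate
    missed_thefts = shoppers_per_week * missed_theft_rate
    return (false_accusations * harm_per_false_accusation
            + missed_thefts * cost_per_missed_theft)

# System A: slightly worse at catching theft, but far fewer false accusations.
harm_a = expected_weekly_harm(100_000, 0.00001, 50_000, 0.0004, 100)
# System B: better headline accuracy on theft, but more false positives.
harm_b = expected_weekly_harm(100_000, 0.00005, 50_000, 0.0003, 100)

print(f"System A expected weekly harm: {harm_a:,.0f}")   # 54,000
print(f"System B expected weekly harm: {harm_b:,.0f}")   # 253,000
```

The point is simply that when a wrongful accusation is priced far above a missed theft, the system that looks "more accurate" on paper can be the more harmful one in practice.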
The New Zealand supermarket's mistake is just one bead in a string of failures that could strangle this technology. And the company's response that it was a "genuine case of human error" does not address the deeper issues raised by this use of AI and automated systems. Research suggests that human decision makers can inherit biases from AI decisions. In situations of high stress and risk of violence, combining automated facial recognition with impromptu human judgment is potentially dangerous.
Rather than isolating and blaming individual workers or technology components as single points of failure, greater emphasis needs to be placed on fault tolerance across the whole system. AI errors and human errors can never be avoided entirely. AI safety protocols with "humans in the loop" need more careful safeguards that respect customer rights and protect against stereotyping.
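What such a safeguard might look like can be sketched in a few lines. The following is a minimal, hypothetical illustration, not a description of any deployed system: the alert structure, the `REVIEW_THRESHOLD` value and the `handle_alert` function are all assumptions made for the example.

```python
# A minimal sketch of a "human in the loop" safeguard for a store alert pipeline.
# The alert structure, threshold and function names are assumptions made for
# illustration; they do not describe Foodstuffs' or anyone else's actual system.

from dataclasses import dataclass

@dataclass
class MatchAlert:
    match_confidence: float   # score reported by the face-matching component
    camera_id: str

REVIEW_THRESHOLD = 0.99       # below this, the alert never reaches floor staff

def handle_alert(alert: MatchAlert, second_reviewer_confirms) -> str:
    """Decide what happens with an automated match before anyone is approached."""
    # 1. Low-confidence matches are discarded silently: no intervention, no accusation.
    if alert.match_confidence < REVIEW_THRESHOLD:
        return "discard"

    # 2. Even a high-confidence match requires an independent human check
    #    (e.g. a trained reviewer away from the shop floor), so a single staff
    #    member's snap judgment is never the only safeguard.
    if not second_reviewer_confirms(alert):
        return "discard_and_log_for_audit"

    # 3. Only then is a response initiated, following a de-escalation protocol
    #    rather than an on-the-spot confrontation.
    return "notify_manager_per_protocol"

# Example: a 0.97-confidence match is simply dropped, whatever the reviewer says.
print(handle_alert(MatchAlert(0.97, "cam-12"), lambda alert: True))  # -> discard
```

The design choice worth noticing is that the single store worker is never the only line of defence: the system tolerates both an over-confident model and a rushed human judgment.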
Supermarkets: towards a culture of surveillance?
The Australian case is emblematic. In New Zealand, too, supermarkets have responded to retail crime with overt technological surveillance: body cameras issued to staff (now also adopted by the Woolworths chain), digital tracking of customers' movements through stores, automatic trolley locks and exit gates that stop people from leaving without paying.
Supermarkets may just be the vanguard of a technological shift in the shopping experience. A grim shift towards a culture of surveillance in which every customer is monitored as a potential thief. It reminds me of the ways in which airport security has changed worldwide since 9/11.
A human-centered design challenge
New Zealand's Privacy Commissioner will soon rule on Foodstuffs' facial recognition trial. And that pronouncement, believe me (I'm paraphrasing Lorenz), is the classic flap of a butterfly's wings on one side of the world that can cause a tornado on the other.
Theft and violence are urgent problems that supermarkets, like other businesses, must address. But they now have to demonstrate that digital surveillance systems are a more responsible, ethical and effective solution than the possible alternatives. And that means recognizing that the technology requires human-centered design. To avoid abuse, to avoid prejudice. Good heavens, to avoid harm.
If this moment is not seized now to shape regulatory frameworks and standards, to inform public debate on the acceptable uses of AI, and to support the development of safer automated systems, then when? The New Zealand case is a wake-up call for everyone who designs and deploys AI systems in sensitive contexts such as retail. Only by putting the human factor at the center, with all its complexities and nuances, will we be able to develop technologies that genuinely improve our society, without creating new forms of discrimination and surveillance.