UK Police Facial Recognition Software 98% Wrong – Report
Tech, 04:19 14.05.2018
Facial recognition software used by the Metropolitan and South Wales police forces has returned false positives in up to 98 percent of alerts, according to an investigation by the Independent.
In initial tests, the Metropolitan Police's facial recognition system returned 104 alerts, of which only two were confirmed to be positive matches, the Independent noted in its investigation. That puts the system's effectiveness at a deeply disturbing 2 percent.
In South Wales, a similar system has generated over 2,400 alerts across 15 separate deployments since June 2017. Only 234 of those, fewer than 10 percent, were confirmed positive.
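The rates reported above follow directly from the alert counts. As a minimal sketch (using the article's figures, with South Wales rounded to 2,400 alerts), the implied false-positive rates work out as:

```python
# False-positive rates implied by the alert counts reported in the article.
deployments = {
    "Metropolitan Police": {"alerts": 104, "confirmed": 2},
    "South Wales Police": {"alerts": 2400, "confirmed": 234},
}

for force, d in deployments.items():
    false_positives = d["alerts"] - d["confirmed"]
    rate = 100 * false_positives / d["alerts"]
    print(f"{force}: {false_positives} of {d['alerts']} alerts false ({rate:.1f}%)")
```

This reproduces the roughly 98 percent figure for the Metropolitan Police and a rate of just over 90 percent for South Wales.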
The police assert that they do not consider inaccurate matches to be “false positives,” saying that officers check every alert as soon as it occurs. Both forces noted that the system is still in trials.
The poor performance — even in controlled testing stages — has drawn fierce criticism from human rights organizations, concerned citizens and lawmakers.
“I have told both police forces that I consider such trials are only acceptable to fill gaps in knowledge and if the results of the trials are published and externally peer-reviewed. We ought to wait for the final report, but I am not surprised to hear that accuracy rates so far have been low as clearly the technology is not yet fit for use,” said Professor Paul Wiles, the UK's biometrics commissioner.
Wiles stated that the technology must not be deployed before suitable legislation comes into force, and that such legislation should make clear to the public “when their biometrics might be taken and what they might be used for, and that Parliament has decided those rules.”
Wiles noted that the Home Office has promised to publish a biometrics strategy in June; however, the Office acknowledged that it cannot confirm whether or when legislation will be ready.
“It is an intrinsically Orwellian police tool that has resulted in ordinary people being stopped and asked for their ID to prove their innocence,” said Silkie Carlo, director of the Big Brother Watch group.
“It is alarming and utterly reckless that police are using a technology that is almost entirely inaccurate, that they have no legal power for, and that poses a major risk to basic democratic freedoms. It must be dropped.”
Tao Zhang, senior lecturer at Nottingham Trent University, remarked that without open debate concerning facial recognition, the technology “could clearly be exploited by an authoritarian state for purpose of political control, as the case of China illustrates.”
Zhang added that, “with such a rapidly developing technology, there is danger that public policy may not keep pace.”
The system also drew criticism from Tony Porter, the surveillance camera commissioner. According to the Independent, he said he was concerned about the number of false positives the system produced.
“The cause of concern goes right across the whole use of a surveillance camera,” Porter noted, adding, “I’ve got concerns about the quality of the technology.
“That they could be discriminatory against race, sexual orientation and even age causes me concern,” he added, though he did not elaborate on how a surveillance camera could identify sexual orientation.
The Metropolitan Police told the newspaper that no arrests have been made using the technology. Images that generate false positives are deleted 30 days after capture, while images that generate no alert, according to the report, are deleted “immediately.”
A South Wales Police spokesperson observed that, “The use of automated facial recognition (AFR) in South Wales Police has to be for a policing purpose and all enquiries have to be deemed as being proportionate for the intended purpose.”
“The deployment and use of AFR is governed by the Protection of Freedoms Act 2012 with oversight provided by the surveillance camera commissioner,” she said, adding that the system is still in its early stages and will see a “robust governance structure around its use with bi-monthly project boards chaired by chief officers.”