In recent news, the Italian region of South Tyrol (Alto Adige) has initiated a DNA profiling program to identify the owners of dogs responsible for leaving excrement, as well as the owners of stray dogs. While there have been challenges in convincing owners to submit samples, the program is set to launch this year. Local authorities plan to send these samples to a regional government agency. The applicability of this initiative beyond the region may be questioned, but it raises the question of possible future inter-regional or even national interoperability.
“In this way we intend to store the DNA analysis data in the central database so that, by means of specific DNA tests, we are able to identify those responsible for any excrement as well as the owners of stray dogs.” This is how provincial councillor Arnold Schuler explained the rationale for the measure, which he presented today (31 August) to the provincial government; it will be voted on at next week’s session. Local authorities, public bodies and law enforcement will thus be able to submit biological samples to the competent laboratories for genetic profiling, and then ask the Veterinary Service of the South Tyrol Health Authority to correlate the data with those recorded in the database of the companion-animal registry. The correlation of the data serves the exercise of institutional functions and may be requested exclusively by the bodies indicated. (source)
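To make the data flow described in the decree a bit more concrete — a lab produces a genetic profile from a collected sample, which is then correlated with profiles held alongside the companion-animal registry — here is a minimal, purely hypothetical Python sketch. The names (RegistryEntry, match_sample), the marker sets and the overlap score are illustrative assumptions and greatly simplify real forensic DNA matching.

```python
# Hypothetical illustration only: a lab-produced profile is checked against
# profiles stored with the pet registry. Real forensic DNA matching relies on
# STR marker statistics, not simple set overlap.
from dataclasses import dataclass


@dataclass
class RegistryEntry:
    dog_id: str              # registry / microchip number
    owner: str               # registered owner
    dna_profile: frozenset   # toy stand-in for a genetic profile


def match_sample(sample_profile, registry, min_overlap=0.9):
    """Return registry entries whose stored profile overlaps the sample above a threshold."""
    matches = []
    for entry in registry:
        overlap = len(sample_profile & entry.dna_profile) / len(sample_profile)
        if overlap >= min_overlap:
            matches.append(entry)
    return matches


registry = [
    RegistryEntry("IT-BZ-0001", "Owner A", frozenset({"m1", "m2", "m3", "m4"})),
    RegistryEntry("IT-BZ-0002", "Owner B", frozenset({"m5", "m6", "m7", "m8"})),
]
sample = frozenset({"m1", "m2", "m3", "m4"})  # profile from a collected sample
print([entry.owner for entry in match_sample(sample, registry)])  # ['Owner A']
```

In such a scheme, the decree’s restriction that correlation may only be requested by the listed bodies would sit outside the matching step itself, as an access-control layer around the central database.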
A company in the US calls such technology PooPrints and claims it “adheres to FBI protocol”…
Via Gary Marx on the surveillance studies mailing list.
An interesting report on the deployments of biometric and behavioural mass surveillance in EU Member States was recently published. The report was commissioned by the Greens/EFA in the European Parliament.
The online report furthermore includes a clever visualization of a network showing connections between Member State authorities and their public and private partners.
Two interesting articles from MIT Technology Review:
Facebook/Meta is shutting down its facial recognition system. They explain their choice in this blog post.
But the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole. There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.
The Guardian published this great piece of investigative journalism on the funding of research on security technologies through EU-funded research programmes (such as Horizon 2020), and the involvement of industry. The following is an excerpt about the way the funded research topics are framed:
“Often the problem is that the topic itself is unethical,” said Gemma Galdon Clavell, an independent tech ethicist who has evaluated many Horizon 2020 security research projects and worked as a partner on more than a dozen. “Some topics encourage partners to develop biometric tech that can work from afar, and so consent is not possible – this is what concerns me.” One project aiming to develop such technology refers to it as “unobtrusive person identification” that can be used on people as they cross borders. “If we’re talking about developing technology that people don’t know is being used,” said Galdon Clavell, “how can you make that ethical?”
Abacus, a unit of the South China Morning Post, published an article on how facial recognition technology is causing issues in China as people wear masks as a preventive measure against the new coronavirus:
For hundreds of millions of people in China, the spread of the new coronavirus has caused abrupt changes to the smallest of habits – even a gesture that most in the country are used to by now: Looking into the camera for facial recognition.
Residents donning surgical face masks while venturing outside their homes or meeting strangers have found themselves in an unfamiliar conundrum. With their faces half-covered, some are unable to unlock their phones or use mobile payments with their faces.
Read the full article from Abacus News.
Claire Walkey from the Oxford Refugee Studies Centre writes about why we should rethink refugee registration. Registration is the first moment when asylum-seekers become known to the state, so we might assume that states will always want to implement registration procedures to monitor people. But her fieldwork in Kenya shows that this is not always the case: there, the government actually stopped registration procedures. According to her, we therefore need to “look for answers in the meaning and politics of registration itself”, as registration can be a form of empowerment for refugees:
Acknowledging the legal recognition that registration can offer refugees sheds light on why states, resistant to hosting refugees, might choose not to register them. It is too easy to assume, given security practices especially in the West, that states will always pursue bureaucratic surveillance and monitoring of refugees. In Kenya, it is politically prudent to resist legal recognition, even at the expense of bureaucratic surveillance. The promotion of registration by the international community may therefore find little traction by focusing purely on the security gain for states, particularly in contexts with weak administrative infrastructures. It is prudent instead to rethink registration and recognize that at times it offers more to refugees than to states.
According to this article by The Intercept, some prisons in the U.S. are capturing incarcerated people’s voices to create new biometric databases of their “voice prints”. It seems like another example of the deployment of new technology, with the involvement of private companies, on vulnerable groups of people, with all the usual problems of biometrics (e.g. reliability) and automated decisions (e.g. transparency, explainability).
The enrollment of incarcerated people’s voice prints allows corrections authorities to biometrically identify all prisoners’ voices on prison calls, and find past prison calls in which the same voice prints are detected. Such systems can also automatically flag “suspicious” calls, enabling investigators to review discrepancies between the incarcerated person’s ID for the call and the voice print detected. Securus did not respond to a request for comment on how it defined “suspicious.” The company’s Investigator Pro also provides a voice probability score, rating the likelihood that an incarcerated person’s voice was heard on a call.
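To make the flagging logic described above concrete, here is a minimal, purely hypothetical Python sketch: a call’s audio is compared with the voice print enrolled for the claimed prisoner ID, and the call is flagged when the similarity falls below a threshold. The toy embeddings, the cosine comparison, the threshold and all names are assumptions for illustration; they do not reflect how Securus’s Investigator Pro actually works.

```python
# Hypothetical sketch of discrepancy flagging: compare the voice detected on a
# call with the print enrolled for the claimed prisoner ID, and mark the call
# "suspicious" when they diverge. Real systems use speaker-embedding models,
# not this toy cosine comparison.
import math

enrolled_prints: dict[str, list[float]] = {
    "prisoner_001": [0.9, 0.1, 0.3],   # toy voice embeddings
    "prisoner_002": [0.2, 0.8, 0.5],
}


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


def score_call(claimed_id, call_embedding, threshold=0.8):
    """Return a probability-style score for the claimed ID and flag a mismatch."""
    score = cosine_similarity(enrolled_prints[claimed_id], call_embedding)
    return {
        "claimed_id": claimed_id,
        "voice_probability": round(score, 3),
        "suspicious": score < threshold,
    }


# A call placed under prisoner_001's ID whose audio resembles prisoner_002:
print(score_call("prisoner_001", [0.25, 0.75, 0.5]))
# -> {'claimed_id': 'prisoner_001', 'voice_probability': 0.504, 'suspicious': True}
```

Even in this toy version, the concerns mentioned above are visible: the “suspicious” flag hinges entirely on an opaque threshold, and the resulting score is an automated judgement handed to investigators.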
This article from The Guardian presents an interesting case on whether a company can fire a worker for refusing to use biometrics, in this case for clocking in and out.
The Refugee Studies Centre (@refugeestudies) published two podcasts from the workshop on biometric refugee registration, which I attended as part of our Processing Citizenship project.