There are no good uses for facial recognition

My likeness doesn’t belong to anyone but me — stop turning it into data

Georgia Iacovou
5 min read · Jun 19, 2020

Picture this: you’re a delivery person going door to door dropping off packages. You’re overworked and underpaid. You arrive at a house and ring the doorbell. You wait patiently for someone to answer. Everything is normal. Oh, except: you’ve just been profiled on a facial recognition database owned by Amazon. Wow, what a time to be alive.

The scenario above has been made possible by Ring, an Amazon-owned company who make smart doorbells for Karens. These doorbells are powered by Rekognition, Amazon’s facial recognition technology. Which works very well, actually (at automating the already heavily embedded systemic racism that exists in our society). Amazon have been working with US police departments to help them ‘solve crimes and protect people’ (read: create a private surveillance network and cultivate a culture of paranoia). They’ve trained police to recommend the doorbells to people who don’t have them already (why isn’t that illegal?) and to convince existing Ring customers to hand over their footage without a warrant (ah, now that one IS illegal…).

But don’t worry, Amazon have banned Rekognition from being used by law enforcement. Well, this is impeccable timing, isn’t it? They’ve realised that their shoddy, biased technology should not be in the hands of law enforcement — right in the middle of the Black Lives Matter movement. A recent study found that Rekognition misclassifies the gender of darker-skinned women 31 percent of the time. That’s even less accurate than similar tech built by Microsoft, who are also jumping on this virtue-signalling bandwagon. None of these tech giants seemed to care until the potential positive PR started to outweigh those hefty government contracts.

They’ve realised that their shoddy, biased technology should not be in the hands of law enforcement — right in the middle of the Black Lives Matter movement

Of course, it’s not really a ban, is it? It’s just a one year pause to wait for government regulation to catch up: “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested”. Haha, okay… and I hope you understand the following:

🙅‍♀️ No, Amazon, you will not be ‘helping’ to regulate your own putrid technology — your penchant for conflicts of interest is becoming a problem.

🙅‍♀️ The absolute gall of you, thinking that the US Congress will only need a year to figure out how to ethically implement facial recognition technology into law enforcement. I think it’s more like you think this whole BLM thing will just go away in a year and you won’t have to worry about protecting your reputation anymore…

🙅‍♀️ Credit where credit is due: Congress will probably be extremely motivated to sort this out, seeing as just a couple of years ago Rekognition falsely matched 28 of its members with faces on a database of criminal mugshots. Hopefully that really lit a fire under them, well done 👏.

So Amazon, Microsoft, et al. have left some shoes to fill…

Those two weren’t the only ones hoovering up government money via their ‘innovations’ in AI. Now that they’ve taken a (brief) step away from law enforcement, they’ve created a vacuum into which other ethically dubious technology will be sucked.

One of my favourites is of course Clearview AI, whose website now boasts how law enforcement use their technology to “catch criminals at-large”, and “exonerate the innocent”. This all reads like a mission briefing for the Justice League.

What Clearview AI are cleverly omitting from their messaging is what their technology actually does: it continually scrapes the web (including social media) for images of faces, and compiles databases out of these. Do they ask for permission before putting a face on a database? Well, no… you could very easily be on it. Do a subject access request on them to find out; this journalist did, and was appropriately disturbed by what she got back (I’m still waiting to hear back from mine — they take a month to respond).

All the creepy intrusiveness of an online stalker, with the speed and precision of a machine!

So Clearview AI don’t just magically find faces of criminals — they simply document every single face they can find on the web, you know, just in case they’ve done something wrong. All the creepy intrusiveness of an online stalker, with the speed and precision of a machine! Sticking the faces of unknowing, innocent people onto a searchable database is not a clever law-enforcement tool — it’s just plain, good old-fashioned surveillance.
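For the curious, the core trick behind a “searchable database of faces” is nothing magical: each face photo is turned into a vector of numbers (an “embedding”) by a neural network, and a query face is matched by finding the nearest stored vector. Here is a deliberately toy Python sketch of that matching step (this is not Clearview’s actual code; the database entries, names and numbers are all made up for illustration):

```python
import numpy as np

# Toy "face database": each entry maps a scraped identity to an
# embedding vector. Real systems produce these with a neural network;
# these numbers are invented purely for illustration.
database = {
    "scraped_from_instagram_01": np.array([0.9, 0.1, 0.3]),
    "scraped_from_linkedin_07": np.array([0.2, 0.8, 0.5]),
    "scraped_from_facebook_42": np.array([0.4, 0.4, 0.9]),
}

def best_match(query, db, threshold=0.95):
    """Return the identity whose embedding is most similar to the
    query (cosine similarity), or None if nothing clears the threshold."""
    best_name, best_score = None, -1.0
    for name, vec in db.items():
        score = np.dot(query, vec) / (np.linalg.norm(query) * np.linalg.norm(vec))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# A query face very close to a stored one gets matched to that identity;
# an unfamiliar face falls below the threshold and returns None.
print(best_match(np.array([0.88, 0.12, 0.28]), database))
print(best_match(np.array([-0.5, 0.1, 0.1]), database))
```

The unsettling part is how little is needed: once your face is in the database, matching you is a few lines of arithmetic, which is exactly why consent at the scraping stage matters.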

As if that wasn’t enough, here’s another company you’ve probably never heard of, and who could have your face on file: NEC have been working with police in London since January to gear them up with wearable, real-time facial recognition cameras. This is less than a year after Axon, the body-camera maker, actually banned the use of facial recognition on their police body cameras, because their ethics board reported that “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras”. Yes, I’m as shocked as you — this tech company actually listened to their ethics board.

Wait, there’s more: what about Quividi, who like to scan your face while you shop — yes, that’s right, this particular intrusion isn’t even to protect you from evil criminals… it’s to help you buy more products! Quividi make smart billboards that look at you and identify key things about you, like your age, your gender (with 90% accuracy, apparently), and even your mood. Why do they need this information, you ask? The answer is quite obvious, and it lies in this emoji: 🤑

☝️So just to conclude: while facial recognition is somewhat useful when unlocking your phone (I mean… I guess?), this tiny blip of convenience is massively outweighed by how private companies spy on delivery people, put your face on a database without your knowledge, and track your mood while you walk around Westfields looking for the toilet. What’s more, even if this technology is being used with the best of intentions, it’s unreliable, sexist, and racist. And that is why, in 2020, there are still no good uses for facial recognition.


Georgia Iacovou

Writing about data privacy and tech ethics; dismantling what Big Tech firms are doing in a way that’s easy to understand