The new wave of virtue signalling in the wake of world-wide protests

From Amazon’s facial recognition moratorium to the rejection of Chinese apps: why are Big Tech and nation states alike withholding services?

Georgia Iacovou
7 min read · Aug 28, 2020

This piece was originally published in Issue 2 of GDFC Mag, a magazine run by Goal Diggers FC, a London-based football club for women and non-binary folk.

Protests at the US Consulate in Hong Kong, 2019. Unsplash

On the 17th of June 2020, aljazeera.com reported that 20 Indian soldiers had been killed in a violent skirmish with China, part of an ongoing border dispute. India chose to retaliate digitally, by banning 59 Chinese apps, including TikTok and WeChat, from Indian app stores. In its announcement, the government stated that the apps were banned because they were “engaged in activities … prejudicial to [the] sovereignty and integrity of India.”

That makes sense — apps like TikTok didn’t get where they are today by not intrusively harvesting user data for profit. However, I feel that the more likely desired effect of this move was not to protect Indian citizens from underhanded mobile apps, but to hurt China’s economy. India was TikTok’s largest overseas market, after all. Not only was this an effective move, but a relatively easy one: why bother the UN with a cumbersome, old-fashioned economic sanction when you could just ask Apple and Google to exclude a few dozen apps from your country’s stores? Why indeed! Apple and Google were fine with this.

But wait, important side-note: according to Indian law, this decision did not need to be made public. So, perhaps they were also doing it… out of protest? Banning these apps certainly doesn’t help India either; this could put Indian software engineers and content creators out of work (you can make a lot of money doing TikTok dances). Furthermore, are we to assume that it will be easy to create and maintain domestic alternatives to all 59 Chinese apps?

This is not India’s first sojourn into a nationwide rejection of digital services. In 2016, following protests across the country, India banned Facebook’s Free Basics, an app that acts as a ‘free’ gateway to a small collection of websites and services. The upside to Free Basics was that you could do a few essential things online without having to pay for mobile data. The downside was that every request went via Facebook’s servers, making the company privy to every inch of your browsing — in other words, the app was an intrusive window into the lives of people who cannot afford mobile internet (lest we forget that privacy is a privilege).

Those protesting against the app called it ‘digital colonialism’. This was the first time a Facebook product was rejected by an entire nation. Zuckerberg’s response: “some internet is better than no internet”. Nope, India has higher standards than that, actually. Maybe Facebook should try something more along the lines of what Google are doing: they recently invested $4.5bn in Jio, India’s largest ISP. It’s things like this that may mark the beginning of the ‘Splinternet’: a future where the World Wide Web is no longer a thing, and the internet is regulated country by country. Ew…

Now, this bout of anti-China sentiment isn’t only coming from India: the likes of Facebook, Google, and Twitter are currently refusing to hand user data over to law enforcement in Hong Kong. Why are they doing this now, when they’ve never had a problem with it before? Two reasons:

REASON ONE: on the 30th of June, Beijing passed a horrid new security law in Hong Kong. It has everything you’d expect from an authoritarian government: broad language, vague definitions, and the very real threat of life imprisonment (e.g. damaging public transport is now considered an act of terrorism, and terrorists can get a life sentence). In other words, the law gives the authorities in Hong Kong new powers to essentially stamp out the protests that have been going on for a year now.

REASON TWO: In the wake of this new law, Hong Kong citizens have been either ‘cleaning up’ or simply deleting their social media accounts for fear of being surveilled. A great way to put a stop to this terrifying account-purge is for social media giants to simply cease their cooperation with law enforcement — thus cementing the notion that the security of our human rights is at the mercy of Big Tech’s business decisions. But look, it’s fine: this time, protecting human rights and protecting profit margins are one and the same. Honestly, at this stage, this is just good PR; nobody likes social media anymore… they need to do something to get us back on side.

Right, let’s not get ahead of ourselves. We have new and horrendous surveillance measures in the West too — coupled gruesomely with the Black Lives Matter movement. Imagine (or… experience first-hand) a racist police officer wading through a crowd of protestors. Now, add to that a body-cam powered by biased facial recognition technology.

Police at BLM protests in Portland, Oregon. Unsplash

Currently, there seems to be little way around this: if a machine is trained on biased datasets put together by a biased society, then it too will be biased. Ah yes, all the implications of traditional human bias, automated and amplified with the cold, indifferent speed of a machine.
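
To make that point concrete, here’s a minimal, purely hypothetical sketch in Python (plain scikit-learn, nothing to do with Rekognition or any real face recognition system): train a classifier on data where one group is barely represented, and the error rate for that group balloons.

```python
# Toy demonstration: a model trained mostly on one group performs far
# worse on the under-represented group. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic features; the 'true' label depends on the first one,
    # but the feature distribution is shifted differently for each group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B barely appears in it.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(100, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh, equal-sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name}: error rate {1 - model.score(X_test, y_test):.0%}")
```

Same model, same code; the only thing that changed is who the training data was collected from.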

We’ve seen a number of responses to this problem recently. In 2019, a company called Axon actually listened to their AI ethics board, and decided not to put facial recognition technology into their body cameras. The reason: “Face recognition technology is not currently reliable enough to ethically justify its use on body-worn cameras.” In the same year, San Francisco became the first major city in the US to ban the use of the technology by police. In June of this year, Boston did the same.

One of the biggest players in the facial recognition game is of course Amazon. Their technology, called Rekognition, has been widely used by the police, and in consumer products. In some cases… both at once. Perhaps you’ve heard of Ring, an Amazon-owned smart doorbell that employs Rekognition so it can tell you whether the person at your door is suspicious (yep, sounds awful already).

Last year it was revealed that Ring had been working closely with police departments across the US in order to train their officers to become better salespeople: Ring provided police with scripts to help them sell Ring doorbells, and convince existing doorbell owners to hand footage over without a warrant. Wow, what a grisly blurring of law enforcement and underhanded advertising techniques.

Rekognition is of course grossly inaccurate, and the inaccuracies disproportionately land on people of colour. A damning study from 2018 shows that Rekognition misclassifies darker-skinned women as men 31% of the time. The technology has even been tested on the US Congress: the ACLU scanned every member’s face against a database of mugshots, and 28 members were falsely matched. Oh, and yes, a disproportionate number of them were people of colour.

On the 10th of June, Amazon announced a one year moratorium on the use of Rekognition by governments, saying “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested”. We’ve known for years now that their technology — and their practices in implementing it — have had a lot of serious problems, so why do this now? Perhaps it has something to do with the fact that Black Lives Matter has gained a promising amount of momentum this summer? How useful, then, that the moratorium is only a year long; just enough time for the mainstream media to forget all about this movement.

When it comes to responding to protests, Amazon are just doing what any other big tech firm would do (and has already done). They know full well that Congress can’t figure this whole thing out in just one year. As is routine with Big Tech, they have simply spat out a piece of harmful, unchecked technology into the world, and retracted it only when the PR gains outweigh the monetary ones.

What’s become clear over the course of the recent protests is that — like the pandemic — protests for basic human rights are just another crisis for technology companies to leverage. They throw money at marketing stunts disguised as acts of solidarity. Meanwhile, their bloated lobbying arms perpetuate all of the long-standing systemic issues that we are currently protesting against. India’s stand against China at least has a reason behind it; Big Tech’s response to protests is no better than the police officers who make a show of marching in support, only to make arrests later when the cameras are off.

Georgia Iacovou

Writing about data privacy and tech ethics; dismantling what Big Tech firms are doing in a way that’s easy to understand