Toronto Cops Admit To Using Controversial Clearview AI Software… Why Everyone Needs To Care About This

Global News is reporting that the Toronto Police Service has been using Clearview AI. This is a controversial facial recognition tool. It uses images scraped from social media and other websites to cross-reference uploaded images of people, and the system reportedly has three billion photos in its database. This apparently had been going on for months before the chief of police in Toronto ordered it to stop. Which of course implies that this was some sort of “black ops” effort done behind the chief’s back. Which in itself is a problem. On top of that, the police have asked the Information and Privacy Commissioner to review Clearview AI. Finally, this report contradicts what the police force has said in the past, which is that it does use facial recognition software, but not software from Clearview AI.
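To make the “cross-reference” part concrete, here is a minimal, purely illustrative sketch of how this sort of matching generally works. It is not Clearview AI’s code or method; the embed_face and cross_reference functions, the similarity threshold, and the example URLs are all assumptions made up for the example. The general idea is that a trained neural network turns each face into a numeric vector (an “embedding”), and a probe photo is then matched against a database of those vectors.

```python
# Purely illustrative sketch: NOT Clearview AI's code. It shows the general
# idea behind cross-referencing an uploaded ("probe") photo against a
# database of embeddings built from scraped photos.

import numpy as np

EMBEDDING_DIM = 128  # a common size for face embeddings; assumed here


def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding model.

    A real system would run a neural network here. This sketch just derives
    a deterministic pseudo-random vector from the pixels so the example runs.
    """
    rng = np.random.default_rng(int(image_pixels.sum()) % (2**32))
    vec = rng.normal(size=EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)  # unit length, so dot product = cosine similarity


# Fake "database" of (embedding, source URL) pairs standing in for the
# reported three billion scraped photos. The URLs are made up.
database = [
    (embed_face(np.full((8, 8), i)), f"https://example.com/profile/{i}")
    for i in range(5)
]


def cross_reference(probe_image: np.ndarray, threshold: float = 0.8):
    """Return the source URLs whose stored embedding is close to the probe."""
    probe = embed_face(probe_image)
    matches = []
    for stored_vec, source_url in database:
        similarity = float(np.dot(probe, stored_vec))  # cosine similarity
        if similarity >= threshold:
            matches.append((similarity, source_url))
    return sorted(matches, reverse=True)


# A probe photo identical to database entry 3 comes back as a match.
print(cross_reference(np.full((8, 8), 3)))
```

The point of the sketch is simple: once your photos are in a database like this, a single uploaded image is enough to pull up where those photos came from. Which is exactly why the scraping described below matters.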

Now the fact that the Toronto Police Service was using this tool isn’t a surprise. But it is a huge problem. The New York Times has pointed out the dangers of software like this:

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Now that’s a huge problem. But the other problem with Clearview AI is that it basically scrapes its database of photos from social media, often violating the terms of service of the platforms that it takes those photos from. Think about that for a second. Your photos may have been uploaded to this system without your permission, even if your social media profile is set to private. That’s something you should be a wee bit ticked off about. Which is why social media companies have gone after Clearview AI. In fact, here’s a list of what the company has been caught doing:

  • The company reportedly lifted pictures from Twitter, Facebook, Venmo and millions of other websites over the past few years. Twitter recently sent Clearview AI a cease-and-desist letter after the company got caught, claiming that Clearview AI’s actions violated Twitter’s policies and demanding that it stop lifting images from the platform immediately.
  • Google and YouTube made similar claims in a cease-and-desist letter that they served up to Clearview AI.

But based on this, I don’t think that the company cares what social media companies think of it scraping their platforms for photos. Here is a CBS News interview with Clearview CEO Hoan Ton-That, who clearly thinks, or wants you to think, that he is doing nothing wrong:

Clearly, he has a Travis Kalanick attitude to life. Which is “I’m going to do what I want, so screw you and the horse you rode in on if you have a problem with that.” But creating a massive database of pictures so that law enforcement can use it any way they wish is problematic. And what happens if someone is misidentified by this system? It may take years, if it is even possible, to clear that person’s name from any association with a crime. And just think what would happen if a law enforcement agency, or worse yet a government, wanted to go after a person or a group via this piece of software. Just thinking about it sends chills down my spine, as it sounds like something out of 1984.

That’s why we all should have a problem with what the Toronto Police Service was doing with this tool. It sends society down a very dangerous path that there may be no way back from. Thus they need to be held to account for using this tool. But we should also be bothered by Clearview AI, which has the vibe of Uber, but with something that is far more dangerous. People need to make sure that politicians and those in positions to potentially use a piece of software like this know that this is unacceptable. I am all for any tool, software or otherwise, that can help to catch the bad guys and put them in jail. But violating people’s privacy on a massive scale to do it isn’t something that should exist in a free and democratic society. At least I think so. And if you value being in a free and democratic society, so should you.
