RCMP Cops To Using Clearview AI Facial Recognition Tech
You might recall that I posted a story on Toronto Police being ordered by their chief to stop using facial recognition tech from a controversial company named Clearview AI. It's a company that has built a massive database of faces scraped from places like Facebook and Twitter, violating those platforms' terms of service in the process. Well, it now turns out that the RCMP has been using this software too, according to the CBC:
The RCMP acknowledged use of Clearview AI’s facial recognition technology in a statement Thursday, detailing its use to rescue children from abuse.
The force said it has used the technology in 15 child exploitation investigations over the past four months, resulting in the identification and rescue of two children.
The statement also mentioned that “a few units in the RCMP” are also using it to “enhance criminal investigations,” without providing detail about how widely and where.
“While the RCMP generally does not disclose specific tools and technologies used in the course of its investigations, in the interest of transparency, we can confirm that we recently started to use and explore Clearview AI’s facial recognition technology in a limited capacity,” the statement said.
“We are also aware of limited use of Clearview AI on a trial basis by a few units in the RCMP to determine its utility to enhance criminal investigations.”
CBC News has requested further details of where else the force is using Clearview AI, but has yet to receive a response.
Fortunately, some scrutiny appears to be coming to this issue. The Privacy Commissioner is investigating the company, and the House of Commons Standing Committee on Access to Information, Privacy and Ethics is going to have a look as well. Seeing as Canada has some of the strictest privacy laws on the planet, with the exception of the EU of course, there's a real chance that the use of this software could be curtailed. Given that the company behind it appears to have stolen the images in its database, that software like this tends to misidentify racialized groups with alarming frequency, and that it represents a significant invasion of your privacy, that would be a good thing.
This entry was posted on February 28, 2020 at 9:13 am and is filed under Commentary with tags Privacy.