
FOI Documents Confirm RCMP Falsely Denied Using Facial Recognition Software

Its contract with Clearview AI started in October, but the force was still denying using the controversial technology three months later.

Bryan Carney, 16 Apr 2020, TheTyee.ca

Bryan Carney is director of web production at The Tyee and reports on technology and privacy issues. You can follow his very occasional tweets at @bpcarney.

The RCMP denied using facial recognition software on Canadians three months after it had entered into a contract with controversial U.S. company Clearview AI, The Tyee has learned.

Documents obtained under a Freedom of Information request show an RCMP employee signed a “Requisition for goods, services and construction” form to fund a one-year contract with Clearview AI that began Oct. 29. The documents also include an invoice from Clearview AI signed by the RCMP Nov. 26.

The RCMP refused to say whether it used Clearview AI when asked by The Tyee in January 2020.

And the force went further in an emailed statement responding to questions from the CBC, denying that it used any facial recognition software.

“The RCMP does not currently use facial recognition software,” it said on Jan. 17. “However, we are aware that some municipal police services in Canada are using it.”

In fact, the RCMP’s $5,000 contract with Clearview had begun almost three months earlier.

The FOI documents show the RCMP justified the request based on the software’s successful use by U.S. police agencies.

“Clearview is a facial recognition tool that is currently being used by the child exploitation units at the FBI and Department of Homeland Security because of it’s [sic] advanced abilities,” the employee wrote.

If the request was not approved, the form stated, “Children will continue to be abused and exploited online.”

“There will be no one to rescue them because the tool that could have been deployed to save them was not deemed important enough.”

The form also said that the RCMP would share information obtained using the software with internet child exploitation units in police forces across Canada.

A New York Times report in January headlined “The secretive company that might end privacy as we know it” focused attention on Clearview AI.

It reported the company claimed to have a database of billions of photos scraped from social media. More than 600 police forces — and some private companies — were already using the technology, which allowed them to upload a photo and see any matching images on the web, along with links to where they appeared, the company said.

The article also noted the risks of privacy invasion, abuse, and false identifications and arrests.

The RCMP revealed its use of the software after Clearview AI was hacked on Feb. 27 and its client list leaked.

It acknowledged officers had been using the software for “approximately four months.”

And despite the requisition’s suggestion that use was limited to child exploitation cases, the RCMP confirmed three other units were using the software.

No information on the other units was provided under the Access to Information request.

Clearview AI isn’t the RCMP’s only case of conflicting information around the use of facial recognition technology.

The RCMP told The Tyee in July 2019 that the use of facial recognition technology would require approvals from its national headquarters, and that no such request had been made.

However, The Tyee later confirmed the RCMP had been using facial recognition technology for 18 years. [Tyee]
