Archive for Privacy

Google Home Speakers May Have Been Recording Sounds Without Your Permission

Posted in Commentary on August 10, 2020 by itnerd

Your Google Home speaker may have been quietly recording sounds around your house without your permission or authorization, it was revealed this week.

A Google spokesperson told Protocol that the feature was accidentally enabled for some users through a recent software update and has since been rolled back. But in light of Monday’s news that Google invested $450 million — acquiring a 6.6% stake — in home security provider ADT, it may be a sign of things to come for Google, as it hints at the company’s secret home security superpower: millions of smart speakers already in people’s homes.

There have been a few people on Reddit who have discovered this, and frankly it bothers me. And it should bother you. I can see how this could be a powerful feature. But I can also see how it could become a privacy nightmare. Google really needs to come clean about this and give users of these smart speakers the information and controls they need to protect their privacy.

Pizza Pizza Gave Cops Customer’s Personal Info Without A Warrant….. Wow

Posted in Commentary on July 4, 2020 by itnerd

One of the biggest pizza operations in Canada is Pizza Pizza. But they seem to have dropped themselves into it big time, as the Toronto Star has discovered. According to the Star, Toronto Police, while investigating gangs in the city, trawled customer information in Pizza Pizza’s databases without getting a warrant first. And Pizza Pizza simply complied:

Amid a major criminal investigation that announced dozens of arrests last year, the popular pizza chain voluntarily searched its internal data and handed over customers’ personal information to Toronto police investigators, the Star has learned. 

Officers used the technique in Project Kraken, an investigation into guns, gangs and drugs that resulted in more than 70 arrests last June. Seven of those charged were tow truck operators, police said at the time. The accused are awaiting trial.

During the investigation, Toronto police obtained telephone numbers from phone intercepts. Officers then took those numbers to Pizza Pizza to get any matching customers’ names.

None of the accused were identified using the technique. Rather, it was used to identify people associated with targets of the investigation. It is not clear how many people were identified. 

Two sources connected to the case told the Star police used the technique. The Star is not identifying the sources because they can’t publicly speak about material that is part of disclosure.

Well, that is something that doesn’t sit well with me. And I am sure that if you have used Pizza Pizza it doesn’t sit well with you either. The cops won’t say anything about this. But Pizza Pizza says this:

The Star also asked Pizza Pizza for comment. In an emailed response, the company said it is “committed to protecting the Personal Information provided by customers. Our Privacy Policy details how we protect and use customer data in the fulfilment of customer orders.”

Pizza Pizza’s privacy policy states the company “reserves the right to access and/or disclose Personal Information where required to comply with applicable laws or lawful government requests.”

Customers, the policy states, consent with every order made to the “collection, use and disclosure of your information by Pizza Pizza in accordance with” its policy terms.

So, what that tells me is the following:

  1. If you use Pizza Pizza’s online services to order a pizza for pickup or delivery, then your personal information could be handed over to someone in law enforcement or government.
  2. Pizza Pizza clearly doesn’t require a warrant to hand that info over.

I have the Pizza Pizza app on my iPhone, and this is enough for me to delete it and not use their services again. Now Pizza Pizza may not care about that. But they will care about this:

In response to Star queries, a spokesperson for the Office of the Privacy Commissioner of Canada said the office could not comment on this specific scenario involving Pizza Pizza because it had not examined it “in detail.”

I read that as a sign that this might get a serious look from the federal Privacy Commissioner. And I hope it does, as this behavior by Pizza Pizza is simply unacceptable.

TikTok Doesn’t Belong On Your Phone Because It Is A Privacy & Security Nightmare Says Security Researcher

Posted in Commentary on July 3, 2020 by itnerd

According to a security researcher who posted to Reddit, TikTok is one app that, if you value your privacy and security, you need to delete ASAP. Here’s why:

TikTok is a data collection service that is thinly-veiled as a social network. If there is an API to get information on you, your contacts, or your device… well, they’re using it.

  • Phone hardware (cpu type, number of cores, hardware ids, screen dimensions, dpi, memory usage, disk space, etc)
  • Other apps you have installed (I’ve even seen some I’ve deleted show up in their analytics payload – maybe using as cached value?)
  • Everything network-related (ip, local ip, router mac, your mac, wifi access point name)
  • Whether or not you’re rooted/jailbroken
  • Some variants of the app had GPS pinging enabled at the time, roughly once every 30 seconds – this is enabled by default if you ever location-tag a post IIRC
  • They set up a local proxy server on your device for “transcoding media”, but that can be abused very easily as it has zero authentication

The stuff that I’ve listed above is pretty bad. But it gets worse:

Here’s the thing though.. they don’t want you to know how much information they’re collecting on you, and the security implications of all of that data in one place, en masse, are f**king huge. They encrypt all of the analytics requests with an algorithm that changes with every update (at the very least the keys change) just so you can’t see what they’re doing. They also made it so you cannot use the app at all if you block communication to their analytics host off at the DNS-level.

For what it’s worth I’ve reversed the Instagram, Facebook, Reddit, and Twitter apps. They don’t collect anywhere near the same amount of data that TikTok does, and they sure as hell aren’t outright trying to hide exactly what’s being sent like TikTok is. It’s like comparing a cup of water to the ocean – they just don’t compare.

This is just downright scary. And this Reddit thread is gaining attention. Security company Zimperium had its own look at TikTok and says it’s a security risk. Anonymous has said to “delete this Chinese spyware now.” The Pentagon advises that TikTok should be deleted from phones, something that the US Army has taken heed of. And while this likely has more to do with a border dispute between China and India, the latter has banned a pile of Chinese apps, including TikTok.

The point is that it’s pretty clear that TikTok is a security risk of epic proportions. If you value your security, I would read the Reddit thread and then make your own decision as to whether TikTok deserves a place on your smartphone. Or on your kids’ smartphones, for that matter.

Canada Announces National Contact Tracing App…. What Are The Security And Privacy Concerns?

Posted in Commentary on June 19, 2020 by itnerd

Yesterday Prime Minister Justin Trudeau announced that the federal government will begin testing a “completely voluntary” contact tracing app that can be used nationwide. You can get more details here. Ever since that announcement, concerns around security and privacy controls have been top of mind. David Masson, Director of Enterprise Security for Darktrace, shared with me his security concerns associated with contact tracing:

The debate over a centralized or a decentralized approach while using contact tracing apps continues. A decentralized approach would mean that the data stays on an individual’s phone, while a centralized one would mean that all the data from the app goes to one central body. Both approaches have their own merits.

In Canada, a unified approach to contact tracing led by the Federal Government, rather than by the individual Provinces and Territories, will relieve the Provinces and Territories of some legal and financial ramifications. A unified effort would also ensure a more collaborative process for building in security and privacy controls, and it would be more efficient for decision making. As the Federal Government makes declared decisions about the app and its development, security needs to remain a priority.  A centralized approach, however, needs to come with caveats and protections.

If it is the Federal Government ensuring that a sick person remains isolated and enforcing quarantine, there will be privacy trade-offs. We must be prepared for the future: what should we do with the data after this crisis is finally said and done? Sunset clauses should be put in place to assure the Canadian public that the highest consideration will be taken and that there will be transparency about what happens once the data is no longer needed. 

With regard to the collection of data centrally, scientists and health officials could leverage the data for good. They could use data from the apps to analyze how the virus spreads, how it impacts society, and more, which would improve our ability to deal with the outbreak. However, the Federal Government will need to ensure that any data shared for research is secure.

There will also need to be the ability to have some form of open and transparent redress for all citizens with regard to any contact tracing approach in Canada.
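To make the centralized-versus-decentralized distinction concrete, here is a much-simplified sketch of the decentralized model: phones broadcast short-lived identifiers derived from keys that never leave the device unless the owner chooses to report a positive test, and all matching happens locally. The key derivation below is purely illustrative and not any particular specification.

```python
# A much-simplified illustration of decentralized exposure notification.
# The real frameworks use their own key-derivation schemes; the HMAC construction
# and interval counts below are stand-ins to show where the data lives.
import os
import hmac
import hashlib

def daily_key() -> bytes:
    # Generated on the phone; it never leaves the device unless the user
    # chooses to report a positive test.
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    # Short-lived identifier broadcast over Bluetooth; rotates every ~15 minutes.
    return hmac.new(day_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Phone A broadcasts rolling IDs; phone B simply remembers what it heard, locally.
a_key = daily_key()
heard_by_b = {rolling_id(a_key, i) for i in range(40, 44)}  # a few 15-minute windows

# If A later tests positive, only A's daily keys are published to a shared list.
published_diagnosis_keys = [a_key]

# B re-derives rolling IDs from the published keys and checks for overlap on-device;
# no central server ever learns who was near whom.
exposed = any(
    rolling_id(key, i) in heard_by_b
    for key in published_diagnosis_keys
    for i in range(96)  # 96 fifteen-minute intervals in a day
)
print("possible exposure:", exposed)
```

The privacy appeal is that the only thing ever uploaded is a diagnosed user’s own keys; the list of people they were near never exists on any server.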

I then asked about the fact that this app will utilize the Apple/Google Exposure Notification API. You can find out more info about that here. The Apple/Google API is billed as best in class when it comes to privacy. So my question was this: does the use of this API make things safer?

I think the question isn’t is it ‘safe’, but does it make things more secure? Maybe, maybe not.

Privacy and security are not the same things. Privacy is about personal control of your own data, in particular your identity. Security is the tools that will help you control your data and some tools are better than others. Quite frequently when tools or applications are rushed to market without adequate testing, security vulnerabilities subsequently appear.

When rolling out an application that could be used by so many members of the population, governments should use the best available technology with the lowest risk for security or privacy concerns. However, even then it’s impossible to say without a doubt that an application is or is not safe, and it is important to remember that ‘safe’ can mean different things in different contexts.

For it to be a ‘safe’ application, the technology needs to be implemented correctly, and the app needs to be shut off when the pandemic is over. History has shown that both of these assumptions could prove to be flawed.

That’s an interesting view, as reading over the details related to the Apple/Google Exposure Notification API would have had me assume that there was nothing to worry about. But clearly, from what David Masson has said, I hadn’t considered all the implications of a contact tracing app like this one. Thus I thank him for his insights. It’s given yours truly, as well as a lot of you, a lot to think about.

Here’s How The Last 4 Digits Of Your Credit Card Can Be Used To Commit Fraud

Posted in Commentary on June 8, 2020 by itnerd

Following up on this story from last week, where Bank of Montreal (BMO) was sending marketing material to customers using the last 4 digits of their credit cards, a few people emailed me asking what a miscreant can actually do with four digits of a credit card number.

Actually, quite a bit. The fact is that credit card numbers aren’t just random blocks of 16 digits. The first digits identify the card network and issuer, and the full number has to satisfy the Luhn check digit, so there are mathematical relationships between the digits. If a miscreant knows the last four digits and those relationships, that narrows the attack surface considerably. Let me give you an example. If you know the last four digits up front, here’s what a miscreant can do:

  • All Visa cards start with 4 and most MasterCards start with 5, so that’s one digit right there.
  • If you know the bank or the card issuer, that’s a few more digits.
  • The type of card, be it gold or whatever, can give you a couple more digits.

That leaves a miscreant with only a handful of digits to figure out. Now, I will admit that this is still not a trivial exercise. But from my research on the dark web, this approach is successful way more often than you might think, which to be frank is quite scary. Sure, they still have to figure out the expiry date and the CVV number on the back of the card. But it is doable.
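To put a rough number on that, here is a quick back-of-the-envelope sketch using the Luhn check digit that every card number carries. The issuer prefix and last four digits are made up, and the code only counts how many candidates survive; the point is how dramatically the search space shrinks once those digits are known.

```python
# Rough sketch: how much do a known issuer prefix and known last four digits
# shrink the search space for a 16-digit card number? (Prefix and suffix are made up.)

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum that every payment card number must satisfy."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

PREFIX = "519123"   # made-up issuer prefix (BIN)
SUFFIX = "0000"     # hypothetical "last four digits" printed on a mailer

survivors = sum(
    1
    for middle in range(10**6)
    if luhn_valid(f"{PREFIX}{middle:06d}{SUFFIX}")
)
print(survivors)  # prints 100000: six unknown digits plus the Luhn check leave 100,000 candidates
```

In other words, knowing the prefix and the last four digits takes a miscreant from ten quadrillion possible 16-digit numbers down to a pool of one hundred thousand, which is exactly why even partial card numbers are worth protecting.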

The fact is that a small amount of personal information can be used to perpetrate some sort of fraud, because it can be combined with information that has been acquired separately. If there’s a large breach of social security numbers (for example, the Equifax hack) and another of credit card numbers (like some online store hack), you could link those together to perpetrate fraud. Which is why I put out the story on BMO’s use of the last 4 digits of customers’ credit cards in their marketing. It’s an attack vector. One that, while not easy to take advantage of, is exploitable. Thus you need to make sure that you’re on the right side of this so that you don’t become the next victim.

On a related note, I have yet to hear back from BMO on my questions related to this topic. That’s a shame and I think it says something about how BMO views this situation.

Why Does BMO Use The Last Four Digits Of Your Credit Card For Marketing Purposes?

Posted in Commentary on June 5, 2020 by itnerd

I became aware of something that I truly find bizarre. One of my PR contacts got some marketing material from the Bank of Montreal, better known as BMO. In that marketing material were the last four digits of her credit card number. She found that to be very odd, which is why she pinged me on this.

But it doesn’t end there. When she reached out to BMO on Twitter to inquire as to why they were doing this, they said this:

“I can advise that with marketing offer, we ask that you provide certain information, so we can track who is taking advantage of the offers we send out. This information is only used by BMO and not provided to any third parties.”

Here’s my take.

BMO offers MasterCard branded cards and the format of the card number goes something like this:

5191-23xx-xxxx-xxxx

So if I were some sort of miscreant, having the last 4 digits of a credit card makes it a whole lot easier to guess what the full card number might be. Sure, it may take effort to get the full card number. And then you have to get the expiry date and perhaps even the CVV (the three-digit security code on the back of the card) to exploit the card for fraud. So it would take some work. But it is possible to do. Beyond that, simply having the credit card number can be enough to grab personal information to commit some sort of fraud that isn’t related to going on a spending spree with someone’s credit card.

Both of those outcomes would of course be bad for the customer.

The other thing that I will point out is that there are many ways to track whether a customer takes advantage of an offer. Tools like Pardot, which is made by Salesforce, can do this transparently, for example. And I am pretty sure that using a credit card number, even a partial one, is not a good way of doing this. So I was very interested as to why BMO decided to use the last four digits to track whether a customer takes advantage of an offer, and I decided to ask them.
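To make that concrete, here is a minimal sketch of how a bank could track offer redemption without printing any part of a card number: derive an opaque, per-customer offer code from an internal customer ID and a key that only the marketing system holds. The key, customer ID and campaign name below are all hypothetical, and this is an illustration of the general idea rather than how Pardot or BMO actually do it.

```python
# A minimal sketch of tracking offer redemption without printing card digits.
# The secret key, customer ID, and campaign name are all hypothetical.
import hmac
import hashlib

SECRET_KEY = b"marketing-campaign-secret"   # held only by the bank's marketing system

def offer_code(customer_id: str, campaign: str) -> str:
    """Derive an opaque code that identifies the customer to the bank and to nobody else."""
    digest = hmac.new(SECRET_KEY, f"{campaign}:{customer_id}".encode(), hashlib.sha256)
    return digest.hexdigest()[:10].upper()

# Printed on the mailer; it reveals nothing about the card if the mailer is lost or stolen.
print(offer_code("customer-0042", "summer-2020-cashback"))

# When the customer redeems the offer, the bank recomputes the code from its own
# records and matches it, so redemption tracking works with zero card data exposed.
```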

If I get an answer, I will update this story. But on the surface, this sounds like a bit of a risk to customers. And perhaps BMO needs to take a second look at this, as we live in times where everyone should be risk averse.

UPDATE: I have a screenshot of the piece of marketing that this person received. I have removed all of the personal information and noted where the last four digits of the credit card number were located with the words “Last 4 Digits Of Credit Card Number Above”; the digits themselves have of course been removed.

An Adult Site Exposes User Data…. Which Is Not The Exposure That Users Wanted

Posted in Commentary on May 6, 2020 by itnerd

CAM4, a popular adult platform that advertises “free live sex cams,” misconfigured an ElasticSearch production database so that it was easy to find and view heaps of personally identifiable information, as well as corporate details like fraud and spam detection logs. According to Wired, the database exposed 7 terabytes of names, sexual orientations, payment logs, and email and chat transcripts — 10.88 billion records in all.

First of all, very important distinction here: There’s no evidence that CAM4 was hacked, or that the database was accessed by malicious actors. That doesn’t mean it wasn’t, but this is not an Ashley Madison–style meltdown. It’s the difference between leaving the bank vault door wide open (bad) and robbers actually stealing the money (much worse).

The mistake CAM4 made is also not unique. ElasticSearch server goofs have been the cause of countless high-profile data leaks. What typically happens: They’re intended for internal use only, but someone makes a configuration error that leaves it online with no password protection. “It’s a really common experience for me to see a lot of exposed ElasticSearch instances,” says security consultant Bob Diachenko, who has a long history of finding exposed databases. “The only surprise that came out of this is the data that is exposed this time.”

And there’s the rub. The list of data that CAM4 leaked is alarmingly comprehensive. The production logs Safety Detectives found date back to March 16 of this year; in addition to the categories of information mentioned above, they also included country of origin, sign-up dates, device information, language preferences, user names, hashed passwords, and email correspondence between users and the company.

This is not trivial. If you take the adult nature of what this site does out of the equation, this is a massive leak of data that could have long-term consequences for users of this site if the data was accessed. There isn’t evidence that it has been, at least not at present. But if we start to see things like targeted attacks and extortion phishing emails popping up in users’ inboxes, then we’ll know that this has gone from bad to worse.
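On a related note, the underlying misconfiguration described above is easy to check for if you run ElasticSearch yourself. Here is a minimal sketch of the kind of unauthenticated probe researchers use, with a hypothetical host, and the obvious caveat that you should only point it at systems you are authorized to test.

```python
# A quick, hypothetical check for an exposed ElasticSearch instance.
# Only run this against hosts you own or are authorized to test.
import requests

HOST = "http://elasticsearch.example.internal:9200"  # hypothetical address

try:
    # An instance with security enabled answers 401 to requests without credentials.
    info = requests.get(f"{HOST}/", timeout=5)
    if info.status_code == 200:
        print("Instance answers without authentication:", info.json().get("cluster_name"))
        # Listing indices shows just how much would be readable to anyone who finds it.
        indices = requests.get(f"{HOST}/_cat/indices?v", timeout=5)
        print(indices.text)
    else:
        print("Got HTTP", info.status_code, "- authentication appears to be required.")
except requests.RequestException as exc:
    print("Instance not reachable from here:", exc)
```

If a bare GET like this returns cluster details and an index listing, the instance is readable by anyone who stumbles across it; binding it to an internal interface and enabling authentication closes that door.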

RCMP Cops To Using Clearview AI Facial Recognition Tech

Posted in Commentary on February 28, 2020 by itnerd

You might recall that I posted a story on Toronto Police being ordered by their chief to stop using facial recognition tech from a controversial company named Clearview AI. This is a company that has a massive database of faces that it has scraped from places like Facebook and Twitter, violating those platforms’ terms of service in the process. Well, it now turns out the RCMP has been using this software too, according to the CBC:

The RCMP acknowledged use of Clearview AI’s facial recognition technology in a statement Thursday, detailing its use to rescue children from abuse.

The force said it has used the technology in 15 child exploitation investigations over the past four months, resulting in the identification and rescue of two children.

The statement also mentioned that “a few units in the RCMP” are also using it to “enhance criminal investigations,” without providing detail about how widely and where. 

“While the RCMP generally does not disclose specific tools and technologies used in the course of its investigations, in the interest of transparency, we can confirm that we recently started to use and explore Clearview AI’s facial recognition technology in a limited capacity,” the statement said. 

“We are also aware of limited use of Clearview AI on a trial basis by a few units in the RCMP to determine its utility to enhance criminal investigations.”

CBC News has requested further details of where else the force is using Clearview AI, but has yet to receive a response. 

Fortunately, there appears to be some scrutiny coming to this issue. The Privacy Commissioner is investigating the company, and the House of Commons Standing Committee on Access to Information, Privacy and Ethics is going to have a look as well. Seeing as Canada has some of the strictest privacy laws on the planet, with the exception of the EU of course, there’s a real chance that the use of this software could be curtailed. Given that the company behind it appears to have stolen images for its database, that software like this tends to misidentify racialized groups with alarming frequency, and that it is a significant invasion of your privacy, that would be a good thing.

Barclays Employees Were Spied Upon Via Company Installed Spyware On Employees’ Computers…. WTF?

Posted in Commentary on February 21, 2020 by itnerd

Barclays has been criticised by HR experts and privacy campaigners after the bank installed “Big Brother” employee monitoring software in its London headquarters. In other words, the company used spyware to spy on their employees:

Introduced as a pilot last week, the technology monitors Barclays workers’ activity on their computers, and in some instances admonishes staff in daily updates to them if they are not deemed to have been active enough — which is described as being in “the zone.” The system tells staff to “avoid breaks” as it monitors their productivity in real-time, and records activities such as toilet visits as “unaccounted activity.” A whistleblower at the banking giant told City A.M. that “the stress this is causing is beyond belief” and that it “shows an utter disregard for employee wellbeing.” “Employees are worried to step away from their desks, have full lunch breaks, take bathroom breaks or even get up for water as we are not aware of the repercussions this might have on our statistics,” they added. Big Brother Watch, a privacy campaign group, described the technology as “creepy.” The software, provided by Sapience, has been rolled out throughout the product control department within the investment bank division at the firm’s Canary Wharf headquarters.

Once this was made public, Barclays terminated the program. But I suspect that Barclays isn’t the only company doing something like this, because employers want to maximize the productivity of their employees, and some will use methods like this to do it. The problem is that once a scheme like this is out in the public domain, employees who have a problem with it will simply leave. That of course hurts the company: losing talent is expensive, and recruiting and training new talent is also expensive. While you should have no expectation of privacy in the workplace, schemes like this go way too far. Thus, if you’re an employer who thinks that this is a good idea, you might want to think again.

Seeing As The FBI Has Unlocked An iPhone 11, Why Do They Need Apple’s Help To Unlock An iPhone 5 & 7?

Posted in Commentary on January 16, 2020 by itnerd

Following up on the latest Apple v. FBI fight, where the FBI wants Apple to unlock an iPhone 5 and an iPhone 7 that belong to a suspect in a terror incident, despite the fact that the FBI has the ability to do this on its own without Apple’s involvement, comes news that the FBI apparently has the capability to unlock an iPhone 11, which has far higher levels of security than the iPhone 5 and 7 that they want Apple to unlock:

Last year, FBI investigators in Ohio used a hacking device called a GrayKey to draw data from the latest Apple model, the iPhone 11 Pro Max. The phone belonged to Baris Ali Koch, who was accused of helping his convicted brother flee the country by providing him with his own ID documents and lying to the police. He has now entered a plea agreement and is awaiting sentencing.

Forbes confirmed with Koch’s lawyer, Ameer Mabjish, that the device was locked. Mabjish also said he was unaware of any way the investigators could’ve acquired the passcode; Koch had not given it to them nor did they force the defendant to use his face to unlock the phone via Face ID, as far as the lawyer was aware. The search warrant document obtained by Forbes, dated October 16 2019, also showed the phone in a locked state, giving the strongest indication yet that the FBI has access to a device that can acquire data from the latest iPhone. 

So, given the facts above, why precisely does the FBI need Apple’s help to unlock an iPhone 5 and 7 when they’ve already unlocked something way more sophisticated from a security standpoint?

They don’t need Apple’s help. This is simply a stunt to get Congress to force companies like Apple to weaken the encryption on smartphones, computers, or anything else so that they can have access to them at any time for any reason. Or put another way, the FBI wants a backdoor into your device. As I have mentioned before, this is a bad idea. And as reports like these come out showing that this is an incredibly cynical attempt to push a political agenda, I would hope that the resulting blowback makes those pushing it think twice.